Everything posted by K^2

  1. Relax. Nobody said it's a traversable wormhole. And an untraversable one has all of the same properties as a black hole. It has an "interior" region, where all sorts of crazy stuff can be happening, which is located between two exterior regions, which can be located in different places in the same universe, or in different universes. For an untraversable wormhole, the transition from exterior to interior region is guarded by an event horizon. You can enter the interior region from either side, but you can't exit. You will be stuck in the middle. The whole idea of black holes as wormholes comes from the Kerr solution to the Einstein Field Equations. It predicts that spacetime inside a rotating black hole, and any supermassive black hole is a rotating one in practice, has multiple regions, the most "interior" one stretching out to infinity and working as a whole universe. In effect, it's not "inside" the black hole at all, but rather on the other side of a wormhole. What's even more interesting, if a black hole rotates fast enough, the Kerr solution predicts that the event horizon vanishes, exposing a naked singularity. That also, as a byproduct, makes a traversable wormhole, connecting our universe with the one on the other side. Unfortunately, or perhaps fortunately, the Kerr solution is not stable in the interior. It is a solution, not the solution. Last time I checked, a stable, ground-state solution for the rotating black hole was still not known. Naked singularities are still just a quirk of the theory with no practical consequence, and the only known way to stabilize a traversable wormhole is to have negative energy densities. Same requirement as a warp drive.
  2. Yeah, you're right. There are escape trajectories below 3/2, but they require you to have vertical velocity away from the black hole. If you were to have a periapsis below 3/2, vertical velocity at that point would be zero, and all of these trajectories lead to the event horizon. So you can't actually have a periapsis below 3/2. And I'm not sure what you mean by a cubic term in the effective potential. The r'' equation of motion has the "gravity" term looking like (r + r²)/r⁴, which would give a cubic-like contribution to the effective force, but I don't see anything like a cubic term in the potential. (Plot parameters: v = 0.99c, rs = 1, rp = 1.45 - 1.55.)
  3. Blue, orbits below 3/2 are unstable. Not all of them spiral in, however. If you try to make a near-circular one, yes. And ellipticals will eventually decay. But you can have an escape orbit with periapsis below 3/2.
  4. Interior solutions that are wormhole-like are unstable. As for wormholes connecting two points in our space, we do not have a theory that makes any predictions. Creation of such a wormhole requires a topology change, and General Relativity assumes constant topology. It makes predictions on how to manipulate and stabilize wormholes, but not on how they can be created.
  5. No short-term ones. Long term, it is still exposure to a powerful oxidant. It promotes generation of free radicals, which can wreak havoc on your tissues. Lung cancer, for example, would be a thing to watch out for. We really need buffer gases in what we breathe.
  6. Pure ox atmo is bad in so many ways that I don't even know where to start, but the fact that metals tend to be flammable in it is probably a good start. You'll also start oxidizing, well, just about anything on the planet that hasn't been thoroughly oxidized already, depleting your atmo. Finally, it's not good for living things either. Plants simply won't make it at all, and it won't be good for humans or animals under long exposure, either. Have you noticed that oxygen is actually an antiseptic under certain conditions? Yeah. It's not really good for you, unless thoroughly diluted.
  7. Sure. Actual mirrors, for example. Glass is opaque in IR. The problem is that glass is also a pretty poor heat conductor. The surface exposed to space would cool off, but it wouldn't conduct heat off the metal frame all that well. I suspect real space radiators are made out of aluminum and are coated with a paint that is not entirely unlike thermal paste in composition, to give them IR opaqueness without restricting the heat flow. That would be consistent with white or silvery appearance as well. Ultimately, what you are looking for is not a gloss finish, however, but a high albedo. As close to 100% of visible light should be reflected as possible. But the direction you reflect it into is irrelevant. You can have light scattered evenly in all directions, and that will make the finish look snow-white. You can have tiny clusters with a fine finish, which will reflect light in random directions. That will make it look silvery-gray. Or you can go with a fine gloss finish, which will make it look like a mirror. Either way works, but the first two are far more practical to implement, so that's what you see in space.
  8. I don't think enough of the ship will survive re-entry and impact to make it possible to distinguish pre- and post-re-entry damage. The most believable version so far is that damage was done by a collision with the third stage during separation, perhaps due to the third stage engines not shutting down correctly. If so, any clues to the causes would far more likely be discovered on the third stage itself, but these things are designed to thoroughly disintegrate during re-entry.
  9. Current rotation will become completely irrelevant during re-entry. It will all be up to aerodynamics. And while it's certainly not going to land in one chunk, there is a high probability of significant debris reaching the ground. So if it happens to deorbit over a densely populated area, there could be damage. The responsible thing would be to shoot it down once it's below 150 km or so. It'd be a very good test for anti-ICBM systems, and any debris it produces would be guaranteed to rapidly re-enter and burn up. I'm sure both the US and China have systems they'd be happy to try out on such a nice target.
  10. We've been specifically looking for this with warp field interferometry for a few years now. There is nothing fantastical about that. We all know that space-time can be distorted to do all kinds of cool stuff. The challenge has been in artificially creating distortions of sufficient magnitude to be measurable.
  11. There are a whole bunch of contradicting statements flying about. a) No, a warp field would not explain any FTL phenomena. That has requirements that are not met. But I'm not sure there has actually been an observation of anything going FTL. b) EM Drive resulting in light interference consistent with a very weak warp field is possible. c) While a warp field can produce thrust by interacting with nearby matter, I'm pretty sure we'd have noticed a warp field of that sort of strength earlier. d) A sufficiently strong warp field would require rather high energy densities. I'm not going to outright claim that it's impossible to achieve with a resonance chamber, but it seems suspect. I'd wait for an official announcement from NASA, or better yet, an actual publication that would provide details of the measurements.
  12. There is precisely one physical phenomenon that is anything like the Kraken drive in the real world. It is the Mössbauer Effect. In order for it to actually be usable as a space drive, however, the universe would have to contain a lattice. That might be possible if the universe is a closed manifold with a satisfactory symmetry condition. It would then make a lattice with itself, in a sense. Even then, however, getting a recoil is going to be very difficult, and by no means 100% efficient. So I can only see this as an improvement on photon drive efficiency, not a total replacement. And, naturally, you'd need to have an energy source that makes a photon drive feasible to begin with. That's almost exclusively the realm of matter-antimatter drives. Still, the wonderful thing about the relativistic rocket formula is that if we can boost a photon drive by even 50%, it would open up a world of opportunities for interstellar travel. So this might have practical applications eventually. But not any time soon. We simply aren't talking about the sort of delta-V where we can rely on anything fancier than an ion drive.
  13. You don't usually push all the registers. Just the "relevant" ones. (TBD by the compiler.) Unoptimized code will almost never push any working registers to the stack. What always goes onto the stack is the return address, and almost always a base pointer. But the stack is used for a lot more than that. The stack is used to pass variables to a function, it is used for nearly all local variables within a function, and it is used in intermediate operations of complicated algebraic expressions. For example, consider a very simple function.

int sum(int a, int b) { int c = a + b; return c; }

If you compile it without optimization and call sum(3, 4) from main, the stack will get the following workout.

  • Push 4 to the stack.
  • Push 3 to the stack.
  • Push the return address in main to the stack. (Followed by a jump to the address of the sum function in memory.)
  • Push the current base pointer (belonging to main) to the stack.
  • Set the base pointer to match the stack pointer.
  • Decrement the stack pointer by 4 to create space for c on the stack.
  • Use the base pointer to read the values of a and b, compute the sum, and use the base pointer again to store it in c.
  • Copy the answer from c to the return register. (Usually eax.)
  • Pop the base pointer from the stack. (Returns it to the base of main.)
  • Pop the return address from the stack, and return to the main function.

Now imagine all of that on top of your A(m,n) function. Basically, all it does is stack operations, simply because of the way you've set up recursion. The processor does very little actual algebra. It will spend, maybe, one cycle in a hundred doing math, and the rest of the time will be wasted on branching, function calls, stack operations, and all of the mess associated with it. (Branch prediction, cache prediction, etc.) When you want to test the computational capabilities of the CPU, you really want to avoid all of that nonsense. You want to serve mathematical operations in as predictable a manner as possible, making sure that the cache remains consistent, and any branches you must have are very well predicted.
Then all the CPU is doing is pulling values out of cache, doing math, and putting them back into cache, and it can do this very, very fast. That's where you can get tens of billions of operations per second. GPU is kind of a different story. It's really bad at general computing. Especially branching. You generally want to pull as much of that out of GPU code as possible and have the CPU look after it. If you can reduce your math problem to "do this set of operations a million times," you can get a good GPU to give you trillions of operations per second. That's why the GPU is so awesome at image processing, numerical integration, and artificial neural networks. And, of course, actual rendering. All of these problems are reduced to, "Here is the math you have to do for each point, now do this lots." Edit: If you can read assembly, the sum(a, b) function compiles to the following on x86. (Again, unoptimized.)

push ebp
mov ebp, esp
sub esp, 4
mov eax, [ebp+8]
add eax, [ebp+12]
mov [ebp-4], eax
mov eax, [ebp-4]
mov esp, ebp
pop ebp
ret
  14. Modern GPUs are just general vector processors. Lots and lots of arithmetic operations are precisely how you test them. What GPUs are really bad at is any sort of branching, which would make Ackermann absolutely the worst thing to use a GPU for.
  15. Let's start with the fact that A(4,4) is far outside the int range. And why, exactly, are you passing 64-bit integers to it as parameters, but returning just a 32-bit int? But those are just minor gripes. Then there is your implementation with forms, which is going to prevent you from running clean, well-optimized code. If you want to write efficient code, write it in C. Finally, the only thing you are really "testing" with this implementation is the stack. Edit: All of the references talk about Ackermann as a benchmark of the optimizer, not a benchmark of the computer.
  16. Challenge accepted. A faster version.

double mysterious2(unsigned int k)
{
    unsigned long long int value = 0x4000000000000000LL - (0x0010000000000000LL >> k);
    return *((double*)&value);
}

Also, it works correctly for k > 1023.
  17. Without economic recovery, they won't be able to spend any money on a space program within two years. And I don't see the economy recovering without a regime change. If the latter happens, we can start discussing further projections. Right now, it's a pointless discussion.
  18. Hohmann Transfer - That has all the math you need, and even works through an example very similar to your question.
  19. Excellent example of why people should not talk about black holes if they do not understand anything about gravity. An object in free fall can safely cross the event horizon, as the tidal gradient is finite in this case. Surface gravity is defined as the acceleration experienced by a static point at the surface. Showing that this value becomes infinite at the event horizon is one of the most basic exercises in GR. Surface gravity is the relevant quantity if you wish to suspend or support something at a fixed elevation. Talking about "gravity" itself being finite or infinite is pointless. There is no such concept in GR, and classical gravity does not apply. The real quantities in GR are proper acceleration, which is zero in free fall, and tidal acceleration. Free-fall acceleration is purely a frame-dependent quantity, and only happens to match surface gravity in nearly flat metrics, which the event horizon is an extreme case of not being. Please refrain from commenting on these things, especially to "correct" someone, if you have not had a most basic course in differential geometry or GR. That goes for a bunch of people here. This isn't like the rocket formula, where you can learn one equation and be an expert.
  20. Surface gravity at the event horizon is infinite. Even if you somehow managed to hold on to an object whose extremity is just above, the tension would exceed absolutely any material strength.
  21. MS FSX does support custom hardware. You have to write a bit of code to connect the game's I/O to whatever you've built, but it's totally doable. People have built custom panels that way. The hard part is actually building the hardware.
  22. It's more the former than the latter, because early jet designs relied on a centrifugal compressor stage to compensate for the inability of axial compressors to generate a sufficient compression ratio alone.
  23. I think you got these backwards. A motorjet has a compressor, but not a turbine. The only modern application of motorjets that I can think of is for RC models, using an EDF in place of a motor/compressor stage. It'd be horribly inefficient, but it removes most of the costs associated with model turbojets, so I'm actually quite puzzled that we don't see a bunch of these on the market.
  24. Relationship between flow velocity and pressure gradients is always a chicken or the egg sort of question in fluid dynamics. Of course, from the perspective of solving a problem, it doesn't matter.
  25. The question isn't whether the pressure on the pump output is low, but rather whether the pressure differential through the pipe is low compared to that. You can use a straight-pipe laminar flow approximation after you work out the diameter of the relevant pipe to make sense with flow velocity and flow rate. Of course, in a real pipe, flow velocity isn't going to be uniform, so it's slightly more complex than it sounds.