Everything posted by K^2
-
Because LEO is about 2/3 of the way to parking in lunar orbit, in terms of delta-V, and fuel requirements are exponential. It's the difference between multiple Falcon-9, hopefully Falcon-9R, launches, or wasting a whole SLS just to refresh the crew. Multiple 9R launches can cost less than 1/10th of the single SLS launch you'd need to deliver crew and supplies to a Lunar Orbital Station. And that's if you insist on going conventional all the way. As has been discussed in this thread already, you can have a slow VASIMR tug hauling non-perishables between LEO and lunar orbit using solar power and a tiny fraction of the propellant mass. Once these pieces are in place, delivering fuel, water, air, parts, and at least some of the food to the Lunar Orbital Station can be almost as cheap as delivering them to LEO. Putting it all together, conservatively, you get crew and perishables delivered to the station at about 1/10th of the cost via the way station, and non-perishables, which is most of the supplies, at about 1/50th compared to direct launch. You'll save enough to cover the cost of the way station in under a decade of operation, and seeing how it's something we can use for various Mars missions as well, there is just no contest. The way station is way better.
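A minimal sketch of the rocket-equation arithmetic behind the "2/3 of the way" claim, assuming round illustrative numbers (roughly 9.4 km/s from the ground to LEO, roughly 4 km/s more from LEO to low lunar orbit, and a generic ~450 s vacuum Isp); the exponential growth of the mass ratios is the point, not the exact figures.

```python
from math import exp

# Tsiolkovsky rocket equation: required initial-to-final mass ratio grows
# exponentially with delta-V. Numbers below are rough, illustrative values,
# not mission figures.
ISP = 450.0          # s, assumed vacuum specific impulse
G0 = 9.81            # m/s^2
VE = ISP * G0        # effective exhaust velocity

def mass_ratio(delta_v):
    """Initial-to-final mass ratio needed for a given delta-V in m/s."""
    return exp(delta_v / VE)

dv_ground_to_leo = 9400.0   # m/s, typical losses included (assumed)
dv_leo_to_llo = 4000.0      # m/s, LEO -> low lunar orbit (assumed)

print("ground -> LEO          :", round(mass_ratio(dv_ground_to_leo), 2))
print("LEO -> lunar orbit     :", round(mass_ratio(dv_leo_to_llo), 2))
print("ground -> lunar, direct:", round(mass_ratio(dv_ground_to_leo + dv_leo_to_llo), 2))
```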
-
I'm still with KvickFlygarn87 on that. They're weird. Anything to do with SO(N) is weird. Trig functions are just a special case of that weirdness.
-
Z-Man, Pauli exclusion does not apply to electron-positron pairs as these are not identical particles. An electron and positron can share state, and there is no Pauli repulsion between the two.
-
Any realistic plan for a Lunar Orbital Station would include an LEO way station. I don't think anyone would seriously consider direct-to-LTO launches for this. Besides that, station keeping would have to be done with VASIMRs, and it would have to be far more self-sufficient. At the very least, better water and air recycling are a must. But as should be clear from the above, the next step would be building an LEO way station. It would significantly improve our ability to send missions both within the Earth system and to the inner Sol system. The ISS might work as a base for such a station, but a new, dedicated way station at a lower inclination might be a much better idea.
-
Common misconception. The ring station will rotate around its center of mass. If one side is heavier, the other side will be farther from the center of mass, and so "lower" from the perspective of centrifugal force. That means water will naturally flow to balance the station. No need for pumps. If there is also enough of an altitude change to set up a temperature gradient within the ring, you might even be able to get away with having natural weather carry moisture around, resulting in the lakes and rivers that OP mentions. In fact, in a large enough station, this is inevitable, so you'll either have to do rivers or channels of some sort. The only question is how large is large enough for that, and whether any known materials would allow for a station that big. With the materials we know, we should be able to do up to about 100 km in diameter. That can easily allow 1 km+ of altitude within the ring. Like I said, I don't know if that's enough for weather to develop naturally, but you'd be able to set up something weather-like semi-artificially. Whether there is any benefit to that, vs a nice controlled environment, I don't know.
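A quick sketch of the spin-gravity numbers behind the "100 km diameter, 1 km of altitude" figures above, assuming 1 g at the floor of a 50 km radius ring; the point is how gently the effective gravity falls off with "altitude", so anything closer to the spin axis is simply uphill.

```python
from math import sqrt, pi

G_FLOOR = 9.81        # m/s^2, target effective gravity at the ring floor
R_FLOOR = 50_000.0    # m, floor radius (assumed: ~100 km diameter station)

omega = sqrt(G_FLOOR / R_FLOOR)     # required spin rate, rad/s
period = 2 * pi / omega             # seconds per revolution

def effective_g(altitude_m):
    """Centrifugal acceleration at a given height above the ring floor."""
    return omega**2 * (R_FLOOR - altitude_m)

print(f"spin period: {period / 60:.1f} min")
print(f"g at floor : {effective_g(0):.2f} m/s^2")
print(f"g at 1 km  : {effective_g(1_000):.2f} m/s^2")   # ~2% lower: 'uphill'
```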
-
If you ignore the change in the rocket's mass as it burns fuel and assume constant thrust, there is an analytic solution. Given the equation:

v' = a - kv²

you can find the terminal velocity, vₜ = Sqrt(a/k), for which v' = 0. Naturally, a = F/m - g, where F is thrust and m is the mass of the rocket, which you can also think of as the acceleration of the rocket without drag. In that case, velocity changes as a function of time:

v(t) = vₜ Tanh(a t/vₜ)

Tanh is the hyperbolic tangent, Tanh(x) = sinh(x)/cosh(x). If your software does not have hyperbolic functions, sinh(x) = (exp(x) - exp(-x))/2 and cosh(x) = (exp(x) + exp(-x))/2. This equation gives v(0) = 0, starts out accelerating at rate a, and then settles on v = vₜ after a while.

Of course, you want h(t). The solution to h' = v from above yields:

h(t) = vₜ²/a ln(cosh(a t/vₜ))

So all you need to know are a, k, and the amount of time t that the engine burns to figure out the altitude and velocity the rocket attains once the engine cuts out.

From there on, the rocket coasts:

v' = -g - kv²

The parameter k is still exactly the same, but without thrust, the only other force is gravity, which also slows down your rocket. Re-defining vₜ as Sqrt(g/k), we arrive at a solution which is very similar to the earlier equation:

v(t) = -vₜ Tan(g t/vₜ - c)

This is essentially the same equation, but with Tan instead of Tanh, and a bit of a twist: without the parameter c, v(0) would be 0, and we want v(0) to be whatever velocity the rocket had when the engine cut out. Solving for v(0) = v₀, we get the value for c:

c = Tan⁻¹(v₀/vₜ)

This also tells us when the rocket will reach the apex. It will happen the moment g t/vₜ = c. So all that's left is figuring out the altitude. Again, I'm going to use h' = v, so this will need some corrections in a moment:

h(t) = vₜ²/g ln(cos(g t/vₜ - c))

You should note immediately that at the apex I get h = 0. That's because h(0) is negative. Just a side effect of solving a differential equation. But if all you want is the altitude to which the rocket climbs while coasting, then -h(0) is exactly what you are looking for.

Putting it all together, the full equation for the actual maximum altitude the rocket will reach is the following:

H = vₜ₁²/a ln(cosh(a T/vₜ₁)) - vₜ₂²/g ln(cos(-c))

Where:
H : Maximum height.
vₜ₁ = Sqrt(a/k) : Ascent terminal velocity.
a = F/m - g : Initial acceleration of the rocket.
g : Acceleration due to gravity, or 9.8 m/s².
T : Length of time that the engine runs.
F : Average thrust of the engine. (Assumed to be constant.)
vₜ₂ = Sqrt(g/k) : Coasting terminal velocity.
c = Tan⁻¹(v₀/vₜ₂) : Time offset.
v₀ = vₜ₁ Tanh(a T/vₜ₁) : Rocket's velocity when the engine cuts out.
k : Drag coefficient divided by mass. It's probably easier to estimate the coasting terminal velocity and get k from that. Or use drag formulas available for rocketry.

You should be able to estimate all of these parameters and get a somewhat decent estimate of the max height. Of course, if you want to be more precise, you need to take into account the thrust profile and the change in mass (which also depends on the thrust profile). That will require numerical integration. If you want to do the numerical integration, I would strongly recommend forgetting about Excel and learning to use Matlab/Octave. They use mostly the same language, but you can get Octave for free. I can walk you through setting up a numerical integrator for this problem in Octave, if you want.

Oh, and somebody should check my math above. The functional forms are correct, but I might have messed up the coefficients here and there. Or forgot a minus sign. Or something equally silly. Please, let me know if there is a mistake.
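The post suggests Octave for the full numerical treatment; in the meantime, here is a minimal Python sketch of the closed-form estimate above, with the inputs (thrust, mass, burn time, and the drag parameter k) set to made-up example values. Swap in your own rocket's numbers.

```python
from math import sqrt, tanh, cosh, cos, atan, log

# Example inputs -- made-up numbers for a small model rocket; replace with yours.
F = 30.0      # N, average thrust (assumed constant)
m = 1.2       # kg, rocket mass (mass change ignored, as in the derivation)
T = 3.0       # s, burn time
g = 9.8       # m/s^2
k = 0.005     # 1/m, drag coefficient divided by mass

a = F / m - g                  # net acceleration at liftoff, drag-free
vt1 = sqrt(a / k)              # ascent terminal velocity
vt2 = sqrt(g / k)              # coasting terminal velocity

v0 = vt1 * tanh(a * T / vt1)                    # velocity at engine cutoff
h_burn = vt1**2 / a * log(cosh(a * T / vt1))    # altitude gained under thrust
c = atan(v0 / vt2)                              # time offset for the coast phase
h_coast = -vt2**2 / g * log(cos(c))             # altitude gained while coasting

print(f"cutoff velocity : {v0:.1f} m/s")
print(f"max altitude    : {h_burn + h_coast:.1f} m")
```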
-
Of course, the universe is deterministic. It's more than that: it's already determined. I'm sure you are aware of the problems of simultaneity in relativity. What's future to you is present to someone in a moving coordinate system. If your future hasn't happened yet, then how can that even be? Time is just a direction. It's a weird one, and thanks to Statistical Mechanics making it even weirder, we perceive it as time flow, but ultimately, it's still just a direction. Saying that the future is undetermined until you get to it is like saying that what's in the next room is undetermined until you walk in. It just doesn't work that way. Not even in Quantum Mechanics. Superpositions? Sure. Undetermined? Never.

The observed behavior is a superposition. The fact that you are part of the superposition is your problem. If I leave a red ball or a blue ball in the next room, and don't tell you which, then from your perspective the "observed behavior" is a ball of random color when you open the door. But you aren't going to tell me that it's an undetermined system, are you? That's just absurd. You lacking full knowledge of the system doesn't make it random or undetermined. As an external observer, knowing the full setup of the system, I know exactly what the outcome is going to be: a superposition of you going to Nepal and you going to Grad School. The fact that your observations differ, again, stems only from the fact that you are part of the system, and you have limited information about the outcome. This is a "more random" sort of situation, because being part of the system, there is no way to obtain full information, but it doesn't make it undetermined.

More importantly, as an external observer, I can verify my prediction. Let's step away from superpositions of major historical events, because divergent histories are very difficult to collect back together (Statistical Mechaaaaaaaanics!), but we can do the same experiment with particles. Let's simplify the experiment. We really just need a yes/no system, not something with slits and screens, etc. I'm sure you agree that that's not the point. So let's take a particle in a superposition of up and down spins. I can determine which it is using a Stern-Gerlach experiment. But I can also make a more direct measurement using NMR. Using a quantum amplification algorithm, I can take the initial state of two particles, |00> + |10> (one particle in superposition, the second in the "down" state), and convert it to a maximally entangled state, |00> + |11>. As I have explained earlier, this is a real measurement. If I decide to run the second particle through Stern-Gerlach, I can use the result to determine the spin of the first particle. Naturally, the only outcomes possible here are that I either end up measuring both particles spin down or both particles spin up.

But I'm going to be much trickier. I'm going to run another state transformation. Note that the final state is one of the Bell states. It's the + state. So let's transform the state so that the Bell+ state goes to the singlet, Bell- goes to the superposition triplet, and the remaining two states are mapped to the remaining triplet states. This transformation is Hermitian, so I can carry it out using NMR. What I'm going to end up with is a pair of particles which can have either a total spin of 0 (singlet state) or a total spin of 1 (triplet states). And the way the experiment is set up, I should only get the singlet state. So I can measure total spin and confirm that it always comes out to be 0 in this experiment.

As such, while the results of the measurements were in superposition, and a Stern-Gerlach measurement would give me "random" results, by doing the experiment differently I can verify that 100% of the time the deterministic predictions of Quantum Mechanics are confirmed, because I eliminate the possibility of losing information by becoming part of the experiment.
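A small numpy sketch of the entangling step described above: starting from (|00> + |10>)/√2 (first spin in superposition, second spin "down"), a CNOT-style unitary, used here as a stand-in for the actual NMR pulse sequence, produces the maximally entangled (|00> + |11>)/√2 Bell+ state.

```python
import numpy as np

# Basis ordering: |00>, |01>, |10>, |11>
psi = np.array([1, 0, 1, 0], dtype=complex) / np.sqrt(2)   # (|00> + |10>)/sqrt(2)

# CNOT with the first spin as control -- a stand-in for the NMR pulse sequence;
# any unitary with the same action on these basis states would do.
cnot = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

out = cnot @ psi
bell_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # (|00> + |11>)/sqrt(2)

print(np.allclose(out, bell_plus))   # True: the two spins are now maximally entangled
```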
-
Redshifting/Blueshifting Radiation beyond limits
K^2 replied to Kerbin Dallas Multipass's topic in Science & Spaceflight
Ah, that's where you were going with that. Yeah, that's completely fair. But I would still call it more of a symptom than the underlying cause of such a limit existing. It's like observations altering the observed being frequently used as a hand-waving explanation for Heisenberg Uncertainty. It's not wrong, but Heisenberg Uncertainty works regardless of whether someone is trying to make measurements or not, and that's far more interesting. Similarly, field theory as we know it breaks down at the Planck scale regardless of whether anyone is trying to measure anything. The theory itself just doesn't work anymore.
-
Redshifting/Blueshifting Radiation beyond limits
K^2 replied to Kerbin Dallas Multipass's topic in Science & Spaceflight
I don't know where you get this from, because there are coordinate systems in which an ordinary photon has a sub-Planck-scale wavelength. And like I said earlier, unlike local coordinate transformations, global transformations don't have limits. And if the photon collapses to a black hole in one coordinate system, it should do so in all of them*. Normal photons don't collapse, so the ones past Planck limits aren't going to either.

* General Covariance, AKA Diffeomorphism Invariance. Very important consequence of space-time topology.
-
LCD tends to absorb a ton of light even in the "light" mode, which is why LCD screens have such powerful backlights. One not designed to be a screen can do much better, but it's still going to absorb a lot of light. What would probably work way better is something like e-Ink. That stuff can be designed to switch between extremely reflective and extremely good at absorbing light. Just turn the sun-facing surface coal black, and the shadow side silver, and you have yourself a nice amount of heat buildup in the balloon.
-
You are mixing several different things into one. First, there is the fact that the state of the particle is described by the system's wave function. Second is the fact that the superposition principle applies, meaning you can write that wave function as a sum of states. Eigenstates are just a special case. Finally, you are throwing in Copenhagen collapse, which is interpretation-specific and isn't part of general Quantum Mechanics. The Copenhagen Interpretation states that the system will collapse into an eigenstate of the measurement operator; why or how that happens is not touched on in this interpretation, but it is largely expanded on in more general theory. But let's tackle these things one at a time.

It actually is. The entire phosphorescent screen transitions to the excited state. However, there is only energy for a single photon in the entire thing. When light is emitted, it is a superposition of all possible emissions. So as a point of fact, the whole thing does light up. But what you record on film is a little different. That is where collapse happens: when you actually take the record of where the photons strike. A more modern experiment would have scintillators and sensors, but the idea is the same. The measurement operator is essentially that of position, so the collapse is to a photon detected at a specific location. And yes, these appear to be random.

The question remains, however: does the collapse happen because photons struck sensors? Turns out, it does not. The Delayed Choice Quantum Eraser tells us that if we discard data from the sensors, it's as if collapse did not happen. How is that possible? Well, we've already covered the fact that the screen glows all at once, but you can't have a fraction of a photon emitted. So instead, you have a superposition of photons being emitted from all possible locations at once. We can only measure one photon total, but we can measure any one of these with some probability. So what's to prevent the sensors themselves from going into superposition? Turns out, absolutely nothing. For a short enough time. And for a short enough time, all of the sensors are in a superposition of all possible combinations of just one of these sensors having been triggered. And we can write down the exact state of these sensors since we know the probability amplitudes. So no collapse took place yet and everything is fully deterministic. Let's keep going.

Good. Let's talk about Schrödinger's cat. One atom, one detector, and we set it up to be triggered with a 50% chance. The cat is killed if the detector is tripped within the mean life of the atom, or it is not killed if the atom did not decay within the time window. But the whole point of the experiment is that the cat in the box is in a superposition of dead and alive. Why hasn't the cat's observation of the result collapsed the system? Well, let us look at this whole experiment formally. Let me call |a> the cat-alive state and |d> the cat dead, or dying, state. The atom is going to be |1> for its initial state, and |0> when it has decayed. We start out with the |a1> state, and after the mean time elapses, the system turns into a (|a1> + |d0>)/Sqrt(2) state. Do you recognize it? This is an entangled state. This is actually the real outcome of a Quantum Mechanics measurement. Before we start talking about collapse and interpretations, a measurement actually takes a superposition state and creates an entangled superposition state of the measuring device and the system being measured. So what's going on from the perspective of the cat?

Well, this is where the superposition principle kicks in again. It tells us that every state in a superposition can be considered separately. We don't need to talk about the entire (|a1> + |d0>)/Sqrt(2) state. We can talk about each of the possibilities, |a1> and |d0>, separately. Specifically, given some time evolution operator U(t), we can say that U(t)(|a1> + |d0>) = U(t)|a1> + U(t)|d0>. In other words, each state evolves in time as if the other states don't exist. The dying cat sees the broken poison vial and, if it could, it would conclude that the atom has decayed. It's not aware that it's in the superposition state. From its perspective, the atom's state, and its own, has collapsed. The cat which is alive also sees the intact vial and from its perspective, decay did not happen. It would, however, also conclude that the state has collapsed. Are cats special? Of course not. Any observer, be it human, animal, or machine, anything capable of recording a state, in fact, is going to become entangled with the state being measured and, from its perspective, observe collapse. The main alternative to the Copenhagen Interpretation is the Many Worlds Interpretation, and it takes precisely that stance: that no real collapse ever happens, and the only thing that's going on is things becoming more and more entangled, leading us to a number of "alternate time lines", which are really just series of states within the grand total superposition which we consider separately.

And these are all the same. I can write down the outcome state for each of these. For some, rather approximately. Being a particle physicist, I can tell you not from second-hand knowledge that the process of fusion is actually an extremely complicated one, despite the choice of final states being fairly straightforward. For a hydrogen atom, though, with a bit of work, I can write a simulation that produces the exact state of the electromagnetic field "after the decay". It will contain a suitable superposition of photons.

What you are still stuck on is the actual collapse, and that is just one of the possible interpretations. The Copenhagen Interpretation does, indeed, rely on random chance, but that's just one of the reasons why it's rapidly falling out of favor. In the Many Worlds Interpretation, there is no chance at all. Everything is fully determined. Some of the more down-to-earth interpretations recognize that there are limits to how far the superposition principle can be pushed. At some point, the density of states turns the energy spectrum practically into a continuum, and then what you have is decoherence, which, from the observer's perspective, again looks just like collapse of the system. But it's not a true collapse either, rather a state where a lot of information has been lost to chaos.

And that brings us to the ultimate point. There are things in Quantum Mechanics which we cannot predict. But it is not because the dynamics are unpredictable. The underlying "randomness" is very similar to what you see in Thermodynamics. There are just too many interacting states, with energy being passed back and forth, so it is impossible to know the initial state perfectly. But if you do know the initial state and the Hamiltonian, you can evolve it to the final state in a completely deterministic way.

And so far as we know, he was right. It is a shame that the Many Worlds Interpretation was developed a few years after his death. He would have liked it.
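A tiny numpy sketch of the linearity step above, using a randomly generated unitary as a stand-in for U(t): evolving the superposition (|a1> + |d0>)/√2 gives exactly the sum of the separately evolved branches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Basis for the cat + atom system: |a1>, |a0>, |d1>, |d0>
a1 = np.array([1, 0, 0, 0], dtype=complex)
d0 = np.array([0, 0, 0, 1], dtype=complex)
psi = (a1 + d0) / np.sqrt(2)          # entangled state after the time window

# Some arbitrary unitary time-evolution operator (random, purely for illustration)
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

# Linearity: evolving the superposition == evolving each branch separately
lhs = U @ psi
rhs = (U @ a1 + U @ d0) / np.sqrt(2)
print(np.allclose(lhs, rhs))          # True
```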
-
It is probably worth noting that this is not a guarantee, but it is a requirement for the derivative to be defined. A good counterexample is f(x) = x sin(1/x) for x not equal to 0, and f(0) = 0. This is a continuous function defined for all real numbers. However, it does not have a derivative at 0, precisely because the difference quotient (f(h) - f(0))/h = sin(1/h) keeps oscillating, faster and faster, as h gets small.
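A quick numerical sketch of why the limit fails, using nothing beyond the definition of the derivative: the difference quotient at 0 is just sin(1/h), which never settles down as h shrinks.

```python
import math

def f(x):
    """f(x) = x*sin(1/x) for x != 0, f(0) = 0 -- continuous everywhere."""
    return x * math.sin(1.0 / x) if x != 0 else 0.0

# Difference quotient (f(h) - f(0)) / h = sin(1/h): no limit as h -> 0.
for h in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5]:
    print(f"h = {h:.0e}   (f(h) - f(0))/h = {(f(h) - f(0)) / h:+.4f}")
```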
-
Yes, Fridy's book is certainly under copyright. You can get it on Amazon. I don't know if they offer a digital version, though.
-
Redshifting/Blueshifting Radiation beyond limits
K^2 replied to Kerbin Dallas Multipass's topic in Science & Spaceflight
Gauge bosons already are point particles, and contain a number of singularities. The important bit is that these are integrable.
-
One can set up an antimatter fountain. It will probably be done soon enough. But nobody seriously expects to see repulsion. There are all sorts of problems with that, many of which we should have seen symptoms of by now. There is every indication that antimatter behaves exactly as matter does, to within some chirality corrections in the weak interaction. And even that is just a matter of handedness.
-
Redshifting/Blueshifting Radiation beyond limits
K^2 replied to Kerbin Dallas Multipass's topic in Science & Spaceflight
Actually, this is one of the cases where you don't. Regardless of how short the wavelength is, if it's just a matter of a coordinate system change to redshift it back to a "reasonable" range, we can describe it. Global coordinate transformations work without limits. It's only when you have to make different coordinate adjustments at different points on a very short length scale that you can't correct for these problems and you need Quantum Gravity. But that will never be the case for a single particle propagating through vacuum.

Nope. We can do computations with black holes without any problems. The Planck scale is a much more serious issue, and it has to do with the fact that quantizing gravity results in a non-renormalizable theory. This can be addressed with an effective field theory, but only down to Planck scales. I don't know all of the details of the algebra involved, but that's the general idea behind it. And it's a very serious theoretical limit. But yeah, it's not that the world breaks down at the Planck scale. It's our theory that breaks down.
-
Yeah, it's definitely good for historical context. I just feel like it can sort of give you some wrong ideas if you start with it. I'd recommend learning some basics of Analysis before reading Principia. Just so that you have a better idea of what's on the right track and what isn't.
-
It's not bad, but a lot of its perspectives are dated. I think a modern introductory Real Analysis book is a better starting point. Rudin's Real and Complex Analysis is fantastic, but it's going to be very heavy for someone who isn't used to formal Mathematics. It's also very difficult to follow if you don't know what Fields and Topological Spaces are. For someone who just wants to understand the theory behind Calculus a little better, Introductory Analysis: The Theory of Calculus by Fridy is really not bad. It starts with fairly straightforward concepts, doesn't get too abstract, and has many good examples. At the same time, it covers all the bases in terms of limits, differentiation, and integrals in the Riemann sense. In short, it teaches you all the Analysis you can learn without having to understand Abstract Algebra, Topology, and Measure Theory.
-
You can still work it out using (almost) the same information. You need the series expansions of the sin and cos functions and the chain and product rules.

tan(x) = sin(x) / cos(x)

sin(x) = x - x³/3! + x⁵/5! - ... [These are series expansions. Technically, they are derived using derivatives, so it's cheating a little... This can be derived differently, but it will take much longer.]
cos(x) = 1 - x²/2! + x⁴/4! - ...

d sin(x)/dx = 1 - x²/2! + x⁴/4! - ... = cos(x) [Taking derivatives using rules from last page]
d cos(x)/dx = ... = -sin(x)

tan(x) = sin(x) cos⁻¹(x)
d tan(x)/dx = d sin(x)/dx cos⁻¹(x) - sin(x) cos⁻²(x) d cos(x)/dx = 1/cos²(x) [using chain rule and product rule.]

x = arctan(tan(x)) [definition]
dx/dx = d arctan(tan(x))/dx
1 = d arctan(y)/dy · d tan(x)/dx, taken at y = tan(x) [Chain rule, again.]
1 = d arctan(y)/dy / cos²(x)
d arctan(y)/dy = cos²(x)

Here I have to use a bit of trickery. Let's call cos(x) = z. In that case, sin(x) = Sqrt(1-z²).

d arctan(y)/dy = z², taken at y = Sqrt(1-z²)/z

We can solve for z now.

y² = (1-z²)/z²
z²y² = 1 - z²
z² = 1/(1+y²)

And so we have our derivative of the arctangent.

d arctan(y)/dy = 1/(1+y²)

And finally, using the chain rule one last time, we get the required result.

d arctan(x²)/dx = 2x/(1+x⁴)

Naturally, it's way easier if you know some of the shortcuts. But the point is that just knowing the rule for xⁿ and the chain/product rules, you can work out almost everything. Integrals are a different story.
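A quick sanity check of the final result, using sympy (assuming it's available) to differentiate arctan(x²) symbolically.

```python
import sympy as sp

x = sp.symbols('x')
derivative = sp.diff(sp.atan(x**2), x)       # symbolic d/dx arctan(x^2)
print(sp.simplify(derivative))               # 2*x/(x**4 + 1)
print(sp.simplify(derivative - 2*x/(1 + x**4)) == 0)   # True: matches the hand derivation
```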
-
There is the relativistic or inertial mass, which is related to energy by the famous E = m_rel·c². Photons have such mass. Then there is the invariant or rest mass. This is what in modern physics we mean when we say just "mass". The correct equation that relates it to energy is E² = p²c² + (mc²)². Photons have no rest mass, and so we call them massless particles. For a photon, E = pc. All of the energy comes from momentum. Gravitational mass is equivalent to inertial mass, as a direct corollary of the Equivalence Principle. So photons do have gravitational, or heavy, mass as well. They are both influenced by gravity and contribute to it.
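A small sketch of the energy-momentum relation above, with made-up example numbers: for a photon (m = 0) all of the energy comes from momentum, E = pc, while a massive particle at rest has E = mc².

```python
from math import sqrt

C = 299_792_458.0          # m/s, speed of light

def energy(p, m):
    """Total energy from momentum p (kg*m/s) and invariant mass m (kg)."""
    return sqrt((p * C)**2 + (m * C**2)**2)

p_photon = 1e-27           # kg*m/s, made-up example photon momentum
print(energy(p_photon, 0.0), p_photon * C)   # identical: E = pc when m = 0

m_e = 9.109e-31            # kg, electron rest mass
print(energy(0.0, m_e), m_e * C**2)          # identical: E = mc^2 at rest
```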
-
I think he might have been referring to the interior region of the Kerr metric, which behaves as if it opens up into another universe. The problem is that the Kerr solution is known to be unstable in the interior, so this is probably just a mathematical curiosity.
-
Could a Gyroscopic inertial thruster ever work?
K^2 replied to FREEFALL1984's topic in Science & Spaceflight
That is awesome. The behavior in your video is something I honestly did not even think of as a possible mode until I derived it from the equations of motion. It's great to see it in action. And you clearly see that the pendulum oscillation is still there as a separate mode. That MIT video is also a great demonstration, though it's very hard to get a good estimate for displacement from it. I really want to try and get a setup for a cleaner run. But I do need to get a better gyro. I'm going to see if I can salvage something or even buy one. In both cases, it does look like there is some interference between the pendulum mode and gyro precession. I still want to know how destructive that actually is, and since I see no way to get a good analytic result, I think I'll go with simulation. If I get anything interesting out of it, I'll post some animations. I'll see if I can match initial conditions for these two videos as well.
-
ABD stands for "All but Dissertation," which I'm writing at the moment. There is nothing a typical recent Ph.D. knows that I don't. And my field is particle physics, which is all Quantum, naturally. If you want a link to my university page as verification of credentials, I'll be happy to PM it to you.
-
Again, not how Quantum Mechanics works. More importantly, we don't need to measure velocity to verify these equations, as I've already specified.
-
I can write down a distribution function for it*. You might be confused because you think that the electron is located at just one specific, randomly distributed place after passing through a double slit. But it's not. It's actually physically located at all of these positions. Quantum mechanics is completely deterministic, because the physical object is the wave function, not a point particle in the everyday sense. Given an initial condition and the Hamiltonian describing the system, one can evaluate the exact state of the system at any later time. That's the very definition of determinism. The fact that you cannot possibly know the initial state exactly is a separate matter, which is true of real-world classical problems as well.

* The distribution as a function of angle θ is proportional to sinc²(d₁π sinθ/λ) cos²(d₂π sinθ/λ). It needs to be normalized to unity to be the actual probability distribution. The distance d₁ is the width of the slits, and d₂ is the separation between them. The wavelength λ is given by the de Broglie equation, λ = h/p, where h is Planck's constant and p is the electron's momentum. Naturally, this assumes normal incidence and initial conditions that allow us to treat the incoming electron as a plane wave, which is typical for the double-slit experiment.
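A short numpy sketch of the quoted distribution, with made-up slit dimensions and wavelength, just to show it's a perfectly ordinary, computable function of angle.

```python
import numpy as np

wavelength = 50e-12      # m, made-up de Broglie wavelength of the electron
d1 = 0.2e-6              # m, slit width (example value)
d2 = 1.0e-6              # m, slit separation (example value)

theta = np.linspace(-0.001, 0.001, 2001)    # rad, small angles around normal incidence
x = np.sin(theta) / wavelength

# np.sinc(t) = sin(pi*t)/(pi*t), so this is
# sinc^2(pi*d1*sin(theta)/lambda) * cos^2(pi*d2*sin(theta)/lambda)
pattern = np.sinc(d1 * x)**2 * np.cos(np.pi * d2 * x)**2

dtheta = theta[1] - theta[0]
density = pattern / (pattern.sum() * dtheta)     # normalize to unit area
print(float((density * dtheta).sum()))           # ~1.0: a proper probability distribution
```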