Everything posted by K^2
-
Low-thrust spiral transfer has a very simple analytical solution, albeit the derivation is slightly more involved than Hohmann. dE/dt = v·F always, and E = -mv²/2 for a circular orbit. Taking the derivative of the latter to simplify the former, we have dv/dt = -F/m. The integral on the left side is the difference in velocities between the two orbits. The integral on the right side is the total delta-V. So the delta-V required is literally sqrt(μ/r1) - sqrt(μ/r2). Which, by the way, @PB666, comes out to only sqrt(2) more delta-V for a burn to SOI, unless I missed a factor somewhere in my math.
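For a numerical comparison against Hohmann, here's a quick sketch of both formulas. The LEO-to-GEO radii are just illustrative values I picked, not numbers from the post:

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2

def spiral_dv(mu, r1, r2):
    """Low-thrust spiral between circular orbits: sqrt(mu/r1) - sqrt(mu/r2)."""
    return abs(math.sqrt(mu / r1) - math.sqrt(mu / r2))

def hohmann_dv(mu, r1, r2):
    """Total delta-V for a two-burn Hohmann transfer between circular orbits."""
    a = (r1 + r2) / 2  # semi-major axis of the transfer ellipse
    dv1 = abs(math.sqrt(mu * (2 / r1 - 1 / a)) - math.sqrt(mu / r1))
    dv2 = abs(math.sqrt(mu / r2) - math.sqrt(mu * (2 / r2 - 1 / a)))
    return dv1 + dv2

r_leo, r_geo = 6_771e3, 42_164e3  # ~400 km LEO and GEO radii, m
print(f"spiral:  {spiral_dv(MU_EARTH, r_leo, r_geo):.0f} m/s")
print(f"Hohmann: {hohmann_dv(MU_EARTH, r_leo, r_geo):.0f} m/s")
```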
-
That's not true. Entanglement works with systems sufficiently larger than a neuron. Josephson junctions are a classic example. The distinction between classical and quantum systems is far trickier. But that's kind of irrelevant. Entanglement does not allow for communication. That's one of the core theorems of Quantum Mechanics (the no-communication theorem). There can be any number of oddities of consciousness that have quantum backing, but anybody who tries to explain it via some sort of communication understands neither.
-
The margin isn't as high as you might think. Slowing down the core or turning around a nearly empty booster takes a lot less fuel than it takes to give any appreciable boost to the second stage, which is still fully fueled at that point. Yeah, they can probably get a bit more oomph out of it if they don't plan to recover boosters, but it's probably not going to be worth it for any planned mission. Salvaging the stages is going to be more valuable than a slight increase in payload.
-
Some of them seem to move by pretty quick. Satellites? There's a crap ton of stuff in LEO.
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
No. Because all of space expands, any two points of it can interact over finite time, even if they are receding FTL from each other. That felt counterintuitive to me too, until I checked the math. It does mean gravity gets weaker over distance rather faster than inverse square across great spans, but no interaction is ever quite severed. -
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
That's not how quantization works. If you measure distance traveled during an interval, you can only measure an integer number of some unit, which depends on how you measure it. In the most ideal case, the smallest you can get that unit down to is the Planck length. However, as you increase measurement time, you are still only limited to an integer number. Let's give a concrete example. Say the expectation of the distance traveled in 1 time interval is 2.5 length intervals. That means you will randomly measure 2 or 3 intervals traveled. However, if you now measure over 2 time intervals, you'll consistently be seeing 5 length intervals traveled. Not 4 or 6. Quantization is always the result of interaction between system and observer. Of course, "observer" is a very loose term here. So is what constitutes measurement.

This is a very common misunderstanding, by the way, and it doesn't require one to go into the depths of Quantum Field Theory. The energy levels of the electron in a hydrogen atom are discrete. And I keep seeing a lot of otherwise very educated people imply that the electron instantly jumps from one energy level to another when excited. Nothing could be further from the truth! It takes a finite amount of time for the electron to go from the ground state to an excited one as it absorbs electromagnetic radiation. During that time, the electron probability distribution around the atom gradually changes shape, say, from a 1s orbital to a 2p orbital. What's notable is that halfway through absorption, the distribution is a superposition of 1s and 2p states, while the photon that was being absorbed is at half-amplitude, which corresponds to fifty-fifty odds of it being detectable or not. If I were to generate a very low energy laser beam at the right frequency, I could actually time this and leave the atom in this half-excited state. However, in order for me to see what energy state it ended up in, my best bet is to put a detector next to it and wait for the photon to be emitted again as the atom's state decays back from 2p down to 1s. The emitted photon would also be at half-amplitude, *entangled* with the original photon. Which means that while the odds of me detecting either are fifty-fifty, if I detected one, I would not be able to detect the other. So I either measured the atom to absorb a full photon and re-emit it, or to completely fail to absorb a photon. This is where quantization actually comes in.

Can we detect these in-between states? In some cases, yeah. A photon at 45° polarization will pass through a horizontal filter 50% of the time. But it will pass through a 45° filter 100% of the time. Similarly, I can prepare a pair of two half-excited atoms, and with the right coupling, get them into a state where one is 100% excited and the other is in the ground state. This lets me verify that these intermediate states really do exist. But they cannot be measured directly.

So now we can get back to the Planck length. While it's certainly a thing, and while it certainly puts a bottom limit on measurements we can make, on the grand scale, the universe will always keep working as if that limit isn't there. There is a separate note that can be made here about the ultraviolet catastrophe, but that's also largely a matter of how you integrate things over bulk. There's also a little caveat about crystal lattices, where similar quantizations arise due to periodicity of the lattice, and that has really fun side effects, but it also comes with rather noticeable anisotropy of space, and no such thing has been observed. -
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
^ This. The reason formulae for acceleration get so weird is the relationship between coordinate acceleration and proper acceleration. Proper acceleration is what the ship's crew experiences. Consequently, it's also the same as the rocket's actual thrust divided by its rest mass. Coordinate acceleration is the acceleration relative to an inertial observer, one who is not accelerating along with the rocket. So long as the rocket's moving much slower than c w.r.t. the inertial observer, the coordinate acceleration is almost identical to proper acceleration. But as the rocket gains speed, the two values become quite different. Indeed, coordinate acceleration drops to almost zero as the rocket gets close to c. The general relationship is quite complex, but for the special case of acceleration along the velocity vector, the relationship is a_p = γ³·a_c. Note the cubed power on the Lorentz factor. Because distance traveled by the rocket is the second integral of coordinate acceleration over time, even if proper acceleration is kept constant by maintaining a constant thrust-to-mass ratio, the distance traveled over time ends up being an ugly formula involving hyperbolic functions. I've derived them on paper once, and do not want to repeat the experience. The upshot is that if you can maintain an acceleration of 1 Earth gravity on a torch ship, you can go bloody far within a single lifetime. This relationship is so absurd, in fact, that it eventually catches up with the Rocket Equation, and you can travel anywhere within the visible universe on a finite amount of fuel. It's still a huge amount of matter you'd have to convert into light to travel the distance, but we're talking planetary mass scale. Not mass of the universe times lots, as you might have expected. For a conventional rocket, even a nuclear-pulse powered one, it goes back to mass of the universe times lots, so it's definitely photon drive or bust, but hey, I'll take 'physically possible' here.
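If you'd rather not re-derive the hyperbolic formulas, here's a small sketch of the standard constant-proper-acceleration relations. The 1 g figure and the year marks are just illustrative choices:

```python
import math

C = 299_792_458.0          # speed of light, m/s
G = 9.81                   # 1 Earth gravity, m/s^2
YEAR = 365.25 * 24 * 3600  # seconds per Julian year
LY = C * YEAR              # one light-year in meters

def coord_distance(a, tau):
    """Coordinate distance covered at constant proper acceleration a
    after proper (shipboard) time tau: x = (c^2/a)(cosh(a tau / c) - 1)."""
    return (C**2 / a) * (math.cosh(a * tau / C) - 1)

def coord_time(a, tau):
    """Coordinate (Earth) time elapsed: t = (c/a) sinh(a tau / c)."""
    return (C / a) * math.sinh(a * tau / C)

for years in (1, 5, 10, 20):
    tau = years * YEAR
    print(f"{years:>2} yr shipboard -> {coord_distance(G, tau) / LY:,.1f} ly, "
          f"{coord_time(G, tau) / YEAR:,.1f} yr Earth time")
```

Even at 20 shipboard years, the distances come out in the hundreds of millions of light-years, which is the absurdity mentioned above.
-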
I'll just add a second cache miss. Or a third. Or a fourth. If I know the size of the victim cache, I'll exhaust it preemptively. If not, I'll just have multiple passes of my code before checking the cache. It's practically free for me as an attacker to add extra misses, while adding layers of victim cache is prohibitively expensive after one, or at most two, layers. There's a reason why the victim cache on every proposed architecture is tiny. This is not the solution.
-
The fact that the line is marked dirty is the exact thing the attack uses. It pre-fetches data from its own space that it knows is going to hit the lines of interest, so it knows these cache lines are clean. Then it executes the attack and tries to fetch these lines again, timing the reads. Lines marked dirty during speculative execution will take longer to read. And because the actual data in privileged memory determines which cache lines get hit, it effectively lets you read privileged memory. Albeit, very slowly.

A hashed cache will do nothing but slightly slow down the attack. I don't need a reversible hash. I just need a known hash, or one I can experiment with in advance. Map 256 lines in memory to the corresponding cache lines, and you're golden.

For a lot of tasks, cache performance is CPU performance. If speculative execution can't cache, you might as well not bother with speculative execution. This is true, for example, for every single video game out there. Are you prepared to go ten years back in terms of CPU performance in your video games? Nobody else is either.

AMD's CPUs are not safe from this attack. They were saved by some minor inefficiencies and a different way in which they handle memory pages. I'm not saying this is fundamentally unfixable, but it will take major changes in architecture. Not some quick fixes and patches.
-
This is really not that easy. First of all, an early privilege check is bad. The behavior of a privilege check fail is a segfault. What do you propose a speculative segfault to look like? Should it start invoking signal handlers speculatively too? The solution of just running with it, assuming permissions are green, and segfaulting if you actually take this branch, is the correct one. Anything else leads to even more insanity that potentially has just as many exploits. All at the cost of real performance with no gain to show for it.

Rolling back properly, yes, that's the idea. And all register and memory states are. The problem is, a speculative cache miss is still a cache miss, and results in lines being read. Un-reading lines of memory from cache is not something you can do on any sane system. And you still can't read any data from cache that has been speculatively obtained, unless you have permission to read the data. So even that's not the problem. The problem is that a speculative branch can read a byte of memory, then read from memory at some base offset + multiplier * value read. Now the cache line hit depends on the value, and you can use a timing attack to figure out which cache line it was. There is no caching scheme in a modern CPU that protects you from this while providing half-reasonable cache performance.

There are variations on all of the above that make some CPUs more vulnerable than others. That's why the initial version of the attack didn't work on AMD processors. But there are variations of it that will. You can't fix this without a complete rework of the architecture, and that will come with enormous costs in development time and performance setbacks.

The patches that exist out there do not fix any of this. They simply make it so that the attacker doesn't know where to look. With virtual memory space being so vast that it might as well be infinite, if you give each process a unique page table, the attacker won't be able to figure out which memory to read. They can still read any memory they like, but they don't know the address. The downside, of course, is the switching time, which causes a performance hit. The other part of it, and I might be mis-reading it, so if somebody knows better, please correct me, but the patches that got pushed out only prevent user-space programs from reading kernel-space memory. I think the global tables for all user-space programs are still the same. If so, this still leaves any number of machines out there vulnerable to cross-origin attacks.
-
Gravity at the center is zero, regardless of whether you're in the Newtonian or GR gravity framework. In fact, even if the planet is not spherical, there will be a point of zero gravity. It might not be dead-center for such an object, though. Now, gravity is zero, but stress isn't. The rest of the planet is still trying to collapse, and it's the mechanical stress in the planet's core that's resisting such a collapse. In a planet or a regular star, the stress resisting the collapse is going to be due to electrostatic forces. In a neutron star, due to nuclear forces.
-
No sane simulation is going to jump between Cartesian and Keplerian coordinate systems, because precision losses on each burst of your engines are going to make it suck. This is particularly bad when gravity is still a dominant force and you are integrating it in Cartesian. But more importantly, the typical solution is actually a lot easier. The reason Keplerian Elements show up is because they are constants of motion for a 1/R central potential. However, they aren't the canonical set. If you start with the Hamiltonian, in place of semi-major axis and eccentricity, you'll end up with energy and angular momentum. This is super useful, because these two have very simple equations of motion under perturbation.

dE/dt = v·F
dL/dt = r⨯F

Energy gives you the semi-major axis directly. The magnitude of angular momentum with respect to it gives you eccentricity. The orientation of the angular momentum vector gives you inclination and ascending node. The only things missing are argument of periapsis and anomaly. These have equations of motion as well, but they are a pain to work with, and tend to be numerically unstable anyway. Instead, we keep track of the Cartesian location of the rocket, r. Given current true anomaly and periapsis, we compute the expected position of the rocket and its velocity v. This already takes into account any influence of gravitational forces. We can now apply perturbation force F to compute v', r', E', and L' using your favorite integration method. We include velocity here for numerical stability. For this step, we treat E, L, and r as independent variables. Finally, using these new quantities, we recompute the Keplerian Elements; a sketch of that last conversion is below.

The chief advantage here is that we keep changes due to external forces separate from the influence of gravity. That allows keeping track of a nice, clean orbit without nearly as many errors. The other advantage is that you simply don't have to worry about complex computations. The only things you have to do are convert between anomalies and compute the position of the periapsis from these. That's just as easy to do in 3D as it is in 2D. You get inclination and longitude of ascending node gratis.
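For concreteness, here's a minimal sketch of that last E, L → elements conversion, using specific (per-unit-mass) energy and angular momentum. The function and variable names are my own, and it assumes mu, r, and v are already known:

```python
import numpy as np

def elements_from_energy_momentum(mu, r, v):
    """Recover a, e, i, and the ascending node from specific energy and
    specific angular momentum. r and v are Cartesian numpy arrays."""
    eps = 0.5 * np.dot(v, v) - mu / np.linalg.norm(r)  # specific orbital energy
    h = np.cross(r, v)                                  # specific angular momentum
    h_mag = np.linalg.norm(h)

    a = -mu / (2.0 * eps)                               # semi-major axis
    e = np.sqrt(max(0.0, 1.0 + 2.0 * eps * h_mag**2 / mu**2))  # eccentricity
    inc = np.arccos(h[2] / h_mag)                       # inclination
    n = np.cross([0.0, 0.0, 1.0], h)                    # node vector
    raan = np.arctan2(n[1], n[0])                       # longitude of ascending node
    return a, e, inc, raan

# Quick check: a circular orbit should come out with e ~ 0.
mu = 3.986004418e14
r = np.array([7_000e3, 0.0, 0.0])
v = np.array([0.0, np.sqrt(mu / 7_000e3), 0.0])
print(elements_from_energy_momentum(mu, r, v))
```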
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
No, ascent trajectories are different depending on available thrust. Whether you have insufficient power or you can't throttle back, you'll have to adjust to cut your losses. In both cases, you are likely to start pitching over later than you would at optimal thrust, but the gravity turn will be much more abrupt if you have too much thrust than in the case of insufficient thrust. -
There are definitely relativistic effects there, especially if the black hole rotates, which will affect the strength and orientation of the fields. But it won't extinguish these fields entirely. As for gravity, thinking of it as a field doesn't make much sense outside of classical physics. There are related fields in General Relativity, but none of them are "gravity" in the same sense we think of it in classical physics. For starters, most of these are tensor fields, not vector fields...
-
First, just to be picky, you have a hypothesis, not a theory. A theory requires rigorous testing. But more importantly and on point, your hypothesis is thoroughly disproven by available data. While black holes accounting for dark matter is an idea that has been put forward, the problem is that the only black holes we know of with a chance to contribute enough mass are the super-massive ones at the centers of galaxies. Gravitational lensing experiments, however, show that dark matter is distributed throughout the galaxy. The distribution is believed to be slightly different from that of luminous matter, but mostly along the same lines. Which means that a few super-massive black holes can't account for it. And the regular-sized black holes we've observed can't account for all the necessary mass - they'd have to be far more common than we observe them to be. Finally, there have been proposals that there are tons and tons of tiny, undetectably small black holes buzzing about the galaxy. However, we've not observed any, and there are no mechanisms to explain their formation. So that seems to be out as well. Consequently, black holes do not explain the discrepancy between the luminous and dark matter content of the universe. Your hypothesis is busted by SCIENCE! But it's not a stupid idea, people did think about it; it just happens not to be the case.
-
No. Field lines don't have to follow time-like curves. In fact, magnetism as a property exists specifically because electric and magnetic field lines are space-like. When people say that nothing can escape a black hole, they mean specifically anything restricted to time-like trajectories, because below the event horizon, all time-like curves lead to the singularity. Any current (particle, wave, literal electric current) has to be restricted to time-like curves. (Otherwise, you have time travel.) So no matter or energy can propagate out, other than by Hawking Radiation. But this restriction simply doesn't apply to the fields themselves.
-
A Question About Reaction Control Systems
K^2 replied to Deus Zed Machina's topic in Science & Spaceflight
I've misread the original problem statement. Yeah, if you want to have non-zero acceleration with zero torque, you'd flip the constraints around. I was writing the post under the assumption that he wanted to achieve the opposite: non-zero torque with zero linear acceleration. The approach is identical, though. If there are any additional forces, they simply have to be added to the equalities and consolidated in the constants column along with the desired torque or net force. Although, gravity will generate zero net torque if you use CoG for your datum, which makes the rest a lot easier. Yeah, I did not make an assumption about all forces being co-linear. I'm assuming arbitrary placement of RCS thrusters, each without a gimbal, as is common for reaction control. Perhaps that's overkill for OP. Edit: Maybe I should just write a control systems module for Unity. This seems to be a topic that keeps coming up, and the math in it always takes a while to explain. -
Solution is obvious. We need radio-telescopes positioned at Earth-Sun Trojan Lagrange points to improve resolving power. Is it too late to convince Musk to change F9 Heavy payload?
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
ISP = F / (m' g), where F is thrust, g is standard gravity, and m' is the fuel mass flow rate in kg/s. If you know the gph or Lph fuel consumption of your engine and the density of the fuel, you can compute m'. And yeah, that's definitely 1.5kN. Sorry about the misplaced decimal point. Good catch, @winged
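To make the unit conversions concrete, here's a quick sketch. The 5 gph flow and ~0.72 kg/L avgas density are illustrative assumptions, not numbers from the thread:

```python
G0 = 9.80665      # standard gravity, m/s^2
GAL_TO_L = 3.785  # US gallons to liters

def isp_seconds(thrust_n, fuel_gph, density_kg_per_l):
    """Specific impulse in seconds from thrust and volumetric fuel flow."""
    mdot = fuel_gph * GAL_TO_L * density_kg_per_l / 3600.0  # kg/s
    return thrust_n / (mdot * G0)

# 1.5 kN of thrust on 5 gph of avgas at ~0.72 kg/L:
print(f"{isp_seconds(1500.0, 5.0, 0.72):,.0f} s")  # enormous, as expected for an air-breather
```
-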
A Question About Reaction Control Systems
K^2 replied to Deus Zed Machina's topic in Science & Spaceflight
You guys are talking past each other. Every rigid body ever is simulated as a point mass + moment of inertia tensor. These aren't contradictory statements, because they describe two distinct degrees of freedom. You accumulate all forces as impulse against the point mass and all torque as change to the angular momentum.

@Deus Zed Machina: To solve a problem, the first step is always to figure out the correct statement. Technically, what you have here is a Linear Programming problem. Fortunately, it's a very simple one, so you don't need a full LP solver. We wish to match a given torque subject to the constraints that net force is zero and no thruster exceeds max thrust. These statements become equations.

1) Σ ri ⨯ Fi = τ
2) Σ Fi = 0
3) 0 ≤ Fi ≤ Fi_max

Here, Fi is the thrust of the i-th RCS thruster, with Fi_max being its max value, Fi = Fi ui is the corresponding thrust vector (ui being the unit vector in the direction of thrust), ri is the position of the i-th thruster with respect to CoM, and τ is the desired torque.

How do you solve that? Well, the thing that saves you from going full LP here is that the first two equations are just linear equations with Fi as your variables, and the saturation can be treated as happening one thruster at a time. Let's forget about saturation for a second and pretend that there is no maximum thrust. We just write down the first two equations as a matrix equation.

T = RF

Here, T is a column matrix containing the x, y, z components of τ and 0. F is another column matrix containing F1, F2, F3... The matrix R is the one you have to build from the vectors ri and ui to match equations 1) and 2). You might have to write out these sums explicitly to see how the components fall into this matrix; then you can write a program to generate that matrix easily. If you don't know how to write down a system of linear equations as a matrix and solve it that way, look it up. You won't get anywhere in optimization without understanding that. Write some code to practice if you have to.

Now, you might notice that we have more unknowns than we have equations. Matrix R definitely isn't a square matrix. So we can't invert it. Fortunately, we don't have to. A Moore-Penrose pseudoinverse matrix can be constructed that solves this particular class of problems in a least-squares manner. If that's a bunch of babble to you, don't worry about it. Here's the formula to use.

F = Rᵀ(RRᵀ)⁻¹T

Here, Rᵀ is the transpose of R. So the entire numerical challenge is reduced to finding the inverse of RRᵀ. Note that this is a square matrix, with the number of rows and columns matching the number of constraints (4x4 in our setup). How do you find an inverse of a square matrix? Same way you do on paper. Surprisingly, perhaps, Gaussian Elimination is the best algorithm for it. Though, look up how to use Gauss-Jordan with pivots. It will greatly improve precision, and isn't much overhead. You can also probably just find a library that already does this part for you. And since RRᵀ is guaranteed to be 4x4 here, you can instead find an explicit formula for a 4x4 inverse matrix. It's not pretty, but it might be worth it to just hard-code that.

Finally, you have to deal with saturation. After you solve for Fi, odds are, you'll find some that are either less than 0 or greater than Fi_max. There is a "correct" way of dealing with this in LP, but again, we're using the simplicity of the problem. Take just one of these, the one which is outside of the bounds by the greatest amount. Set it to the corresponding boundary. (So if F2 < 0, say F2 = 0.) And add this constraint as a new row to matrix R. Only add one row per attempt. Once you've expanded R with this extra row, run the whole thing again to generate a new list of Fi. Rinse and repeat. Edit: Adding a row here will make RRᵀ a 5x5 matrix, etc. There is nothing wrong with that, but then you'll have to go the Gauss-Jordan way. Alternatively, you can treat the Fi you are setting to a constant as a constant in 1) and 2) and re-compute R from that. Then RRᵀ is still 4x4, but matrix F will pick up some new terms. Both are totally acceptable ways, just depending on whether you prefer to have a hard-coded 4x4 inverse subroutine, or if you want to write a generic Gauss-Jordan.

After enough iterations, one of two things will happen. Either you'll find a list of Fi that satisfies your constraints, or you'll get to the point where R is already a square matrix (Edit: if you chose to go with R that always has 4 rows, you can just count the constraints you've added; the condition you're looking for is the same, number of equations = number of variables), and the solution is still saturated. In that latter case, it means that the desired amount of torque cannot be generated with the thrusters you have. Your best bet is probably to take the Fi you got as a result and just return these as the answer. It's not the right answer, but it's the "best" answer you can come up with, to match the desired torque as closely as possible given the thrusters you have.
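Since this topic does keep coming up, here's a small numpy sketch of the whole recipe. Two deviations from the post above, named plainly: I use the full vector force balance (three zero rows, so six constraint rows total and a 6x6 RRᵀ), and I lean on np.linalg.pinv instead of a hand-rolled Gauss-Jordan. Saturation is handled by the "treat the clamped thruster as a constant" variant:

```python
import numpy as np

def solve_rcs(positions, directions, torque, f_max):
    """Find thruster outputs producing the desired torque with zero net force.

    positions:  (n, 3) thruster positions relative to CoM
    directions: (n, 3) unit thrust directions
    torque:     (3,) desired torque
    f_max:      (n,) per-thruster maximum thrust
    """
    r = np.asarray(positions, float)
    u = np.asarray(directions, float)
    n = len(r)
    R = np.vstack([np.cross(r, u).T, u.T])     # 6 x n: torque rows + force rows
    T = np.concatenate([torque, np.zeros(3)])  # desired torque, zero net force

    clamped = {}  # thruster index -> value pinned at a bound
    F = np.zeros(n)
    for _ in range(n):
        free = [i for i in range(n) if i not in clamped]
        if not free:
            break
        # Move clamped thrusters over to the constants side of the equations.
        rhs = T - sum(R[:, i] * v for i, v in clamped.items())
        Rf = R[:, free]
        # Minimum-norm solution F = R^T (R R^T)^-1 T; pinv covers rank deficiency.
        F = np.zeros(n)
        F[free] = Rf.T @ np.linalg.pinv(Rf @ Rf.T) @ rhs
        for i, v in clamped.items():
            F[i] = v

        # Clamp the single worst out-of-bounds thruster, then re-solve.
        low, high = -F, F - f_max
        worst = int(np.argmax(np.maximum(low, high)))
        if low[worst] <= 0 and high[worst] <= 0:
            return F  # all constraints satisfied
        clamped[worst] = 0.0 if low[worst] > high[worst] else f_max[worst]
    return F  # best effort: requested torque isn't fully achievable
```
-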
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
They must have squeezed more efficiency out of the plane, engine, prop, or some combination. If you design a plane for slower flight, it can climb just as quickly at lower fuel consumption. I'll try to find some more specific info. The numbers you're giving would imply 3x the energy efficiency of a Cessna 152, which isn't all that crazy.

The exact moment of solstice is the same around the world. It's the moment when the Sun and Earth's axis share a plane. It's winter solstice here and summer solstice in the southern hemisphere, or vice versa, but it's the same exact moment. All of the differences can only come from the offset of local time from universal time. -
That part actually isn't a problem. If you let a length of rope out of your ship, it will either hang straight down or straight up due to tidal forces. It might take a while to settle, since these forces are pretty small. But so is drag at the edge of the atmosphere.
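For a sense of scale of "pretty small": the standard gravity-gradient estimate gives about 3μΔr/a³ of restoring acceleration per meter of radial separation. With made-up LEO numbers:

```python
MU_EARTH = 3.986004418e14  # m^3/s^2
a = 6_771e3                # ~400 km circular orbit radius, m

# Tidal (gravity-gradient) acceleration per meter of radial separation.
tidal_per_meter = 3 * MU_EARTH / a**3
print(f"{tidal_per_meter:.2e} m/s^2 per meter")  # ~4e-6 m/s^2
```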
-
Their projected thrust was about five times higher.
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
Do you have a link with some additional information, perhaps? Because without details I'm just speculating. That said, the aforementioned C152 takes about 5 gallons of 100LL for an hour of flight. A little over 6 if you are climbing, and it will climb to 20k feet in 30 minutes. That is not a plane designed for efficiency. It's mostly designed for ease of operation and fun of flying. If you build something with larger wings out of better materials, you'll have no trouble cutting it to 4 gallons per hour without sacrificing much in terms of performance. So we're looking at 2 gallons, or ~6 kg of kerosene, for that climb. At 30x that for flying on hydrazine, it's 180 kg of fuel. That's a bunch, but you can get quite a bit more into a two-seater if you really need to. And a good chunk of that will be compensated by a lighter engine, I bet. So again, pretty terrible, but not so much as to be entirely impractical. Although, I bet the conclusion was, "Well, this works, but where are we going to fly it?" Anywhere in the Sol system you could, you'd definitely want a longer range. Moreover, in the real world, modern batteries can actually do considerably better than this, which is why I don't see us coming back to the concept.
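Sanity-checking that arithmetic (the ~0.8 kg/L kerosene density is a round-number assumption):

```python
GAL_TO_L = 3.785         # US gallons to liters
KEROSENE_KG_PER_L = 0.8  # approximate

climb_fuel_kg = 2 * GAL_TO_L * KEROSENE_KG_PER_L  # ~6 kg for the climb
hydrazine_kg = 30 * climb_fuel_kg                 # at ~30x the consumption
print(f"{climb_fuel_kg:.1f} kg kerosene -> {hydrazine_kg:.0f} kg hydrazine")
```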