Everything posted by K^2
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
It will not. If you learn the physics of how it "looks" in a specific direction, you will know that it picks up the waves coming in from all directions, but only the ones arriving from a specific bearing interfere constructively, producing a signal from that direction alone. This works if your sources are not coherent, which is the case for any natural source or for uncoordinated jammers. If you have multiple synchronized, coherent sources, it no longer does, and I can produce a false source at a bearing of my choosing relative to your passive sonar, at a location where no real source exists. Again, wave optics. My background is ten years of quantum physics. That is explicitly the physics and mathematics of wave propagation. I don't know the exact engineering that goes into acoustic pickups on a sub, because I'm not a naval engineer, and I certainly don't know anything about operating procedures. But I don't need to. I know how to construct an incoming sound wave that is indistinguishable from a wave reaching the sonar from a specified direction where no real source is located. And if the incoming waveform is identical, there are no means to distinguish the source. You cannot extract information that isn't there, no matter how sophisticated the hardware. It's not magic, after all. I can write out the equations, including a mathematical model of a perfect passive sonar with the ability to select sources by bearing, and show that even this perfect, ideal, mathematically precise sonar will be fooled, to say nothing of any actual physical implementation, which will do worse. I just don't know if it makes any sense for me to write out equations that are going to involve Fourier transforms and convolutions, or if it's going to bounce right off.
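Short of the full equations, the argument fits in a few lines of numerical wave optics. A minimal sketch, assuming single-frequency sound; every position, the tone, and the array geometry are made-up illustration values. It solves for the complex amplitudes of a handful of coherent emitters so that the field they produce at the sonar's hydrophones exactly matches a plane wave arriving from a phantom bearing.

```python
import numpy as np

c = 1500.0               # speed of sound in water, m/s
f = 200.0                # jamming tone, Hz
k = 2 * np.pi * f / c    # wavenumber

# The passive sonar we want to fool: a line array of 8 hydrophones.
hydrophones = np.array([[0.0, 3.0 * i] for i in range(8)])

# Coherent emitters at arbitrary positions unrelated to the phantom bearing.
emitters = np.array([[900.0, -40.0], [950.0, 60.0], [870.0, 150.0],
                     [1000.0, 10.0], [920.0, 220.0], [980.0, -120.0],
                     [890.0, 90.0], [960.0, 180.0]])

# Field a genuine plane wave from bearing theta would produce at the array.
theta = np.deg2rad(30.0)
direction = np.array([np.cos(theta), np.sin(theta)])
target = np.exp(-1j * k * hydrophones @ direction)

# Spherical-spreading propagator from each emitter to each hydrophone.
dist = np.linalg.norm(hydrophones[:, None, :] - emitters[None, :, :], axis=2)
G = np.exp(1j * k * dist) / dist

# Complex amplitudes (gains and phases) that reproduce the phantom wavefront.
amps, *_ = np.linalg.lstsq(G, target, rcond=None)
print("residual:", np.linalg.norm(G @ amps - target))  # ~0: indistinguishable
```

At this frequency the field at the array is then numerically identical to a plane wave from the chosen bearing, so no amount of processing can tell the difference. A broadband jammer would repeat the same solve per frequency bin of a Fourier transform, which is where the convolutions come in.
-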
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
A phased array produces a hologram. It's omnidirectional. Just because the typical usage is a steerable beam, you shouldn't think that's the only thing you can do with it. And while angular resolution depends on how many grid points you have, a carrier group has enough coverage to create false sources of interference that do not correspond to the locations of real ships. It's fairly straightforward to decompose the desired "image" seen from the perspective of any sub into individual sources. If you don't understand how that works, I strongly encourage you to learn a bit of wave optics before you continue arguing the point. Especially if you plan to start with "Um, no." The objective isn't to mask your fleet from passive acoustics, so discussion of how involved that is is beside the point. We're talking about jamming: throwing out so much acoustic interference that passive and active sonar are absolutely useless. You can have a human or a computer listening, and they'll get bugger all useful either way. Nor are we trying to hide the general location of a carrier group. If your navy doesn't know where the enemy's carriers are at all times, you've already lost the war. The only downside of jamming is giving away the location of the jamming source, making the jammer itself an easy target. An array of jammers can produce false sources, preventing this from happening without the enemy sub getting close enough to be immediately detected and engaged. If a torpedo aimed at a source of interference happens to pass close enough to the actual jammer ship, yes, it will be able to hit it. If it doesn't, it will chase the false signal and most likely not hit anything. Alternatively, you can engage the torpedo's active sonar early and hope to pass close enough to something for the jamming not to matter. Then you'll hit a random ship, but you give the fleet advance warning to use countermeasures. To summarize: without jammers, a sub can target a specific ship, such as the carrier proper, on passives, and have the torpedo engage active sonar when it's too late for the target to deploy countermeasures. With uncoordinated jammers, each jammer makes an excellent target and can be picked off one by one. With networked jammers, you lob a torpedo and hope it passes close enough to something to target it, which will be a random ship in the fleet. Does it provide perfect protection? No. It's a carrier group, and some of the ships are basically there to take hits, because a target that big and that important is not going to stay hidden in the modern world. Your job is to make the hunter sub's job as tough as possible, to give your own hunters an advantage, or to force the enemy sub to reveal itself early. Networked jamming can potentially achieve all three, while not tying your hands on pretty much every other method as well, including decoys. It is a pure advantage over not doing this.
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
If you synchronize the jammers, say via radio, you can do something similar to a phased array and create a completely false apparent location for the source. If you have a carrier group on the move, for example, and you really don't want the subs targeting the carrier, but without painting a giant bullseye on any other ship either, you can sync up jammers across your support fleet to create an apparent false source of interference. It will probably read as obviously false, too diffuse and spread out to be a single jammer on a single ship, but it won't indicate the actual location of any particular ship in the network either, until the sub gets really close. The beauty of the approach is that a single sub won't have enough information to do the reverse mapping and resolve it into true sources from a safe distance, and establishing communication between multiple subs, or even with any one sub, is notoriously difficult. (Otherwise, subs would just use a satellite feed to target surface ships instead of sonar. It's not like carrier groups are hard to find.) This is computationally expensive, so it's hard to picture it on hardware more than 30 years old, and 20 years is probably more realistic for when it became completely viable. At the same time, the principles are solid and not exactly new, so if it's not a feature of modern support ships by now, somebody in the navy isn't doing their job. I haven't heard anyone talking about it, though, so grain of salt, but I'm inclined to suspect that anything like this would be heavily classified.
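In the time domain, the simplest version of the trick is just Huygens' principle with delays. A back-of-envelope sketch under stated assumptions: the positions are illustrative, and a shared waveform is assumed to be distributed over the radio link. Each jammer delays its copy in proportion to its distance from the phantom point, so the combined wavefront expands as if it had been emitted there.

```python
import numpy as np

c = 1500.0   # speed of sound in seawater, m/s (approximate)

# Escort-ship jammers and the phantom source we want to fake, in meters.
jammers = np.array([[0.0, 0.0], [400.0, 250.0], [-300.0, 500.0], [150.0, 800.0]])
phantom = np.array([2000.0, 1200.0])

# A wave expanding from the phantom would sweep past each jammer at a time
# proportional to its range.  Re-emitting the shared waveform on that schedule
# (nearest-to-phantom jammer first) reconstructs the phantom's wavefront.
ranges = np.linalg.norm(jammers - phantom, axis=1)
delays = (ranges - ranges.min()) / c   # seconds

for (x, y), d in zip(jammers, delays):
    print(f"jammer at ({x:+7.1f}, {y:+7.1f}) m -> emit delay {d * 1000:6.2f} ms")
```

With only a handful of emitters the reconstruction is crude, which is exactly the "too diffuse to be a single jammer" signature described above. The full phased-array treatment shapes per-frequency amplitudes as well, which is where the computational cost comes from.
-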
Tsunami about to smash an airport... what would you do?
K^2 replied to AeroGav's topic in Science & Spaceflight
If there are GA aircraft, I'd try to make it to one of those. The door and master switch can usually be forced even if you don't have the key, at least on older models, and unlike a jet, a prop will start on demand, so I can be wheels-off in minutes. I have enough experience with both trikes and tail-draggers to know that I can take off on whatever's available, and I'd have better than even odds of walking away from the landing, even if I have to put an unfamiliar aircraft down in a field. Of course, a lot of larger airports, even if they have some GA aircraft, keep them quite far from the passenger terminals, so there's a big question mark over that option even being available. Outside of that, the idea of getting a raft from one of the regional jets, which have them stored separately rather than built into the doors, is tempting. But the odds of holding onto a raft seem low; you're more likely to be trampled by a panicked crowd when people fight over it. A vest does you little good: if you stay in the water, hypothermia is what will get you. Vests are only useful if there will be enough rafts to pick people up shortly after, and the odds of that are low. So you want to be high up and far from crowds. The roof of the terminal seems like the place to be, and I'd probably try to find an access point from outside, since everybody's going to be filling up the stairs and so on. Getting into a jet and hoping it survives the surge, giving you the opportunity to get into a life raft afterwards, feels like the last resort. I'd rather take my chances with the terminal building.
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
Altitude plays a huge role in the greenhouse effect. If greenhouse gases were confined to the bottom 1 km, the impact would be rather small. They'd mostly work like clouds already do: make IR stick around longer, which would reduce day-night variation without impacting the mean temperature all that much. What you have to keep in mind is that the atmosphere works like a refrigerant. It warms up when compressed and cools when it expands. Since convection currents circulate the air, even a perfectly transparent atmosphere would establish a very predictable temperature gradient from the ground to the upper atmosphere, getting colder and colder as you go up. This is where things start getting dramatic. First, picture a vacuum world. Let's put a shell around it that is opaque to IR and transparent to visible light. For simplicity, let's say the albedo of the planet itself is constant across the band and << 1. All that happens is that the shell warms up to match the ground temperature, at which point it radiates as much heat as the planet did before. The surface temperature goes up by a few degrees to compensate for a bit more IR bouncing up and down. If the shell is thin, there's no significant change on the planet. If it's thick, it can also act like clouds and reduce day-night temperature fluctuations. Now we throw in an atmosphere. At airliner altitudes, the temperature is already at -50 °C, and there is still quite a bit of atmosphere above. Let's put that shell there. Black body radiation has a T^4 dependence. At -50 °C, the shell isn't dumping as much heat as the planet receives from the Sun, so it has to warm up to the same temperature it would have had in the vacuum case before thermal equilibrium is reached. And that 50 °C difference with the ground? It doesn't go away. The atmosphere keeps pumping heat between layers until the average temperature on the ground is 50 °C higher. 0_0 This is why Venus is so %$#@! hot. No amount of "it works like a greenhouse, keeping IR in" can explain that temperature difference. You have to take the atmosphere into account, and Venus has a very thick, very active atmosphere. All of the heat exchange with the Sun and space happens in the upper layers, and the thick Venusian atmosphere keeps pumping heat down to the ground. This is why everybody's freaking out a little bit. Increasing IR absorption just a little at high altitudes will yield a significant increase in ground temperature. That will raise cloud layers, pushing moisture up, further increasing ground temperatures. That will release various gases from the polar ice caps, making the atmosphere thicker and adding greenhouse gases, further increasing the temperature... And we have no idea where the tipping point is. I mean, the planet has definitely been hotter and had way more CO2, and it has recovered, so I don't think the Venusian scenario is likely. But if recovery takes ten million years, it won't matter much to us. Of course, panicking is pointless as well. Knowing how badly we're doing would be a good start. The system is so complex and has so many moving parts that I still haven't seen a single model that puts either "it's already too late, and it's a runaway process" or "it'll quickly recover if we cut back" outside of the error bars. We definitely need more research.
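The mechanism fits in a two-line energy balance. A rough sketch with textbook values (real radiative transfer is far messier, so treat the numbers as illustrative): the planet radiates to space from the altitude where the atmosphere becomes IR-transparent, and the lapse rate then pins the ground temperature below that layer.

```python
sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0         # solar constant at Earth, W/m^2
albedo = 0.3       # Earth's Bond albedo, roughly

# Effective emission temperature, set purely by the energy budget.
absorbed = S * (1 - albedo) / 4.0     # incoming flux averaged over the sphere
T_emit = (absorbed / sigma) ** 0.25   # about 255 K, i.e. around -18 °C

# The adiabatic gradient then dictates the ground temperature underneath.
lapse = 6.5e-3                        # K per meter, typical tropospheric value
for z_emit in (5000.0, 6000.0):       # emission altitude, m
    print(f"emission at {z_emit / 1000:.0f} km -> ground ~ {T_emit + lapse * z_emit:.0f} K")
```

Adding IR absorbers at altitude raises the emission level by some Δz, and the ground warms by lapse rate × Δz while the radiated power stays the same. That is the "pumping heat between layers" effect in numbers, and why Venus's enormously deep atmosphere gives an enormously hot surface.
-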
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
Water is a "better" greenhouse gas (I hate that the name 'greenhouse' stuck, because it is a poor analogy), but we are saturated on the impact of water vapor, so more of it makes very little difference. CO2 happens to be a good IR absorber in bands where water isn't. It neatly plugs the holes where IR currently escapes, and, crucially, it does so at high altitudes. That's why small changes in CO2 can make so much difference. Desert temperature variations have a lot more to do with a) it almost always being clear weather, the impact of which @Green Baron covered, and b) the lack of bodies of water to regulate temperature. Humidity of the air has almost zero direct impact. Further, the greenhouse effect is more about shifting the mean by moving the thermodynamic equilibrium higher up, into colder air, than it is about day-night variations. Clouds have more impact on the latter, because they scatter both visible light and IR.
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
Rule of thumb: rounded for subsonic, pointy for supersonic, and the speed of sound in water is much, much higher than in air. But as people said, it depends on what you are optimizing for. The only torpedo in operation that I'm aware of that benefits from a pointy nose is the VA-111 Shkval, and even that is because it generates a gas envelope to travel through, which has some characteristics similar to a supersonic shock. That does allow it to be the fastest underwater vehicle (unless one of the other supercavitating prototypes has beaten it recently), but one may argue that moving underwater while not moving through water is almost cheating.
-
Help with the curvature of space-time meaning and equations.
K^2 replied to steuben's topic in Science & Spaceflight
In general, the curvature of space-time is a 4-dimensional rank-4 tensor called the Riemann Tensor, which is an exceptionally ugly object. (It takes 4^4 = 256 individual real numbers to describe in a particular coordinate system, though symmetries cut that down to 20 independent ones.) Fortunately, there are some scalar quantities that encode useful information about curvature. The simplest of these is the Ricci Scalar, which is the full contraction (trace) of the Ricci Tensor. Problem is, vacuum is Ricci flat, R = 0. In KSP, a value above zero would indicate that you are experiencing aero/lithobraking. Fortunately, there are good reasons to try to gauge how curved space-time is in places where it is Ricci flat. In particular, the Kretschmann Scalar tells you how "bent" the space-time is. In general, computing it still requires you to evaluate the full Riemann Tensor and then take its full inner product with itself; for a general space-time configuration, a very complex task. Fortunately, KSP simplifies things a lot. Gravity within a particular sphere of influence is perfectly mapped to the Schwarzschild Metric. Getting from there to the Kretschmann Scalar is a lot of work, but given how frequently the two concepts come up in GR, no surprise, it's been done already. The general solution is K = 12 r_s^2 / r^6. Here, r_s = 2μ/c^2 is the Schwarzschild Radius, and r is the distance from the center of the body that produces gravity in this sphere of influence. In other words, altitude + body radius. So in principle, if you want to limit how fast your FTL system can operate, or whether it can operate at all, based on "curvature of space-time", using the Kretschmann Scalar is a decent option. Just keep that sixth power in the denominator in mind. The value of K will drop really fast as you move away from the relevant body. Does this make any physical sense? Some, definitely. Alcubierre is designed to operate at K = 0 (Minkowski, to be precise). And as that value increases, weird things do start happening. Can some variant of warp work with K > 0 and fail past a certain limit? It seems plausible. There are angular-momentum conservation problems with FTL in the vicinity of a source of gravity, which suggests that the drive will be radiating and losing energy that way. So there will probably be an energy drain as you move near a planet/star, and it will probably increase with K. Although, probably in some very odd fashion. I wish I could give you more concrete answers on that bit, but this is so very much not my field of study, and all the papers I've seen on warp avoid the topic of warp near planetary bodies like the plague. No wonder, really, as the math goes from very difficult to absolutely nightmarish. The above is the best I can come up with based on simplified models, using Alcubierre as a base and trying to account for the conservation laws enforced by relevant symmetries. Hope this helps a little.
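To give a feel for the scale, here's a quick numeric check of that formula for Earth-like values; the constants are standard, and the rest is just plugging in.

```python
# Kretschmann scalar K = 12 * r_s^2 / r^6 outside a spherical body.
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def kretschmann(mass_kg: float, r_m: float) -> float:
    """Curvature invariant at distance r from the center of a Schwarzschild body."""
    r_s = 2 * G * mass_kg / c**2   # Schwarzschild radius
    return 12 * r_s**2 / r_m**6

M_earth, R_earth = 5.972e24, 6.371e6
for label, r in [("surface", R_earth),
                 ("400 km orbit", R_earth + 4.0e5),
                 ("Moon's distance", 3.844e8)]:
    print(f"{label:>15}: K = {kretschmann(M_earth, r):.3e} m^-4")
```

The r^-6 dependence does exactly what the post warns about: between Earth's surface and the Moon's distance, a factor of roughly 60 in r, K drops by about ten orders of magnitude, so any curvature-based FTL threshold effectively becomes a sphere around each body.
-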
None that I know of. Everything we've used in our experiments was either written from scratch in C/C++ or in something like Mathematica/Matlab/Octave. You could probably get good results with something like SciKits in Python, but I've never used these in this kind of context. Regardless, it's unlikely you'll find something that just works out of the box. You'll need to be comfortable with programming and have some basic understanding of simulation, numerical integration, and various optimization approaches. This is the kind of problem that gives the phrase "this isn't rocket science" its meaning. This is definitely rocket science. Of course, not doing it to aerospace precision standards makes it a bit easier, but only so much. What we've seen is that in an unconstrained SSTO problem, the rocket "likes" to lift off at a TWR of about 2, throttle up to 3 as it begins the gravity turn, then cut back almost to zero and coast until close to the parking orbit, at which point it throttles up to 100% until circularized. It's not that different from what people usually do in KSP, but the naive expectation would have been more consistent use of the engines during ascent. Adding dynamic pressure limits, staging, and available throttle bands for each engine makes the solution a lot more exciting, of course.
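The simulation half of that pipeline can be surprisingly small. A bare-bones sketch of the kind of point-mass integrator you'd wrap an optimizer around; the vehicle numbers are made up, gravity is constant, and there's no drag, so this is a toy, not a trajectory tool:

```python
import numpy as np

# Toy 2D point-mass rocket.  All vehicle numbers are illustrative.
g0, Isp, thrust_max = 9.81, 320.0, 2.4e5   # m/s^2, s, N
m_dry, m_fuel = 6.0e3, 14.0e3              # kg

def simulate(throttle_fn, pitch_fn, dt=0.5, t_max=180.0):
    """Integrate an ascent.  throttle_fn: t -> [0, 1]; pitch_fn: t -> radians."""
    m = m_dry + m_fuel
    r = np.zeros(2)   # downrange, altitude
    v = np.zeros(2)
    for t in np.arange(0.0, t_max, dt):
        throttle = throttle_fn(t) if m > m_dry else 0.0
        pitch = pitch_fn(t)
        a = throttle * thrust_max / m * np.array([np.cos(pitch), np.sin(pitch)])
        a[1] -= g0                         # flat-planet gravity
        v += a * dt
        r += v * dt
        m = max(m_dry, m - throttle * thrust_max / (Isp * g0) * dt)
    return r, v, m

# Sanity check: a vertical full-throttle burn.
r, v, m = simulate(lambda t: 1.0, lambda t: np.pi / 2)
print(f"altitude {r[1] / 1000:.1f} km, speed {np.linalg.norm(v):.0f} m/s, mass {m:.0f} kg")
```

Everything interesting then lives in the throttle and pitch schedules you hand to simulate(), which is what the optimizer gets to vary.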
-
This will generally put you in the ballpark. The exact optimum has a pretty weird thrust and pitch profile, which depends on a whole bunch of factors, including how your engine's ISP varies with altitude and when your staging kicks in. The only way I know to get the correct profile for a specific rocket is to have a good simulation and run an optimization on it. There was a thread here a very long time ago where several optimization strategies were tried. Personally, I used a genetic algorithm, which gave decent results, but parameterizing the profile with splines and running a generic multivariate optimization seemed to work better. The losses of the "good rule of thumb" ascent vs. the optimized solution are pretty minimal, though, so in KSP I wouldn't bother. In the real world, there could be any number of more important constraints, like reducing max dynamic pressure, giving better abort opportunities, etc. So you'd probably design around those, rather than chase fuel-optimal solutions.
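For concreteness, a sketch of the splines-plus-generic-optimizer version, reusing the toy simulate() from the sketch above. The objective function and every number in it are stand-ins; a genetic algorithm would slot into the same place as scipy's Nelder-Mead here.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import minimize

knots = np.linspace(0.0, 180.0, 6)   # spline control-point times, s

def ascent_score(params):
    """Penalize missing a 70 km apoapsis; reward horizontal speed and mass."""
    throttle = CubicSpline(knots, np.clip(params[:6], 0.0, 1.0))
    pitch = CubicSpline(knots, params[6:])
    r, v, m = simulate(lambda t: float(np.clip(throttle(t), 0.0, 1.0)),
                       lambda t: float(pitch(t)))
    return (r[1] - 70e3) ** 2 / 1e6 - v[0] - 0.1 * m

x0 = np.concatenate([np.full(6, 0.8),                   # initial throttle guess
                     np.linspace(np.pi / 2, 0.2, 6)])   # gravity-turn-ish pitch
result = minimize(ascent_score, x0, method="Nelder-Mead",
                  options={"maxiter": 2000, "xatol": 1e-3})
print("best score:", result.fun)
```

Real versions add dynamic-pressure and heating constraints as penalty terms, which is where the profiles stop looking clean.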
-
Needed a brick recently. Got a red clay brick. Cost me $0.60. Felt weird going through checkout with a single brick. I think cinder blocks were a bit pricier, but they're bigger, too. Yeah, brick/concrete construction is not cheap.
-
I don't think saying "all cryptography" would be a stretch here, though you'd have to stretch the definition of RNG a little. On the topic of procedural generation, it's important to distinguish between stateful and stateless generation. Minecraft is stateless. The map consists of just the seed until you start visiting chunks and messing with them. Once you approach a chunk, that chunk is generated and recorded, and from there on it lives on disk. Dwarf Fortress is stateful. The entire map has to be generated upfront, with geological processes simulated, weathering applied, etc. Because of this, the whole map has to be in memory during generation and gets offloaded to disk once created. The upside is that DF won't have conflicting features that just happened to cross, like you see in MC.
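The stateless scheme reduces to deriving each chunk's RNG from (seed, coordinates). A minimal sketch; the hashing choice and toy heightmap are mine, not any particular game's:

```python
import hashlib
import random

def chunk_rng(world_seed: int, cx: int, cz: int) -> random.Random:
    """Deterministic per-chunk RNG: same seed + coordinates, same chunk. Always."""
    digest = hashlib.sha256(f"{world_seed}:{cx}:{cz}".encode()).digest()
    return random.Random(int.from_bytes(digest[:8], "big"))

def generate_chunk(world_seed: int, cx: int, cz: int) -> list[int]:
    rng = chunk_rng(world_seed, cx, cz)
    return [rng.randint(60, 70) for _ in range(16)]   # toy heightmap strip

# A chunk regenerated later, or on another machine, is bit-identical,
# so nothing needs to be stored until the player modifies it.
assert generate_chunk(42, 3, -7) == generate_chunk(42, 3, -7)
```

The stateful approach can't be expressed this way: a DF-style world runs a global simulation whose output at any one tile depends on every other tile, so there's nothing local to rederive from the seed alone.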
-
I was recently looking into making some reasonably historically accurate sealing wax (don't ask), and vermillion came up as a historical red dye used for that purpose. Vermillion powder is made by crushing naturally occurring cinnabar crystals, so while I don't know how far back its use goes, grinding a crystal into powder doesn't seem terribly high tech. And the color is a quite striking red, which won't fade over time except through accumulation of dirt over it. Access to particular types of mineral dyes is likely to be very regional, though. I don't know nearly enough about geology to even guess which types might be widespread and which are going to be very rare.
-
You underestimate the power of idiocy in the context of an absolute bureaucracy with no real oversight. If somebody decides it's worth covering up because it makes them look bad, it won't matter that a coverup is destructive to the future of the company or the country. Only if the blame can safely be shifted onto someone else will they do a proper investigation. Which is a possibility, of course. But whatever findings Roskosmos publishes, take them with a healthy dose of skepticism. I do believe they'll try to prevent this from happening in the future, though. Nobody wants anything like it happening again. I just don't have high expectations for it being handled transparently.
-
Which it might have gone through, because it would have passed that test. The hole was sealed. Otherwise, they would have detected the leak much, much sooner. The makeshift seal simply didn't hold in space.
-
I'm not sure how much I'd trust the report. Although, if Roskosmos is genuinely concerned and undertakes an open investigation together with NASA, that might change my mind. In either case, it's not really something we'd be able to do anything about, like you said, so "wait and see" is still the best option.
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
Still a side effect of the Venturi effect. The pressure inside the part is lower, due to the moving air, so external pressure squeezes on the outside, pushing it like a wedge towards the hose. There's ram pressure from the incoming air as well, of course, but whether one or the other wins out is a matter of geometry and flow speeds. In this particular case, the ram pressure is lower. I don't have a good demonstration with that exact geometry, but I believe this demo shows the identical effect with a different geometry.
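The competition is just Bernoulli suction versus ram pressure, and since both scale with the square of a speed, geometry decides by setting the two speeds. A toy comparison with made-up numbers:

```python
rho = 1.225       # air density, kg/m^3
v_inside = 40.0   # flow speed through the narrow passage, m/s (illustrative)
v_ram = 10.0      # speed of the incoming air hitting the part, m/s (illustrative)

suction = 0.5 * rho * v_inside**2   # static-pressure deficit inside (Bernoulli)
ram = 0.5 * rho * v_ram**2          # dynamic pressure pushing the part away

print(f"suction {suction:.0f} Pa vs ram {ram:.0f} Pa:",
      "squeezed toward the hose" if suction > ram else "pushed away")
```

Narrow the passage (raising v_inside) or slow the incoming stream, and the wedge gets pulled in; reverse the ratio, and it gets blown off.
-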
@KSK made a few good points. I'd add that kerosene has significantly lower viscosity than diesel, which makes a difference when you consider how much energy goes into turbopump operation, especially in rockets that pump fuel through channels in the nozzle bell for cooling. Diesel also tends to produce a lot more soot in combustion, but I don't know if that's a factor for rockets. Obviously, carbon buildup on the nozzle is bad, since it can cause instabilities in the flow, which will reduce efficiency at best and lead to destructive vibrations at worst. But whether that can actually happen at the combustion chamber temperatures typical of kerolox rockets, I have no idea.
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
@Cunjo Carl That is not a constructive attitude, but it's your call. A statement was made concerning neutrinos that has no basis in fact. You helped reinforce that false notion, and I wanted your help in rectifying it. That was clearly a mistake, because the net result is that you've helped an anti-intellectual remain smugly ignorant, and he will, no doubt, continue spreading his nonsense in other threads. Personally, knowledge is always more important to me than pride. As much of the latter as I have, I'll gladly swallow it to get the facts straight. And honestly, I don't know why we even have a Q&A thread in the science subforum if that's not the attitude people are willing to take. If we aren't prepared to defend a point because we are afraid of being proven wrong, then we might as well use a magic 8-ball to answer the questions.
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
I've had to fix a few mistakes on Wikipedia before, but I think I'm used to the STEM articles, where mistakes are relatively easy to point out and fix. In most scientific contexts, answers are hard to find but very easy to verify if you know the basics, so the articles are very rarely wrong on something fundamental. Hence the habit of falling into a general attitude of "Wiki is probably right, until proven otherwise." I'll have to remember to take a different attitude towards historical and archaeological articles. What @Green Baron says about bronze vs. copper use makes perfect sense to me purely from a materials science / geology standpoint. There are good places to get malachite and related ores in Egypt, so the bar for working with copper is very low. But tin, in any culturally significant quantity, would have to be imported. And while copper has many wonderful properties and uses on its own, it won't replace stone tools and weapons; the metal just isn't hard enough. My confusion has been entirely about the chronology of when significant tin trade started to take place.
-
What looks like a very neat line near the center of that chart took me by surprise, but then I realized that these were all carbohydrates. Neat. Sure. How are any of them fundamentally different from kerosene? Now, kerosene does have some nice properties that make it a much better rocket fuel, but the bottom line is that if it burns, you can use it as rocket fuel. IIRC, Mythbusters used a salami for a hybrid motor once...
-
I feel like there must've been some misreporting about it, because I remember talk of using the ISS crew to do the inspection. But then I learned that the orbiter was nowhere near the ISS and, given the orbits, didn't have enough fuel for a rendezvous. So even if there had been absolute certainty of damage to Columbia, there was very little anybody could have done about it. If somebody knew for sure and decided not to tell anyone, maybe they did the right thing?
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
Cool. Thanks, that was the part I was missing. I remember learning something like 3k BCE in school, and yes, I checked that it was consistent with Wikipedia. But I don't expect these things to be set in stone (sorry). It does make me wonder what altered the accepted timeline. I'll try to find some papers, as you've recommended. Would you mind giving a number? I could do it myself, but that would defeat the purpose in that discussion. On a more concrete note, while observing low-energy neutrinos is absurdly difficult, hence no observations yet, what are the alternatives? That early stars didn't produce them? Or that they were not impacted by expansion? I appreciate the experimentalist's desire for confirmation, but as a theorist, I have to think about the alternatives. Not finding slow, background neutrinos would indicate a very serious problem in the Standard Model. I can see us finding too many or too few, requiring adjustments somewhere, but if we find none, it'd put the very existence of neutrinos into question, because our interpretation of the experiments observing them relies on the very models that would be broken. In short, the degree to which I can doubt the existence of slow neutrinos is dominated by the degree to which I can doubt neutrinos in general. So I'm pretty confident saying that slow background neutrinos do exist. Inherently, QC is only faster for certain classes of problems. Specifically, a QC can search for solutions in parallel. This doesn't help with a simple direct problem: if I want to multiply two numbers, a classical or quantum computer will do it in the same number of operations. But if you have many sets of numbers that you need to multiply together, a QC can do them all at the same time. Likewise, if you want to know the factors of a number, which requires many attempts to divide that number by potential factors, a QC can try many divisors at the same time, speeding up the process. So while not every problem benefits from QC, many problems can be reframed in a way that does. Of course, the specific hardware matters as well. Modern computers are limited by the electrical properties of silicon. QCs will probably run on something entirely different; some of the candidates involve electron spins or photonic circuits. These operate at a much higher frequency, so a QC like that might simply perform individual operations faster to begin with. Such computers would be a lot faster than modern ones even on problems that don't benefit from quantum mechanics. The only commercially available QC that I'm aware of is capable exclusively of a very specific subset of simulated annealing problems. These have a very narrow window of applications, and the machines aren't terribly fast to begin with, as they are built out of tiny superconducting magnets, so they can only compete with classical computers on very few problems. But apparently these have uses in machine learning, so there's a lot of research being done in that direction. A general-purpose QC is probably very far off, because I still haven't seen anything that scales beyond a few qubits.
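The "try many inputs at once" claim has a compact state-vector illustration. A toy sketch, not any specific algorithm, with a made-up function: apply a reversible oracle to a uniform superposition of all 3-bit inputs, and every function value is present in the state after a single application. The catch, and the reason only some problems speed up, is that a measurement yields just one of them at random; real algorithms add an interference step so the useful answer dominates.

```python
import numpy as np

n = 3                          # number of qubits
N = 2 ** n
f = lambda x: (3 * x) % N      # a toy reversible function (a bijection on 0..7)

# Oracle U_f as a permutation matrix: U_f |x> = |f(x)>.
U = np.zeros((N, N))
for x in range(N):
    U[f(x), x] = 1.0

# Uniform superposition over all inputs (what n Hadamard gates produce).
psi = np.full(N, 1.0 / np.sqrt(N))

# One oracle application acts on all 8 input amplitudes simultaneously.
psi_out = U @ psi
print(np.nonzero(psi_out)[0])  # every f(x) present, each with equal amplitude
```

Simulating this classically takes 2^n amplitudes, which is also why nobody can cheat by just running quantum algorithms on classical hardware.
-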
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
You were perfectly happy with my response to you on that topic back then. I made precisely two claims: 1) in standard academic usage in English, "Ancient Egypt" refers to a time period starting around 3,000 BCE, and 2) that time period roughly coincides with the beginning of the Bronze Age. Now, if you disagree with either of these, you could have replied, and I'm happy to continue the discussion now if you wish. But if you accept both claims, then Ancient Egypt was a Bronze Age civilization as a direct consequence of 1) and 2). I'll also happily accept your corrections on the extent of the use of bronze in the culture, as well as on whether or not it played any role in the collapse. There are a whole lot of details there that I don't have the faintest idea about. But the discussion never went that way. All you said was "most of Egyptian history is neolithic," which isn't something I contradicted at any point whatsoever. I work hard to try to be better. It's up to you whether you want to contribute to that or not. But don't tell me that I failed because you didn't bother to reply.
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
I don't expect people to be competent in everything. I expect people to understand the limits of their competence, at least when those limits are demonstrated. Is that seriously too high a bar for a sci & space subforum? When I make a mistake and somebody points it out, my first response is to thank them, because it makes me better, and it's for these rare occasions that I mostly visit this forum. If somebody else makes a mistake, I point it out just as I'd want it pointed out to me. If it's questioned, I provide an explanation. And I tend to have pretty good patience for that. I will write out pages-long detailed explanations, and I will break things down as far as I possibly can into understandable terms. But that patience is finite and does not extend to willful ignorance. People who argue from nothing but ignorance, choosing to remain oblivious to their own incompetence, do make me angry. And they should make you angry too. That kind of behavior reduces the ability of others to learn something from a discussion. It is bad for absolutely all of us. It is not a behavior that should be protected.