Everything posted by K^2
-
Legality of building your own rockets in the US
K^2 replied to Ultimate Steve's topic in Science & Spaceflight
I'm not a lawyer, but... AFAIK, there are no federal laws explicitly regulating rocketry. There is, however, NFPA 1127, which defines high-power rocketry; it is the basis for governing law in many jurisdictions, as well as the guideline for many fire marshals who might cite you for violating fire safety. Hence the NAR and TRA certifications for HPR, which do comply with NFPA 1127. You also have to be mindful of airspace. Federal regulations require you to clear significant launches with the FAA. Well, any potential hazards to air traffic. The FAA, however, tends to be pretty friendly about it, so long as you launch in a low-population area with no significant air traffic. Generally, they'll just issue a NOTAM for your launch window, and pilots will steer clear of the area. Then there is the matter of propellant for your rocket. Almost anything you'll be using is regulated in some way: as a fire hazard, as a material for explosives, or as a precursor in drug manufacturing. Almost none of it is strictly illegal, but it might require certification, and will probably land you on some watch lists. Finally, local laws vary. You'll have to research laws specific to your state, county, and municipality. There could be any number of additional hoops to jump through with these. -
Here am I sitting in my tin can... how do I know where's prograde?
K^2 replied to Laie's topic in Science & Spaceflight
Let me stop you right there. If you know your orbital elements, the only instrument you need is a stopwatch. If you find yourself needing a gyroscope, it already means you don't know the orbital elements. The rest of your reply is rendered irrelevant by this. Of course. I've been trained to fly with VOR and ADF, and to perform ILS approaches. The fact that I'm actually mentioning radio nav-aids in the very post you are quoting should also have been a clue that I'm aware of how these things work. In practice, however, inside-out navigation, where the ship can track its own position, came later. The first systems tracked the craft from the ground and relayed instructions to the pilot if necessary. If you have ever actually had to dial in radials while switching airways, you probably have some idea why. Having to deal with a navigation system, while also trying to compute orbital elements and control a ship, would be suicide. Unless you have a computer to do most of these operations, you're much better off leaving it to the ground team. P.S. I grew up near a Soviet missile base, and not far from Zvezdny to boot. The majority of physicists I learned from growing up had been specifically trained to be responsible for making sure the warheads made it to their targets. If we're going to start arguing about Soviet ICBM navigation systems, you're going to lose. -
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
A rotating black hole can be used to generate energy via frame dragging in the ergosphere (the Penrose process is the classic example). There are several variations on how exactly one would do it, but the amount of energy you can extract from a black hole is truly astronomical in every sense. -
Here am I sitting in my tin can... how do I know where's prograde?
K^2 replied to Laie's topic in Science & Spaceflight
I see gyroscopes mentioned a few times in this thread. I'm curious as to how people imagine using these for finding the prograde direction. A gyroscope is a purely local instrument. Therefore, you should immediately realize that it cannot measure something that is frame-dependent, like orientation or direction of travel. It can measure changes in both of these, of course. And keeping track of changes in orientation is sufficient for you to know how the craft is oriented with respect to some fixed point, and that's good enough for orientation, so long as you can correct for drift once in a while.

You can also use gyros to keep track of changes in velocity using gyrointegration. The idea is that you place an off-axis gyro that can precess freely and count the number of turns it makes. With constant angular velocity of the gyro itself, this has a fixed proportion to the change of velocity along the direction perpendicular to the precession axis. However, this is where we run into a bit of bother. This measures changes relative to a free-falling frame. If you place a gyro off-axis on Earth, you know that it precesses like a top, because standing on the ground is equivalent to accelerating upwards at a rate of 1g. A ship in orbit is in free-fall. A gyrointegrator shows a flat zero throughout the flight, even while the velocity with respect to the ground changes.

People have done navigation by gyros alone. Early ICBMs certainly did that. But the idea there isn't to find where prograde is. It's simply to have the rocket follow a precomputed flight path, using gyros for orientation and gyrointegration, and adjusting thrust and orientation to stay on the program. The flight program is pre-computed according to a numerical model, so we could compute the prograde direction from it if everything's going according to plan, but if the rocket ever deviates from the plan, you will no longer be able to find prograde from the information available. Navigation by stars is equally problematic. 
It tells you the orientation of the craft, but you aren't going to measure the direction of travel. You have to have external inputs for that. If you can track stars and the Earth, you can make that work. The parallax of the Moon might be measurable as well. More precise navigation can be done with radio. Back in the day, if a new re-entry procedure had to be established, the craft was "sighted" from Earth via radio, or even visually, to establish its heading and speed, and the correct timing and orientation for the re-entry burn could simply be dictated to the pilot once the math was cranked out. Now things are a bit easier with all sorts of automation. GPS, of course, is fantastic for that in LEO. It has more than enough precision for most orbital operations. Once you leave Earth, things get more complicated. The coolest nav project that's out there, undergoing testing, IIRC, is navigation by pulsars. These are basically nature's GPS. They are far enough out that they might as well be static in the sky, and their pulse timing is absurdly precise. By simply counting pulses from several known pulsars, you can measure your position anywhere in the Solar System, or well outside of it, to within a few hundred meters. You probably want additional corrections for landing, but otherwise, this is good enough for interplanetary navigation. -
The thrust generated by a light sail is very low compared to its weight. At low altitude, Earth's gravity will easily overpower it. You have to be far enough from Earth that gravity is weak enough for the light sail to keep the craft suspended.
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
The Sun is 100% efficient as a fusion reactor. It's another matter that it will not burn through entirely, due to its small size. If you want to convert more of its mass into energy, you need to make larger stars, not smaller ones. Of course, if you don't think fusion itself is sufficiently efficient, then what you really want is a black hole. If used right, it can convert nearly 100% of mass into energy. Again, the key here is to go bigger, not smaller. -
Nonsense. std::vector::size does a trivial return of a size_t member holding the occupancy. Compiled for an x64 target, that's a 64-bit field. And since it's a template class, the compiler has full access to the internals in every translation unit. Writing const size_t size = m_vector.size() simply creates an alias if there are no side effects (e.g. resizing). That means the same register will be used whether it's your code using this alias variable or the internal code of the std::vector class. When you write const int size = m_vector.size(), the compiler is still probably going to keep it in a register, but now it is forced to use distinct registers for your int value and the internal size_t value, even if there are no side effects that would warrant it. And that dramatically reduces the optimizer's ability to make your code better. There are only so many registers to go around, and at some point, it will mean that memory is being accessed when it doesn't need to be. On a critical path, that's a 10x loss in performance, because you thought the type conversion was taken care of by the compiler.

Coding styles exist for a reason. Some of it is purely readability, like "const int" vs "int const". But a lot of it is performance-driven. And because not all of us dive into specs and compiler design deep enough to know exactly how each case is going to be handled, good style must be maintained as a matter of habit. If your excuse for writing sloppy code is that it's not critical for it to be clean or efficient, then you'll write crap code in places where it does need to be clean and efficient.

Small strings are allocated on the stack. This is implementation-specific, but on most compilers you can expect at least 15 characters for free. So declaring them in a loop or creating a bunch of temps is perfectly fine. Instantiating a class in a loop because you didn't think to use vector::assign isn't. Not only is the latter cleaner, idiomatic code, it would be literally easier to write.
-
Prefer:

    enum class Mark : char {
        EMPTY     = '.',
        LOADED    = 'o',
        CLICK     = '.',
        BANG      = '*',
        SEPARATOR = '\t'
    };

    // And purely for convenience
    std::ostream& operator<<(std::ostream& stream, const Mark mark) {
        return stream << static_cast<char>(mark);
    }

Placing defaults in global scope is also not great. When you're forced to do something like this, consider at least putting them into an anonymous namespace. That will at least prevent the linker from trying to keep track of them. Using constexpr is another good way to reduce the overhead of global constants.

Prefer:

    const size_t nChamberCount = m_arrChamber.size();

Using int will force a completely unnecessary type conversion. Some people would also argue for the use of auto here, but I can see both sides; specifying size_t is more informative. In general, in most places people use int, they should probably be using size_t. It will automatically conform to the size of a register, and because of that, it optimizes much, much better. If you have a simple counter loop, you almost always want size_t as your counter. Notable exceptions: you need a signed value, or you need that 32-bit footprint for cache-heavy applications.

    m_arrChamber.assign(m_arrChamber.size(), true);

Besides the fact that this should probably use <random> rather than std::rand, using branches in loops like this, when you have wraparound, is rarely a good idea. Predictions on these kinds of branches tend to be bad, and it's a lot cheaper to use a bit of extra algebra.

    for (size_t i = 0; i < nRoundCount; ++i) {
        m_arrChamber[(nChamberIndex + i) % nChamberCount] = true;
    }

This is entirely unnecessary. Each constructor of CTest implicitly constructs a std::vector, followed by explicit resizing, and destroys all of that in the destructor. So you end up with nTestCount malloc/free pairs, where one would fully suffice. The array can be trivially reused with a call to std::vector::assign for a fraction of the cost of a malloc. 
There's also zero reason not to use a for-each-style loop in runTest(), like you did in showChambers(). They have an identical structure with respect to their use of the bLoaded value.
-
You could have tried reading a bit of the thread to see specifically why none of this would take place. Would have taken you what, ten minutes, perhaps? This is not a particularly high threshold for effort to put in before jumping in with your own assertions, especially ones so plainly based on nothing but bad science fiction.
-
Yeah, this problem is most easily solved by brute force. There's some clever math one could do to arrive at the same result, which may generalize better, but I don't see a reason to bother with it.
-
Absolutely agreed. Again, I'm not saying this isn't a valid interpretation of the question in the OP. Just that I don't see why it should be the default interpretation. There are two different setups that can lead to exactly the same question being asked, and the answer is 1/3 for one setup and 1/2 for the other.
-
Devil's advocate: technically, if we have NO other information, the default is 50/50, as per the Principle of Maximum Entropy. In general, if we have no constraints on a probability distribution, the uniform distribution is the most reasonable assumption. If you do have some information, the distribution that maximizes entropy while satisfying the constraints is the best guess. For example, if you know the mean and variance of some sample, a normal distribution is a very good guess. As with anything in statistics, this doesn't always work, merely more often than any alternative; still, it's a surprisingly powerful method, allowing one to get very believable probability distributions when very little information is available. This approach is also used in signal processing and image enhancement. Of course, this falls apart as soon as we start throwing additional information at it. I've just checked on my suitcase of Aztec gold, and since I have possession of it, and it is very unlikely that there are two such suitcases, I can adjust my estimate of the probability of you having a suitcase full of Aztec gold to rather unlikely. Somebody with no prior knowledge of the relative rarity of laptops vs Aztec gold would have to accept the 50/50 claim, however. That's not to encourage turning this into an argument from ignorance, as we certainly have a wealth of statistical information on gender distribution across regions and age groups, and can always do better than 50/50 for this.
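For the record, the "no constraints means uniform" claim is a one-line Lagrange-multiplier calculation:

```latex
\max_{p}\; H(p) = -\sum_{i=1}^{n} p_i \ln p_i
\quad\text{subject to}\quad \sum_{i=1}^{n} p_i = 1.
```

```latex
\mathcal{L} = -\sum_i p_i \ln p_i + \lambda\Bigl(\sum_i p_i - 1\Bigr),
\qquad
\frac{\partial \mathcal{L}}{\partial p_i} = -\ln p_i - 1 + \lambda = 0
\;\Rightarrow\;
p_i = e^{\lambda - 1}.
```

Every $p_i$ comes out equal, so normalization forces $p_i = 1/n$. With $n = 2$ outcomes and no further constraints, that's exactly the 50/50 default.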
-
And in this setup it's 50/50. Because if both are boys, the parent is guaranteed to tell you that one's a boy, but if one is a girl, it's a 50/50 chance. So you end up with a bias towards boy-boy that compensates for the fact that there are two permutations, boy-girl and girl-boy. Id est, consider meeting 8 such individuals in a row: two of each child combination BB, BG, GB, and GG. Both BB parents will tell you that one is a boy, and so will one each of the BG and GB parents, giving you 4 total parents saying "one is a boy". Of these, the other child is also a boy for 2 of them. That's half. Only if you knew in advance that the parent would ALWAYS say "one is a boy" if they had one would we end up with 6 of the above 8 parents in the "one is a boy" group. But that absolutely does not follow from the OP's question. I'm not saying it's an invalid interpretation, but it's one that's no more valid than the 50/50 alternative you just presented. P.S. Had to look up what a "pram" is. In the US, the usual term is "stroller".
-
Yeah, but if you choose to say "one is a boy" or "one is a girl" after you saw the group, based on whether there is at least one of these, then we're back to 50/50. That's crucial. Edit: Specifically, consider this setup. I walk down the street. I see a group with two children. I pick one child at random; if it's a boy, I say "At least one is a boy," and if it's a girl, I say "At least one is a girl." The choice is entirely random, so it's not "first" or "older child" or any of that nonsense. Do you agree that this is consistent with the wording of the OP's problem? If not, could you please clarify what the difference is? It could be a purely linguistic disagreement. But if you agree that this setup is consistent with the wording of the problem, it's trivial to show that in THIS setup, the other child can be a boy or a girl with 50/50 odds.
-
I just don't see it being implied in the wording. Imagine that the teller decided in advance to pick the gender for the intro based on the first child they saw. It happened to be a boy, so it was "one of the children was a boy; what are the odds that so was the other?" If that first child had happened to be a girl, the teller would instead have asked "one of the children was a girl; what are the odds that so was the other?" It just worked out to be a boy in this case. The statement of the problem doesn't tell us whether the teller decided in advance to only consider groups where at least one child is a boy, or whether they chose to say "boy" or "girl" based on observation. And that changes the outcome, because now half of the encountered groups would have resulted in an "at least one is a girl" telling, and since we have the "at least one is a boy" telling, those are discarded. So we're left with 1/2 of the initial sample instead of 3/4, and within that remaining 1/2, the odds of the other child being a boy or a girl are 50/50. Once we decide on an interpretation, this is a very simple problem. But the choice of interpretation is ambiguous with respect to the wording. There is nothing in the problem telling us that the choice to ONLY consider groups with at least one boy was made a priori.
-
I don't disagree that it's AN interpretation. I just don't see it as the only valid way of reading the problem statement. The "at least one is a boy" could have been a pre-selection criterion, or it could be a reaction, with "at least one is a girl" having been just as likely. We can't tell which it is from the problem statement, but it's a critical piece of information for gauging the probability.
-
You are. We're trying to decide between "SELECT * FROM encounters WHERE child1 = 'boy' AND child2 = 'boy';" and "SELECT * FROM encounters WHERE child1 = child2;"
-
Wording doesn't make any difference here. The only question is how, if at all, you discard potential matches from the random sample. This is very easy to confirm with simulation as well. If you simply walk by groups of parents with two children, discarding any group where neither child is a boy, then yes, you will arrive at the answer you're hinting at, because you've completely excluded 1/4 of the groups from your sample. But that isn't inherent in the statement of the problem. You could interpret it as: you ran into the first group with two children, picked one of them, and said, "one of the children is a <gender>; what are the odds that so is the other?" And here we are back to 50/50, because you did not discard anyone. The question is entirely ambiguous on this matter, which is why I absolutely hate this puzzle coming up. It tends to start a crap fight between two groups of people with rather rudimentary understanding of statistics, neither willing nor able to write a simulation to back up what they are claiming.
-
You certainly can't stay within that region for long. But an orbit can dip into that region, so long as the apoapsis is multiple Schwarzschild radii away. And you kind of want an elliptical orbit for when it's time to get out. Either way, yes, having an ergosphere makes it all a lot easier. At that mass, Hawking radiation matches the output of the Sun's photosphere in spectrum, hence the "Black Hole Sun." The Moon's HR would be a bit cooler and comparable in intensity. In other words, nothing remotely detectable from Earth.
-
Being a mad scientist is about placing big bets on what you've calculated to be winning moves. Experiencing spaghettification has no net-positive outcome under any circumstances. Even sending in henchmen seems like a waste. Maybe the failed henchmen, as a punishment... But I digress. Perhaps the overwhelming majority of people who would want to use a black hole as a time machine are crazy enough not to care about being torn apart. But once we filter it down to those who'd be able to pull it off in the first place, given an opportunity, we're down to people who have a plan. And that plan doesn't involve being atomized.
-
Small black holes tend to have too much tidal force to be practical for this. It becomes overwhelmingly strong long before you start noticing relativistic effects. If your goal is to travel thousands of years into the future, you want a really, really large black hole, something like Sagittarius A*. You can orbit one of these with the periapsis really close to the event horizon without any ill effects, then utilize the insane Oberth effect to depart without much trouble.
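A back-of-envelope Newtonian estimate shows why bigger is gentler. The tidal acceleration across a body of size $L$ at radius $r$ from mass $M$ is roughly

```latex
\Delta a \approx \frac{2GM}{r^3}\,L,
\qquad
r_s = \frac{2GM}{c^2},
```

so evaluated at the horizon,

```latex
\Delta a \Big|_{r = r_s} \approx \frac{2GM\,L}{\left(2GM/c^2\right)^3}
= \frac{L\,c^6}{4\,G^2 M^2} \;\propto\; \frac{1}{M^2}.
```

Tidal stress at the horizon falls off as $1/M^2$: a stellar-mass hole shreds you well outside the horizon, while for a supermassive one like Sagittarius A* (about four million solar masses) the tidal gradient at the horizon is smaller than what you experience standing on Earth.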
-
Yes, but a Moon-mass black hole has such a tiny cross-section that very few things would actually fall in. It'd probably form a tiny little accretion disk, but it won't present a danger.
-
I've been pondering this idea. You can wear a latex suit and be, well, moderately comfortable, despite the fact that your internal pressure is fighting the external pressure, with the latex contributing just a touch one way or the other. In theory, one could have an electronic feedback system that adjusts the elastic coefficient of a much tougher fiber to act exactly like external pressure plus the light elasticity of latex or similar. If you adjust fast enough, you can have freedom of movement without losing confinement. Unfortunately, the only way I know to adjust an elastic coefficient quickly across this kind of range is with changes in temperature. And that... would not be comfortable. Especially in the aforementioned areas. Still, I wonder if something piezoelectric would be viable. It definitely has the strength, and helical coils of piezoelectric fiber might even have the range. You might complain that I'm basically trying to imagine an artificial muscle, but this is different in that it needs to adjust the elastic coefficient only, and it needs not be terribly efficient at it, so long as it keeps behaving roughly in accordance with Hooke's law. So it is, technically, a simpler problem.
-
^ Having the same twist for up and down stairs is a bad idea. Imagine going the "wrong" way on one of these a bit too fast, with the Coriolis force making the stairs effectively vertical all of a sudden. Might not be terrible on the way up, but if that happens on the way down, you're going to have a VERY bad fall. In general, I'm not sure I want stairs on that thing. An elevator with padded walls might be a better call.
-
NH3? OH NO3! Seriously, though, if this works out, we need a better name for it.