
K^2


  1. I think we'd need to attach some numbers to it, and it's not trivial. The first question I have is how fast the capsule can hit the ground and still stop without burying itself far below the surface, because there's a limit to that, both from the type of surface you hit and from the material properties of the capsule. The second question I'd have is, if we do find a sweet spot for the speed at which the capsule buries itself most of the way in but leaves an exit on the surface, how big of a crater is that going to dig? Which, again, might depend on the type of soil you're hitting. Even without putting numbers to it, I think it's pretty clear that the method is far from universal. Hitting a swamp might be comparable to hitting water, and hitting hard rock is unlikely to give you any significant penetration without going far beyond what any theoretical material you could use for the capsule can withstand. (We're not even talking about the pilot here.) But if we kind of ignore the fact that it's a game where you need to be able to plunk the exit point anywhere, and go with a more realistic, "Yeah, we're going to be picky about the landing site," something like sand or soil might be in the right ballpark. That said, the entry velocity will be moderate, far below orbital, meaning we will need to rely on another method of braking. It could still be entirely passive, such as aerobraking, or rely on retrorockets of some sort. Either way, having a number for velocity on impact might give us an idea of whether it gives you AA evasion benefits or not. I'll try to do a quick scan through the literature, because the naive thoughts I had on how to estimate the impact speed for a given penetration depth aren't giving me anything reasonable. There might be good models for soil as loose particles that would be appropriate here.
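A classic back-of-the-envelope tool for the penetration-depth question is the Poncelet model, which treats soil resistance as a constant crushing term plus a velocity-squared drag term. The sketch below uses made-up but ballpark-ish numbers for the capsule and for dry sand (the soil strength, density, and drag coefficient are assumptions, not literature values):

```python
import math

def poncelet_depth(m, A, v0, rho, R, Cd=1.0):
    """Max penetration depth (m) from the Poncelet model:
    m * v * dv/dx = -A * (R + Cd * rho * v^2 / 2),
    where R is the target's crushing strength (Pa) and rho its density."""
    b = Cd * rho / 2.0
    return (m / (2.0 * A * b)) * math.log(1.0 + (b / R) * v0 ** 2)

# Hypothetical capsule: 5 t, 2 m diameter, hitting dry sand.
# rho ~ 1600 kg/m^3 and R ~ 5 MPa are guessed ballpark values.
m_cap, diam = 5000.0, 2.0
A_cap = math.pi * (diam / 2.0) ** 2
for v0 in (50.0, 100.0, 200.0):
    depth = poncelet_depth(m_cap, A_cap, v0, 1600.0, 5e6)
    print(f"{v0:5.0f} m/s -> {depth:.2f} m")
```

Even with these rough inputs you get the flavor of it: burying a capsule-length or two into sand takes impact speeds on the order of 100-200 m/s, which is indeed far below orbital.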
  2. How is that different from motion between the Earth and the Moon? Or between a starship and a star it's traveling to? How do you measure relative velocities across empty space? You can't put a little turbine out and measure how fast the "space" moves past you. That's the most fundamental principle of relativity. Mathematically, you fill the space in between with probe particles that are infinitesimally close to each other and add up all the relative velocities across the chain. (Because remember, motion is relative, so if the chain is "static" relative to you, it might be moving relative to me.) If you do that between distant galaxies, you can't tell the difference between the two galaxies moving apart and the space in between expanding. In practice, you send a beam of light instead. Or rather, you just wait for it to happen naturally, because it happened billions of light years away and billions of years ago... But regardless, with light, you again run into the same problem. Whether the object far away is receding or the space in between is expanding, you'll get the same result. Motion due to expansion is true relative motion, and for distant galaxies it is true relative motion at superluminal speeds. What if I told you that if you write it in the framework of GR, properly accounting for frames of reference, the description of a vehicle driving across the planet and the description where the vehicle's wheels cause the planet to spin underneath while the vehicle stays put are identical? It's just a mathematical perspective. We happen to use math that explains this type of motion through space-time geometry, because that's the easiest thing for us to put into formulas. There is an equivalent formulation of it under which you just "drive the Jeep". That is, fly the ship at FTL speeds. It's just the kind of math that makes the epicycles of a Geocentric model look quite reasonable in comparison. There are other problems with the Alcubierre Drive, of course.
I actually missed the mark a bit in saying the behavior has to be different in curved space-time. I've had time to revise my understanding in the years since... It's a lot worse. The Alcubierre Drive just straight up doesn't work if it carries any non-zero mass inside of it, even in flat space-time. The negative energy of the bubble walls has to cancel the mass energy of the ship, or the bubble starts radiating gravity waves. So how do you avoid angular momentum conservation problems when warping around a star system? Super easy, barely an inconvenience. You make sure that the total mass that gets transported is zero. Where you get that negative mass at the origin and what you do with it at the destination is left as an exercise to the reader. *Her. I refuse to comment on whether this has anything to do with the effects of FTL or time travel.
  3. There are entire galaxies, observed and measured, receding from us at speeds exceeding the speed of light. It's not about the boundary. Stuff in this universe is moving relative to other stuff in this universe at superluminal speeds. This isn't some hypothetical on a napkin. It's a firmly established cosmological fact. Causality is a mathematical statement. There are concrete theorems and several notable conjectures that are yet to be proven within a framework. If you don't understand that, all it signals is the limits of your education on the subject. Cool. I was doing research in particle physics. That is, I specifically worked with the concept of matter propagating at energies where the space-time metric is the ruling factor, and understanding time ordering is crucial to getting correct results that match experimental data. You can dream all you want. The measurement precision on QM and GR sets the expected scale limitations on where these break down. Classical mechanics was breaking down at sizes much larger than an atom and masses smaller than those of a planet. We could work at these scales and exploit these violations. QM holds for many orders of magnitude below the scale of any known particle and at masses exceeding those of galaxies. Humanity isn't going to reach these numbers. Even if there is a higher-order theory that is more correct, we aren't going to see the difference, because it'd take several times the energy of the known universe to do anything that does. So unless you want to believe in magic fairies that break down the space-time barrier for us as some sort of a favor, we are going to have to work within the confines of the theory that we have, at least until we make crossing the universe as easy as sending a GPS satellite to orbit. Because that's a prerequisite for getting to these scales. The best we can hope for is understanding the theory we have a lot better and making full use of it.
We aren't tapping into a fraction of the near-magical bull crap that QM and GR actually say is possible. Which already covers FTL, wormholes, teleportation, time travel, computational capacity that borders on omniscience, and more. It just has to follow the rules that have already been firmly established. You can't flap your arms and fly, despite the fact that we've figured out jetpacks. No matter how much you want to imagine it.
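To put a number on the superluminal recession point: with Hubble's law v = H0 * d, anything beyond roughly 4,300 Mpc of proper distance recedes faster than light, and the observable universe extends well past that. (H0 = 70 km/s/Mpc is a rough round value; the 9,000 Mpc galaxy below is illustrative.)

```python
C_KMS = 299_792.458     # speed of light, km/s
H0 = 70.0               # Hubble constant, km/s/Mpc (rough round value)
MPC_TO_GLY = 3.2616e-3  # 1 Mpc in billions of light years

# Distance at which Hubble-law recession velocity reaches c:
hubble_radius_mpc = C_KMS / H0
print(f"v = c at ~{hubble_radius_mpc:.0f} Mpc "
      f"(~{hubble_radius_mpc * MPC_TO_GLY:.1f} Gly)")

# A galaxy at 9000 Mpc, still well inside the ~14,000 Mpc comoving
# radius of the observable universe, recedes at more than 2c:
v_rec = H0 * 9000.0
print(f"v = {v_rec:.0f} km/s = {v_rec / C_KMS:.2f} c")
```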
  4. I was trying to get an actual mention in, but your name wasn't showing up for some reason. Anyways, glad to know you saw it eventually. I'm the one who posted the original electric rotor ascent on Jool video, and I was super excited that somebody made a working mission out of it.
  5. "Discrete" on the timescale of the age of the Solar System. These are still very slow migrations from one stable location to another. Take two massive planets around a star and consider their mean interaction in the constants-of-motion coordinates for the central potential. When the planets are far out of resonance, the energy and momentum flow between the two is very slow. The orbits will drift, but very, very slowly. As the two approach a resonance, the transfer rate becomes much higher, with some sort of an equilibrium at the exact resonance. For just two planets, all that really means is that they'll drift slowly and then almost "snap" to resonant orbits, like a pair of magnets, when they get close to resonance. Again, on the "age of the star system" kind of time scale. As you start adding more objects to the system, the interactions get more complex. There aren't just drifts, but also precessions. Several planets might be happily spinning in their own planes for ages and ages as the planes of their individual orbits slowly precess, until two planes align, and suddenly these two planets are strongly interacting with each other or with some third body, causing their orbits to start changing rather rapidly on the cosmic scale. Point is, there are a lot of quasistable arrangements that become unstable once some of the parameters of the system happen to align in a certain way; then they become highly unstable and start shifting until a new quasistable arrangement is achieved. Truly dynamically stable systems are exceptionally rare. There's currently only one system I'm aware of (HD 110067) that is suspected to have all of its known planets nearly co-planar and in simple resonances with each other, and therefore exceptionally stable.
Pretty much everything else we've found has some combo-breakers in the system that are orbiting out of the main plane, with high eccentricity, or way out of resonance, meaning they'll throw a wrench into the stability at some point in the future. The Solar System is unusually messy based on what we've seen so far in other star systems, but not incredibly so. The current arrangement is stable enough, and there is no expectation of drastic shifts in the near future, but in the system's past there were a lot of rearrangements that came in bursts of activity for the aforementioned reasons.
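A quick way to play with the "snap to resonance" idea is to check how close real period ratios sit to small-integer resonances. The helper below is just an illustration using rounded orbital periods; the `max_den` cutoff is an arbitrary choice of what counts as a "simple" resonance:

```python
from fractions import Fraction

def nearest_resonance(p_inner, p_outer, max_den=5):
    """Closest small-integer period ratio p:q and the fractional offset."""
    ratio = p_outer / p_inner
    frac = Fraction(ratio).limit_denominator(max_den)
    return frac, abs(ratio - float(frac)) / ratio

# Rounded orbital periods in years.
jupiter, saturn = 11.862, 29.457
neptune, pluto = 164.8, 248.0
print(nearest_resonance(jupiter, saturn))  # near 5:2, but ~0.7% off
print(nearest_resonance(neptune, pluto))   # Pluto sits in a true 3:2 lock
```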
  6. The 3D image isn't meant to represent the plumes. It represents regions that have different wave-propagation properties than the rest of the mantle, which happen to correspond to some, but not necessarily all plumes. That sums it up nicely.
  7. Yeah, I'm going to just say that the nozzle efficiencies quoted in that section are impossible due to black-body radiation losses at the temperatures in the detonation region that this design implies. Estimates in the spoiler. Point is, if you were to build an NSWR to that section's spec, almost all of the energy from the detonation region would be escaping as X-rays, drastically reducing the thermal energy that the nozzle can convert into exhaust velocity. It's still a cool concept if you dial it down a bit. The previous section, building up a more realistic proposal for a 6,730s version, seems a lot more plausible. Again, that 4th power in temperature does a lot of damage to rockets that convert thermal energy into propulsion, but only on the high end. There's a lot of room to grow beyond what we can squeeze out of chemical rockets. But if we want to get these six-figure ISP engines, we have to have a different way of accelerating the exhaust. It has to be some sort of an electromagnetic drive.
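The "4th power in temperature" point is easy to make concrete with the Stefan-Boltzmann law. The temperatures below are illustrative round numbers, not taken from the NSWR write-up, but they show why radiation losses are negligible for chemical chambers and dominant for detonation-region temperatures:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

# Radiated black-body flux scales as T^4. Chemical rocket chambers sit
# around 3500 K; the detonation-region temperatures implied by very
# high-ISP designs are orders of magnitude hotter (values illustrative).
for T in (3.5e3, 1.0e5, 1.0e7):
    print(f"T = {T:10.0f} K -> {SIGMA * T ** 4:.2e} W/m^2")
```

Going from 100 kK to 10 MK is a factor of 100 in temperature but a factor of 10^8 in radiated flux per unit area, which is what drains the detonation region before the nozzle can use the heat.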
  8. You're overreacting. No way the update release will go that badly.
  9. Do you have a quote on that? Six-digit ISP would put the core temperature in the 100 MK ballpark, which cools basically instantly by emission of hard gamma radiation. I'm wondering how that's addressed.
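For reference on where that 100 MK ballpark comes from: equating the exhaust kinetic energy per particle to its thermal energy gives a rough core temperature. The sketch below assumes a hydrogen working mass and a fully thermal exhaust (both assumptions; heavier exhaust particles scale the temperature up in proportion to particle mass, pushing past 100 MK):

```python
G0 = 9.80665        # standard gravity, m/s^2
K_B = 1.380649e-23  # Boltzmann constant, J/K
M_H = 1.6735e-27    # hydrogen atom mass, kg (assumed working mass)

isp = 100_000.0  # s, "six-digit" specific impulse
v_e = isp * G0   # exhaust velocity, m/s (~981 km/s)
# (1/2) m v^2 = (3/2) k T  ->  T = m v^2 / (3 k)
T_core = M_H * v_e ** 2 / (3.0 * K_B)
print(f"v_e ~ {v_e / 1e3:.0f} km/s, T ~ {T_core / 1e6:.0f} MK")
```

Even with the lightest possible working mass this lands at tens of MK, so any realistic propellant mix puts the core in the 100 MK ballpark.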
  10. I don't see a way for Intercept to make interstellar drives anything but exceptionally useful for in-system travel. It's been stated directly that the interstellar distances are meant to be realistic. I don't think we've gotten any confirmation yet of how that correlates to the 1/10th scale of the game and the potential relativistic limits of the real world, but we're still talking distances in billions of km under the most generous interpretation. It takes about a year of accelerating at 1g to reach light speed. I know we're all going to have warp set to the absolute max the game allows when traveling between the stars, but there are inherent limits to how much warp that can be, and if it takes hours of real time, that simply won't do for a game. Interstellar ships have to accelerate at a pretty good clip to make the game playable. Maybe not quite 1g, but a sensible fraction of it, even when fully loaded with fuel and cargo. So what we learn from all of the above is that the interstellar drives are going to be very efficient, having ISP several orders of magnitude above anything we had in the original game or have access to now. And to be pushing all that fuel at a reasonable acceleration, these engines will have to come with very good TWR. Far better than ions, likely better than the NTR engines have been, and possibly approaching the TWR of the chemical engines in the game, depending on just how big the interstellar gaps are. There might be ships you want to build that are just too small to make use of an interstellar drive in the end game, or ships that have to go into the atmosphere. So I don't mean to say that there will be no uses for other engines. But I cannot think of any sensible barrier Intercept can put in the game that will allow interstellar gameplay to be fun and not make interstellar drives hands down the absolute top choice for in-system hauling.
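For a feel of why such a drive trivializes in-system trips: a constant-acceleration flip-and-burn over distance d takes t = 2*sqrt(d/a), ignoring relativistic effects, which is fine well below light speed. The distances and accelerations below are illustrative:

```python
import math

def brachistochrone_time(d, a):
    """Flip-and-burn travel time (s): accelerate over d/2, then
    decelerate over d/2. Non-relativistic approximation."""
    return 2.0 * math.sqrt(d / a)

DAY = 86400.0
AU = 1.496e11  # m
# An interstellar-ish gap of 5 billion km vs a 5 AU in-system hop:
for d, label in ((5e12, "5e9 km"), (5.0 * AU, "5 AU")):
    for a in (9.81, 0.981):  # 1 g and 0.1 g
        t = brachistochrone_time(d, a)
        print(f"{label:8s} at {a:5.3f} m/s^2: {t / DAY:5.1f} days")
```

An engine sized to cross billions of km in weeks turns any planetary transfer into a matter of days, which is the balance problem described above.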
  11. At some scale, you do have to start coming up with something different. What are you going to do against a mass driver strike on a colony? Turn off the Sun and release a fake Mars? I can already write a Horizons query that will tell me where any given crater on Mars is going to be at any specific point in time for the next century, and there is no reason to believe that protection of fixed assets is going to become less relevant.
  12. These are different labels, though. I can replace the label '2' with some other symbol, and then 1 + 1 equals that symbol, but it's still the same number, just written differently. It's much more exciting when 1 + 1 is equal to some other number.
  13. My preference would be full terminator, then. No need to have the mushy brain if you have this sort of tech. Get me out of the meat prison, and put me into something a bit more reliable. And the flesh on the surface is just for aesthetics and blending in.
  14. So yeah, in general, if you have a standard double elimination tournament between n teams, where n is a power of two, there will be 2n-2 or 2n-1 games to be played, depending on whether the winner's bracket champion wins or loses their first try in the finals. So you may need to run as many as 127 games for 64 teams. But distributing them to 10 pitches is not trivial. At first, you can probably just send teams to pitches as they open up. If that were always the case, and games took roughly the same duration, you'd expect about 13 rounds of games to get through those 127. But of course, when you get down to just a few teams, you have to wait for some of the games to resolve before you even know who is playing whom. So the last few games end up utilizing only a few pitches. That said, in practice, it's usually great for tension. I used to do robotics in high school, and the competitions were always like this. You're in the pit, making final adjustments, somebody makes the run to the official brackets to see who we're playing next and at what table. Do we have another ten minutes to run a test after a battery swap? No? Well, cross your fingers and hope for the best. The chaos of parallel games in double elimination is perfect for that kind of competition.
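The game count above follows from loss counting: every game hands out exactly one loss, every eliminated team ends with two, and the champion ends with zero or one. A tiny sanity-check script:

```python
def double_elim_games(n, champion_losses):
    """Total games in an n-team double elimination bracket, derived by
    loss counting: (n - 1) eliminated teams with 2 losses each, plus
    the champion's 0 or 1 losses."""
    return 2 * (n - 1) + champion_losses

def min_rounds(games, pitches):
    """Lower bound on rounds if every pitch could always be kept busy."""
    return -(-games // pitches)  # ceiling division

for champ_losses in (0, 1):
    g = double_elim_games(64, champ_losses)
    print(f"64 teams, champion loses {champ_losses}: {g} games,"
          f" >= {min_rounds(g, 10)} rounds on 10 pitches")
```

The 13-round figure is only a lower bound, for exactly the reason in the post: the tail of the bracket serializes and can't fill all 10 pitches.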