Everything posted by K^2

  1. You can start losing surprisingly heavy elements if you dip close enough to the star early enough in the formation, through a combination of heat and gravity. I'll grant you that a planet in that situation is on its last legs before being swallowed outright, and it would take a miraculously well-timed boost from something else falling into the star, but out of all the stars in all the galaxies... Unless that something else got ejected or absorbed into the primary. Again, we have no idea what sort of dynamics that system could have been going through before settling down. A merging binary star system, for example, can produce an absolutely wild distribution of planets and compositions that will otherwise look tame and normal a few billion years after the stars merged. I wouldn't expect to encounter these kinds of systems very often, but if we give the developers the fiat of picking an interesting system to exemplify in the game, none of this goes outside the plausible.

I don't disagree. And it'd be nice to get a nod to that somewhere in the game, pointing out how bizarre it is for a planet to have these properties. I mean, obviously that's why the developers want it in the game. I just don't think it's a bad reason to put a planet like that in the game, so long as it merely stretches the limits of what's physically possible for a real world, after the 10x scale correction, of course. If it was just absolutely impossible, I would be against it too. Merely very, very unlikely is acceptable if it's in the name of good gameplay and an interesting place to explore. I mean, if we start getting technical about it, Laythe is already a good example of a world that's exceptionally unlikely to happen. But it's a fun moon to have in the game, and I'm glad it's there.
  2. There are several processes that can strip a planet of lighter elements and prevent it from accumulating helium. The two most likely are spending a lot of time very close to the primary and then migrating out to its current location, potentially by redirecting another planet into the star in the process, or a close encounter with a gas giant early in the formation. Both of these can result in a planet that's basically the core of a gas giant with a thin crust of rocky material on top and an insubstantial atmosphere for that heavy a planet. Since a larger fraction of the planet's mass comes from its iron core, the average density is also a lot higher. There was a bit of a discussion of an exoplanet candidate with similar apparent characteristics a while back. Though perhaps still not quite as dense as Ovin, it shows that such events do happen in star system formation.
  3. You have to land on it without crashing first. At 4G, unless the atmosphere is as thick as Eve's, parachutes aren't going to do much good. And based on the images, I strongly suspect it has a very thin atmosphere. That planet is going to be a graveyard of broken ships.
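To put rough numbers on that: parachute terminal velocity goes as sqrt(2mg/(rho*Cd*A)), so quadrupling surface gravity while thinning the atmosphere blows up the touchdown speed fast. A back-of-the-envelope sketch in Python, with made-up figures for the lander and the chute rather than anything from the game:

    # Rough scaling of parachute terminal velocity: v_t = sqrt(2*m*g / (rho*Cd*A)).
    # All numbers here are illustrative assumptions, not actual KSP2 values.
    import math

    def terminal_velocity(mass_kg, g, rho, cd=1.5, area_m2=50.0):
        """Steady-state descent speed under a canopy with the given drag area."""
        return math.sqrt(2.0 * mass_kg * g / (rho * cd * area_m2))

    v_kerbin = terminal_velocity(5000, g=9.81, rho=1.2)       # Kerbin-like sea level air
    v_ovin   = terminal_velocity(5000, g=4 * 9.81, rho=0.12)  # 4 g and ~10x thinner air
    print(f"Kerbin-like: {v_kerbin:.0f} m/s, 4 g thin-air world: {v_ovin:.0f} m/s")
    # The ratio is sqrt(4 / 0.1) ~ 6.3x, so a gentle chute landing turns into a crash.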
  4. Are you just trying to build a chain? Or do you need at least one tower visible from every point on Earth? Even in the latter case, you can come up with a space-filling pattern that's more efficient. Think of a branching, fractal-like structure. Simply filling all space with a grid of towers is not the optimal solution if you're trying to build fewer towers.
  5. Wasn't there something about the Rask/Rusk situation being reworked because there were problems with n-body? It might just be patched conics throughout the game.
  6. Oh, absolutely. What I'm talking about is stuff that's been built directly into the game executable. An actual example from a recent game: we had to generate convex hulls for some geo for a feature involving physics particles for FX. It was a handful of objects with low vertex counts, so it was cheap enough to do the build on demand. But because the library generating the convex hulls was under GPL (or a similar license), the shared code containing that library would not get compiled in for the release version. Which meant we had to move the generation of collision hulls to the builder, and now we had to track the dependencies to make sure that the collision hulls are built for all the right objects and not for all the objects, and then platform-specific dependencies became a factor, and it was all in all a couple of weeks of work just to get the build process for these sorted, whereas the initial proof-of-concept on-demand loading was done in an afternoon. And yeah, in principle, the entire builder could be released as an open-source project, with all of its libraries, separately from the game. (Edit: Or, as you point out, still as a proprietary tool, but with the open-source libraries that contain the actual GPL code delivered separately.) That's not going to happen for that particular game, because that's a whole other can of worms that the studio doesn't want to deal with, but it's not because of any legal restrictions.

My point, however, wasn't about situations where tools are intentionally released by the game developers to be used by modding communities. It was entirely about situations where tools get left in because they weren't worth the hassle to remove. In the above example, if we could have shipped the game with that hull generator, we would have. There just was no reason to pull it out of the game other than licensing. And there are plenty of games out there with dev features left in the game, either disabled or simply unused. And when you are trying to ship a game that is easy to mod, sometimes all you have to do is absolutely nothing. If you already have build tools integrated into your executable, you just ship it as is.
  7. In a tl;dr way, yes. But if you want a bit more detail, as far as it relates to games and game data, streaming is the ability to load data from the hard drive directly into RAM without involving the CPU for anything beyond kicking off the process. And yeah, what that lets you do in practice is load assets when you need them, because you don't have to stop the game to handle the loading. The game engine notices that it's missing some asset data, issues a request, and just keeps ignoring any objects needing these assets until the assets are loaded. This can lead to pop-in if you are very aggressive about it, but it can also be entirely seamless if you have a bit of headroom on RAM and have different level-of-detail versions of your assets that can be loaded separately. A good use case in a game like KSP is not loading high resolution terrain textures until you are in the SoI of the planet that needs them. Possibly even loading the highest quality versions only for select biomes. This might cause things to look blurry for a few frames when you are switching between ships, but you get a lot of flexibility out of it that usually makes it worth it.

One caveat relevant here is that any data you might want to stream has to be in the exact format you want to use in the game. Like, you wouldn't stream a JPEG image, because you have to decode JPEG before you can display it, and if your CPU has to spend time decoding the data, that often leads to choppy frame rates or freezes. Obviously, you can handle a few operations like that, and you usually have to, and a lot of hardware these days, notably all major consoles, supports some sort of compression in streaming. You can think of the data as sitting in ZIP files on the HDD/SSD and getting decompressed by a dedicated chip as it's loaded, so again, no CPU use necessary. But otherwise, the data needs to be ready for use.

And this is where it gets a little tricky with modding. If the game expects data that has been processed, and you just added fresh, unprocessed data by installing a mod, something somewhere needs to figure out that this happened and prepare the data before it can be streamed. The way KSP mods work, some of that is done by the modding SDK you use to create modded parts, but a lot of the data has to be built by the game. There are a whole bunch of ways to handle it. The simplest is to not worry about it until you actually try to load the data, and then do the processing if it's needed. Yes, that will definitely cause a freeze in the game, but if that only happens once per install of a particular mod, that's not a huge deal. A more complete solution is generating a dependency graph and checking it when the game starts. For a game like KSP2, that shouldn't be too hard, especially since you have a part list up front and you can just check that all the data you need for every part is available when you start the game, without having to actually load everything.
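As a toy illustration of that "issue a request and keep ignoring the object" pattern - my own sketch, not anything from KSP or KSP2, and the AssetCache name is made up - the render loop just skips, or substitutes a placeholder for, anything whose data hasn't arrived yet:

    # Minimal sketch of non-blocking asset streaming: kick off a background load and keep
    # rendering; objects whose data hasn't arrived yet get a placeholder instead of a stall.
    import threading, time

    class AssetCache:
        def __init__(self):
            self._assets = {}        # asset_id -> loaded data
            self._pending = set()    # requests already in flight
            self._lock = threading.Lock()

        def _load(self, asset_id):
            time.sleep(0.2)          # stand-in for disk read + decompression latency
            with self._lock:
                self._assets[asset_id] = f"<data for {asset_id}>"
                self._pending.discard(asset_id)

        def get(self, asset_id):
            """Return the asset if resident; otherwise start a background load and return None."""
            with self._lock:
                if asset_id in self._assets:
                    return self._assets[asset_id]
                if asset_id not in self._pending:
                    self._pending.add(asset_id)
                    threading.Thread(target=self._load, args=(asset_id,), daemon=True).start()
                return None

    cache = AssetCache()
    for frame in range(5):           # pretend game loop
        terrain = cache.get("duna_high_res_terrain")
        print(f"frame {frame}: {'draw full detail' if terrain else 'draw placeholder'}")
        time.sleep(0.1)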
  8. Doesn't have to be a single blob for everything, there are a lot of implementation options here, but that's the gist, yeah. Most of that data takes considerable CPU time to prepare and very little disk space to store. Just cache it and invalidate the cache if the source file was updated. Side note, I'd be shocked if there is no way to make the processing a lot faster either, but if you cache it, it's kind of a moot point. No reason to over-optimize something that will only happen once for a lot of players, and only once per mod install for almost everyone else.

I have encountered cases where build tools get intentionally cut from released game binaries because they contained some libraries with infectious GPL or similar licensing. Basically, if they were to release the game with these libraries, they'd have to publish source for the entire game. But I've also worked on projects where the tools are literally there in the published game, only with some editing features turned off, and the only reason people can't mod the game easily is that the source format and directory structure aren't publicly shared. In general, though, developers need a reason to disable these features, and because that's work, they usually leave all or huge chunks of them in. It's usually easier to ship a game with build tools than without.

For KSP2, baking part info in some form is going to be necessary. And caching baked files is going to be necessary for streaming. I think they're pretty much forced to write the tools we'd want for much faster loading. And because the developers don't want to wait for unnecessary build times either, they're pretty much guaranteed to auto-detect source file changes and build stuff on demand. Basically, all of the features we want. All Intercept has to do to make the game easily moddable and reduce loading times for everyone is just not turn off that feature when they ship.

That said, if for some reason PD or Intercept go back on modding and try to lock us out of critical tools, I'm here to help build replacement tools. Unless PD forces Intercept to install some sort of cheat detection, I don't think we have to resort to anything that would be a forum violation to share. In the worst case scenario, I still think modding is going to be worth it, even if it ends up against the forum's ToS and all discussion of mods for KSP2 has to move elsewhere. I hope that doesn't happen, but I'm confident that modding will be a big part of KSP2 either way.
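The "cache it and invalidate when the source changes" part is about as simple as it sounds. A sketch of the idea, with invented paths and a stand-in for the actual baking step - this is not KSP2's real pipeline:

    # Rebuild a baked binary only when the source config is newer than the cached copy.
    import os, pickle

    def expensive_parse_and_bake(cfg_path):
        # Stand-in for parsing the text config and building the in-game representation.
        with open(cfg_path) as f:
            return {"source": cfg_path, "lines": f.read().splitlines()}

    def load_part(cfg_path, cache_dir="PartCache"):
        os.makedirs(cache_dir, exist_ok=True)
        cache_path = os.path.join(cache_dir, os.path.basename(cfg_path) + ".bin")

        # Cache hit: the baked file exists and is at least as new as the source config.
        if os.path.exists(cache_path) and os.path.getmtime(cache_path) >= os.path.getmtime(cfg_path):
            with open(cache_path, "rb") as f:
                return pickle.load(f)

        # Cache miss: do the slow work once, then store the result for every later launch.
        part = expensive_parse_and_bake(cfg_path)
        with open(cache_path, "wb") as f:
            pickle.dump(part, f)
        return part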
  9. Fair. I'm oversimplifying. "Compiling" might be a better term here. What I mean is taking the text data and turning it into the in-game representation - everything but the actual memory allocations, which, of course, have to take place at runtime. All of the operations in building the internal data only really need to be done once.

Except that this is exactly how it works on pretty much every major game project, and people do edit text files that get built exactly once. Caching of binary builds is standard practice in the games industry. If I edit a scene file, whether I did that with a dedicated tool or simply opened it up in notepad and changed the markup data, the game will know that the binaries are out of date and rebuild them. Yes, when we ship games, we often only build the binary cache, and many games ship with the build tools stripped out, but there are enough games out there that maintain the dev-environment behavior of checking for new/modified markup files and will rebuild the parts of the cache that are out of date - often specifically to be more mod-friendly.
  10. You are showing "Part loading" as 70% of the texture loading time. That's not at all insignificant. And all of it is from parsing the configs for the individual parts. Yes, the main config parsing doesn't take long, but the part loading is still a parsing issue. When loading a few kB of text takes 70% of what it takes to load all the multi-MB textures of the parts, there's a huge problem. And fixing this would reduce loading time by about a quarter based on what you're showing here. Which is hardly negligible.
  11. Parsing of the part configs takes shockingly long, and that can definitely be fixed. But yeah, given the way KSP2 is being built, there is a lot of data that will have to be streamed already. They might as well recycle some of that tech for the mods.
  12. It's never that simple. It's possible to intentionally preserve some degree of backwards compatibility, but you are usually handicapping yourself by doing so. There are better places to spend developer resources. If the mods are good and useful, someone will port them over.
  13. I agree, but to me, this suggests that there is more than one way to solve the problem. Yes, we could have the contract system replaced with something entirely different, but I think the core concept is fine. It's just that it gets repetitive, because every contract is such a basic construct. "Position a satellite in orbit." "Rescue a Kerbal from orbit/location." "Test a part." "Collect data." All of these are fine to do once or twice, but then it turns into a grind, and not even a fun one.

A relatively simple fix would be to link a bunch of these objectives together into a mission. What if instead the contract is to rescue the stranded Kerbal, deliver them to their ship in orbit, get some parts to that ship to fix it, take the ship to a destination, land it there, perform measurements, then bring the Kerbal and the collected science back? Same basic parts, but now you have a bit of a narrative to keep you invested, and the number of ways this can all be combined is a lot higher. Plus, provided that each step has its own rewards in credits and reputation, by the time you finish the set, you're set for a while. And even if you have to run multiples of these throughout the game, because of the number of permutations, it can always be at least somewhat fresh.

The generator for missions like this can be pretty simple - you really just need to make sure that whatever combo is being generated is within the constraints of the player's current tech level and that the player is compensated appropriately for every step. And because this style heavily rewards combining resource investment across multiple steps, like carrying an engineer on your rescue ship so that you can perform the repairs without a separate launch, if you spend a bit more time solving the problem creatively, you actually get a significantly higher payout.
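A toy version of that kind of generator, just to show how little it takes - the objective names, tech gating, and payouts are all invented for illustration, not anything Intercept has described:

    # Chain basic contract objectives into one multi-step mission, gated by the player's
    # tech tier, with each step paying out on its own. Everything here is made up.
    import random

    OBJECTIVES = [
        # (name, minimum tech tier, base payout in credits)
        ("Rescue a stranded Kerbal from orbit", 1, 40_000),
        ("Deliver repair parts to the stranded ship", 2, 60_000),
        ("Fly the repaired ship to its destination", 2, 80_000),
        ("Land and perform surface measurements", 3, 100_000),
        ("Return the crew and science home", 1, 50_000),
    ]

    def generate_mission(player_tech_tier, steps=4, seed=None):
        rng = random.Random(seed)
        available = [o for o in OBJECTIVES if o[1] <= player_tech_tier]
        chain = rng.sample(available, k=min(steps, len(available)))
        # Later steps pay progressively more, so finishing the whole set sets you up for a while.
        return [{"step": i + 1, "objective": name, "payout": int(base * (1 + 0.25 * i))}
                for i, (name, _tier, base) in enumerate(chain)]

    for step in generate_mission(player_tech_tier=3, seed=42):
        print(step)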
  14. Oh, so you're talking not only about changing the time step, but also limiting maximum warp? Yeah, that's workable. Though it might still be annoying if you have a craft in an elliptical orbit trying to raise it with ions over multiple orbits, and your warp keeps dropping every few days/weeks/months of game time as that probe dips close to the primary.

I mean, in a perfect world, if this was written in something that compiles to native with good optimization and by someone who's an expert in numerical methods, we wouldn't be having this discussion. I've made an argument in an older thread that KSP2 should be runnable on a Switch if it was written from scratch specifically to be optimized for that hardware. I mean, that presumption is a fantasy, but if the resources were available, yeah, it can be done. Problem is, KSP2 is a Unity game written in C# by Unity devs, most of whom, at best, implemented a Verlet or simple RK4 integrator at some point by copying it from some manual. You only have to look at how bad time warp was in KSP, even without any physics at all, to see why "theoretically solvable" and "practically solvable in a given environment" aren't the same thing. Intercept does have a physics programmer who, hopefully, is a bit more experienced in that sort of thing. Though I've seen my share of "we just put it into Matlab and let it do its thing" in academia. But even if we assume that their programmer is actually good at writing optimized numerical code, this is a considerable amount of work to implement, debug, and maintain as new features are introduced throughout the development cycle. And the amount of physics-specific work on KSP2 is almost staggering for such a small team. There is the new craft simulation, various optimizations for large craft/stations, physics sync for MP, continuous collisions, etc. There's way more work there in total than a single person can handle in 2 years, especially for someone for whom KSP2 is going to be their first game dev experience. So the question shouldn't be, "Can you write a 1M+ time warp for a well-written custom engine?" It should be, "Could you pick up a Unity game with code unfamiliar to you and implement a 1M+ time warp in less than a month without breaking anything?" And that's a question that is way harder to answer, because we don't know the exact skillset of the people involved, how the rest of the game handles time steps, etc. And I don't have certainty that it will be implemented well. So the limitations of the time warp implementation can still very much spill into very real constraints they'll have to follow in terms of interstellar distances.

There are optimizations you can take here too. If you take an upper bound on the curvature of the target's trajectory and the ship's trajectory, you can inflate the SoI and do a sphere-line test as early rejection (sketched at the end of this post). Most of the time, either the time step will be short or the curvature small, resulting in an SoI only slightly larger than the original, meaning you can reject the majority of checks early and only have to do iterative tests for a few objects. This adds a lot of complexity - see the paragraph above - and still isn't free, but it goes back to, "If you had the resources to do this properly, it wouldn't be a problem."

Well, that's my point. In KSP, in order for this to happen, you have to be moving fast and encounter either the outer edge of an SoI or a very small SoI. In that case, unless your trajectory would have taken you through the body, the deflection is tiny, and the overall trajectory isn't altered much.
For in-system flight, this is a tiny annoyance that might require a small correction burn somewhere. In KSP2, for a torch ship, thrust will be the dominant force almost always. This allows for a much stronger scattering effect due to an SoI encounter. For a worst case, picture a situation where you graze an SoI with the trajectory curvature due to thrust being close to the curvature of the SoI boundary. Instead of being inside the SoI for 0 or 1 ticks, which won't make a difference, it's now between zero and many. The diversion can be sufficient to bring you even closer to the gravity source, meaning the projected paths for short and long integration steps are going to be very different. But even dipping into the SoI briefly can apply an unexpected gravity impulse very early in the voyage. Now, if you are very careful about applying the same logic to your planning trajectory and the simulation, that's fine. A bit of chaos doesn't hurt anyone. But if another ship under power dipped close to its primary, dropped you to lower warp, and that change in time step caused the SoI transition to register, your ship on an interstellar voyage can get deflected from its planned trajectory. It will still likely be a fraction of a degree, but at interstellar distances, that's the difference between your ship going to a nearby star and your ship going into empty void. And making mid-transfer course corrections for a torch ship is a rather different order-of-magnitude problem than an in-system, mostly ballistic transfer. Again, not an unsolvable problem by any means, but a huge thorn that didn't exist in KSP and that the KSP2 team will have to deal with.

There's a C version of that book, btw. Super useful. I don't know if they've adjusted the algorithms for that sort of thing significantly, though. I've mostly been using it as a reference for linear algebra and polynomial integration.
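Here's roughly what I meant above by the inflated-SoI early rejection. Big simplifying assumptions: over one step, both the ship and the SoI's body are treated as moving in straight lines, so their relative motion is a straight segment, and the SoI radius gets padded by an upper bound on how far either trajectory can curve away from a straight line during that step. Just a sketch, not anyone's actual implementation:

    # Early rejection for SoI encounters over one warp step. Work in the body's frame so
    # the relative motion is approximately a straight segment, then test that segment
    # against the SoI inflated by a bound on the curvature of both trajectories. Only if
    # this cheap test passes do you run the expensive iterative encounter search.
    import numpy as np

    def segment_point_distance(p0, p1, c):
        """Closest distance from point c to the segment p0->p1."""
        d = p1 - p0
        denom = float(np.dot(d, d))
        t = 0.0 if denom == 0.0 else float(np.clip(np.dot(c - p0, d) / denom, 0.0, 1.0))
        return float(np.linalg.norm(p0 + t * d - c))

    def might_enter_soi(ship_p0, ship_p1, body_p0, body_p1, soi_radius, curvature_bound):
        """curvature_bound: max deviation (m) of either trajectory from a straight line this step."""
        rel0 = np.asarray(ship_p0, float) - np.asarray(body_p0, float)
        rel1 = np.asarray(ship_p1, float) - np.asarray(body_p1, float)
        return segment_point_distance(rel0, rel1, np.zeros(3)) <= soi_radius + curvature_bound

    # Ship passing 10,000 km from a stationary 5,000 km SoI, with at most 200 km of
    # curvature this step: rejected early, so no iterative check is needed.
    print(might_enter_soi([0, 0, 0], [1e8, 0, 0], [5e7, 1e7, 0], [5e7, 1e7, 0], 5e6, 2e5))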
  15. You are assuming only one craft. This breaks down hard when you have multiple craft on different trajectories, each one requiring its own integration step. You can take the smallest requested time step of all powered craft, but what that results in is one ship diving toward the star to pick up speed via Oberth killing performance for everything else that's out in free space. You can also try to do break-step integration for all of the different craft, and that's its own little nightmare to manage. It's like painting a curve on an n-dimensional grid. And then you have the SoI checks. If you are on the outskirts of the star system, the gravity from the primary might be low enough to warrant a long integration step, but what if that step brings you inside the SoI of an outer planet? So now you have to select an integration step, do the analytical solution for the trajectory within the step, do the sweep of the rocket position against the sweep of the SoI running on its own trajectory, repeat that for every craft under power, and then update everything either in break-step or based on the shortest of the time steps. And then you still need to figure out where within this time step you are planning to update unpowered ships, colonies, and supply routes, because those also have to be ticking at the same time. None of it is impossible, but there's a lot going on, with many tricky edge cases. This isn't your textbook integration problem. It's algorithmically complex and numerically hard. And solving it poorly will result in either artifacts in navigation or variable performance issues. So you really can't take any questionable shortcuts on this. It has to be done right. And the higher you go on the time warp multiplier, the harder it gets. So I do wonder at what threshold Intercept is going to call it.
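A stripped-down illustration of that "everyone waits for the slowest craft" problem. The step heuristic below (a small fraction of the local orbital period) is a made-up stand-in for a real error-controlled integrator, and the gravitational parameter is just Kerbol-like, so treat the numbers as illustrative:

    # If every powered craft advances in lockstep, the global step is the minimum of the
    # per-craft requested steps, so one craft skimming the star forces tiny steps on all.
    import math

    MU_STAR = 1.17e18  # m^3/s^2, roughly Kerbol-like; illustrative only

    def requested_step(radius_m, fraction=0.01):
        # Ask for ~1% of the local circular-orbit period as the integration step.
        return fraction * 2 * math.pi * math.sqrt(radius_m ** 3 / MU_STAR)

    craft_radii = {
        "star diver":        2.0e9,   # deep in the gravity well for an Oberth burn
        "outer cruiser":     9.0e10,  # out around Jool-ish distances
        "interstellar ship": 5.0e12,  # far out in the dark between systems
    }

    steps = {name: requested_step(r) for name, r in craft_radii.items()}
    global_step = min(steps.values())
    for name, dt in steps.items():
        print(f"{name:18s} wants dt ~ {dt:12.0f} s, forced to {global_step:6.0f} s "
              f"({dt / global_step:8.0f}x more steps than it needs)")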
  16. The problem is precisely that the moment you add thrust, these "whoops, it missed the boundary" moments become "whoops, the integration was complete garbage and now the ship is on a trajectory drastically different from what was shown in the planner." It takes it from an annoyance to completely unplayable. Granted, you could apply a shorter time step to ships under power and go for coarser steps for anything ballistic - and you can even do continuous collisions to avoid some of the problems from KSP - but this is all very far from trivial. Simply increasing the time step under high warp is not going to be a good solution for KSP2. I'm not saying Intercept isn't going to throw in the towel and simply do that, but a good solution requires a lot more nuance, and I hope they at least attempt it.
  17. Even during takeoff and landing there is enough authority to counteract rolling torques due to a bad CoM. What you don't want is to have a lot of that torque already there in case of an engine failure, where you need the extra authority to keep the single operating engine from flipping the plane over. So it's still very important to make sure your CoM is inside the envelope, but yeah, if people move side-to-side during cruise, it won't matter at all. Fore-and-aft movement is a major problem even in cruise, not because you wouldn't have authority, but because you lose dynamic stability. In particular, the CoM shifting too far back can easily be catastrophic. It creates a pitch-up tendency, which only grows if not immediately corrected. A plane with the CoM too far aft cannot be recovered once stalled. I'm not aware of any passenger planes going down because of that, but cargo planes with unsecured cargo that shifted in flight absolutely have.

Given the state of understanding of hypersonic flight at the time? Very doubtful. I mean, how early in the 50s? Sputnik 3 in '58 was already over a ton. That's more than enough for a manned flight to orbit. You just don't have the tonnage for an engine that brings you back. If we are talking about the early 50s, there was tech for suborbital flights, but the engines you needed for orbital flight were still in development. As much as the V2 demonstrated the potential, the fundamental design is not scalable to an orbital rocket. A lot of the components had to be completely reimagined, and that takes time. Getting from the V2 in '45 to Sputnik in '57 was already very fast. Too many things lined up in the 60s tech-wise. Better understanding of hypersonic flows, better understanding of rocket engines, better materials, computers. Can't forget computers. Even if the US hadn't dragged its feet early on and had beaten the USSR to the first man in orbit, and even if Apollo 1 hadn't gone up in flames and luck had been entirely on the US side, it'd still be just a few years sooner.
  18. Agreed! But there's a lot of room there. 0.42 ly would still put the nearest star at more than 50,000 times the distance to Jool. As discussed at length, I think that's on the outer edge of playable with ~1G accelerations. On the other hand, anything less than 500x that distance might not feel like there's a real vastness of space in between, requiring a considerable change in tech to reach. But this still leaves a huge amount of room in between. And whether you want to go high or low in that range will depend on a number of factors we don't really know: what the tech tree progression will be like, how easy it is to get to 1G or above, and how exactly time warp will work.
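For reference, the arithmetic behind that factor, using roughly 6.88e10 m for Jool's semi-major axis (treat the exact figure as approximate):

    # 0.42 light-years versus Jool's orbital distance.
    LIGHT_YEAR_M = 9.4607e15
    JOOL_SMA_M = 6.88e10      # roughly Jool's semi-major axis around Kerbol

    ratio = 0.42 * LIGHT_YEAR_M / JOOL_SMA_M
    print(f"0.42 ly is about {ratio:,.0f} times the distance to Jool")  # ~58,000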
  19. It's not really a translation issue. It's purely historical. The 200th Brigade of the 14th Army Corps of the Russian Navy is part of the Coastal Troops arm of the Navy. That arm of the Navy was formed relatively recently from parts of coastal defense artillery, motorized brigades, naval infantry (marines), etc. Some of these forces traditionally belonged to the Red Army, including some of the motorized brigades, and so they have retained their Army organization style. Hence, the 200th Brigade is still included in an Army Corps.
  20. It's a Unity game, and so it is written in C#. C# compiles to CIL, which does a lot of dynamic linking, so it holds on to a lot more symbols than you'd think. A decompiler does a pretty good job of converting compiled C# back into source code. You lose a lot of local variable names that you have to guess the meanings of, but the function and class names are preserved, so most of the time it's not actually that hard to figure out what's going on. I decompiled KSP a few years ago to look around, mostly at how forces are applied to the craft. Now, it's worth mentioning that, at least at the time, the code was also obfuscated, but they used a standard, off-the-shelf obfuscator for which there was a good deobfuscation program out in the wild. Depending on what is used now, this might or might not still work.

I mean, sure, but KSP struggled with 100k. We're talking about doing everything KSP did across multiple star systems, potentially for a lot more objects, getting at least some form of colony update tick in there, and then also doing all the extra math for all the ships traveling under power. Sure, I expect KSP2 to be much better optimized overall, but even then, simply maintaining that 100k time warp as the maximum is a challenge. There are also gameplay reasons why higher time warp might not be the solution you're looking for. Even in KSP it feels a bit odd to just skip years of time for a longer mission. Now we're talking decades, and in a game where a lot more is happening over time. Shouldn't you be collecting science, updating your tech, building colonies? Simply losing this much time to a warp doesn't feel great.

While in time warp? Building a ship in the VAB might actually be OK. You'd be sharing CPU resources, but the VAB isn't too expensive to run. Anything else is problematic, though. How do you edit a colony that's being simulated at 100k warp? Intercept is likely going to need to implement a special time step for that already, one that assumes that no parameters change for a long time. Now you start introducing changes. At best, it's hard to implement. At worst, it's a source of endless bugs. I wouldn't touch that. And of course, while you can launch new missions, you'll have to drop out of warp to do the actual launch, and then the time slices for that mission are likely to be so short that you won't be making progress on the interstellar mission. Yes, I suspect some players will have a bunch of interstellar missions in flight at once with various alarms set, but this is turning from a rocket game into an ATC game. Some people will enjoy that, and more power to them. You just can't rely on this to solve your gameplay design challenges.
  21. If you do the math, it takes about a year to get to 1c at 1g. (Which makes 1 ly/yr a very convenient unit of acceleration!) If we ignore relativity, a trip to Proxima Centauri takes just over 4 years at 1G. At 100k time warp, that would still be 21 minutes of real time. This is starting to get into bad gameplay territory. This isn't a flight sim where you have to actually fly the plane and long voyages make sense. If you put a rocket into time warp and have to wait for 20 real-life minutes, it's not a good experience. And if your acceleration drops to 0.1G, that goes up to over an hour of real time under the maximum warp that was available in KSP. And this is the nearest star. This is just not practical. Distances between stars will absolutely have to be at least an order of magnitude smaller than in the real world. That would bring it to KSP scale. As mentioned earlier, that's still a 7 minute trip to the nearest star under 100k time warp and constant 1G acceleration. This is getting into playable territory, similar to the outer planets of KSP, but it's still not great. Distances might have to be shrunk even further. Especially if time warp under thrust has to run at a lower multiplier. So yeah, 1G continuous seems like a lot when you're working on the scale of planets in the Solar System, but the moment you go to interstellar distances, it's still going to take a very long time to get anywhere.
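The numbers behind that, worked out. Assumptions: no relativity, accelerate to the midpoint and flip to decelerate (so trip time is 2*sqrt(d/a)), and 100,000x as the warp ceiling, same as old KSP:

    # Non-relativistic brachistochrone: accelerate to the midpoint, flip, decelerate.
    # Trip time t = 2*sqrt(d/a); real time spent watching the warp is t / warp_factor.
    import math

    LY = 9.4607e15              # meters per light-year
    G0 = 9.81                   # 1 g in m/s^2
    YEAR = 365.25 * 24 * 3600

    def trip(distance_ly, accel_g, warp=100_000):
        t = 2 * math.sqrt(distance_ly * LY / (accel_g * G0))
        return t / YEAR, t / warp / 60          # (game years, real minutes at warp)

    for label, d, a in [("Proxima, real scale, 1 g  ", 4.24, 1.0),
                        ("Proxima, real scale, 0.1 g", 4.24, 0.1),
                        ("Proxima at 1:10 scale, 1 g", 0.424, 1.0)]:
        years, minutes = trip(d, a)
        print(f"{label}: {years:4.1f} yr of game time, {minutes:5.1f} min at 100,000x warp")
    # Also: time to coast up to c at 1 g, ignoring relativity, is c/g ~ 0.97 years.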
  22. Stars other than the Sun still provide a good amount of light. I don't know if you've ever been in the middle of absolute nowhere, with zero light pollution, on a clear moonless night. It's dark, but it's not pitch black. Once your eyes adjust properly, you can see the environment around you and even pick up hints of color. Everything is very muted, and shapes do blend together, but they're there. Looking at the stuff we got to see in previews, I'm pretty sure KSP2 is being rendered with HDR in mind. That means you can adjust exposure, simulating the eye's adjustment to light levels, and even filter colors based on light intensity, giving everything that muted look. I don't know if that's how Intercept will choose to handle it, but it's certainly an option, and it would make the game quite playable far from the Sun. That said, it will certainly be a good incentive to place some additional lights around your interstellar ships to illuminate anything you might need to interact with during the voyage.
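The exposure trick, roughly: scale the linear scene radiance by an exposure factor (the "eye adjustment") before tone mapping it down to displayable range. This is one common way of doing it, not necessarily what Intercept is doing:

    # Simple HDR exposure + tone mapping. The exponential curve and the radiance values
    # are illustrative; the point is that raising exposure keeps a starlit scene visible.
    import math

    def tone_map(radiance, exposure):
        return 1.0 - math.exp(-exposure * radiance)

    sunlit_scene  = 1.0      # arbitrary linear radiance units
    starlit_scene = 1e-4     # the same scene lit only by starlight

    print(f"fixed exposure:   sunlit {tone_map(sunlit_scene, 3):.3f}, "
          f"starlit {tone_map(starlit_scene, 3):.5f}")                      # crushes to black
    print(f"adapted exposure: starlit {tone_map(starlit_scene, 1e4):.3f}")  # visible, muted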
  23. This was in reference to particles whose positions are recorded exactly. Their momentum is infinitely indeterminate, meaning their speed can be anywhere between 0 and c.
  24. All of these things have to be checked for during every time step, however, as you are computing the trajectory. It doesn't matter that any of these conditions will simply end warp; you still have to check where that happens, and where you cross SoI boundaries depends on your trajectory computations. So you have to do a single time step of the trajectory update, perform all of the SoI checks, adjust the ship's mass for the fuel spent, then go to the next step (see the sketch at the end of this post). So if the trajectory computation is complex and requires more steps, you have to do SoI checks more frequently. And because SoIs move on their own rails, these checks aren't trivial. Fuel checks don't depend on the trajectory, but at least the way KSP does them, they aren't trivial either.

Consider a core stage + booster in KSP with shared plumbing. The main engine is burning fuel from the core stage and the booster, while the booster burns fuel from the booster only. When the booster runs out of fuel depends on the consumption of the main stage. If you are doing this check incrementally, you have to check your entire ship's plumbing on every single tick of the simulation. Yes, you can pre-solve fuel consumption and know at what time which engines will cut off, and it's something that should be done, but that requires special code to write, verify, and maintain, since the code for fuel checks during normal flight is designed to work on a request system, because it has to deal with variable thrust and ISP.

And this is the point I'm trying to make: not that it can't be done, but that it's a complex problem with a lot of tricky edge cases and a lot of custom-written code just for the purpose of solving warp under thrust. That's all we're doing here. Computing the trajectory and verifying that the ship has the resources to stay on said trajectory and hasn't encountered any obstacles. But you do still have to do all these checks, and you do have to mix them in with trajectory updates, because fuel consumption affects the ship's mass, which affects the trajectory, which affects whether or not you cross an SoI and, if so, at what exact position. And the trajectory computations here are about the most expensive kind you get in simple mechanics. Gravity and thrust don't mix well computationally. So you end up with very short time steps to get an accurate trajectory. And if you have shorter time steps, you need to do more checks on fuel and SoI, because, again, you need these for every step.

And as a point of comparison, KSP had all the time-warped ships on rails. They just follow conic patches and check for SoI boundaries. No collision checks - you simply drop out of warp if you get within a certain distance of planets and moons - and no fuel checks, as the engines aren't running. And KSP still struggled at 100k warp with enough space junk floating about. In KSP2, this is so, so much worse. Of course, KSP's trajectory updates were very poorly optimized, so there are going to be a lot of wins right out of the gate from simply writing better code, but at the end of the day, it's still a hard problem for a tiny developer team.

This isn't a call for concern or anything. From the very inception of KSP2, this was one of the core problems that needed to be solved, and I don't think anyone on the team would underestimate it, so I'm sure the necessary work has been put into it. But some corners might have had to be cut. 100k might be a little too much to expect. And if the time warp is any less than 100k, you basically have to pull the interstellar distances in from even the traditional 1:10 scale.
Alternatively, there might be limits on how many ships we can have in flight under power during warp, which will make trip planning a bit harder. Or any number of other potential limitations or sacrifices made to make this feature work.
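To make the step loop I described above concrete, here's a bare-bones sketch under big simplifications: a single gravity source, semi-implicit Euler, constant thrust and mass flow, a pre-solved burnout time standing in for the full fuel-flow logic, and SoI bodies supplied by a callback. None of this is KSP2 code; it's just the shape of the per-step work:

    # Warp-under-thrust stepping: integrate gravity + thrust, update mass for fuel burned,
    # and check SoI boundaries and engine cutoff on every step. A real implementation would
    # need a higher-order integrator, SoIs on rails, and proper fuel-flow resolution.
    import numpy as np

    def warp_step_loop(state, mu, thrust_n, mdot, burnout_t, soi_bodies, dt, t_end):
        """state: dict with 'r', 'v' (np arrays), 'm' (kg), 't' (s).
        soi_bodies: callable t -> list of (center_position, soi_radius)."""
        while state["t"] < t_end:
            r, v, m, t = state["r"], state["v"], state["m"], state["t"]
            a = -mu * r / np.linalg.norm(r) ** 3                  # gravity of the primary
            if t < burnout_t:
                a = a + (thrust_n / m) * v / np.linalg.norm(v)    # thrust along velocity
            v = v + a * dt                                        # semi-implicit Euler
            r = r + v * dt
            m = m - (mdot * dt if t < burnout_t else 0.0)         # fuel spent this step
            state.update(r=r, v=v, m=m, t=t + dt)
            for center, radius in soi_bodies(state["t"]):         # SoI boundary checks
                if np.linalg.norm(r - np.asarray(center, float)) < radius:
                    return "drop out of warp: SoI transition"
            if state["t"] >= burnout_t > t:                       # pre-solved engine cutoff
                return "drop out of warp: engine cutoff"
        return "step budget reached"

    # Example: a 20 t craft in a Kerbin-like gravity well, burning for 600 s at 120 kN.
    state = {"r": np.array([1.3e6, 0.0, 0.0]), "v": np.array([0.0, 2.2e3, 0.0]),
             "m": 2.0e4, "t": 0.0}
    print(warp_step_loop(state, mu=3.53e12, thrust_n=1.2e5, mdot=20.0,
                         burnout_t=600.0, soi_bodies=lambda t: [], dt=1.0, t_end=3600.0))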