Everything posted by K^2

  1. I don't think it's an ethics debate. Even if you were to consider life that could evolve as an ethics question, there is basically no chance of that happening in either ecosystem. Both are only going to become more inhospitable as time goes on if left to their natural course. And treating simple life from the perspective of ethics is silly. Your immune system commits genocide against simple organisms on a daily basis. So the question is purely utilitarian. Is there something to be gained from studying these organisms in their natural environment? Or can we collect samples, catalogue them, and lose little to nothing from complete replacement? I think by the time we are actually in a position to terraform either environment, the answer will decidedly be the latter.
  2. If it's very clean, naturally formed ice, yes. Though real icy bodies tend to have lower albedos, as their surfaces tend not to be that pristine. It's fine for an estimate, though.
You are playing very loose with heat conduction there. It takes time for a linear gradient to establish. As materials start to warm up, the gradient starts out a lot steeper, meaning the surface temperature is going to be higher, and the skin layer is going to be very close to peak. The boundary condition here is nasty, so Mathematica told me to get lost, but I'm curious now, so I'll set up a quick simulation tomorrow. Looking at the situation with real bodies at ~1 AU, though, I don't expect it to be even close to 270K. But I owe you some numbers.
Yeah, that's kind of the bigger problem here. The evaporation rates for ice I was able to find vary between about 1mm/hour and 10mm/hour in vacuum, but yeah, it's significant. And while a temperature drop drastically reduces the rates, you have to go decidedly cryogenic if you want the ice to last for anything like a geological time scale.
Between liquid water and atmosphere. Earth is pretty good at distributing the heat. Water can absorb a lot of the day/night variation, and atmospheric circulation can actually help move heat from the equator to the poles, so that it can be radiated away from a larger area. So under the right conditions, the temperature on Earth can be prevented from rising much above average, and the average can be well below freezing if the planet is covered with ice. The second part is the aforementioned evaporation. If ice evaporates into an atmosphere, and your entire planet is frozen, that ice is going to get re-deposited somewhere else. Some amount of water will always be in the atmosphere as moisture, but it's not a significant amount compared to ice on the surface. On the other hand, if you have no atmosphere, the vapor is going to be blown away by the solar wind. So anything that evaporates is effectively gone for good. Yes, some amount of re-deposition is still going to happen, but you are going to be losing most of the ice to the vacuum of space unless it's really, really cold and the evaporation rate is negligible.
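Since I owe numbers anyway, here is a minimal sketch of the kind of quick simulation I have in mind: explicit 1D heat conduction into an ice slab, with absorbed stellar flux and radiative cooling at the surface. The material constants, flux, and albedo are assumed round numbers, so treat the output as illustrative only.
```python
import numpy as np

SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4
k     = 2.2              # thermal conductivity of ice, W/(m*K), approximate
rho   = 917.0            # density of ice, kg/m^3
cp    = 2050.0           # specific heat of ice, J/(kg*K)
alpha = k / (rho * cp)   # thermal diffusivity, m^2/s

flux = 1360.0 * (1 - 0.5)      # absorbed flux at ~1 AU with an assumed albedo of 0.5, W/m^2
dx, depth = 0.01, 2.0          # 1 cm layers, 2 m slab
n = int(depth / dx)
dt = 0.4 * dx**2 / alpha       # stable explicit step for the diffusion part

T = np.full(n, 150.0)          # start the whole slab cold, K; the bottom layer stays fixed

t, t_end = 0.0, 24 * 3600.0    # one (Earth) day of constant illumination
while t < t_end:
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])   # interior diffusion
    # surface layer: absorbed flux in, thermal radiation out, conduction into the layer below
    q_net = flux - SIGMA * T[0]**4 - k * (T[0] - T[1]) / dx
    T[0] += q_net * dt / (rho * cp * dx)
    t += dt

print(f"surface temperature after a day in the sun: {T[0]:.0f} K")
```
The point of the exercise is that the surface races toward its radiative equilibrium long before a linear gradient through the slab has a chance to establish.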
  3. But then you'll have to explain gravity, which is even harder. And before anyone suggests putting a chunk of neutron star or a tiny black hole in the center to generate the gravity, that would put too much stress on the shell and cause it to collapse. So unless Ringworld Engineers have built Minmus out of scrith just to mess with Kerbals, I don't think that's going to work.
  4. That's not how it works. At relevant time scales, more than a few meters of rock might as well be perfect insulation, so for purposes of thermal analysis the surface can be treated as an infinite flat plane in equilibrium with the stellar flux and the background of the cosmos. While the average surface temperature without greenhouse gasses and with high albedo can be below freezing, without an atmosphere you also have very high fluctuations between day and night, and the daytime high is going to be about 70% above the average equilibrium, which is way, way beyond freezing temperatures. Yes, near the poles the situation is different. The stellar flux incidence angle is very low and it's possible to have regions of near-permanent shade. But Minmus has "ice" lakes in equatorial regions. That's flat out impossible by a margin that's not even close.
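A quick back-of-envelope of the kind of numbers involved, assuming roughly Earth-like flux at ~1 AU and a fairly bright surface (albedo 0.5); both values are assumptions for illustration:
```python
# Equilibrium temperatures for an airless body: subsolar point vs. global average.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

flux, albedo = 1360.0, 0.5        # assumed ~1 AU stellar flux and a bright surface
absorbed = flux * (1 - albedo)

T_subsolar = (absorbed / SIGMA) ** 0.25        # local equilibrium where the star is overhead
T_average  = (absorbed / (4 * SIGMA)) ** 0.25  # same flux spread over the whole sphere

print(f"subsolar equilibrium: {T_subsolar:.0f} K")  # ~331 K, well above freezing
print(f"global average:       {T_average:.0f} K")   # ~234 K, below freezing
```
The average can look comfortably frozen while the daytime surface anywhere near the equator is nowhere close.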
  5. It's not about atmosphere. Given its proximity to Kerbol, Minmus is too hot for ice. Even if it's cooled by sublimation of said ice, it'd disappear far too quickly. And it cannot be replenished by external sources, as that would generate additional heat. In short, icy Minmus just isn't possible. But I do like the glassy Minmus solution.
  6. So that's Charr, Gurdamma, and Glummo confirmed for Debdeb system, I guess, with Donk and Merbel being two of the moons there as well. I hope that Ovin is in another, yet unnamed star system, otherwise, Debdeb is getting all the rings.
  7. The underlying problem is not transportation, but the die shortage. There are only so many factories in the world capable of processing silicon wafers into chips of various quality. The process is energy, resource, labor, and equipment intensive. That means that coming up short on even one of these can cause problems. COVID has acted as a catalyst for a perfect storm of shortages across the sector, and now the demand has backed up. The problem is that absolutely everyone with a need for a high end chip is competing over the same manufacturing capacity. PC and console components (RAM, CPU, GPU, SSDs...), cell phones, car computers, components for servers, routers, and gateways, and a bunch of other tech you might not even think of day-to-day. Some of the lower end stuff can be shifted to older processes, but a lot of it can't, simply because it was designed with more modern components in mind. The recent releases of new generations of consoles, GPUs, and CPUs kind of lined up to bring the problem to the attention of the consumer, but just because demand for these died down a bit and supply had a chance to catch up a little doesn't mean the problem went away. On the contrary, we are expecting the situation with some of the existing manufacturing to get worse, as some areas are now also impacted by drought and labor shortages, which can also make it difficult to replace some of the equipment used on production lines. There are new facilities being built, and supply will eventually catch up. But some people expect another year or two of shortages and outrageous component prices.
  8. I would prefer each SoI to have its own coordinate system and the attitude indicator to just change with SoI change just like the altitude indicator does now.
  9. Yes, a little. The actual semiconductor physics of it is complex - I've spent a year on a condensed matter track before switching to particle theory, and most of it is still bizarre to me - but in terms of practical consequence, what you have control over is the clock frequency, the voltage, and the temperature of the die. The thing you actually care about with performance is the clock frequency. The number of instructions performed by the CPU, outside any time wasted waiting on memory or whatever, is a multiple of the CPU clock frequency. (You can also overclock memory and the GPU, with their own limitations and caveats.) The problem is that there is a limit to how fast transistor states can switch, so if you set the clock speed too high, you end up with errors. You can compensate for that by increasing the voltage applied - kind of the equivalent of throwing switches harder to get them to move faster to a new position - but there's a limit to that before you risk physical damage to the circuits. (Again, not unlike physical switches.) Finally, cooling the die helps to reduce thermal bounce, which both allows you to go a bit higher on the voltage and helps a given voltage settle the transistors faster. So if you can cool the die more, you technically can get away with even higher clock speeds, which is why you see all the records set with liquid nitrogen setups. That really only helps you squeeze out the last drops of performance, though, so just a good cooling setup is adequate for most uses.
But where it comes back to power consumption is that both increasing the voltage and increasing the clock speed increase the amount of power consumed by the chip. And 100% of the power consumed becomes heat, so even if you aren't trying to drop the temperature down, overclocking usually requires a good heat management setup just to prevent overheating. And laptops are disadvantaged on both of these. You have a limited amount of energy available to power the CPU - certainly on batteries, but even when plugged into a wall, that portable power supply can only do so much - and you are limited on the thermals as well. A laptop just doesn't have room to fit a good cooling setup. Because of that, laptop CPUs are usually designed to be more power efficient and less performant, but they do usually come with multiple power modes. You need very little CPU power to browse the internet, and being able to throttle down is important for conserving battery power. What usually happens is that the clock speeds go down, the voltage might adjust down as well, and some cores might become disabled to make the batteries last as long as possible, only kicking into high gear when you're playing a game or something.
Even with a laptop, the manufacturers will usually stay on the safe side of keeping the thing reliable, so you can usually squeeze a bit more performance out of the system, especially if you're plugged into the wall. Whether it's easy to configure will depend on the motherboard and CPU used. I have been able to overclock some laptops a little bit to make games run better, but there is usually not a lot of room to work with before you start running into heat throttling. That is, the CPU detecting that it's overheating and starting to drop the clock speeds. (If it continues, it may shut down entirely.) So if you want to tinker with performance, you really ought to be doing that on a custom-built desktop actually designed to take the punishment.
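To put rough numbers on the voltage/clock/power relationship: dynamic CPU power scales roughly as P ∝ C·V²·f. The baseline wattage and the ratios below are made-up illustrative values, not figures for any particular chip.
```python
# Why overclocking hits the heat budget so quickly: dynamic power ~ C * V^2 * f.
# Static leakage comes on top of this in real chips; numbers here are illustrative.
def dynamic_power(base_power_w, v_ratio, f_ratio):
    """Scale a baseline power figure by voltage and frequency ratios."""
    return base_power_w * v_ratio**2 * f_ratio

base = 65.0  # assumed baseline package power, watts
# +10% clock at stock voltage vs. +10% clock with a +10% voltage bump:
print(f"{dynamic_power(base, 1.00, 1.10):.1f} W")  # ~71.5 W
print(f"{dynamic_power(base, 1.10, 1.10):.1f} W")  # ~86.5 W, about a third more heat to remove
```
The clock bump alone is modest; it's the voltage increase needed to keep it stable that really blows up the heat you have to get rid of.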
  10. Oh, man, layers upon layers. Optimization is a huge topic. Very broadly speaking, if we limit it to games, you are looking at one of the following categories.
1. Algorithm improvements. Sometimes you can just rearrange how operations are performed and make the code faster. This can literally be reducing the number of instructions that have to be executed, reducing time lost to memory latency, or reducing time spent interacting with the OS or hardware. Usually you don't see big wins in something like this after the game is released, but every once in a while there's a stupid mistake that somebody missed that makes a huge difference.
2. Threading optimizations. Sort of related to the above, but when you are working with multiple threads running in parallel, you sometimes lose time on threads having to wait for each other. So by simply rearranging when operations are performed you can sometimes get huge performance wins. Again, usually you get that out of the way before the game is released, but sometimes improvements like that can come in after release. A particular case is if the code was originally optimized for a very specific core count (*cough*consoles*cough*) but later re-optimized to cover a broader range of possible CPUs.
3. Removing unnecessary code. Things like writing logs can really slow performance down, and sometimes that's accidentally left in the final game. Finding and removing that stuff helps, and it's more common than you'd think.
4. Engine/library/driver improvements. Especially if you're using a 3rd party engine like Unreal or Unity, just because you're done working on the game doesn't mean they're done improving the engine. Sometimes it makes sense to switch to a new version of the engine, and sometimes it runs a lot better. (Also, sometimes worse, but that's what you get for relying on 3rd party software.) Likewise, an update to something like a graphics driver might fix something your game has been relying on, in which case it's a welcome surprise of better performance. It's rare, but it happens.
5. Hardware improvements. Just because your hardware didn't change doesn't mean the code wasn't updated to make better use of hardware features you already have. This could be done with an explicit change to the game code or be picked up as part of the engine or library updates as in the previous category. In either case, you end up with your hardware better utilized, giving you better performance.
6. Code optimization. If the computer executed the code exactly the way it's written by the programmer, things would run at least ten times slower than they do. With a modern compiler, code is first converted into some sort of internal representation, with the compiler removing anything that's found to be unnecessary and simplifying some loops and function calls. Then the representation is converted into machine code for the particular target architecture, and the compiler removes redundancies, shifts things around to make better use of registers, and may even rearrange the order of instructions to make them fit better into the pipeline. When the CPU executes instructions, it will also convert them into micro-code and potentially re-arrange them to improve execution. Now, the programmers have very little control over any of that, if any. But updates to the compiler and associated libraries can result in better code produced by simply recompiling the project. Likewise, the way your CPU converts instructions into microcode is subject to firmware updates.
Some of the optimizations also have to be enabled, and again, you'd be surprised how often games ship with some of the code unoptimized. Obviously, if it was a global disable, somebody would notice, but a few unoptimized functions in a core loop can really slow the game down. There are tools that let you examine what's going on with the code. We can look at time spent on specific function calls, how busy individual cores on the CPU are, when various calls to OS and hardware are made, how much time has been spent waiting for other resources, and so on. But it's still a lot to take in all at once, so learning how to improve your own code and how to look for problems in code handled by someone else is a huge part of being a games programmer. A minimal example of that kind of profiling pass is sketched below.
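As that minimal example: here's what a first look can be like with Python's built-in profiler. Engine profilers are fancier, but the workflow of sorting time by function is the same; the function names are just stand-ins.
```python
# Measure where the time goes, per function, and print the heaviest entries.
import cProfile
import pstats

def expensive_inner():
    return sum(i * i for i in range(100_000))

def game_tick():
    # stand-in for a frame's worth of work
    return [expensive_inner() for _ in range(20)]

cProfile.run("game_tick()", "tick.prof")        # record a profile of one "tick" to a file
stats = pstats.Stats("tick.prof")
stats.sort_stats("cumulative").print_stats(5)   # top 5 entries by cumulative time
```
Most of the job is reading output like that and deciding which of the categories above the hot spot falls into.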
  11. You can start losing surprisingly heavy elements if you dip close enough to the star early enough in the formation, through a combination of heat and gravity. I'll grant you that a planet in that situation is on its last legs before being swallowed outright, and it would take a miraculously well-timed boost from something else falling into the star, but out of all the stars in all the galaxies... Unless that something else got ejected or absorbed into the primary. Again, we have no idea what sort of dynamics that system could have been going through before settling down. A merging binary star system, for example, can produce an absolutely wild distribution of planets and compositions which will otherwise look tame and normal a few billion years after the stars merged. I wouldn't expect to encounter these kinds of systems very often, but if we give the developers the fiat of picking an interesting system to exemplify in the game, none of this goes outside plausible. I don't disagree. And it'd be nice to get a nod to that somewhere in the game, pointing out how bizarre it is for a planet to have these properties. I mean, obviously that's why the developers want it in the game. I just don't think it's a bad reason to put a planet like that in the game, so long as it merely stretches the limits of what's physically possible for a real world - after the 10x scale correction, of course. If it was just absolutely impossible, I would be against it too. Merely very, very unlikely is acceptable if it's in the name of good gameplay and an interesting place to explore. I mean, if we start getting technical about it, Laythe is already a good example of a world that's exceptionally unlikely to happen. But it's a fun moon to have in the game and I'm glad it's there.
  12. There are several processes that can strip a planet of lighter elements and prevent it from accumulating helium. The two most likely are spending a lot of time very close to the primary and then migrating out to its current location, potentially by redirecting another planet into the star in the process, or a close encounter with a gas giant early in the formation. Both of these can result in a planet that's basically the core of a gas giant with a thin crust of rocky material on top and an insubstantial atmosphere for that heavy a planet. Since a larger fraction of the planet's mass is in its iron core, the average density is also a lot higher. There was a bit of a discussion of an exoplanet candidate with similar apparent characteristics a while back. Though perhaps still not quite as dense as Ovin, it shows that such events happen in star system formation.
  13. You have to land on it without crashing first. At 4G, unless the atmosphere is as thick as Eve's, parachutes aren't going to do much good. And I strongly expect it to have a very weak atmosphere based on the images. That planet is going to be a graveyard of broken ships.
  14. Are you just trying to build a chain? Or do you need at least one tower visible from every point on Earth? Even in the latter case, you can come up with a space-filling pattern that's more efficient. Think a branching, fractal-like structure. Simply filling all space with a grid of towers is not the optimal solution if you're trying to build fewer towers.
  15. Wasn't there something about Rask/Rusk situation being re-worked, because there were problems with n-body? It might be just patched conics throughout the game.
  16. Oh, absolutely. What I'm talking about is stuff that's been built directly into the game executable. An actual example from a recent game: we had to generate convex hulls for some geo for a feature involving physics particles for FX. It was a handful of objects with low vertex counts, so it was cheap enough to do the build on demand. But because the library generating the convex hulls was under GPL (or a similar license), the shared code containing that library would not get compiled in for the release version. Which meant we had to move the generation of collision hulls to the builder, and now we had to track the dependencies to make sure that the collision hulls are built for all the right objects and not for all the objects, and then platform-specific dependencies became a factor, and it was all in all a couple of weeks of work just to get the build process for these sorted, whereas the initial proof-of-concept on-demand loading was done in an afternoon. And yeah, in principle, the entire builder could be released as an open-source project with all of its libraries, separately from the game. (Edit: Or, as you point out, still as a proprietary tool, but with the open-source libraries that contain the actual GPL code delivered separately.) That's not going to happen for that particular game, because that's a whole other can of worms that the studio doesn't want to deal with, but it's not because of any legal restrictions. My point, however, wasn't about situations where tools are intentionally released by the game developers to be used by modding communities. It was entirely about the situations when tools get left in because they weren't worth the hassle to remove. In the above example, if we could ship the game with that hull generator, we would have. There just was no reason to pull it out of the game other than licensing. And there are plenty of games out there with dev features left in the game either disabled or simply unused. And when you are trying to ship a game that is easy to mod, sometimes all you have to do is absolutely nothing. If you already have build tools integrated into your executable, you just ship it as is.
  17. In a tl;dr way, yes. But if you want a bit more detail, as far as it relates to games and game data, streaming is the ability to load data from the hard drive directly into RAM without involving the CPU for anything beyond kicking off the process. And yeah, what that lets you do in practice is load assets when you need them, because you don't have to stop the game to handle the loading. The game engine notices that it's missing some asset data, issues a request, and just keeps ignoring any objects needing those assets until the assets are loaded. This can lead to pop-in if you are very aggressive about it, but it can also be entirely seamless if you have a bit of headroom on RAM and have different level-of-detail versions of your assets that can be loaded separately. A good use case in a game like KSP is not loading high resolution terrain textures until you are in the SoI of the planet that needs them. Possibly even loading the highest quality versions only for select biomes. This might cause things to look blurry for a few frames when you are switching between ships, but you get a lot of flexibility out of it that usually makes it worth it.
One caveat relevant here is that any data you might want to stream has to be in the exact format you want to use in the game. Like, you wouldn't stream a JPEG image, because you have to decode a JPEG before you can display it, and if your CPU has to spend time decoding the data, that often leads to choppy frame rate or freezes. Obviously, you can handle a few operations like that, and you usually have to, and a lot of hardware these days, notably all major consoles, supports some sort of compression in streaming. You can think of the data as sitting in ZIP files on the HDD/SSD and getting decompressed by a dedicated chip as it's being loaded, so again, no CPU use necessary. But otherwise, the data needs to be ready for use.
And this is where it gets a little tricky with modding. If the game expects data that has been processed, and you just added fresh, unprocessed data by installing a mod, something somewhere needs to figure out that this happened and prepare the data before it can be streamed. The way KSP mods work, some of that is done by the modding SDK you use to create modded parts, but a lot of data has to be built by the game. There are a whole bunch of ways to handle it. The simplest is to not worry about it until you actually try to load the data, and then do the processing if it's needed. Yes, that will definitely cause a freeze in the game, but if that only happens once per install of a particular mod, that's not a huge deal. A more complete solution is generating a dependency graph and checking it when the game starts. For a game like KSP2, that shouldn't be too hard, especially since you have a part list upfront and you can just check that all the data you need for every part is available when you start the game, without having to actually load everything.
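The game-side control flow of "request it, keep going, use it once it's ready" can be sketched very minimally like this. Real engines hand the copy off to DMA or dedicated decompression hardware rather than a thread pool, and the class and path here are made up:
```python
from concurrent.futures import ThreadPoolExecutor

class AssetStreamer:
    def __init__(self):
        self._io = ThreadPoolExecutor(max_workers=2)
        self._pending = {}   # path -> Future for in-flight reads
        self._loaded = {}    # path -> bytes already resident in memory

    def request(self, path):
        """Kick off a load if the asset isn't resident or already in flight."""
        if path not in self._loaded and path not in self._pending:
            self._pending[path] = self._io.submit(self._read, path)

    def get(self, path):
        """Return the asset if resident, otherwise None (and keep rendering without it)."""
        fut = self._pending.get(path)
        if fut is not None and fut.done():
            self._loaded[path] = fut.result()
            del self._pending[path]
        return self._loaded.get(path)

    @staticmethod
    def _read(path):
        with open(path, "rb") as f:
            return f.read()  # data is assumed to already be in its ready-to-use format

# per frame: streamer.request("terrain/minmus_hi.bin"); draw the low-res version until get() returns data
```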
  18. Doesn't have to be a single blob for everything - there are a lot of implementation options here - but that's the gist, yeah. Most of that data takes up considerable CPU time to prepare and very little disk space to store. Just cache it and invalidate the cache if the source file is updated. Side note, I'd be shocked if there were no way to make the processing a lot faster too, but if you cache it, it's kind of a moot point. No reason to over-optimize something that will only happen once for a lot of players, and only once per mod install for almost everyone else.
I have encountered cases where build tools get intentionally cut from released game binaries because they contained some libraries with infectious GPL or similar licensing. Basically, if they were to release the game with these libraries, they'd have to publish source for the entire game. But I've also worked on projects where the tools are literally there in the published game, only with some editing features turned off, and the only reason people can't mod the game easily is because the source format and directory structure aren't publicly shared. In general, though, developers need a reason to disable these features, and because that's work, they usually leave all or huge chunks of them in. It's usually easier to ship a game with build tools than without.
For KSP2, baking part info in some way or form is going to be necessary. And caching baked files is going to be necessary for streaming. I think they're pretty much forced to write the tools we'd want for much faster loading. And because the developers don't want to wait for unnecessary build times either, they're pretty much guaranteed to auto-detect source file changes and build stuff on demand. Basically, all of the features we want. All Intercept has to do to make the game easily moddable and reduce loading times for everyone is just not turn off that feature when they ship.
That said, if for some reason PD or Intercept go back on modding and try to lock us out of critical tools, I'm here to help build replacement tools. Unless PD forces Intercept to install some sort of cheat detection, I don't think we have to resort to anything that would be a forum violation to share. In the worst case scenario, I still think modding is going to be worth it, even if it ends up against the forum's ToS and all discussion of mods for KSP2 has to be moved elsewhere. I hope that doesn't happen, but I'm confident that modding will be a big part of KSP2 either way.
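The cache-and-invalidate part really is a one-function pattern. A minimal sketch, with a placeholder build function and a modification-time comparison standing in for whatever dependency tracking a real build system would use (a content hash would be more robust):
```python
import os
import pickle

def load_part(src_path, cache_path, build_fn):
    """Return the baked form of a part config, rebuilding it only if the source changed."""
    src_mtime = os.path.getmtime(src_path)
    if os.path.exists(cache_path) and os.path.getmtime(cache_path) >= src_mtime:
        with open(cache_path, "rb") as f:
            return pickle.load(f)          # cache hit: no parsing, just read the binary back
    built = build_fn(src_path)             # cache miss: parse/bake the text config once
    with open(cache_path, "wb") as f:
        pickle.dump(built, f)
    return built
```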
  19. Fair. I'm oversimplifying. "Compiling" might be a better term here. What I mean is taking the text data and turning it into the in-game representation - everything but the actual memory allocations, which, of course, have to take place at runtime. All of the operations in building the internal data only really need to be done once. Except that this is exactly how it works on pretty much every major game project, and people do edit text files that get built exactly once. Caching of binary builds is a standard practice in the games industry. If I edit a scene file, whether I did that with a dedicated tool or simply opened it up in notepad and changed the markup data, the game will know that the binaries are out of date and rebuild them. Yes, when we ship games, we often only build the binary cache, and many games ship with the build tools stripped out, but there are enough games out there that maintain the dev-environment behavior of checking for new/modified markup files and will rebuild parts of the cache that are out of date - often specifically to be more mod-friendly.
  20. You are showing "Part loading" as 70% of texture loading time. That's not at all insignificant. And all of it is from parsing configs for individual files. Yes, the main config parsing doesn't take long, but the parts loading is still a parsing issue. When loading a few kB of text takes 70% of what it takes to load all the multi-MB textures of the parts, there's a huge problem. And fixing this would reduce loading time by about a quarter based on what you're showing here. Which is not insignificant at all.
  21. Parsing of the part configs takes shockingly long, and that can definitely be fixed. But yeah, given the way KSP2 is being built, there is a lot of data that will have to be streamed already. They might as well recycle some of that tech for the mods.
  22. It's never that simple. It's possible to intentionally preserve some degree of backwards compatibility, but you are usually handicapping yourself by doing so. There are better places to spend developer resources. If the mods are good and useful, someone will port them over.
  23. I agree, but to me, this suggests that there is more than one way to solve the problem. Yes, we could have the contract system replaced with something entirely different, but I think the core concept is fine. It's just that it gets repetitive, because every contract is such a basic construct. "Position satellite in orbit." "Rescue Kerbal from orbit/location." "Test part." "Collect data." All of these are fine to do once or twice, but then it turns into a grind, and not even a fun one. A relatively simple fix would be to link a bunch of these objectives together into a mission. What if, instead, the contract is to rescue a stranded Kerbal, deliver them to their ship in orbit, get some parts to that ship to fix it, take the ship to a destination, land it there, perform measurements, then bring the Kerbal and collected science back? Same basic parts, but now you have a bit of a narrative to keep you invested, and the number of ways this can all be combined is a lot higher. Plus, provided that each step has its own rewards in credits and reputation, by the time you've finished the set, you are set for a while. And even if you have to run multiples of these throughout the game, because of the number of permutations, it can always be at least somewhat fresh. The generator for missions like this can be pretty simple - you really just need to make sure that whatever combo is being generated is within the constraints of the player's current tech level and that the player is compensated appropriately for every step (see the toy sketch below). And because this style heavily rewards combining resource investment for multiple steps, like carrying an engineer on your rescue ship so that you can perform the repairs without a separate launch, if you spend a bit more time solving the problem creatively, you actually get a significantly higher payout.
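That toy sketch, just to show how little such a generator takes: the objective names, tech gating, and rewards are all made up, and a real one would obviously chain the steps causally rather than just sampling them.
```python
import random

# (objective, minimum tech level, base reward) - all values are illustrative placeholders
OBJECTIVES = [
    ("rescue a stranded Kerbal from orbit", 1, 30_000),
    ("deliver the crew to their damaged ship", 1, 20_000),
    ("bring up repair parts and fix the ship", 2, 25_000),
    ("fly the ship to the target and land", 2, 40_000),
    ("collect science at the landing site", 1, 15_000),
    ("return crew and science home", 1, 35_000),
]

def generate_mission(player_tech, steps=4, rng=random):
    """Pick a handful of objectives the player can actually do, keeping the narrative order."""
    available = [o for o in OBJECTIVES if o[1] <= player_tech]
    picked = sorted(rng.sample(range(len(available)), k=min(steps, len(available))))
    chain = [available[i] for i in picked]
    return {
        "steps": [name for name, _, _ in chain],
        "payout": sum(reward for _, _, reward in chain),
    }

print(generate_mission(player_tech=2))
```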
  24. Oh, so you're talking not only about changing the time step, but also limiting maximum warp? Yeah, that's workable. Though it might still be annoying if you have a craft in an elliptical orbit that you're trying to raise with ions over multiple orbits, and your warp keeps dropping every few days/weeks/months of game time as that probe dips close to the primary.
I mean, in the perfect world, if this was written in something that compiles to native with good optimization, and by someone who's an expert in numerical methods, we wouldn't be having this discussion. I've made an argument in an older thread that KSP2 should be runnable on a Switch if it was written from scratch specifically to be optimized for that hardware. I mean, that presumption is a fantasy, but if the resources were available, yeah, it can be done. Problem is, KSP2 is a Unity game written in C# by Unity devs, most of whom, at best, implemented a Verlet or simple RK4 integrator at some point by copying it from some manual. You only have to look at how bad time warp was in KSP, even without any physics at all, to see why "theoretically solvable" and "practically solvable in the given environment" aren't the same thing. Intercept does have a physics programmer who, hopefully, is a bit more experienced in that sort of thing. Though I've seen my share of "we just put it into Matlab and let it do its thing" in academia. But even if we assume that their programmer is actually good at writing optimized numerical code, this is a considerable amount of work to implement, debug, and maintain as new features are introduced throughout the development cycle. And the amount of physics-specific work on KSP2 is almost staggering for such a small team. There is the new craft simulation, various optimizations for large craft/stations, physics sync for MP, continuous collisions, etc. There's way more work there in total than a single person can handle in 2 years, especially for someone for whom KSP2 is going to be their first game dev experience.
So the question shouldn't be, "Can you write a 1M+ time warp for a well-written custom engine?" It should be, "Could you pick up a Unity game with code that's unfamiliar to you, and implement a 1M+ time warp in less than a month without breaking anything?" And that's the question that is way harder to answer, because we don't know the exact skillset of the people involved, how the rest of the game handles time steps, etc. And I don't have certainty that it will be implemented well. So the limitations of the time warp implementation can still very much spill into very real constraints they'll have to follow in terms of interstellar distances.
There are optimizations you can take here too. If you take an upper bound on the curvature of the target's trajectory and the ship's trajectory, you can inflate the SoI and do a sphere-line test as early rejection (see the sketch after this post). Most of the time, either the time step will be short or the curvature small, resulting in an SoI only slightly larger than the original, meaning you can reject the majority of checks early and only have to do iterative tests for a few objects. This adds a lot of complexity - see the paragraph above - and still isn't free, but it goes back to, "If you had the resources to do this properly, it wouldn't be a problem."
Well, that's my point. In KSP, in order for this to happen, you have to be moving fast and encounter either the outer edge of an SoI or a very small SoI. In that case, unless your trajectory would have taken you through the body, the deflection is tiny, and the overall trajectory isn't altered much.
For in-system flight, this is a tiny annoyance that might require a small correction burn somewhere. In KSP2, for a torch ship, thrust will be the dominant force almost always. This allows for a much stronger scattering effect due to an SoI encounter. For a worst case, picture a situation where you graze an SoI with the trajectory curvature due to thrust close to the curvature of the SoI boundary. Instead of being inside the SoI for 0 or 1 ticks, which won't make a difference, it's now between zero and many. The diversion can be sufficient to bring you even closer to the gravity source, meaning the projected paths between short and long integration steps are going to be very different. But even dipping into the SoI briefly can apply an unexpected gravity impulse very early in the voyage. Now, if you are very careful about applying the same logic to your planning trajectory and the simulation, that's fine. A bit of chaos doesn't hurt anyone. But if another ship under power dipped close to its primary, dropped you to lower warp, and that change in time step caused the SoI transition to register, your ship on an interstellar voyage can get deflected from its planned trajectory. It will still likely be a fraction of a degree, but at interstellar distances, that's the difference between your ship going to a nearby star and your ship going into empty void. And making mid-transfer course corrections for a torch ship is a rather different order of magnitude problem than an in-system, mostly-ballistic transfer. Again, not an unsolvable problem by any means, but a huge thorn that didn't exist in KSP and that the KSP2 team will have to deal with. There's a C version of that book, btw. Super useful. I don't know if they've adjusted the algorithms for that sort of thing significantly, though. I've mostly been using it as a reference for linear algebra and polynomial integration.
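The sphere-line early rejection mentioned above is cheap enough to spell out. A minimal sketch, assuming the craft's motion over one step is approximated by a straight segment and the curvature bound has already been folded into an inflation margin:
```python
import numpy as np

def may_enter_soi(p0, p1, soi_center, soi_radius, curvature_margin):
    """Conservative test: can the segment p0->p1 come within the inflated SoI radius?
    Inputs are numpy 3-vectors; only encounters that pass need the expensive iterative check."""
    r = soi_radius + curvature_margin
    d = p1 - p0
    seg_len2 = float(np.dot(d, d))
    if seg_len2 == 0.0:
        return float(np.linalg.norm(soi_center - p0)) <= r
    # closest point on the segment to the SoI center
    t = np.clip(np.dot(soi_center - p0, d) / seg_len2, 0.0, 1.0)
    closest = p0 + t * d
    return float(np.linalg.norm(soi_center - closest)) <= r

# Most steps fail this cheap test and can keep the long integration step;
# the few that pass get the short-step / iterative treatment.
```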
  25. You are assuming only one craft. This breaks down hard when you have multiple craft on different trajectories, each one requiring its own integration step. You can take the smallest requested time step of all powered craft, but what that results in is one ship doing a dive toward the star to pick up speed via Oberth killing performance for everything else that's out in free space. You can also try to do break-step integration for all of the different craft, and that is its own little nightmare to manage. It's like painting a curve on an n-dimensional grid. And then you have the SoI checks. If you are on the outskirts of the star system, the gravity from the primary might be low enough to warrant a long integration step, but what if that step brings you inside the SoI of an outer planet? So now you have to select an integration step, do an analytical solution for the trajectory within the step, do the sweep of the rocket position against the sweep of the SoI running on its own trajectory, repeat that for every craft under power, and then update everything either in break-step or based on the shortest of the time steps. And then you still need to figure out where within this time step you are planning to update unpowered ships, colonies, and supply routes, because all of that also has to be ticking at the same time. None of it is impossible, but there's a lot going on, with many tricky edge cases. This isn't your textbook integration problem. It's algorithmically complex and numerically hard. And solving it poorly will result either in artifacts in navigation or in variable performance issues. So you really can't take any questionable shortcuts on this. It has to be done right. And the higher you go on the time warp multiplier, the harder it gets. So I do wonder at what threshold Intercept is going to call it.
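To make the "smallest requested time step" option concrete, here's a toy sketch. The per-craft heuristic (cover no more than ~1% of your distance to the primary per step) is a stand-in for a real error estimate, and the craft states are made up; the point is just that one diver near the star dictates the step for everyone.
```python
import math

MU = 1.327e20  # m^3/s^2, gravitational parameter of a Sun-like primary (assumed)

def requested_step(r, v, frac=0.01):
    """Rough per-craft heuristic: don't cover more than ~1% of the craft's distance to the
    primary in one step, so the gravity it feels stays nearly constant across the step.
    A real estimator would fold in thrust and proper local error control as well."""
    return frac * r / v

def warp_step(craft):
    """Global step = most conservative craft, so one diver drags everyone else down."""
    return min(requested_step(r, v) for r, v in craft)

diver = (7e10, math.sqrt(MU / 7e10))     # close pass near the star, moving fast
cruiser = (5e12, math.sqrt(MU / 5e12))   # out in free space
print(f"diver alone:   {requested_step(*diver):.0f} s")    # ~16,000 s
print(f"cruiser alone: {requested_step(*cruiser):.0f} s")  # ~10,000,000 s
print(f"global step:   {warp_step([diver, cruiser]):.0f} s")
```
Break-step integration avoids exactly that collapse to the smallest step, at the cost of the bookkeeping nightmare described above.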