
K^2

Everything posted by K^2

  1. That wouldn't actually make any difference. If you're actually going to try to cut the amount of processing the game has to do by changing how it plays, the two low-hanging fruit are timing out debris, having it disappear after a bit, and either reducing part counts, possibly by combining some functional parts, or just making rockets rigid. That's all very doable, but at that point you're really making KSP-light, which I'm not sure is worth the development budget either.
  2. Intercept is a mid-size studio. That's more descriptive of their resources and doesn't get into the corporate structure and publisher-developer relations. Because yeah, there are independent studios working on AAA budgets and there are tiny dev teams that work under a publisher, so these things aren't always correlated. The fact that Intercept is working under the umbrella of Private Division, rather than Take-Two directly, is an indicator, however, that publishers would very much like gamers to keep associating their name with massive AAA budgets. Which is where the whole perception of indie vs AAA probably comes from. And yeah, none of that really has anything to do with the topic other than some overlapping terminology, but it's not like there was much left to discuss on OP's question.
  3. The average distance between two atoms in a 12 meter beam is 4 meters. There is no useful concept of just an average distance. And if you are looking instead at the average distance between nearest neighbors, which is a useful quantity, you absolutely need to know the crystalline structure. You can't just assume one atom per cube with the side equal to the interatomic distance, because lattices aren't always simple cubic. In fact, simple cubic gives you the absolute worst packing, and in practice the distance between nearest neighbors will almost always be about 10% higher than the average you get assuming a cubic lattice. And anyone who'd bother to even try plugging in real-world numbers would know this. The atomic weight of copper is 63.546 amu. The density of copper is 8.96 g/cm³. distance = (63.546 amu / (8.96 g/cm³ * 6.022×10²³))^(1/3) = 2.28 Å. What's the actual distance? 2.55 Å (see atomic diameter, which assumes atoms are touching their nearest neighbors). Oh, look at that, wrong by more than 10%. How did that happen? Well, copper has a face-centered cubic lattice. So let's use the correct formula: distance = (4 * 63.546 amu / (8.96 g/cm³ * 6.022×10²³))^(1/3) / sqrt(2) = 2.55 Å. And that's right on the money. This is a simple case, because copper has a nice, clean structure. Alloys often do not, so figuring this out for alloys is going to be complicated.
  4. In addition to composition and density, you need to know the lattice structure. In a simple cubic lattice, the volume of a single cell is distance³, and it contains one atom on average. In a face-centered cubic lattice, there are 4 atoms per cell and the volume is 2 * sqrt(2) * distance³, so the density is significantly higher at the same interatomic distance. Here's a good resource: Atomic Packing Factor. And the explicit formulas for interatomic distances for a given lattice (distances in cm if density is in g/cm³): Simple cubic: distance = (atomic mass / (Avogadro's number * density))^(1/3). Face-centered cubic: distance = (4 * atomic mass / (Avogadro's number * density))^(1/3) / sqrt(2). Body-centered cubic: distance = sqrt(3) * (2 * atomic mass / (Avogadro's number * density))^(1/3) / 2. Hexagonal: distance = (6 * atomic mass / (Avogadro's number * density))^(1/3) / (3 * sqrt(2))^(1/3). This all assumes that you have just one type of atom in your lattice. If you have an alloy, things get complicated. There is no simple formula for alloys, because there are a lot of ways atoms can be distributed within an alloy. And the variance in distances you get with different lattices should tell you that you can't simply use a one-size-fits-all formula unless you're going for a very rough number.
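To make the arithmetic in the two posts above concrete, here's a small Python sketch of the lattice formulas (the constant and function names are mine, not from the posts), reproducing the copper numbers:

```python
import math

# Nearest-neighbor distance from atomic mass and density for the lattices
# discussed above. Units: g/mol and g/cm^3 in, cm out.
AVOGADRO = 6.022e23

def cell_volume_per_atom(atomic_mass, density):
    return atomic_mass / (AVOGADRO * density)

def distance_simple_cubic(atomic_mass, density):
    # One atom per cubic cell: cell side = nearest-neighbor distance.
    return cell_volume_per_atom(atomic_mass, density) ** (1 / 3)

def distance_fcc(atomic_mass, density):
    # 4 atoms per cubic cell; nearest neighbors sit along the face diagonal.
    return (4 * cell_volume_per_atom(atomic_mass, density)) ** (1 / 3) / math.sqrt(2)

def distance_bcc(atomic_mass, density):
    # 2 atoms per cubic cell; nearest neighbors sit along the body diagonal.
    return math.sqrt(3) * (2 * cell_volume_per_atom(atomic_mass, density)) ** (1 / 3) / 2

# Copper: FCC, M = 63.546 g/mol, rho = 8.96 g/cm^3. Convert cm to angstroms.
naive = distance_simple_cubic(63.546, 8.96) * 1e8
fcc = distance_fcc(63.546, 8.96) * 1e8
print(f"naive cubic: {naive:.2f} A, FCC: {fcc:.2f} A")  # 2.28 A vs 2.55 A
```

The roughly 10% gap between the naive cubic estimate and the FCC result is exactly the discrepancy the post points out.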
  5. Yes. My very first post in this thread states that it wouldn't make sense to spend the resources necessary to port to Switch, but that shouldn't be treated as "KSP2 can't run on Switch." These aren't the same statements. This entire discussion hangs on a fiat that, for whatever reason, the resources to do this properly are available. So we aren't discussing financial limitations, only the technical ones. Yes, it's an academic discussion. Yes, the port isn't going to happen. I'm 100% with you on that. I just don't like the statement that the problem is hardware. And I'm not saying this to put any sort of blame on the development team. It just so happens that, given the choice of engine and frameworks used, porting KSP2 to Switch would require greater resources than are available to Intercept, all of which comes down to sensible financial decisions. For the component update loop, sure. But there are a few tasks, like the orbital sim, which require a lot of math. And these things can be hand-optimized to give you something like a 2:1 margin over an optimizing compiler. As an anecdote, I was helping port an MMO to the Xbox One. Optimizing vector and quaternion math for SIMD gave me a 20% reduction in the entire animation pass. Then I got another 10% by changing memory layout and improving cache coherence. This is over code that was already native and fairly well written. An orbital sim is even cleaner, as it doesn't have to jump around traversing a skeleton and doing logic for LoD and special bones with custom logic. Yes, cases where it's worth doing hand-optimized code are exceptional, but this is a very good candidate. And sure, this wouldn't be something I'd jump to immediately. I'd start by optimizing the algorithm and seeing how well it performs as is. Then I'd try running it with the built-in job queue on another core. Then I'd see if Burst gets me what I need.
And if I'm still unhappy with the amount of debris the sim can support before starting to lose frames, then I would go to hand-crafted code. But the point is, we do have that headroom here. If there are resources and time to do the port properly, this is the absolute limit to which we can push things, and 1,000 pieces of debris at 5ms per frame is what I'm comfortable promising before actually starting work on it. And hey, maybe it's possible to spread component updates over frames enough that you're happy taking up 15ms per frame. Or maybe a few hundred pieces of debris is adequate. And then you don't need to go to all that trouble. Still, it's good to know how far it can be pushed. Technically, that last bit is already a problem in KSP. I'm not sure on all of the implementation details, but the way collision is handled is that several terrain tiles are generated around the player and are updated "every once in a while". It is possible to move fast enough along terrain that you outrun the physics update and fall through the terrain. That said, we have a bit more flexibility with the render mesh. You don't technically have to do the mesh update on the CPU at all. It can all be delegated to the GPU. The way you do that is by having a fixed terrain mesh always centered on the player. The vertex shader for that mesh does a lookup into the heightmap texture and adjusts the verts. You do need to make sure that the textures are streamed in, but there's hardware to handle that. The other way is to actually build out the mesh, but even that's not all that expensive. You do need to march over every vertex and adjust its height, but you're still dealing with tiles that only have high resolution near the player. And this is one of the cases where having unified memory plays in your favor, as locking and unlocking a vertex buffer shouldn't be nearly as expensive as it is on PC.
Either way, this isn't even on the radar as a difficult task, and fidelity can be sacrificed here if necessary with no detriment to gameplay. Yes, I'm sure. The sort of games I work on have thousands of objects colliding with each other just for the FX. These are convex hull collisions with contact point constraints. A rocket composed of a few hundred parts with a few hundred welds shouldn't even register as CPU load. Unity + PhysX is just a horrible combination for a rocket-building game. With DOTS/ECS and Havok you can get the sort of physics performance you get in modern games. Since I don't expect KSP2 to be making use of it, if you were to get an eccentric philanthropist to give you the tens of millions of dollars needed to port KSP2 to Switch, this is how you get all of the KSP2 physics onto the system and still get better performance than the PS4 Pro and XB1X do. And there's headroom left even there. A custom physics engine can do even better, and one that can handle KSP2 doesn't need to be nearly as complex as Havok is. Especially if you ride on top of Havok's collision detection, which is completely fine, and just write a custom solver.
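The CPU-side terrain rebuild described in the post above can be sketched in a few lines. This is a toy illustration only (the grid size, step, and height function are all made up): a fixed grid of vertices kept centered on the player, with heights refreshed from a heightfield lookup. The vertex-shader variant does the same per-vertex lookup on the GPU instead.

```python
import math

def height_lookup(x, z):
    # Stand-in for a streamed heightmap texture fetch.
    return 10.0 * math.sin(0.01 * x) * math.cos(0.01 * z)

def rebuild_tile(player_x, player_z, half_size=64, step=4.0):
    # March over every vertex of a grid centered on the player and
    # set its height from the heightfield.
    verts = []
    for i in range(-half_size, half_size + 1):
        for j in range(-half_size, half_size + 1):
            x = player_x + i * step
            z = player_z + j * step
            verts.append((x, height_lookup(x, z), z))
    return verts

tile = rebuild_tile(0.0, 0.0)
print(len(tile))  # (2*64 + 1)^2 = 16641 vertices marched per rebuild
```

Even at this resolution it's a cheap linear pass, which is why the post treats the mesh update as a non-problem.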
  6. I can always look at generated code in a debugger, but then what? When writing in C/C++, there is a lot more control over what gets generated. I have full control over memory layout, I can drop in streaming instructions, there are intrinsics that will prevent re-ordering, I can make sure more or fewer things are inlined, I can build jump tables to avoid unnecessary calls and returns or virtualization costs... C# gives me none of that. If I'm unhappy with the code that got generated, tough luck. I know. It's part of what my team does, along with maintenance and optimization of in-house physics and animation engines. Yes. We've already established that in order to squeeze KSP, let alone KSP2, onto Switch, we need a team significantly larger than the team that made the game. That's the premise of this entire discussion: can we make KSP2 on Switch if we throw AAA resources at it? I'm arguing that the answer is yes, because I've worked on similar types of problems for larger games. We're not talking about running this on an Atari, for crying out loud. Again, KSP is not a big game. It does NOT do anything that should be CPU intensive. It just doesn't. All of the performance problems of KSP are from bad optimization. And while squeezing it onto Switch would be going above and beyond, it works without any cuts. Again, if you think that something I've described above requires cuts, point it out. You're just repeating that something has to be cut, even after I budgeted out everything that needs to be done for the full KSP2 game, but you aren't pointing to any specifics. That's not an argument. That's the exact opposite of an argument. Terrain can be, and probably is, streamed almost directly. The overhead is minimal and split between many, many frames. The LOD system is part of the rendering thread and isn't different from any other game, so that's taken care of. Fuel use is part of the component update and we have a healthy budget for that. It can also be split between frames.
And I've covered all of that. We can break it out, and I can schedule out the entire game for you. I'm just worried at this point that you'll say we're obviously missing something even then. Planning out projects like this is part of my job. It's what I do. Yeah, and the CPU needs almost none of that. In fact, we'll want to reduce the memory footprint as much as possible, as Switch has bad cache performance. But we simply aren't using RAM for anything in GB quantities. The physics sim is a few MB at most. All of the craft data is going to be in hundreds of kB. Maybe a few MB if we want to keep all the craft in the system loaded. We'll also need scratch space for loading textures and generating meshes for planets, but that's still in the tens of MB. If you are good at unloading stuff you don't need after you're done with it, I'd be very surprised if you need more than 100MB to have everything running. Almost the entire memory footprint of the game is going to be textures. There will probably be a considerable amount of mesh data, especially when on the surface of a planet. All of that goes to graphics, and we've already established that we can chop the visual quality until it fits. We can get it down to Xbox 360 era graphics if we have to, and that thing had 512MB of shared RAM. So RAM isn't a problem at all here. We should absolutely only be discussing CPU performance.
  7. Anything will have context switches if it's not kernel-level code, as it's up to the OS to schedule tasks. And again, if custom jobs code needs to be native, it can be. My only real concern with Unity's job system is collisions with the render thread. If it's clever enough to avoid these and lets you schedule an idle core, great. If not, there are ways to get around that. Yeah... SIMD is sort of the main problem. I'm not going to out-perform an optimizer on general tasks, because I know that in the best cases I can match it, and often a good optimizer will just generate better code than I can. SIMD is one notable exception. Even highly specialized SIMD compilers, like Intel's ISPC, can't come up with clever ways of re-ordering operations to get maximum performance. Cross product is a good example where you can get a 3:1 performance advantage with clever use of SIMD. The other thing is that I can profile and tune native code compiled from C/C++ source. With the Burst compiler, you just have to cross your fingers and hope. If there's a bottleneck due to some specific way it decided to chew on cache, you aren't fixing it. If you're trying to squeeze absolutely every last bit of performance out of a system, going native is the only way. Not a single optimization mentioned above simplifies the game in any way. We're talking purely about performance improvements. The only thing that has been suggested as an actual downgrade is visuals. So we are discussing a feature-complete game, and if you think that a particular aspect cannot run as is, I've yet to see it pointed out.
  8. Ah, one's probably taken up by the OS and not available to devs. That's not uncommon for game consoles. I just didn't think they'd do that to a system with only 4 cores to begin with. That would definitely require staggering more tasks and shrinks the time available to orbital computations to something like ~5ms. Let's run with that. You don't have to use Unity's job system. You can thread things in C# without relying on any frameworks. While it is extra effort, it does result in less overhead and more control over how things are scheduled. So it'd be easy enough to run a compute thread on the core not busy with anything else. Yeah, that's why 5ms is 5M cycles. That's a more useful measure of the real computation you have, anyway. With just one core available, C# overhead becomes too costly. Change of plan: convert the orbital computations code to native. There isn't really any serious problem with running native code in Unity other than it becoming a bit less portable. But since we're specifically putting effort into a Switch version, and we're agreed that money is no barrier, re-writing the orbital computations code in C/C++ and compiling it specifically for Switch isn't really that big of a deal. The binaries then get loaded and executed directly from C#. So we still get the 5M cycles of CPU time to do computations, and we still don't need more than 5k cycles per piece of debris on average, so we can still have 1k pieces of debris at a smooth 30FPS. Are they planning an AI system that does complex strategy or controls large, complex ships? No, because there wasn't a rec for that. Are they planning to introduce creatures with complex animations? No, because they'd have more recs for animation. Everything they can be making, based on the jobs that have been posted and what we know Intercept got from Star Theory, is accounted for. KSP2 simply cannot have surprises that will break the CPU budgets above.
The only thing we aren't considering is whether physics can actually run on the main thread as is. It's entirely possible that switching over to Havok physics, which means huge restructuring, is necessary. But with Havok physics and good optimization of all other code, possibly using native code for a few critical pieces, everything that needs to run on the CPU fits EASILY, with way better performance than we currently see from KSP on consoles. There isn't a "we don't know" there. Like I said, KSP is a small game, and KSP2, while significantly larger, is still not a big game. It's not hard to plan these out if you have experience doing just that, both in terms of the time it takes to write the code and in terms of CPU budgets. Your claim that KSP can't run on Switch is based on nothing but the observation that it runs poorly on consoles, and the game is horribly optimized. It shouldn't run as badly as it does on PC. Other games run way more physics than KSP does, PLUS all the stuff that KSP doesn't do, and manage solid framerates that don't turn into a slideshow whenever something explodes. Some of it is Unity, and you can't fix that without re-writing the game from scratch, but a lot of it is just poorly optimized code, and given time and resources, you can do a lot better.
  9. Let's break it down. Here are the major CPU tasks.
     Rendering - lives on its own thread, takes up a logical core, and we don't need to consider it further.
     Physics - the solver needs to happen on a single thread; collisions and BVH updates could be spread out, but PhysX sucks and bottles up the main thread.
     Animation - can be safely distributed, but Unity might not. Let's say it happens inline. Fortunately, there's not a lot to do.
     Component updates - the calls happen on the main thread, so this is definitely something you want to unload to jobs, as it includes aerodynamics, resource use, etc.
     World state - for KSP2, all the colony stuff should happen here. Jobs, definitely jobs.
     UI - this is basically part of the component update in Unity, and it's light enough. Can stay on main.
     Orbits - as we don't need to consider interactions, this can safely be done in jobs.
     Because physics, animation, and UI all live on the main thread, we want to move everything we can off it. We also have two logical cores effectively spoken for with the rendering and main threads. I'm not sure how well Unity uses the co-processor on Switch, so I'm going to assume the worst and plan for everything to run on just the four main cores with no hyperthreading. So we have two cores to use for jobs. We're also going to target 30FPS. So that's 33ms/core, or about 67ms of total compute per frame for all the jobs. Component updates - let's dump most of our compute here. A 40ms/frame budget seems very reasonable. Yeah, a lot of that might be eaten by aerodynamics and such, but then we can stagger some of the work between frames. You don't need fuel to update at 30Hz, for example; it'll do just fine at 10Hz, so you only do 1/3 of the computations per frame. Life support, etc. can be even less frequent. 40ms is actually a ton of time, even if you have to write your code in C#. World update - honestly, 10ms tops. Again, most of the exciting stuff that has to do with any new KSP2 features, like colony state, can easily be split between frames.
Orbits - with the above in mind, we can easily dedicate 15ms per frame, leaving a good safety margin. Now let's look at how much overhead we're dealing with. Orbital computations are going to be in very predictable, easily JITed loops. There is no memory churn at all - everything we need is already allocated. C# can give you 30% of native easily. So call it 5ms of native time. That's 5M cycles of the main CPU I can dedicate to orbital computations. Given that the majority of debris will not be doing any SoI changes on any given frame, and that early rejections are all cheap with the optimizations mentioned in previous posts, 5k cycles on average is plenty. So we're looking at 1k pieces of debris easily simulated at a solid 30FPS. And that's conservative. There are still places where corners can be cut. Even, potentially, starting to stagger the debris updates as well, splitting work between frames. There is still a lot of breathing room. And this is the most conservative estimate, for just the main 4 cores of Switch. There's not that much simulation going on; that's the whole point. Most modern games do more physics simulation for their FX than KSP did for the entire game. And while KSP2 is adding a lot of features, very few of them have anything to do with additional simulation load. The biggest one is world state updates related to colonies and transport routes, and that's all generously accounted for above. The most expensive additions of KSP2 all have to do with rendering, both in terms of CPU and GPU usage, and as you note above, that can safely be scaled down. KSP is not a big game at all. KSP2 is really not that much bigger. It's all relative, of course, but I'm used to planning and budgeting much larger titles. And there is nothing difficult about optimizing the known scope of KSP2 to fit within the parameters of Switch hardware. It's just time. I also understand that it's time that Intercept simply doesn't have.
When you're looking at a AAA game with hundreds of engineers, and you know it's going to take a month for somebody to take a particular task and make it stable when running multi-threaded so that it fits in CPU budgets, that's fine. When you barely have enough engineers to land your core gameplay features, that suddenly doesn't sound viable anymore. But at the end of the day, we're still talking about a Switch version almost certainly not happening because it'd be a very expensive port that's probably not going to pay for itself, and not because Switch hardware just can't run a game of KSP2's scope. The latter just isn't true.
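The budget arithmetic in the posts above can be sanity-checked in a few lines. The ~1 GHz clock is my assumption here (roughly the Switch CPU clock); the budgets and the "C# at ~30% of native" ratio are the posts' own working numbers, not measurements:

```python
# Sanity check of the per-frame budgets above.
CLOCK_HZ = 1.0e9                     # assumed ~1 GHz main CPU clock
frame_ms = 1000.0 / 30               # 30 FPS target -> ~33.3 ms per frame
job_cores = 2
total_job_ms = job_cores * frame_ms  # ~67 ms of job compute per frame

component_ms, world_ms, orbit_ms = 40.0, 10.0, 15.0
assert component_ms + world_ms + orbit_ms <= total_job_ms  # fits with margin

# Orbits: 15 ms of C# at ~30% of native speed -- call it 5 ms of native time.
native_orbit_ms = 5.0
cycles = native_orbit_ms * 1e-3 * CLOCK_HZ  # 5M cycles per frame
debris = cycles / 5_000                     # ~5k cycles per piece on average
print(int(debris))  # -> 1000 pieces of debris at a solid 30 FPS
```

This is just the arithmetic, of course; the substance of the argument is whether the 5k-cycle-per-piece and 30%-of-native figures hold up.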
  10. Damn it, whose turn was it to make sure the scientists have been fed?
  11. Well, exiting an SoI isn't really a problem. You update the position for the new frame and simply check against the SoI radius. If you are out, you need to do a bit of extra work to find the exact time you left the previous SoI, and then simulate forward from there. While that can cost a few thousand cycles, it's going to be a rare occasion in practice. Entering an SoI is more interesting, but there are a lot of simplifications here too. You don't need to consider every SoI in the game - just the children of the current SoI. So if you're orbiting the star, only check planets, and if you're orbiting a planet, only check moons. Next, it's very easy to construct an AABB for each SoI you need to check. Feasibility of each AABB collision can be evaluated with just two SIMD instructions, but even if you have to do it with managed C# code, it's done in a few CPU cycles. For any SoI that are still feasible, we have to start doing real work. The first step is to break trajectories into segments, inflate the SoI a bit to correct for curvature, and see if there's an intersection. That's still very cheap. If you found a potential intersect, the final step is an iterative algorithm that finds the exact time you entered the new SoI. The great thing about all of the above is that while the verification steps get progressively more complex, they also get progressively less common. Most of the time, debris is going to be nowhere near SoI boundaries, and you'll be able to reject it really early. And doing iterative solutions for the few pieces that did encounter an SoI boundary, or came close enough to have to check, is still very reasonable.
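The reject cascade above can be sketched in Python. All the helper names here are mine, the trajectory is a straight line rather than a conic, and plain bisection stands in for whatever iterative root-finder a real implementation would use; the point is the structure: a cheap AABB test first, an exact solve only for the survivors.

```python
import math

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def aabb_overlap(lo1, hi1, lo2, hi2):
    # Stage 1: cheap axis-aligned bounding box rejection.
    return all(l1 <= h2 and l2 <= h1 for l1, h1, l2, h2 in zip(lo1, hi1, lo2, hi2))

def soi_entry_time(pos, t0, t1, center, radius, iters=40):
    # Stage 3: bisection for the exact crossing time.
    # pos(t) -> (x, y, z); assumes outside the SoI at t0, inside at t1.
    for _ in range(iters):
        tm = 0.5 * (t0 + t1)
        if dist(pos(tm), center) > radius:
            t0 = tm
        else:
            t1 = tm
    return 0.5 * (t0 + t1)

# Toy trajectory moving along x, entering a unit-radius SoI at x = -1, t = 9.
pos = lambda t: (t - 10.0, 0.0, 0.0)
t = soi_entry_time(pos, 0.0, 10.0, (0.0, 0.0, 0.0), 1.0)
print(round(t, 6))  # crossing found at t = 9
```

The expensive stage only ever runs on pieces that survived the cheap ones, which is why the average cost per piece of debris stays tiny.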
  12. Checks out. The simplest way to derive this is still by using the height, and it is fully equivalent to Heron's formula. You can rearrange the terms in a multitude of ways. My favorite by far is 16A² = (a + b + c)(a + b - c)(a + c - b)(b + c - a), which is equivalent to the above.
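A quick numeric check, in Python, that the factored form and the direct Heron evaluation agree (function names are mine):

```python
import math

def area_heron(a, b, c):
    # Classic Heron: A = sqrt(s(s-a)(s-b)(s-c)) with s the semi-perimeter.
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def area_factored(a, b, c):
    # From 16 A^2 = (a+b+c)(a+b-c)(a+c-b)(b+c-a).
    return math.sqrt((a + b + c) * (a + b - c) * (a + c - b) * (b + c - a)) / 4

print(area_heron(3, 4, 5), area_factored(3, 4, 5))  # both 6.0
```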
  13. Yeah, but I can actually see why someone would think that doing proximity checks first, and only afterwards drilling into the craft to see if it has an antenna, would be a good idea, because a proximity check is a cheap computation. In contrast, we know that satellites will happily pass through mountains if the player isn't nearby, and the collision scene progressively loading around the player is a known cause of some bugs related to moving near the surface at high speeds. Finally, even if you were to bother to load collisions for distant craft, you'd have to relocate the origin to have anything like the required precision, and even then, under time warp, you just aren't going to get any collisions, as the game is doing intersection tests rather than sweeps. Basically, it'd be really, really stupid to do proximity checks on debris vs debris for the sake of physics, as you'd just be discarding the result. I'm not going to say it's impossible, but I find poorly optimized CommNet to be way more likely.
  14. Superman once went FTL, so according to modern physics, he had to have consumed all the power. Things like flying and eye-lasers aren't even worth mentioning on this scale, honestly.
  15. I was once looking for a simple way to generate a formula for sums of natural powers. In other words, find the sum 1^k + 2^k + 3^k + ... + N^k. I failed on simple, but I got a generic algorithm that works and isn't hard to understand if you know a bit of calculus. Start with the observation that d/dx (x⁰ + x¹ + x² + ...) = 1 + 2x + 3x² + ..., which has the values we're interested in at x = 1 for the power in question being 1. This does, however, break down on the second derivative, as d/dx (1 + 2x + 3x² + ...) = 2 + 6x + 12x² + ... But we can fix it by simply pre-multiplying by x: d/dx (x + 2x² + 3x³ + ...) = 1 + 4x + 9x² + ..., which gives us the sums we want for x = 1 and the power being 2. So let's define the operator D[f(x)] = x * d/dx[f(x)]. Then D²(x⁰ + x¹ + x² + ...) = x + 4x² + 9x³ + ..., which again gives the sum we want at x = 1. We can also compute partial geometric sums: Σ{0,N} xⁿ = (1 - x^(N+1))/(1 - x) for x ≠ 1. Derivation in spoiler. Putting it all together, we can express the actual sum we care about as a limit: 1^k + 2^k + 3^k + ... + N^k = Lim{x→1⁻} D^k[(1 - x^(N+1)) / (1 - x)]. And there you have it. A general formula for summing up any natural power of an integer sequence. What do you mean that's not it? You want to actually expand it? Oh, fine. Let's just do the simple case of k = 1. D[(1 - x^(N+1)) / (1 - x)] = x * d/dx ( (1 - x^(N+1)) / (1 - x) ) = x * [ (1 - x^(N+1)) - (1 - x) * (N + 1) x^N ] / (1 - x)² = (x - (N + 1) x^(N+1) + N x^(N+2)) / (1 - x)². Using L'Hôpital's rule twice to find the limit: Lim{x→1⁻} ... = Lim{x→1⁻} ( 1 - (N + 1)² x^N + N (N + 2) x^(N+1) ) / (2 (x - 1)) = Lim{x→1⁻} ( N (N + 1) (N + 2) x^N - N (N + 1)² x^(N-1) ) / 2 = (N (N + 1) (N + 2) - N (N + 1)²) / 2 = N (N + 1) / 2. So 1 + 2 + 3 + 4 + ... + N = Lim{x→1⁻} D[(1 - x^(N+1)) / (1 - x)] = N (N + 1) / 2. QED. Using this method to derive an expression for k = 2 takes about a page. I've been able to get results for k = 3 and k = 4 in Mathematica.
There is probably a systematic way to attack these that gives you some straightforward recursion relationship, but I haven't been able to see it. So for the time being, this is filed away in my brain as "curious, but not useful."
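A neat way to check the D-operator trick numerically, without the limit machinery: on the finite polynomial x⁰ + x¹ + ... + x^N, the operator D = x·d/dx just multiplies the coefficient of xⁿ by n, so applying it k times and evaluating at x = 1 gives the power sum directly (Python sketch, names mine):

```python
# D[f] = x * f'(x) acting on x^0 + x^1 + ... + x^N: on a coefficient list,
# D maps coefficient c_n to n * c_n. Applying D k times and evaluating
# at x = 1 yields 1^k + 2^k + ... + N^k.
def power_sum(N, k):
    coeffs = [1] * (N + 1)          # x^0 + x^1 + ... + x^N
    for _ in range(k):
        coeffs = [n * c for n, c in enumerate(coeffs)]  # apply D once
    return sum(coeffs)              # evaluate at x = 1

print(power_sum(100, 1))  # 5050 = 100*101/2
print(power_sum(10, 2))   # 385  = 10*11*21/6
```

The limit in the post exists precisely because the closed geometric-sum expression and this finite polynomial agree everywhere except at x = 1.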
  16. Yeah. Could it be something related to CommNet? There are definitely bad ways to code connection checks that will cause lag spikes when things enter/leave proximity of each other.
  17. No, it doesn't. The game only checks for collisions if it's within range of the player. So it's an O(n) check, not O(n²). Not how that works either. While there is plenty of inefficient garbage in .NET, if all you're trying to do is iterate through a list and do a simple distance check, the loop will get JITed and you'll have very close to native performance, so long as you don't do anything silly, like start allocating memory. Which I can't guarantee Squad didn't do, but there's definitely a way to do all of this without taxing the CPU at all.
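As a sketch of how cheap that per-frame pass is, here's the whole thing in a few lines (illustrative Python; the real thing would be the equivalent C# loop): one linear scan using squared distances, no sqrt, no per-iteration allocation.

```python
def in_range_count(player, positions, load_radius):
    # Compare squared distances to avoid a sqrt per object.
    r2 = load_radius * load_radius
    count = 0
    px, py, pz = player
    for x, y, z in positions:
        dx, dy, dz = x - px, y - py, z - pz
        if dx * dx + dy * dy + dz * dz <= r2:
            count += 1
    return count

# 10,000 objects spread along a line, 2 km load radius.
positions = [(float(i), 0.0, 0.0) for i in range(10_000)]
print(in_range_count((0.0, 0.0, 0.0), positions, 2_000.0))  # 2001 in range
```

A handful of arithmetic ops per object, once per frame, is exactly the kind of loop a JIT handles at near-native speed.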
  18. All of these things are pretty cheap computationally, so I don't expect an impact on loading times. But you are correct about it adding to code complexity, so it would take time to develop and debug, which adds to development costs. That's... honestly a surprise. It shouldn't. I don't know what KSP is doing there, but I can walk you through the computations that need to be done for an update and even run benchmarks if you'd like. With a correct implementation, you should be in the high hundreds of thousands or low millions before it has frame-rate impact. I tend to run my KSP games very clean, with a small amount of debris, so I've never noticed this, but if it's really that bad, it's either a bug or a really bad error in the code. I hope they're not still trying to run time steps during warp; that'd be a pretty obvious mistake. (Edit: Actually, I wonder if the problem is the game trying to draw trajectories... Still shouldn't be that bad, but that could at least explain some of it.) Yeah, looking at the cited cache on Switch, it looks limiting. But it's not all about the amount. The original PS4 and XB1 (the non-Pro versions) had abysmal cache performance in certain situations. The biggest problem was with interlocked atomics, and these are used basically everywhere if you're doing any sort of multi-threading. Given the architecture, I suspect that's not as much of a problem on Switch. Also, animation is where cache performance hurts you the most, especially low size and/or associativity of the cache. Even KSP2 doesn't look like it will have a lot of that. Kerbals are pretty simple, so I don't expect them to have 200+ bones over which you need to interpolate ten different animation fragments. That's the sort of task on which your cache starts to cry, usually. Blending a few tracks over a few dozen bones hasn't been a problem for anything remotely recent for over a decade.
  19. Agreed. That's why a Switch port would have to convert all of that, which would make it unreasonably expensive. But again, that's not a hardware limitation so much as the choice of framework for the main game. Keep in mind, I'm saying that KSP2 can be made to run on Switch with minimal sacrifices to game quality. Not that it would make financial sense to do so. During time warp, everything's on rails. Outside of time warp, only loaded craft (within 2km?) are simulated and everything else is on rails. And for the loaded craft, gravity is just folded into the other forces, so it's just part of the simulation. KSP2 will have to figure out how to do this for torch ships. I'm not sure what they have in mind here, but technically only a ship within an SoI needs to be numerically integrated. For a ship in interstellar space, outside of any SoI, there is an analytic solution for constant thrust. So these can still be on rails, leaving just a few in-system torch ships that are in flight at any given time. That's reasonable even under the 1M time warp we'll probably need for interstellar. A weld is a type of constraint that KSP already uses to hold modules together. It doesn't make a ship completely rigid, because there is Baumgarte relaxation applied (or the equivalent - I don't recall exactly how PhysX handles this). That effectively turns every weld into a very stiff spring, allowing some flex. Now, I would argue that having completely rigid rockets would have been fine from the start, but we're kind of stuck with what we have now. Here's the thing, though. The methods used to solve constraints are all iterative. If you can guess the approximate solution and start with it, you can simulate physics in way fewer iterations. So what if we guess that the ship approximates a rigid body?
Well, we can pre-compute the pseudo-inverse for the constraint equations assuming a rigid body, use the sum of forces and torques to substitute accelerations for every module, and get approximate constraint forces as an output. We can then feed these approximations as a starting point into a single iteration of a general constraints solver to get a very nice, very stable simulation with way, way less computational effort. If that's still not enough, the next step is starting to turn some short stacks into rigid bodies. Do you really need the flex between the in-line reaction wheel, battery, and monoprop tank you threw on top of your fuel stack? Probably not. Turn them into a single rigid body, and you now have a lot less to worry about during simulation. Is all of this easy to do? No. It requires a custom solver, somebody who knows how to do physics simulation, and a significant amount of time for developing and testing. So I don't expect many of these optimizations to happen in KSP2. But it is something that can be done if somebody just gave you a large budget for a Switch port for some unknown reason.
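The warm-starting idea above can be illustrated with a toy solver. This is only a sketch of the mechanism: Gauss-Seidel on a small linear system stands in for a constraint solver, and a seed near the true solution stands in for the rigid-body guess; everything here (system, tolerances, names) is made up for the example.

```python
# Warm-starting an iterative solver: starting from a good guess takes far
# fewer sweeps to converge than starting from zero.
def gauss_seidel(A, b, x0, tol=1e-9, max_iter=10_000):
    n = len(b)
    x = list(x0)
    for it in range(1, max_iter + 1):
        err = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (b[i] - s) / A[i][i]
            err = max(err, abs(new - x[i]))
            x[i] = new
        if err < tol:
            return x, it
    return x, max_iter

A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
b = [1.0, 2.0, 3.0]
_, cold = gauss_seidel(A, b, [0.0, 0.0, 0.0])                 # cold start
exact, _ = gauss_seidel(A, b, [0.0, 0.0, 0.0], tol=1e-12)
_, warm = gauss_seidel(A, b, [v * (1 - 1e-6) for v in exact])  # warm start
print(cold, warm)  # warm start converges in fewer sweeps
```

In a physics engine, the warm start would come from the rigid-body pseudo-inverse (or simply last frame's constraint forces), which is exactly why the post expects a single general-solver iteration to suffice.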
  20. At least one of the ships in the initial trailer looks a bit like a beam-core antimatter drive. But it's hard to say. We definitely have a Project Daedalus style fusion drive, though, which is also a decent improvement on Orion.

Finally, we don't know what distances are going to be like. The Sun is in a less dense portion of the galaxy, so 4 ly to the nearest neighbor is definitely not on the low side for stars. And star systems in KSP have already been shrunk. So having all neighboring stars fall in the 0.1 ly - 0.3 ly range wouldn't be unreasonable. That would let you reach nearby stars within a few decades, which would still require major time warp to make it playable, but within reason.
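For a rough sense of scale, here's a back-of-the-envelope sketch, with every number being my own assumption rather than anything confirmed for KSP2: a constant-thrust flip-and-burn covers distance d in roughly t = 2*sqrt(d/a).

```python
import math

LY = 9.4607e15  # meters per light year

def brachistochrone_time(distance_m, accel):
    """Accelerate to the midpoint, flip, decelerate: t = 2*sqrt(d/a)."""
    return 2.0 * math.sqrt(distance_m / accel)

# Hypothetical shrunk distances at a constant 0.01 m/s^2 (~1 milligee):
for d_ly in (0.1, 0.3):
    years = brachistochrone_time(d_ly * LY, 0.01) / 3.15576e7
    print(f"{d_ly} ly: {years:.1f} years")
```

At that made-up acceleration, 0.1 ly works out to about 19.5 years and 0.3 ly to about 34, so "a few decades" is in the right ballpark.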
  21. Sure, sure. But KSP doesn't need to deal with a dozen characters with close to a hundred animated bones each, running complex animation graphs and blending several animated fragments, while updating thousands of FX particles from dozens of emitters and managing pathfinding and behavior trees for several enemies. That's where most of the CPU time goes in a typical game, outside of feeding the GPU things to draw. KSP has very little or none of the above, while running physics comparable to what a typical PC game runs for debris collision. The reason KSP still manages to tax the CPU is crap optimization, I'm afraid.

Unity's physics is not the best. It is heavily based on PhysX, which isn't a great starting point, but I don't know if that accounts for all of it. I've been able to do decent enough vehicle simulations with PhysX cheaply enough to run on an MMO server. Of course, that didn't involve using PhysX's constraint solver, which is the first thing Squad should have thrown on the trash pile rather than trying to fix. Even with that in mind, I'm not sure why performance is quite as bad as it is. Maybe I'm underestimating how bad the PhysX solver is, after all.

That said, new versions of Unity do provide a way to use Havok instead. Havok has actually been paying attention to improvements in constraint systems and solvers, so they've gotten to the point where their engine is decent. It provides better stability while running a lot lighter. That means if you want to go with KSP-style craft construction, you can do so with far fewer welds and still have less Kraken. For a similar craft, you can probably cut CPU load by an order of magnitude at least. That alone should be enough to run KSP on Switch as-is, unless there's another sink somewhere.

Of course, the correct solution is a custom solver for the craft, one which makes use of the fact that pretty much every constraint is a weld, so you're really building a rigid body.
Even if you want to keep the flex, so joints stay floppy, solving the craft as rigid and populating the constraint cache with the result means you can probably get a good solution in a single iteration without any sacrifice in stability. That will give you nice, stable physics for all but the largest craft with absolutely minimal CPU impact.

There are still a lot of unknowns in terms of what Intercept wants to do with KSP2 that's different from KSP. Colonies and shipyards, depending on how they're built, could be pretty major resource hogs. Multiplayer always throws a wrench into systems. So there are places where KSP2 might end up far more CPU intensive. And maybe these are features that would have to get scaled down for a Switch port. But the core of the game can most certainly run on that console.

Now, the grain of salt is that I've never worked with Switch hardware. But it will outperform a 360 in any fair benchmark, and I'm fairly familiar with the latter. I also have some experience optimizing games to run on console hardware they weren't built for, with a focus on physics, animation, and FX performance. So I'm pretty confident in the above statements.

No argument there. But that's part of why Unity's such a sand trap. You stand up a game on Unity because you don't have the resources to do it with a more sophisticated engine, and then you don't have the resources to optimize your performance on it. Not to hate on Unity or anything. There's certainly a niche for it, but it's not the best engine for every game made on a budget.
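The warm-starting idea mentioned above is easy to demonstrate on a toy problem. This is not PhysX or Havok code, just plain Gauss-Seidel on a made-up diagonally dominant system standing in for the constraint equations; the warm start is simulated by perturbing the exact solution, playing the role of the rigid-body prediction.

```python
import numpy as np

def gauss_seidel(A, b, x0, tol=1e-8, max_iter=1000):
    """Plain Gauss-Seidel; returns the solution and iteration count."""
    x = x0.astype(float)
    n = len(b)
    for it in range(1, max_iter + 1):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol:
            return x, it
    return x, max_iter

# Toy stand-in for a welded stack: a tridiagonal, diagonally dominant system.
n = 50
A = (np.diag(np.full(n, 4.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = np.ones(n)

x_exact = np.linalg.solve(A, b)
_, cold_iters = gauss_seidel(A, b, np.zeros(n))      # start from nothing
_, warm_iters = gauss_seidel(A, b, x_exact + 1e-6)   # "rigid body" guess
print(cold_iters, warm_iters)
```

Same solver, same tolerance; the only difference is the starting point, and the warm-started run needs noticeably fewer sweeps. That's the whole trick, scaled down.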
  22. Honestly? Like, honestly, honestly? It can be done by a competent port team. There's more than enough processing power to handle KSP2 with downgraded graphics. The problem is, they never did that sort of optimization even for the original KSP, and KSP2 is a much more ambitious project with insufficient resources to spend on optimization. So people who say it can't be done because the Switch isn't powerful enough are wrong. But yeah, it's unlikely to happen because the necessary resources just won't get invested.
  23. I'm saying that if we had a population that was never exposed to any sort of influenza or coronavirus, an introduced strain of influenza would probably spread through it more rapidly than CoV-2. The fact that pretty much everyone has had dozens of strains of flu over the course of their life makes it a lot more likely that any given individual's immune system is already prepared for a new strain. At the same time, CoV-2 is essentially new to most people, so it spreads faster for now. This is purely from gauging the R0 change by eye and not based on any sort of study, of course. I can easily be wrong here.

The idea, then, is that we might be more resistant to new strains of CoV-2 than to influenza once we have a vaccine and most people have either been previously exposed or vaccinated, even if CoV-2 ends up mutating just as rapidly. If it doesn't mutate as rapidly, that's the cherry on top. If CoV-2 is both inherently more infectious and mutates as rapidly as, or more rapidly than, influenza, then we're just going to have to make seasonal COVID vaccines in perpetuity. That would be less than optimal.

And again, this is based on me literally doing a search for papers citing R0 numbers and how they change during an outbreak, without even doing a proper analysis on them. I'm slightly more confident in the above than in just guessing, but that's about it.
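To spell out the cross-immunity argument: in the standard well-mixed picture, the effective reproduction number is just R0 scaled by the fraction of the population still susceptible. The numbers below are entirely made up for illustration, not real estimates for either virus.

```python
def effective_r(r0, immune_fraction):
    # Well-mixed assumption: R_eff = R0 * (fraction still susceptible)
    return r0 * (1.0 - immune_fraction)

# Hypothetical: a flu strain with a higher intrinsic R0 can still spread
# more slowly than a novel coronavirus, because most people carry some
# cross-immunity from past flu strains.
flu = effective_r(3.0, 0.6)   # broad cross-immunity from prior exposure
cov = effective_r(2.5, 0.0)   # nobody has seen it before
print(f"flu: {flu:.2f}, cov: {cov:.2f}")
```

So even with these invented numbers, the novel virus out-spreads the "intrinsically faster" one, which is the shape of the argument above.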
  24. The only reasonable trajectory for interstellar is a straight line. If you are traveling slow enough for your trajectory in interstellar space to curve noticeably, no amount of time warp will help you.
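To put a number on "slow enough": the small-angle deflection of a hyperbolic flyby is roughly theta = 2*GM / (b * v^2). A quick sanity check with a Sun-mass star, at a speed and distance I picked purely for illustration:

```python
GM_SUN = 1.327e20   # m^3/s^2, gravitational parameter of a Sun-like star
LY = 9.4607e15      # meters per light year
C = 299792458.0     # m/s

def deflection_angle(gm, impact_parameter, speed):
    """Small-angle hyperbolic deflection: theta ~ 2*GM / (b * v^2)."""
    return 2.0 * gm / (impact_parameter * speed ** 2)

# Passing 0.1 ly from a Sun-like star at 1% of light speed:
theta = deflection_angle(GM_SUN, 0.1 * LY, 0.01 * C)
print(f"{theta:.1e} rad")
```

That comes out to roughly 3e-8 radians, so at anything resembling interstellar speeds the straight line really is the only trajectory worth plotting.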
  25. Can we just make this a difficulty setting? Alongside re-entry damage and things like that?