
Does KSP benefit from the X3D CPUs?


Motokid600


These chips are supposedly better for simulation games, so I'm wondering if we will see any benefit from the AMD X3D series CPUs in both KSP1 and KSP2. I know little to nothing about software, APIs, game engines and how they work, so I suppose I'm asking whether Unity can take advantage of the v-cache on the X3D chips? I know it's probably too early to say for KSP2 given its current status, but for those who know Unity better, can we expect any performance advantage from these chips? Pardon my ignorance.


I doubt X3D will help with KSP2. It's a cache improvement, and I don't think that's the bottleneck for the game. KSP2 is still main thread bound, and you generally only start getting into L3 cache bottlenecks once you're making good use of all the threads.


I'm curious as well, although it's way too early to tell with KSP2 in its current state. Overall performance needs to be optimized first, but when it is, I'd really like to know what would get the best performance. I'm currently running a 12700KF; Hyperthreading off gets me the same performance as on, likewise with the efficiency cores on or off. X3D CPUs have been around for a little while now, but it's hard to find good comparisons or benchmarks for KSP1, sadly.


17 hours ago, K^2 said:

I doubt X3D will help with KSP2. It's a cache improvement, and I don't think that's the bottleneck for the game. KSP2 is still main thread bound, and you generally only start getting into L3 cache bottlenecks once you're making good use of all the threads.

That's not entirely correct. Whether cache helps or not depends more on the kind of instructions and data the CPU must process.

If your application needs to do the same couple dozen things over and over on very similar data, then both the instructions and the majority of the data can be effectively cached, and every new instruction cycle scores a cache hit. This allows the thread to proceed immediately (for cache latency values of "immediate"). If, however, each new instruction is something unpredictably new, and requires large amounts or quasi-random selections of data, then the CPU must wait for a main system memory access - or, if it's really, really unlucky, for a disk access. These things take multiple orders of magnitude longer than reading from cache. While modern CPUs are very proficient at predicting execution order and finding other things to do while they wait for something, that doesn't make the waiting instruction itself finish any faster. And if that instruction happens to be holding a lock on the main thread, then, well, the entire main thread is briefly paused while some file is accessed on the disk.

Thus, different applications respond differently to large amounts of cache. Those that can't make use of it don't care, and those that can make use of it will absolutely love it and post massive gains up until they wander into diminishing returns. Which, again, vary from application to application.

This does not really rely on fully loading all cores. Even a purely single-threaded application running on an 8-core, 16-thread CPU can love cache, or not.

In the case of KSP (1 or 2), the big ticket CPU item is physics simulation for the active vessel. If this task specifically loves cache, then the KSP games will run significantly better on X3D CPUs. If this task doesn't love cache, there won't be much of an advantage, even if some other tasks do love cache.

I'm not sure we can benchmark this effectively on the current version of KSP2, though. It would likely need to be done by someone who has a system with a massive high-end GPU, and two comparable Ryzen CPUs with and without v-cache (such as a 5800X and a 5800X3D). It might involve loading a vessel somewhere in deep space away from all celestial bodies. If the vessel is bonkers enough, it should generate the CPU load needed - but whether the results would be cleanly reproducible enough for a meaningful benchmark, I can't say.

Edited by Streetwind

On 4/8/2023 at 7:39 AM, Streetwind said:

That's not entirely correct. Whether cache helps or not depends more on the kind of instructions and data the CPU must process.

If your application needs to do the same couple dozen things over and over on very similar data, then both the instructions and the majority of the data can be effectively cached, and every new instruction cycle scores a cache hit. This allows the thread to proceed immediately (for cache latency values of "immediate"). If, however, each new instruction is something unpredictably new, and requires large amounts or quasi-random selections of data, then the CPU must wait for a main system memory access - or, if it's really, really unlucky, for a disk access. These things take multiple orders of magnitude longer than reading from cache. While modern CPUs are very proficient at predicting execution order and finding other things to do while they wait for something, that doesn't make the waiting instruction itself finish any faster. And if that instruction happens to be holding a lock on the main thread, then, well, the entire main thread is briefly paused while some file is accessed on the disk.

Yeah, and there's certainly no one-size-fits-all - you really have to profile and see what you're hitting. But as a general rule, a single thread might be L1 or L2 limited. If a single thread is L3 limited, either something's horribly wrong with your CPU architecture or you are doing something especially awful with your algorithm. A modern L3 is designed to feed 8+ threads. If you can saturate that with a single thread without intentionally trying to, any multithreaded task is going to struggle. In the case of X3D, you're only really getting benefits if you've blown out your L1 and L2, and the X3D gives you an L3 hit instead of a miss. And X3D isn't going to help you access any memory you've never touched before. It only helps if you've had other memory accesses walk all over the relevant cache lines. That happens all the time if you have 20 threads doing their own thing, but if you have mostly the one thread doing the work, it tends to be pretty unlikely.

Based on everything we've seen of KSP2's performance, I'd be shocked if KSP2 has a lot of L3 misses that X3D would make any difference to. I'd be happy to be proven wrong with a benchmark, but everything we know already heavily favors single-thread optimizations in the architecture, which means faster cores and more L1 and L2, and that's not at all the sort of things that X3D helps you with.


9 hours ago, K^2 said:

Based on everything we've seen of KSP2's performance, I'd be shocked if KSP2 has a lot of L3 misses that X3D would make any difference to. I'd be happy to be proven wrong with a benchmark, but everything we know already heavily favors single-thread optimizations in the architecture, which means faster cores and more L1 and L2, and that's not at all the sort of things that X3D helps you with.

That couldn't be further from reality.
I haven't tested anything in KSP 2, but I can say with certainty that the architecture is very similar to KSP 1.
It's a huge single-threaded C# codebase calling into a lot of mostly static, scattered, and large memory allocations.
Which is why a huge L3 cache does literal wonders: suddenly all those "not-hot" but still "grabbed-many-times-every-frame" memory allocations are in the L3 cache instead of requiring a round trip to RAM.

I can personally confirm that KSP 1 gets a huge performance increase from the 5800X3D compared to a mildly overclocked 6700K.
For reference, a 6700K has per-core L1/L2/L3 cache of 64KB/256KB/2MB vs 64KB/512KB/96MB for the 5800X3D. In terms of raw IPC (with the overclock on the 6700K), the 5800X3D is only 15-25% better (tested with a variety of single-threaded benchmarks, including some mildly cache-sensitive ones).
Yet the effective throughput increase in terms of frame time in KSP 1 is 70-90%.
This has also been confirmed by other people who have done some (quite rigorous) benchmarking in KSP 1 with a 5800X3D, and is in line with the results in other games where the size of the data set accessed every frame is quite large.

Edit :

All this being said, to compare apples to apples: compared to Intel 12th/13th gen or to the Ryzen 7000 series, the 5800X3D is lagging behind quite a lot in terms of raw IPC and frequency, and all those newer-gen CPUs (including Intel ones) have massively increased L2/L3 caches compared to a 6700K, so the comparative benefit of the extra L3 cache of the X3D parts (including the new 7000X3D series) is likely much less dramatic than the above results.

For example, in Cities: Skylines (another mainly single-threaded Unity/C# game with very similar bottlenecks to KSP), the 5800X3D only provides a 10-20% frame rate increase over a 5800X (non-3D). The 5800X is essentially the same CPU, but with a per-core cache of 64KB/512KB/32MB (vs 96MB of L3 for the X3D), and it is clocked a bit higher.

What this demonstrates is that increasing cache size is very beneficial when you start from a case where it is a major bottleneck in a memory-access-intensive game, but past the point where you can fit most of the frequently used data set in the cache, increasing cache size stops yielding significant performance improvements. A 6700K with 256KB/2MB of L2/L3 cache per core is vastly bottlenecked in those cases, but the difference between 512KB/32MB and 512KB/96MB is much less significant, and similar reasoning applies to Intel 12th/13th gen, where the per-core L2/L3 cache sizes vary from 1MB/20MB on mid/low-tier models to 2MB/33MB on mid/high-tier models.

Edited by Gotmachine

1 hour ago, Gotmachine said:

This has also been confirmed by other people who have done some (quite rigorous) benchmarking in KSP 1 with a 5800X3D, and is in line with the results in other games where the size of the data set accessed every frame is quite large.

Could you help me find some of those benchmarks done by others? I don't doubt you, I'm just really curious about the details.


This was on Discord in private channels, so I can't link you, but one person made automated benchmarks, executing a predefined launch sequence in the same savegame and monitoring frame times with a custom plugin, all that in a relatively heavily modded game. He compared a 6600K @ 4.4GHz with a 5800X3D. Average frame rate was 73 vs 124 FPS in bench 1, and 45 vs 89 FPS in bench 2. 1% lows were 50% better in bench 1, and actually 15% lower in bench 2, which isn't surprising given that KSP has a chronic issue with some specific heavy loads / cache rebuilds kicking in at semi-regular intervals with a relatively high frequency.

I've personally gotten very similar results in much less rigorous testing, both in stock and in a mildly modded game, comparing a mildly overclocked 6700K and a 5800X3D.

Edited by Gotmachine

8 hours ago, Gotmachine said:

That couldn't be further from reality.
[...]
I can personally confirm that KSP 1 gets a huge performance increase from the 5800X3D compared to a mildly overclocked 6700K.
[...]
Yet the effective throughput increase in terms of frame time in KSP 1 is 70-90%.

That flies in the face of everything I've seen profiling other engines. I don't mean to say that I doubt the results - based on other links this seems to be well established - but no halfway-optimized engine should even allow that kind of garbage cache performance from its component architecture.

That said, Unity keeps doing things I don't expect from a civilized engine, so I probably should hook up profiling tools to both KSP and KSP2 and take a look at what's going on there.

With that in mind, yes, it would absolutely be worth testing to see if KSP2 is going to be the same way. But also, if this is still the case with the latest Unity, somebody at that company should be reprioritizing where they're investing their engineering effort.


5 hours ago, LoSBoL said:

Found these

I'm quite skeptical about the results in the first video. Stock KSP with a craft that simple should be getting a solid 150+ FPS on a 5800X3D, not 80 FPS. For me (on a 5800X3D), launching a stock Dynawing (~150 parts) results in 280-290 FPS average during launch, 330-340 FPS in space, and still 160-170 FPS with ~100 parts on reentry. And I doubt the craft in the video is actually more than 50-60 parts, so something is not right with the setup, like some driver-enforced VSync.

1 hour ago, K^2 said:

That said, Unity keeps doing things I don't expect from a civilized engine, so I probably should hook up profiling tools to both KSP and KSP2 and take a look at what's going on there.

The Unity/Mono pair is certainly part of the reason why things are scaling that way. KSP, and to an even greater extent modded KSP, is also far from doing things in an optimized way. It's a game whose foundations are built on Unity 4 from 12 years ago.

As for KSP 2, it's quite pointless to even try to assess its performance at this point. The thing is coded like a C# desktop application, with absolutely zero regard for even the most basic guidelines on how to use C# correctly in the context of Unity. The thing is spitting out 2-3MB of GC allocations per frame, and that's only the tip of the iceberg.

But to some extent, it's perfectly understandable to have some types of games benefit from a large L3 cache.

Factorio is a not entirely, but still mostly, single-threaded game, which is quite renowned for being a very optimized codebase using a lot of low-level optimizations, especially in terms of memory packing. Yet it benefits massively from the additional L3 cache on a 5800X3D, simply because by nature its "main" in-memory data set is huge, to the point that a 5800X3D beats a 13900K by a whopping 30%, despite its raw IPC being roughly 50% less...

Edited by Gotmachine

1 hour ago, Gotmachine said:

Factorio is a not entirely, but still mostly, single-threaded game, which is quite renowned for being a very optimized codebase using a lot of low-level optimizations, especially in terms of memory packing. Yet it benefits massively from the additional L3 cache on a 5800X3D, simply because by nature its "main" in-memory data set is huge, to the point that a 5800X3D beats a 13900K by a whopping 30%, despite its raw IPC being roughly 50% less...

For many "grid" games and simulations, yes. A lot of simulations in fluid dynamics and lattice QCD employ clever space-filling curves to keep things that are spatially close together closer together in memory, because even when running on GPU with a cache that's much better optimized for these sort of workloads, the massive datasets that you have to deal with will absolutely destroy you if you don't pay attention to coherency. (I've had cases in simulation where it was worth it to copy a grid transposed for a certain pass, then transpose-copy the data back to get the performance improvement.) So it's worth the not insignificant computational overhead to go for a very complex layout. For games like Factorio, space-filling curves might not even be an option, so no matter how much you optimize, you're at a minimum straddling the stride of your grid which will hurt you on L3. And this can get particularly gnarly if you have to execute updates on the grid in a specific order to adhere to all of the game's rules, and that's often the case in these sorts of games.

Even going to something like the aforementioned Cities: Skylines, the grid is substantial. Definitely easier to optimize than Factorio's, but I wouldn't expect it to work well with the cache with a naive implementation. In KSP, though? If your engine's component and asset models are coherent, there shouldn't be a way for the developers to screw it up. Again, not without trying to. I'm not saying Squad didn't make a mess of it - they most certainly have - but even with Unity's game object model, the engine should be protecting you from the worst of it. Like, that's the engine's frigin' job. And yes, yes, I know, if individual behavior scripts start allocating left and right, it's very easy to end up with things tripping over each other, but that's why you supposedly outsource the engine development to the professionals. If you're writing the engine, you control the allocation and how the scripts are executed. Pool the memory. Group the execution together. It's textbook stuff.

And yes, I'm upset. It seems Unity keeps finding ways to disappoint me. And yes, I know there's a lot KSP and KSP2 teams could have done to make it better, but forgive me for holding dev teams of fewer than 50 people in total to a different standard than a multi-billion corporation whose entire job it is to make a game engine.

Edited by K^2

1 hour ago, K^2 said:

It seems Unity keeps finding ways to disappoint me. And yes, I know there's a lot KSP and KSP2 teams could have done to make it better, but forgive me for holding dev teams of fewer than 50 people in total to a different standard than a multi-billion corporation whose entire job it is to make a game engine.

"Good" depends on your definition of "good". Here you mean "high performance". But Unity was never about performance; it's good for plenty of reasons, and performance isn't (or at least wasn't until recently) one of them.

Yet it's hard to argue against its massive success, and things exist for a reason.
Most of it is due to it being the most productive engine for putting small to mid-sized games out of the pipeline as quickly as possible.
You can get a game (and not only a game, but any 2D/3D application) done in Unity with fewer man-hours, and less skilled man-hours, than with any other option out there.
And Unity's primary market is mobile games and various other "I need a 2D/3D frontend" applications, an area where you just need the thing to run, and ideally to run on several target platforms with minimal additional investment.
In those markets, it doesn't need to be pretty, it doesn't need to be optimized, it just needs the smallest possible concept-to-product cost. That's also why Unity is quite a success in the indie games scene.

But over the years, in trying to address every possible platform and every possible market, Unity has become capable of doing everything while being good at nothing.
It's the jack-of-all-trades of game engines, but the foundations are starting to seriously lag behind the state of the art.
And they have acknowledged that. They have many multi-year foundational improvements in the pipeline, many of them specifically about addressing historic performance limitations.
SRP, UITK, DOTS, Burst, Jobs, Unity Physics, moving away from Mono to the modern .NET ecosystem - all of those combined will probably put Unity back in the game, so to speak.

There is definitely a lot of frustration and valid criticism about how long all those ongoing changes are taking to come to fruition.
They tried to do all that incrementally, sometimes with a lack of focus, and with the goal of maintaining as much backward compatibility as possible, which also stems from the fact that, again, a large share of the Unity market is devs relying on being able to make their next product by reusing 80% of existing assets and workflows. There was some recent acknowledgment of those mistakes, and it seems they are slowly changing how they handle those projects, but as you said, it's a multi-billion corporation where things have quite a bit of inertia.

Arguably, KSP 2 development started in the wrong timeframe, when none of those newer options were in a stable enough state to rely on in production (although many games did, with quite some success, in the same timeframe).
Not that it really matters anyway, given that for the most part, KSP 2 is just a mild refactor of the KSP 1 codebase, so I doubt it would have changed anything.

Edited by Gotmachine

On 4/11/2023 at 1:41 AM, Gotmachine said:

Not that it really matters anyway, given that for the most part, KSP 2 is just a mild refactor of the KSP 1 codebase, so I doubt that it would have changed anything.

Do you have any reference for this (e.g. data mining)? Some people claimed this after the EA launch, but they didn't give any arguments except their (relatable) disappointment with the state of the game, so I didn't take them too seriously.
You, however, have some street cred in modding KSP and have already made some rational technical arguments, so I'm interested in how you came to this conclusion.
BTW: Big thanks for your work on KSP Community Fixes and Kerbalism. I'm doing a Science mode run with Kerbalism and haven't had this much fun with KSP in a long time.


58 minutes ago, jost said:

Do you have any reference for this (e.g. data mining)?

"Data mining" indeed. Funny how that expression is being used freely now that KSP 2 is out, when any mention of what it really means has been the subject of a moderator crusade for years.

But yeah, in many ways, this is KSP 1 2.0: the "breaking backward compatibility is allowed" update.
They took KSP 1, incrementally refactored the codebase taking advantage of modern Unity advancements where relevant (asset loading with addressables, JSON serialization, PBR...), shuffled a few things around to allow the "thrust under warp" feature, gave it a visual facelift, and that's it for the most part.
The reason most people somewhat familiar with the KSP 1 codebase (or just game/software engineers) are disappointed is that it's just that: a very cautious refactor that didn't even try to address the core issues of KSP 1.
And given how many resources they were given by T2/PD, KSP 2 could have been so much more.

The reasons those resources got wasted are not very clear (project leadership issues? scope creep? wrong priorities?), but what is very clear is that they underestimated how critical and difficult engineering a game like KSP is, and thought they could avoid the whole issue by being conservative and copy-pasting the KSP 1 implementations, falling into the trap that those implementations had massive and fundamental issues to begin with.
If KSP 2 was delayed 3 times over 2 years, likely won't deliver a feature-complete 1.0 for at least another year, and was released into EA as a hot buggy mess, it's because progress on the software engineering and codebase is lagging years behind the rest.
Planet and part art assets are ready, including assets for unreleased features. Sound design is complete and very polished. Even the UI is in a relatively good place. The codebase, by contrast, looks like it has gone through dozens of iterations and is barely out of prototyping.
There are many parts of it that feel like a first quick-and-dirty, get-it-somewhat-functional implementation, others feel like they spent months on the drawing board trying to make them extra fancy, some parts are straight-up copy-pasted from KSP 1, and the overall code quality is very, very uneven.

KSP 1 is often described as a huge mess of spaghetti code, and they clearly spent a lot of time refactoring the whole thing with a textbook Model-View-Controller pattern.
Unfortunately, textbook patterns don't make good games; features and performance do.
And KSP 2 doesn't address any of the long-standing issues of the KSP 1 core features, nor its performance issues.

They will never get better performance than KSP 1 in terms of part count being a CPU bottleneck. They made things even worse in that regard, for many reasons, but mainly because now all parts on all vessels (instead of just the active vessel in KSP 1) contribute to the CPU bottleneck.
Their core architecture doesn't implement, and unless they have a huge refactor already in the works, will never be able to implement, any of the usual patterns that could have alleviated that issue (multithreading, data-oriented programming).
Their MVC pattern does nothing to help on that end, and TBH, the level of coupling between everything is already making the MVC pattern mostly pointless.

The aerodynamics and buoyancy integrators are still based on the objectively terrible per-part drag cube system. That system is intrinsically inconsistent, not especially good from a performance PoV, and requires handling tons of corner cases all around the codebase.
The joint/rigidbody physics are still based on the PhysX integrator, which is simply inadequate for the task, not only in terms of simulation stability and achieving gameplay intent, but also in terms of performance, because they are forced to implement tons of extremely hacky workarounds to make it work in ways it doesn't support.
The resource (i.e. fuel) query/processing system they have put together is a counter-example: they did actually reimplement it differently than in KSP 1. But interestingly, it manages to not address any of the long-standing issues of its KSP 1 counterpart, while having abysmal performance compared to the KSP 1 implementation.
The awful global GameEvents internal messaging system? They kept it with a pointless facelift, not fixing the conceptual flaw that the thing uselessly broadcasts messages to every entity in the world, and they also managed to make it slower...
One example of something they are rethinking from scratch is the thermodynamics. Along with resource processing, this will be the only major departure from the KSP 1 core subsystems; it will be interesting to see how this pans out when they finally manage to release it.

Some people are speculating that they are working on some large refactors in parallel branches, and this is indeed how development works.
But with every patch and feature that gets released, the cost of merging such refactors becomes exponentially higher, and given that they are likely under huge pressure to get the 1.0 features out in less than a multi-year timeframe, I wouldn't bet on such refactors ever happening.

There are a few things that are better than in KSP 1. People have noticed that, for the most part, the long loading times are gone, but this is mostly just a side effect of the game not being limited by constraints that existed 12 years ago in Unity.
There are areas where they rewrote things with some success. While they did a very questionable job with the terrain texturing shader (which is a huge GPU bottleneck), their rewrite of the PQS mesh generation subsystem is one of the few things that is objectively good.
There are small improvements here and there. I'm personally very unimpressed by the overall UI/UX, but most people seem to like what they did there, and this is an area where they can easily improve, contrary to other more core aspects.

To temper my words: unless the game gets canned because of low sales, them failing to deliver the 1.0 features in a reasonable timeframe, or a bit of both, I still think they will end up with a quite clunky, but somewhat enjoyable game that achieves its high-level goals, just like KSP 1 did.
It's just that in many ways, and specifically those where KSP 1 was at its worst, KSP 2 is just as bad, and that leaves a bitter taste for a specific population of long-time KSP 1 players and modders.

Okay, big off-topic rant.


5 hours ago, Gotmachine said:

"Data mining" indeed. Funny how that expression is being freely used since KSP 2 is out when any mention of what this really mean has been the subject of a moderator crusade for years.

But yeah, in many ways, this is KSP 1 2.0 : the "breaking backward compatibility is allowed" update.
They took KSP 1, incrementally refactored the codebase taking advantage of modern Unity advancement where relevant (asset loading with addressables, json serialization, PBR...), shuffled a few things around to allow the "thrust under warp" feature, gave it a visual facelift, and that's it for the most part.
The reason most people somewhat familiar with the KSP 1 codebase (or just game/software engineers) are disappointed is because it's just that : a very cautious refactor that didn't even try to address the core issues of KSP 1.
And given how much resources they were given by T2/PD, KSP 2 could have been so much more.

The reasons those resources got wasted are not very clear (project leadership issues ? scope creep ? wrong priorities ?), but what is very clear is that they underestimated how critical and difficult engineering a game like KSP is, and thought they could avoid the whole issue by being conservative and copypasting the KSP 1 implementations, falling into the trap that those implementations had massive and fundamental issues to begin with.
If KSP 2 has been delayed 3 times for 2 years, and likely won't deliver a feature-complete 1.0 before at least a year, and was released as EA as a hot buggy mess, it's because advancement on the software engineering and codebase is lagging years behind the rest.
Planet and parts art assets are ready, including assets for unreleased features. Sound design is complete and very polished. Even the UI is an relatively good place. The codebase by contrast looks like it has gone through dozens of iterations and is barely out of prototyping.
There are many parts of it that feel like the first quick-and-dirty-get-it-somewhat-functional implementation, others feel like they spent months on the drawing board trying to make it extra fancy, some parts are straight-up copypasted from KSP 1, and the overall code quality is very, very uneven.

KSP 1 is often qualified as a huge mess of spaghetti code, and they clearly spent a lot of time refactoring the whole thing with a textbook Model-View-Controller pattern.
Unfortunately, textbook patterns don't make good games, features and performance does.
And KSP 2 doesn't address any of the long standing issues of the KSP 1 core features, nor its performance issues.

They will never get better performance than KSP 1 in terms of part count being a CPU bottleneck. They made things even worse in that regard, for many reasons, but mainly because now all parts on all vessels (instead of just the active vessel in KSP 1) contribute to the CPU bottleneck.
Their core architecture doesn't implement, and unless they have that huge refactor already in the works, will never be able to implement any of the usual patterns that could have alleviated that issue (multithreading, data-oriented programming).
Their MVC pattern does nothing to help on that end, and TBH, the level of coupling between everything is already making the MVC pattern mostly pointless.

The aerodynamics and buoyancy integrators are still based on the objectively terrible per-part drag-cube system. That system is intrinsically inconsistent, not especially good from a performance point of view, and requires handling tons of corner cases all around the codebase.
The joint/rigidbody physics are still based on the PhysX integrator, which is simply inadequate for the task, not only in terms of simulation stability and achieving gameplay intent, but also in terms of performance, because they are forced to implement tons of extremely hacky workarounds to make it do things it doesn't support.
The resource (i.e. fuel) query/processing system is a counter-example: they actually did reimplement it differently than in KSP 1. But interestingly, it manages not to address any of the long-standing issues of its KSP 1 counterpart, while having abysmal performance compared to the KSP 1 implementation.
The awful global GameEvents internal messaging system? They kept it with a pointless facelift, not fixing the conceptual flaw that the thing uselessly broadcasts messages to every entity in the world, and they also managed to make it slower...
One example of something they are rethinking from scratch is the thermodynamics. Alongside resource processing, this will be the only major departure from the KSP 1 core subsystems; it will be interesting to see how this pans out when they finally manage to release it.
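For contrast with the global-broadcast flaw described above, here is a minimal sketch of topic-scoped event dispatch, where a message only reaches handlers that registered for that topic instead of being pushed to every entity in the world. This is illustrative Python with invented names, not KSP's actual GameEvents API:

```python
from collections import defaultdict


class EventBus:
    """Topic-scoped pub/sub: publish cost scales with the number of
    interested subscribers, not with the number of entities in the world."""

    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, payload):
        # Only handlers registered for this exact topic run; unrelated
        # entities never see (or pay for) the message.
        for handler in self._subs[topic]:
            handler(payload)
```

A global bus, by contrast, would iterate every registered listener on every event and let each one filter for itself, which is where the wasted work comes from.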

Some people are speculating that they are working on some large refactors in parallel branches, and that is indeed how development works.
But with every patch and feature that gets released, the cost of merging such refactors keeps climbing, and given that they are likely under huge pressure to get the 1.0 features out in less than a multi-year timeframe, I wouldn't bet on such refactors ever happening.

There are a few things that are better than in KSP 1. People have noticed that, for the most part, long loading times are gone, but this is mostly a side effect of the game no longer being limited by constraints that existed in Unity 12 years ago.
There are areas where they rewrote things with some success. While they did a very questionable job with the terrain-texturing shader (which is a huge GPU bottleneck), their rewrite of the PQS mesh-generation subsystem is one of the few things that is objectively good.
There are small improvements here and there. I'm personally very unimpressed by the overall UI/UX, but most people seem to like what they did there, and this is an area where they can easily improve, unlike other, more core aspects.

To temper my words: unless the game gets canned because of low sales, because they fail to deliver the 1.0 features in a reasonable timeframe, or a bit of both, I still think they will end up with a quite clunky but somewhat enjoyable game that achieves its high-level goals, just like KSP 1 did.
It's just that in many ways, and specifically those where KSP 1 was at its worst, KSP 2 is just as bad, and that leaves a bitter taste for a certain population of long-time KSP 1 players and modders.

Okay, big off-topic rant.

Thanks a lot; I wouldn't call it a big off-topic rant. This is the kind of insight and well-reasoned criticism I can appreciate and understand. You are making some good points, and it will be interesting to see how things develop.

At the moment I don't even own KSP2 (with my old potato there wouldn't be a point), so I'm hoping for improvements (in the game and in my finances ;) ). In the meantime I'm just hanging around in the KSP2 part of this forum.


@Gotmachine Thanks for demystifying some things about what KSP2 is, and is not, under the hood.

Some of the things I remember being most frustrating, time-consuming, and momentum-killing in my KSP1 days were the problems caused by switching or loading vessels: spontaneous disintegrations from physics start-up when reloading a vessel you had left safely landed or in a stable orbit, or when coming out of timewarp. And I'm noticing similar things happening now in KSP2. I was really hoping KSP 2 would fundamentally improve on the glitches and instabilities of its physics system. Can you say anything about what it would take to improve this? Is it even possible to fully solve (to, in their own words, "slay the kraken")?

Edited by Lyneira

I saw pretty decent gains in both KSP1 and KSP2 (patch 1) going from a Ryzen 7700 to a 7950X3D, generally around a 50% FPS gain. I also saw a 100% FPS gain (20 to 40 fps while flying a small plane around KSC) in one scenario, though that might be an outlier; my testing may have been imperfect, but it was definitely a benefit.

KSP1 tests also usually showed about a 50% FPS gain, though in some scenarios, like a 4x phys-warp burn of a large craft, there were no gains.


I recently upgraded my CPU from a 5900X to a 7800X3D and built a 300-part rocket for stress testing.

GPU: 4090. Graphics settings: 4K, max.

I found the 7800X3D about 20%-50% faster than the 5900X consistently (both with reasonable PBO overclocking):

Before launch: 43 fps vs 28 fps

During launch: 22 fps vs 18 fps

Into cloud: 25 fps vs 21 fps

Into space: 33 fps vs 25 fps

I don't have a 12th- or 13th-gen Intel CPU for comparison, but I suspect the gain comes from raw IPC improvement, so a 13900K might perform similarly to the X3D as well.
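For what it's worth, converting those FPS pairs into percentage gains is a quick sanity check of the quoted 20%-50% range (a small Python snippet using the numbers from the post above):

```python
# Percent FPS gain of the 7800X3D over the 5900X, per scenario,
# using the figures quoted in the post above.
pairs = {
    "before launch": (43, 28),
    "during launch": (22, 18),
    "into cloud": (25, 21),
    "into space": (33, 25),
}

gains = {name: round((x3d / old - 1) * 100, 1) for name, (x3d, old) in pairs.items()}
print(gains)
```

This gives roughly 19% to 54% per scenario, so the 20%-50% summary is about right, with the pre-launch case slightly above it.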


3 hours ago, zzyzz said:

I recently upgraded my CPU from a 5900X to a 7800X3D and built a 300-part rocket for stress testing.

GPU: 4090. Graphics settings: 4K, max.

I found the 7800X3D about 20%-50% faster than the 5900X consistently (both with reasonable PBO overclocking):

Before launch: 43 fps vs 28 fps

During launch: 22 fps vs 18 fps

Into cloud: 25 fps vs 21 fps

Into space: 33 fps vs 25 fps

I don't have a 12th- or 13th-gen Intel CPU for comparison, but I suspect the gain comes from raw IPC improvement, so a 13900K might perform similarly to the X3D as well.

Interesting. Can you share a bit about the viewing conditions (what were you looking at) when you got these FPS numbers? And can you share the craft file in any way? I'd like to compare the performance against my 12700KF.


On 4/14/2023 at 11:36 AM, Gotmachine said:

Some people are speculating that they are working on some large refactors in parallel branches, and this is indeed how development works

I really enjoyed your post, and I have seen indications of (and speculation about) parallel branches. The distinct difference between the EA release and Patch 1 kind of supports this.

I'm wondering if some of what Markum said in her AMA indicates this as well, at least indirectly. She seems to say, albeit very indirectly, that they had to break the game down for the EA run.

Have you seen her comments, and does it sound like they may have a better working product 'waiting in the wings', or are we largely stuck with WYSIWYG?


Here are the possibly relevant quotes: 

Most difficult: establishing the roadmap. We started from an endpoint "here's the game as a whole", but when you go into early access, it's not 50% of each feature, it's milestone on milestone - each building on top of each other. It took us months to sort out. There's still moments where we think about moving things around, but yeah trying to take this absolute behemoth of a game and parse it out into a bunch of different phases.

... 

Remember that question about the roadmap? This is one of the outcomes when everything is building on top of each other.. We wanted to make sure exploration is about exploration. 

