
I'm worried about the possible system requirements of KSP2


Recommended Posts

2 hours ago, vv3k70r said:

Yes, most engines have some dll implemented phy for common use which is flat map on 3d or 2d. But these engines cannot handle anything like in KSP. Inside them is a made up fhisics that handle a narrow range of values that are adapted to scale, display and data structure.

Pretty much none of this is true. DLL stands for dynamically linked library. Even if physics in a particular engine comes as a dedicated library, which isn't always the case, nobody in their right mind would link it dynamically. Please avoid using terms if you don't understand them.

And if canned physics engines couldn't handle "anything like KSP," then we wouldn't have KSP, because KSP runs on PhysX, which is as plain and boring a physics engine as they come. And it works because the only things handled by the physics engine in KSP are collisions and joint forces on the craft. That's it. That's all it does and all it needs to do in KSP.

All other forces, like aerodynamics, propulsion, and gravity are applied by individual components during update. Same as in any other video game. Because when you shoot an object in an FPS and that sends the object flying, it's not the physics engine that responds to the shot. The projectile or weapon component detects a collision and then applies an impulse in exactly the same way the thruster component on a KSP craft applies impulse to the craft. None of it works any differently.

The only reason KSP exists at all is because under the hood it's not that different from any other game.

2 hours ago, vv3k70r said:

What You see in game is unity. But it is just a display from dynamicly build flat 3d of local structures that unity can handle. But speeds avilable in games allow players to crash this whole structure just by getting fast on land. In common engine You just put max_speed value and problem is solved, here it would be an offence even if terrain cannot be switched that fast.

You are literally describing streaming. It's been a problem in games for a very long time. Have you ever played GTA III or later? For modern games, it's just a fact of life that you can't have everything loaded at once, and every single modern game engine handles this in pretty much the same way. The only difference with KSP is that it has a greater LOD range, which comes at the cost of lower visual fidelity. Again, there is absolutely no special code, because all of this is, once again, handled by Unity, which is a generic game engine, not written specifically for games like KSP.

And if I suggested to any of our game designers that instead of handling streaming we just slow down player movement, they'd complain about me to the CTO. In basically every game, we have to handle resource loading regardless of how fast the player is moving. In games where the player is on foot, we load higher fidelity assets. In games where the player moves by vehicle or some other means, we load lower fidelity assets so it can be done faster. Most game engines can adapt based on how fast you're moving, because even in games where the character can't move all that fast, you occasionally want to let the camera pan around or move quickly.

2 hours ago, vv3k70r said:

In such a solution You load to graphic engine only local objects for display and handle phy on Your own process. It is why KSP quite often do not match with display, especialy when You go fast and low. It is why it need workarounds with joints strength and so on. And quickload it is just a disaster. I would not expect anything else.

That won't help if your bottleneck is physics, and any time your frame rate in KSP drops because the craft is very complex, or when things collide and explode, it's not the rendering that has trouble keeping up; it's PhysX not managing to do its job. This has nothing to do with physics and rendering running in lockstep, which is standard fare for most engines. Though many do allow several simulation steps to run per rendered frame, that won't help you unless you have a much better physics engine, one that can actually handle a higher simulation rate.

And yes, when PhysX craps out and starts lagging the sim, it increases the duration of the frame, which increases the simulation time step, which makes everything worse, so once things start going bad, they continue going bad. It just has nothing to do with rendering.

2 hours ago, vv3k70r said:

Would You even think of coding everything again after years just to make it corect?

If I were working on the KSP2 project, I'd just write a custom physics sim and be done with it. It'd be faster than trying to fix it any other way. But I'm one of very few people who actually do that sort of work for a living. I don't think Intercept has the resources to make significant changes at this point. So they're basically stuck doing the same thing KSP has been doing, maybe finding a few small improvements along the way, most likely by replacing some weld joints with rigid bodies.

Link to comment
Share on other sites

9 minutes ago, K^2 said:

Have you ever played GTA III or later?

The last GTA I remember was the top-down one.

15 minutes ago, K^2 said:

And yes, when PhysX craps out and starts lagging the sim, it increases duration of the frame, which increases the simulation time step, which makes everything worse, so once things start going bad, they continue going bad. It just has nothing to do with rendering.

Interesting. I always had a problem with heavy graphics, but I didn't work with graphics, and I optimized physics that wasn't related to objects (more to bounding boxes, without any idea of how the object would look).

19 minutes ago, K^2 said:

If I was working on KSP2 project, I'd just write a custom physics sim and be done with it. It'd be faster than trying to fix it any other way.

I would start with the same approach. I'm just used to doing everything from scratch, because market products handle many things that I do not need.

20 minutes ago, K^2 said:

But I'm one of very few people who actually does that sort of work for a living.

I'm not; for me it's just a hobby from the past.

 

Link to comment
Share on other sites

8 hours ago, K^2 said:

In short, a more recent architecture helps. Even between two CPUs that have comparable performance in pure math computations, the more recent architecture likely to have better cache coherence and prediction allowing for better performance in games that are designed to make good use of multiple cores. Most Unity games, however, are going to be bound by either main thread or rendering thread performance, so your best bet is to look for CPU with best single-thread performance. I don't expect that to be much different with KSP2.

Cheers to both you and @Incarnation of Chaos for taking the time to explain it. I made the mistake of simply not believing the IPC rumors about Ryzen, and am considering swapping my 10700K for an 11th gen if the IPC gain is worth it. I really hope we end up in a full-scale IPC war.

Link to comment
Share on other sites

2 minutes ago, dave1904 said:

Cheers for both you and @Incarnation of Chaos for taking your time and explain it. I made the mistake of simply not believing the IPC rumors of ryzen and am considering swapping my 10700K for a 11th gen  if the IPC is worth it. I really hope we end up in a full scale IPC war. 

https://www.cpubenchmark.net/singleThread.html

Current-gen Ryzen processors are currently killing Intel's single-thread performance scores, and the 5600X seems to be the best bang/$.

Link to comment
Share on other sites

6 hours ago, mcwaffles2003 said:

Current gen Ryzen processors are killing intel single thread performance scores currently and the 5600X seems to be the best bang/$

You have to look at game benchmarks to get the full picture. It's a combination of games being generally better optimized for Intel, in part due to compilers, so don't expect a quick change there; but also, games are more demanding than benchmark tests because of how they use branching and cache. And if you look at game benchmarks, the picture is more mixed. Intel is still holding its ground in some price ranges quite convincingly, while AMD has penetrated a lot of the high-end and mid-high market.

Given how close everything is right now, you really have to look at benchmarks for your planned build to confirm, but two rules of thumb stand out. If you plan to overclock, Intel is generally still the better value. Unless you plan to stream your games or expect to upgrade your CPU soon; in those cases, go AMD Ryzen, as they are better at supporting background tasks while playing, and AMD plans to stick with its current sockets for a while longer.

Also, all of this is quickly changing, and availability is becoming a limiting factor in some cases. So you really have to do research for your specific case.

Link to comment
Share on other sites

15 hours ago, K^2 said:

There's a concept of thread-local storage. It used to be a pain to implement properly in platform-independent way, but now there are standard ways of handling it in modern C/C++ and C# as well as some other modern languages. It gives you a simple way for each thread to know what it's working on.

Atomic operations are just these that are performed all at once. If you are doing anything complicated, you'll need locks or some other mutex, but the interesting cases include increment/decrement, exchange, or compare-and-exchange operations which have interlocked equivalents natively supported by modern CPUs. Again, that's something that used to have horrible platform-specific support, but now it's handled with std::atomic in C++ and Interlocked class in C#.

What the physics solver is doing, whether it's implemented directly or via something like Sequential Impulse, is solving a constrained least squares problem. Example of constraints being that contact force cannot be negative - objects don't (usually) stick to surfaces, so the contact force will only push in one direction. You can also use constraints to do traction limiting when simulating a wheeled vehicle. But the general strategy for solving these is very similar to solving a system of linear equations by an iterative method. So to get a rough idea of what the multithreaded solution would be like, think of how you'd implement Gaussian Elimination algorithm across multiple cores. Imagine that you've already brought the first N rows to  triangular form and you are now working on N+1st row. You need to subtract from current row every previous row divided by its pivot element and multiplied by the element in corresponding column in N+1st row. Lets say, I have several cores available. Rather than do one row at a time, I'll kick off the process on the first core with the first row. Once the second element of N+1st row settles, I'll launch the second core working on second row. It has all the data it needs, and so long as decrement operations on the N+1st row are atomic it doesn't matter that two threads are working on it at once. Once the 3rd element settles - that is, both cores 1 and 2 have processed it - the 3rd core can be launched on 3rd row, and so on up to the core count. The synchronization can be handled with a single monotonic counter which can also be handled with atomic increment.

This is still nowhere near 100% CPU utilization, there is a lot of overhead, especially when you consider that x86-64 CPUs still don't handle atomic increment and decrement with floating point values, and we're just talking about a toy version of the problem. There are applications where this kind of optimization is crucial. If you ever want to learn a trick or two about squeezing every last bit of performance when working with giant matrices, talk to a Lattice QCD theorist. But I don't think a physics solver for games is a use case. I'll take a simpler, more stable algorithm and I'll find something else to occupy all the other cores with. Like animation and AI. Collision is the only part of physics that, in my opinion, is worth farming out. Maybe BVH updates, depending on your implementation. But not the core solver.

For context: the specific implementation I worked with was ThreadLocal in Java. It required quite a hassle to get working properly; I ended up having to use an abstract class to extend Thread and then extend that into a concrete class. Even afterwards, I had to cast them back and forth to get the right results.

Which all adds overhead, both on the CPU and the programmer. Also, Java is a pain to work with.

What's weird is that despite knowing basically nothing of that background, I was thinking of a similar solution. I was thinking of letting the threads keep track of where they were within the matrix, then using a sorting solution (Merge Sort is very multithreadable, but QuickSort might be better) to basically allow any arbitrary number of cores to perform the calculations, then join back into the program, spawn more threads, and spool the data to memory where it could eventually be sorted.

Yours sounds much cleaner, safer and faster though, cheers!

Link to comment
Share on other sites

I've had my current PC a few years now; it's still going fine, but obviously 'old tech'. I just added a load more RAM (now 24GB) to help it along.

I will look at upgrading after KSP2 comes out and I can see what it needs. I'll see how it runs on my current machine initially, but I will be well ready to upgrade by then if I can. Just need to start saving.

Link to comment
Share on other sites

2 hours ago, kspnerd122 said:

Im getting a new PC

Specs of that

AMD Ryzen 9 3950X

32 Gb of ram

Nvidia Geeforce graphics card

You might want to wait a bit and buy a 5950X instead. It's a full generation ahead, has way better performance in games, and the MSRP is only $80 more. You can't get a 5950X for that price right now, but that should improve in a few months.

The other reason is that the situation with graphics cards is even worse. A basic RTX 2080 is currently going for significantly more than I bought my water-cooled OC RTX 2080 Ti for, and I got mine before the 3xxx series was even announced. This is absurd. Since a good GPU is way more important than the CPU for game performance, if you're planning to get good mileage out of that Ryzen 9, you definitely don't want to pair it with a crap graphics card. And unless you're prepared to spend over $3,000 on your build, you should either wait or scale back. If you end up going with something like a 1660 Super on top of a 3950X with 32GB, it'd be a colossal waste of money. If you're looking for gaming performance, and you absolutely must buy the PC right now, you'd be way better off going with something like a 10850K or maybe even a 10700KF and pairing it with something like an RTX 2070 and 16GB of RAM. That will get you a much, much better gaming machine for roughly the same price.

Now, if you're streaming on Twitch and encoding videos in the background while playing your games, or doing some other tasks that can actually make good use of 32 threads, then yeah, maybe the 3950X makes sense. But there aren't any games for which that's going to outweigh a good graphics card. And KSP specifically is notoriously bad at multi-threading, so a 10700KF will actually have better performance than a 3950X in KSP because of its single-thread performance.

A 5950X, though, is another story. That gives you both the core count and the single-thread performance. Yeah, the thread count isn't as high, but it does so much more with individual cores. Unfortunately, it has the same problem as GPUs right now: supply is very short, so you can't buy it at MSRP. So unless you're comfortable overspending by a lot, waiting is the best option if you want to go with a Ryzen 9.

Link to comment
Share on other sites

  • 1 year later...
On 12/26/2020 at 9:05 AM, Kerbal Productions said:

My laptop's processor is an i3-1005G1, which they said is stronger than the other laptops that I have in here that their processors is i3-8145U and i3-10110U.

I'm worried, will my laptop be enough when I run KSP2 with the same laptop I have today? ...

You will again need a fast CPU, as it’s again made in Unity and physics is CPU-based… (don’t get me started)

Link to comment
Share on other sites

Thankfully the GPU situation has done a complete 180, and at least the Nvidia 30 series cards have become DIRT CHEAP and are basically flying off the shelves (a 30 series card, even the 3090, is all anyone will ever realistically need for the next 3-4 years, outside of VERY VERY VERY specific circumstances that would warrant a "workstation" rather than a "gaming PC").

If you DO want a "new-generation" GPU, wait for AMD's GPUs to come out; somehow they managed to make it work "just fine" without needing the quite frankly grossly excessive 600 watts that a 4090 needs (and that's without any overclocking, by the way).

CPU-wise, as usual, Intel's CPUs offer the best ratio of watts in to numbers crunched, but the Ryzen 7950X might have higher overall performance if you can get the thermals under control.

Which leads me to the "space heater" computer.

A 4090 and a 7950X. In the same PC. Great for those cold winter months, because it's gonna be making your room hot AND driving up your electric bill. Isn't that just wonderful? :rolleyes:

Link to comment
Share on other sites

3 hours ago, SciMan said:

30 series cards even the 3090 is all anyone will ever realistically need for the next 3-4 years

I love it when I read things like this, because it sets a completely unrealistic view of what PC gaming actually is.

I've just upgraded my GPU to a 6800XT, and my friend has done the same. I was using a 1070, he a 970, bought in 2015-2016. I have plenty of friends on a 1060 who still plan to wait until next year to upgrade.

And I've been playing VR on the Valve Index since it came out on that 1070.

Link to comment
Share on other sites

On 11/21/2022 at 12:58 PM, SciMan said:

Thankfully the GPU situation has done a complete 180 turn, and at least the Nvidia 30 series cards have become DIRT CHEAP and they're basically flying off the shelves (30 series cards even the 3090 is all anyone will ever realistically need for the next 3-4 years, outside of VERY VERY VERY specific circumstances that would warrant a "workstation" rather than a "gaming PC").

If you DO want a "new-generation" GPU, wait for AMD's GPU's to come out, somehow they managed to make it work "just fine" without needing the quite frankly grossly excessive 600 watts that a 4090 needs (and that's without any overclocking by the way).

CPU wise, as usual Intel's CPU offers the best ratio of watts input to numbers crunched output, but the Ryzen 7950x might have overall higher performance if you can get the thermals under control.

Which leads me to the "space heater" computer.

4090 and 7950x. In the same PC. Great for those cold winter months, because it's gonna be making your room hot AND driving up your electric bill. Isn't that just wonderful? :rolleyes:

If you're going for ultra graphics, yes, but realistically, any hardware from the last 7 years will last you until probably 2030. Yes, you'll need to step the graphics down from ultra as the years go by and newer games become more power hungry, but you don't realistically need a 30 series. You could wait until the 50 series or even the 70 series cards before upgrading. I remember when Kaby Lake came out, people on the Intel 2000-series CPUs said they were finally going to upgrade. That's a 5-generation leap between the two.

Also, the 4090 is base 450 watts. The 600 watts is for overclocking. 450 is still high, yes, but GPUs have gotten way more power hungry. And that's just the max power it will draw. Realistically it won't be hitting that 450 watt draw all the time, if at all.

As for the CPU, anything over 5 GHz will do best, but a 4.5 GHz top speed will do okay, so you could go for an older Intel or AMD. Of course, the newer ones obviously will get higher clocks. I mean, that 7950X you mention has a max turbo of 5.7 GHz, under perfect conditions of course; realistically the boost is probably in the neighborhood of ~5.3 GHz. The i9-13900K has a boost of 5.8 GHz on a single core under perfect conditions. (Perfect conditions: ample cooling in a cool room and good, steady parts like the MB and PSU.) Realistically you're going to get around a 5.4 GHz boost.

On 11/21/2022 at 4:42 PM, Master39 said:

I love when I read things like these because it sets a completely unrealistic view of what PC gaming actually is.

I've just upgraded my GPU to a 6800XT, and my friend has done the same. I was using a 1070 he a 970, bought in 2015-2016. I have plenty of friends on a 1060 that still plan to wait next year to upgrade.

And I've been playing VR on the Valve Index since it came out on that 1070.

I think SciMan might be an enthusiast like myself. :P

Well, I'm a semi-enthusiast. I recently upgraded my desktop computer and went with an i5-13600K, an RTX 3090, and 64 gigs of RAM. The 64 gigs of RAM were kind of needed, though. I fired up KSP before the upgrade and noticed it was consuming so much RAM that it made my computer hit the 32 gig limit. And of course, that was with the 1080's 8 gigs of VRAM as well, so KSP was using most of the 40 gigs available to it. Yes, I run a heavily modded setup and refuse to take off the mods I don't use, because I never know if/when I'll want to use them. :P

Edited by GoldForest
Link to comment
Share on other sites

On 11/21/2022 at 5:42 PM, Master39 said:

I love when I read things like these because it sets a completely unrealistic view of what PC gaming actually is.

I've just upgraded my GPU to a 6800XT, and my friend has done the same. I was using a 1070 he a 970, bought in 2015-2016. I have plenty of friends on a 1060 that still plan to wait next year to upgrade.

And I've been playing VR on the Valve Index since it came out on that 1070.

Right, I guess I was misunderstood.
What I was trying (and apparently failing) to say is that the 3090 will be OVERKILL for many years to come if all you are doing is gaming.

And that same 3090 will be more or less overkill for any content creation need in the future as well.

When I say "overkill", I mean "it's gonna let you run at Ultra settings for many years to come," when the same can't be said of most other cards (the 2080 Ti might fall into that category).

But I agree with you, I could probably run KSP 2 JUST FINE on Ultra settings (maybe not 4k resolution) on my old GTX 970.

CPU wise, I have an Intel Core i7-9700k and that's probably gonna be good enough.

EDIT: Of course, I'd be EXTREMELY surprised if modded KSP 2 needs as much RAM as KSP 1 did, because IIRC the reason KSP 1 needs so much RAM is that everything it streams to the GPU it also keeps in system memory, and it keeps it there all the time instead of just reading it from disk when it needs it.
KSP 2 shouldn't do that if the developers have half a brain cell among all of them, and I'm 100% sure that they have far more brain cells than that; they're quite intelligent.

Edited by SciMan
Link to comment
Share on other sites

5 hours ago, SciMan said:

the 3090 will be OVERKILL for many years to come if all you are doing is gaming

...at 1080p. Yes. 

If you are running 4K, a 3070 (which I have) is good for old games, and entry-level for newer titles. (What I consider entry level, given my minimum framerate preferences.)

Once you step up to 4K 144 Hz, you really do need the newer cards.

Link to comment
Share on other sites

It really depends on the game. In most games, even the 4090 can't get above 120 fps at 4K, and that's with Ray Tracing and DLSS off.

And the 3090 is overkill at all three of the major resolutions. It can do 60 fps at 4K in some games at Ultra. But most people game at 1080p, with 1440p (16:9) coming in second. Even at 1440p, the 3090 can get well over 60 fps in most games at Ultra.

The 10 series might be old, and you might have to mess with settings, but they're still good cards that should run KSP 2 just fine. You don't really need a 30 series or 40 series card. Just adjust your settings. 

9 hours ago, SciMan said:

EDIT: Of course, I'd be EXTREMELY surprised if modded KSP 2 needs as much RAM as KSP 1 did, because IIRC the reason KSP 1 needs so much RAM is because everything it streams to the GPU it keeps in system memory too, and it keeps it there all the time not just reading it from the disc when it needs it.
KSP 2 shouldn't do that if the developers have half a brain cell among all of them, and I'm 100% sure that they have far more brain cells than that, they're quite intelligent.

You're going to need lots of RAM if you're modding in general. KSP 2 will load everything into memory just like KSP 1. It's the nature of this style of game. All the parts NEED to be loaded up so they can be accessed quickly. Not only the parts, but the planets themselves too.

Any game that needs to load items up fast will use lots of RAM. Gmod does it.

And honestly, it is the best style of system for these types of games. Do you really want to sit on a frozen screen for more than a few seconds while it accesses the hard drive, moves data into RAM, and then reads said data? If it's already in RAM, it can be read almost instantly, saving possibly a minute or more of load time, especially for people with old hard drives, or hard drives slower than 7200 rpm.

Link to comment
Share on other sites

On 11/26/2022 at 1:02 AM, GoldForest said:

And honestly, it is the best style of system for these types of games. Do you really want to sit on a frozen screen for more than a few seconds while it accesses the hard drive, moves data into the ram, and then reads said data? If it's already in ram, it can read it almost instantly, saving literally possibly a minute or more on load time, especially for people with old hard drives, or hard drives that run slower than 7200 rpm. 

We're honestly moving away from the assumption that the primary storage is going to be a mechanical HDD. Most modern games are developed with consoles as a primary platform, as they tend to be the cause of performance bottlenecks, and gen 9 consoles are absolute monsters as far as streaming data from SSD to RAM goes. On top of that, most studios will have very fast SSDs on dev machines to speed up the workflows, so with gen 9 development, even if somebody is testing their builds on PC, there is no longer a point in the development cycle where you're going to be streaming from a slow HDD. There's a bit of a time lag until these kinds of studio changes start impacting the actual games, but we should be just about entering a generation of games that are developed entirely with the, "Oops, we forgot to test the streaming performance with HDD before going into beta. Oh well," kind of attitude. Because of that, I fully expect M.2 to start showing up as part of the system requirements on the new games.

Will this impact KSP2? I don't know. The PS4 and XB1 are still listed as targets, but with the release having been delayed as much as it was, and with these consoles having such major CPU bottlenecks, I am not fully confident that Intercept has been doing a lot of tests on them. My honest expectation is that we'll probably see at least the planets shifted to streaming. There's so much more data for the terrain in KSP2 compared to KSP that I'm having a hard time believing the team would opt to keep it in memory. On top of that, they seem to be using a lot of techniques common to open-world games, which rely heavily on streaming. So planets should be covered. But that still leaves ships with custom parts, which might have to stay resident in RAM, so part mods might still have just as much RAM impact as they do today. More, actually, since there will be more material textures.

Link to comment
Share on other sites

2 hours ago, K^2 said:

We're honestly moving away from the assumption that the primary storage is going to be a mechanical HDD. Most modern games are developed with consoles as a primary platform, as they tend to be the cause of performance bottlenecks, and gen 9 consoles are absolute monsters as far as streaming data from SSD to RAM goes. On top of that, most studios will have very fast SSDs on dev machines to speed up the workflows, so with gen 9 development, even if somebody is testing their builds on PC, there is no longer a point in the development cycle where you're going to be streaming from a slow HDD. There's a bit of a time lag until these kinds of studio changes start impacting the actual games, but we should be just about entering a generation of games that are developed entirely with the, "Oops, we forgot to test the streaming performance with HDD before going into beta. Oh well," kind of attitude. Because of that, I fully expect M.2 to start showing up as part of the system requirements on the new games.

Will this impact KSP2? I don't know. The PS4 and XB1 are still listed as targets, but with the release having been delayed as much as it was, and with these consoles having such major CPU bottlenecks, I am not fully confident that Intercept has been doing a lot of tests on these. My honest expectation is that we'll probably see at least the planets shifted to streaming. There's so much more data for the terrain in KSP2 compared to KSP, that I'm having hard time believing that the team would opt to keep this in memory. On top of that, they seem to be using a lot of techniques common to open world games, which are heavily relying on streaming. So planets should be covered. But that does still leave ships with custom parts, and that might still have to be resident in RAM, and part mods might still have just as much RAM impact as they do to day. More, actually, since there will be more material textures.

Does a decision really need to be made on whether textures are streamed from the HDD vs. stored in RAM? Can a game not be made to recognize the available RAM and use it as a buffer while also streaming? I'd figure that kind of strategy would be a best-of-both-worlds thing.

 

On 11/27/2022 at 11:34 AM, Da Kerbal said:

I have a good cpu but trash GPU

And it's a laptop 

Just run at low graphics settings.

On 11/26/2022 at 12:08 AM, JoeSchmuckatelli said:

...at 1080p.  Yes. 

If you are running 4k, 3070 (which I have) is good for old games, and entry-level for newer titles.  (What I consider entry level, given my minimum framerate preferences)

Once you step up to 4k 144, you really do need the newer cards.

I have a 1080 Ti and play games at 4K 60 Hz with mostly maxed settings. It bogs down below 60 in some really graphically intense games, and I may be eyeballing the RX 7000s, but I'm not desperate to switch and the card's doing just fine for me as is. I don't play a lot of fast-paced games that require high refresh rates, but it's good enough for Apex and Halo for me. It's more than enough for KSP with all the visual mods (with the exception of 64k texture map mods, but who has that much VRAM?). I just mostly play sims and management-style games, which is why I care more about resolution than Hz.

 

On 11/21/2022 at 1:58 PM, SciMan said:

Thankfully the GPU situation has done a complete 180 turn, and at least the Nvidia 30 series cards have become DIRT CHEAP

My 1080 Ti was $400 and the 3080 Ti is still like $800. The price of dirt must have doubled in the last 4 years.

Link to comment
Share on other sites

56 minutes ago, mcwaffles2003 said:

I have a 1080 ti and play games at 4k 60Hz with mostly maxed settings

Yeah - many of the cards are quite capable - I don't remember if I wrote about it - but much of what's new is the shader, ray tracing, and memory improvements that make the newer cards attractive, not merely rasterization performance. I'd have tried my 970 out on the new monitor had it come before the 3070... and I suspect it could have handled 4K with some downgrading of settings. But the 3070 came first and I couldn't be bothered to revert.

That said; some of the games I play do struggle with framerates.  I'm eagerly awaiting the price-performance of the AMD offerings.

Link to comment
Share on other sites

28 minutes ago, JoeSchmuckatelli said:

Yeah - many of the cards are quite capable - I don't remember if I wrote about it - but much of the new is shaders and ray tracing and memory improvements that make the newer cards attractive, not merely rasterization performance.  I'd have tried my 970 out on the new monitor had it come before the 3070... and suspect it could have handled 4k with some downgrading of settings.  But the 3070 came first and I couldn't be bothered to revert.

That said; some of the games I play do struggle with framerates.  I'm eagerly awaiting the price-performance of the AMD offerings.

What games struggle on a 3070? Are you playing with RTX on?

Link to comment
Share on other sites

4 hours ago, K^2 said:

We're honestly moving away from the assumption that the primary storage is going to be a mechanical HDD. Most modern games are developed with consoles as a primary platform, as they tend to be the cause of performance bottlenecks, and gen 9 consoles are absolute monsters as far as streaming data from SSD to RAM goes. On top of that, most studios will have very fast SSDs on dev machines to speed up the workflows, so with gen 9 development, even if somebody is testing their builds on PC, there is no longer a point in the development cycle where you're going to be streaming from a slow HDD. There's a bit of a time lag until these kinds of studio changes start impacting the actual games, but we should be just about entering a generation of games that are developed entirely with the, "Oops, we forgot to test the streaming performance with HDD before going into beta. Oh well," kind of attitude. Because of that, I fully expect M.2 to start showing up as part of the system requirements on the new games.

Will this impact KSP2? I don't know. The PS4 and XB1 are still listed as targets, but with the release having been delayed as much as it was, and with these consoles having such major CPU bottlenecks, I am not fully confident that Intercept has been doing a lot of tests on these. My honest expectation is that we'll probably see at least the planets shifted to streaming. There's so much more data for the terrain in KSP2 compared to KSP, that I'm having hard time believing that the team would opt to keep this in memory. On top of that, they seem to be using a lot of techniques common to open world games, which are heavily relying on streaming. So planets should be covered. But that does still leave ships with custom parts, and that might still have to be resident in RAM, and part mods might still have just as much RAM impact as they do to day. More, actually, since there will be more material textures.

It's been confirmed that the planets use a load-in/load-out system, meaning that if they aren't on screen, they will unload, or at least the part that isn't 'seen' by the player will, from what I understand.

And consoles are taking a back seat in development; PC development is coming first for KSP 2. It's been confirmed that consoles won't release at 1.0 and will instead come later down the line, so a focus on SSDs is probably not in the workflow, at least not like you're saying. Also, the PS4 and XB1 have been dropped; the PS5 and XBSX are the platforms that are going to be programmed for, after PC is done and dusted.

The team has also expressed a focus on 'improving performance on current AND older machines' so I think they have hard drives in mind. 

Link to comment
Share on other sites
