
Will Kerbal Space Program 2 be optimised for low-end PCs?


Ryan@123

Recommended Posts

3 hours ago, Incarnation of Chaos said:

Right now it's just looking sleazy, but if they at least acknowledged the lack of support, then people who have these devices would know where they're at.

Right now it's looking like the usual development decisions driven by a big "AAA" publisher.

Independent studios sometimes design with Mac/Linux support in mind from early in development, but from what I've seen everything controlled by a big publisher seems to take the "Promise nothing, get a Windows/DX build out as fast as possible, consider porting later" approach... Usually meaning a half-arsed effort or no port at all.

That said, it's another Unity game, and that makes cross-platform pretty easy. We'll see, but given TTI is calling the shots I'm not optimistic.

Link to comment
Share on other sites

For me, it's a no-no.

I'm running on an Intel i5, and every time I play it lags; I only get 1-11 FPS. At first I thought it might be the mods, but I need all of those. Second, I tried low settings, but it still lags. I never found out how to get at least 30 FPS.

Specs:

Intel i5-3320M @ 2.60GHz

16 GB of RAM (upgraded from 8 GB)

Windows 10 Pro, 64-bit

Intel HD Graphics 4000 (integrated graphics)

Edited by miguelsgamingch
Link to comment
Share on other sites

31 minutes ago, miguelsgamingch said:

Intel i5-3320M @ 2.60GHz

That's not low end, that's an 8-year-old low-end processor, 9 by the time KSP2 is released.

 

I don't know what the minimum specs will be, but I wouldn't bet on a decade-old, bottom-of-the-barrel machine being supported.

Link to comment
Share on other sites

1 hour ago, miguelsgamingch said:

I never found out how to get at least 30 FPS.

Uhh, get a machine that's actually designed for gaming? An i5 is fine; a low-power laptop i5 is not.
 

35 minutes ago, Master39 said:

That's not low end, that's an 8-year-old low-end processor

Age isn't the problem; the fact that it's a low-power mobile CPU is. The integrated graphics won't be helping either.

I'm still running a processor from 2013, and KSP(1) runs pretty well, all things considered... But it's a 130W hex-core clocked to 4.3GHz, not a 35W dual-core with ultra-conservative turbo limits and rubbish cooling.

Expecting KSP(any) to run well on a laptop is a stretch. Expecting it to run well on an old, decidedly-low-performance, definitely-not-gaming laptop is futile.

Edit: Going from Ivy Bridge to Comet Lake at comparable clocks is probably going to yield somewhere around a 20% single-threaded performance improvement. Nice, but not exactly astonishing, and I'm sure a decent Ivy Bridge CPU will run KSP2 just fine.

 

Edited by steve_v
Link to comment
Share on other sites

On 8/24/2020 at 7:58 PM, steve_v said:

Uhh, get a machine that's actually designed for gaming? An i5 is fine; a low-power laptop i5 is not.
 

Age isn't the problem; the fact that it's a low-power mobile CPU is. The integrated graphics won't be helping either.

I'm still running a processor from 2013, and KSP(1) runs pretty well, all things considered... But it's a 130W hex-core clocked to 4.3GHz, not a 35W dual-core with ultra-conservative turbo limits and rubbish cooling.

Expecting KSP(any) to run well on a laptop is a stretch. Expecting it to run well on an old, decidedly-low-performance, definitely-not-gaming laptop is futile.

Edit: Going from Ivy Bridge to Comet Lake at comparable clocks is probably going to yield somewhere around a 20% single-threaded performance improvement. Nice, but not exactly astonishing, and I'm sure a decent Ivy Bridge CPU will run KSP2 just fine.

 

Ahem. Didn't I say that I was running on a ThinkPad T430s?

Edit: Luckily, every time I'm in the VAB or SPH I get 20 FPS, but on launch it's 5-10 FPS.

In the main menu it's 30 FPS, and on loading screens 30-60 FPS.

:/

Edited by miguelsgamingch
Link to comment
Share on other sites

On 8/24/2020 at 1:58 PM, steve_v said:

Uhh, get a machine that's actually designed for gaming? An i5 is fine; a low-power laptop i5 is not.

Age isn't the problem; the fact that it's a low-power mobile CPU is. The integrated graphics won't be helping either.

I'm still running a processor from 2013, and KSP(1) runs pretty well, all things considered... But it's a 130W hex-core clocked to 4.3GHz, not a 35W dual-core with ultra-conservative turbo limits and rubbish cooling.

Expecting KSP(any) to run well on a laptop is a stretch. Expecting it to run well on an old, decidedly-low-performance, definitely-not-gaming laptop is futile.

Edit: Going from Ivy Bridge to Comet Lake at comparable clocks is probably going to yield somewhere around a 20% single-threaded performance improvement. Nice, but not exactly astonishing, and I'm sure a decent Ivy Bridge CPU will run KSP2 just fine.

 

I think you're using the old 6-core i7? The one with the server-like motherboard with 8 RAM slots, which you overclocked pretty heavily.
I've had mine almost as long as you have, and yes, it's a nice system that has stood the test of time.

Replaced it with a 12-core A9 this weekend; it's definitely faster for stuff like rendering. I'll have to try out some of my large KSP games.
Didn't really need to upgrade, but I had the option of a free $2K upgrade from my workplace, so I jumped :)
Kept the graphics card and storage drives.
It was pretty tempting to go for an ultrawide monitor instead :)

Link to comment
Share on other sites

I feel like KSP2 should be usable on lower-end devices; maybe just let people turn down the graphics if their PC can't handle it. Also, laptop users can't just install a better graphics card; the components are soldered on, so it's impossible for us to upgrade without buying a new PC, and requiring people with worse computers to buy new ones is a lot to ask. Really, I get that times change, but the game could automatically lower graphical settings on lower-end PCs; we shouldn't need a specially built gaming PC to play KSP2.

Link to comment
Share on other sites

5 minutes ago, kspnerd122 said:

I feel like KSP2 should be usable on lower-end devices

KSP 2 shouldn't. In 5 years' time, no one will have what you're using, and KSP 2 won't need all that low-end usability work. Better to future-proof than to break your back being as inclusive as possible, because in 10 years' time you'll probably have everything you need to run a theoretically unoptimized KSP 2.

Link to comment
Share on other sites

@Bej Kerman At least let us turn settings down; maybe don't optimize, but allow people to select lower texture quality in order to be a bit more inclusive.

Also, I have 8 GB of RAM and an i7 processor; should that be fine for KSP2?

Why don't they make it so it can use multiple cores? Most bad PCs still have multiple cores, but KSP only uses one of them.

Link to comment
Share on other sites

25 minutes ago, kspnerd122 said:

@Bej Kerman At least let us turn settings down; maybe don't optimize, but allow people to select lower texture quality in order to be a bit more inclusive.

Also, I have 8 GB of RAM and an i7 processor; should that be fine for KSP2?

Why don't they make it so it can use multiple cores? Most bad PCs still have multiple cores, but KSP only uses one of them.

"i7" coul mean anything, from utterly obsolete to actually an overkill depending on which generation it belongs and whether or not it's a mobile chip, in 2016 I was already suggesting 8 GB of ram only to people with really low budgets or to people buying single bank and committed to update to 16 ASAP and I hope that the fact you didn't quote a GPU isn't because you hope to run on the Intel integrated one.

I'm not saying that it won't work, just don't expect "low end support" to mean that the game will work on any cheap office laptop and the last 5 years or on 10 years old low budget gaming rigs.

 

 

Link to comment
Share on other sites

33 minutes ago, kspnerd122 said:

Why don't they make it so it can use multiple cores? Most bad PCs still have multiple cores, but KSP only uses one of them.

It does use multiple cores. When you have multiple vessels in physics range, each is simulated on its own thread, as I understand it. As for using multiple cores for a single vessel: the rigid-body dynamics calculations would become slower and the game would play worse. As I understand it, this has to do with per-core caches, which don't normally communicate between different cores.

Link to comment
Share on other sites

It's also because threading adds overhead: you have to keep your data in sync and avoid race conditions in the code. So yes, threading gives you access to way more computational power, but it has a computational cost, and sometimes that cost is very high, to the point where your multithreaded code can perform worse than single-threaded code. That's one of the issues the Paradox devs ran into with Stellaris, which one of them explained on his blog.

And while one thread per ship is quite easily done, several threads for one ship needs some serious thinking.

And then there's the whole rendering process (usually a big consumer of resources), which has long been single-threaded, at least until DirectX 11/12 and Vulkan. So to benefit from that, you would need to move your graphics engine into the DX11 world, and I'm not sure Unity can do it well (but I suspect KSP's recent change of Unity version is somehow related to this).
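
To make the trade-off concrete, here's a minimal C# sketch (purely illustrative; the Vessel class is hypothetical, not KSP's actual code) of why one thread per ship is easy while splitting one ship across threads can backfire:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

// Hypothetical stand-in for a ship; KSP's real classes look nothing like this.
class Vessel
{
    public double[] PartForces;
    public double TotalForce;

    public Vessel(int parts)
    {
        PartForces = new double[parts];
        for (int i = 0; i < parts; i++) PartForces[i] = 1.0; // dummy workload
    }

    // Stand-in for one rigid-body physics step.
    public void PhysicsStep()
    {
        double sum = 0;
        foreach (var f in PartForces) sum += f;
        TotalForce = sum;
    }
}

class ThreadingSketch
{
    static void Main()
    {
        var vessels = new List<Vessel>();
        for (int i = 0; i < 8; i++) vessels.Add(new Vessel(100));

        // One thread per ship is the easy case: vessels share no mutable
        // state, so they can be stepped in parallel with no locking at all.
        Parallel.ForEach(vessels, v => v.PhysicsStep());

        // Several threads for ONE ship is the hard case: its parts interact,
        // so the threads must synchronize. The lock keeps the sum correct,
        // but paying it on every iteration can easily make this slower than
        // the plain single-threaded loop it replaces.
        double total = 0;
        object gate = new object();
        Parallel.ForEach(vessels[0].PartForces, f => { lock (gate) total += f; });
        Console.WriteLine($"parallel-with-lock total: {total}");
    }
}
```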

Link to comment
Share on other sites

1 hour ago, kspnerd122 said:

What I mean is, graphics should be able to be turned down.

Things can be turned down to a point; after that, you have to start taking features out of the game to claw back performance.

Link to comment
Share on other sites

8 hours ago, magnemoe said:

I think you're using the old 6-core i7? The one with the server-like motherboard with 8 RAM slots, which you overclocked pretty heavily.

Yeah, an i7-4960X on an ASUS X79 board, essentially an unlocked Ivy Bridge-E Xeon. They overclock like champs too; I could actually get more out of this one if I were willing to deal with the heat. Real shame the Xeons are locked, innit?

8 hours ago, magnemoe said:

Didn't really need to upgrade, but I had the option of a free $2K upgrade from my workplace, so I jumped :)

Lucky sod. :P
I've been thinking about upgrading, but TBH I can't justify it, since the only workloads that show its age are KSP and X4, and an upgrade that would be satisfyingly noticeable would also be pretty expensive.
It's been on the "soon" list for a while now.

 

7 hours ago, kspnerd122 said:

I feel like KSP2 should be usable on lower-end devices; maybe just let people turn down the graphics if their PC can't handle it. Also, laptop users can't just install a better graphics card

The graphics usually aren't the problem; the CPU is. If KSP2 has a physics system similar to KSP1's, that won't change, and no amount of low-spec-friendly graphics settings will make it run properly on a crappy laptop CPU.
Most laptops are not remotely designed for gaming anyway, so trying to game on one is going to be suboptimal whatever you do. The only way to fix that would be to remove the signature rigid-body physics simulation, and at that point you might as well be playing SimpleRockets.

Gimping the game and removing features for potato compatibility is a big fat No.

 

Link to comment
Share on other sites

10 hours ago, kspnerd122 said:

also, laptop users can't just install a better graphics card; the components are soldered on, so it's impossible for us to upgrade

If your goal is to game on your laptop, you're buying the wrong laptops. I can clearly see screws and a seam on the bottom of my rubbish office laptop (which BSODs if I give modded Minecraft too much RAM and/or don't scale the screen resolution way down). If your device can't run a 9-year-old game very well, as evidenced by our previous encounters, and can't be physically opened, it probably shouldn't be running games, lest you cook your CPU.

Link to comment
Share on other sites

12 hours ago, Okhin said:

It's also because threading adds overhead: you have to keep your data in sync and avoid race conditions in the code. So yes, threading gives you access to way more computational power, but it has a computational cost, and sometimes that cost is very high, to the point where your multithreaded code can perform worse than single-threaded code. That's one of the issues the Paradox devs ran into with Stellaris, which one of them explained on his blog.

And while one thread per ship is quite easily done, several threads for one ship needs some serious thinking.

And then there's the whole rendering process (usually a big consumer of resources), which has long been single-threaded, at least until DirectX 11/12 and Vulkan. So to benefit from that, you would need to move your graphics engine into the DX11 world, and I'm not sure Unity can do it well (but I suspect KSP's recent change of Unity version is somehow related to this).

Unity has the Job System with Burst. You can run jobs in three modes: across multiple threads, on one separate thread, or on the main thread. With Burst, you sometimes get performance faster than default C++. In some tutorials Unity also pointed out the scheduling overhead and said that smaller tasks are faster to execute on the main thread; an alternative is a single separate thread, which has less overhead than scheduling multiple threads.

I think KSP runs only on DX11. DX12 is not ready in Unity (they are working on it in the 2020 versions, but it's still in an experimental state).

Vessel physics isn't the only workload in KSP. Orbital physics is another part that can be processed off the main thread.
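
For the curious, here's a minimal sketch of what such a job looks like (illustrative only; the IntegrateJob name and the data are made up, but IJobParallelFor, BurstCompile, Run and Schedule are the real Unity APIs):

```csharp
using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

// A Burst-compiled job. The same struct can be executed in the modes
// described above: on the main thread, or scheduled onto worker threads.
[BurstCompile]
struct IntegrateJob : IJobParallelFor
{
    public float DeltaTime;
    [ReadOnly] public NativeArray<float3> Velocities;
    public NativeArray<float3> Positions;

    public void Execute(int i)
    {
        // Simple math like this is exactly what Burst auto-vectorizes.
        Positions[i] += Velocities[i] * DeltaTime;
    }
}

// Usage, e.g. from a MonoBehaviour:
//   var job = new IntegrateJob { DeltaTime = dt, Velocities = vel, Positions = pos };
//   job.Run(pos.Length);                     // main thread: least overhead, good for small tasks
//   job.Schedule(pos.Length, 64).Complete(); // worker threads: pays scheduling overhead
// (A plain IJob scheduled once covers the "one separate thread" mode.)
```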

Link to comment
Share on other sites

1 minute ago, runner78 said:

Unity has the Job System with Burst. You can run jobs in three modes: across multiple threads, on one separate thread, or on the main thread. With Burst, you sometimes get performance faster than default C++. In some tutorials Unity also pointed out the scheduling overhead and said that smaller tasks are faster to execute on the main thread; an alternative is a single separate thread, which has less overhead than scheduling multiple threads.

I think KSP runs only on DX11. DX12 is not ready in Unity (they are working on it in the 2020 versions, but it's still in an experimental state).

Vessel physics isn't the only workload in KSP. Orbital physics is another part that can be processed off the main thread.

Ohhhhhh...

I'm running DX12... because I'm running Windows 10... hmph, a coincidence.

Link to comment
Share on other sites

On 8/31/2020 at 9:32 PM, Master39 said:

Things can be turned down to a point; after that, you have to start taking features out of the game to claw back performance.

Yes, graphics can be turned down quite a bit. I expect much of KSP2's graphics can be turned down to KSP1's level, except for the ground; it looks like they want more rugged terrain for gameplay reasons, since it's more realistic and puts limits on rovers.
You cannot turn down CPU demand or non-graphics memory use, and graphics tend to use a lot of memory.
Memory ended up being a limiting factor on the PS3 and Xbox 360, in part because open-world games became so popular.

Running games at minimum specifications tends to generate bugs.

Edited by magnemoe
Link to comment
Share on other sites

On 9/1/2020 at 1:51 AM, runner78 said:

Unity has the Job System with Burst. You can run jobs in three modes: across multiple threads, on one separate thread, or on the main thread. With Burst, you sometimes get performance faster than default C++. In some tutorials Unity also pointed out the scheduling overhead and said that smaller tasks are faster to execute on the main thread; an alternative is a single separate thread, which has less overhead than scheduling multiple threads.

I think KSP runs only on DX11. DX12 is not ready in Unity (they are working on it in the 2020 versions, but it's still in an experimental state).

Vessel physics isn't the only workload in KSP. Orbital physics is another part that can be processed off the main thread.

I've seen this said multiple times, and it's always struck me as a bit weird. What's "default" C++? Are we just limited to #include <iostream>, <cmath>, etc.? Because if that's the case then I wouldn't be surprised, since it would only run on a single core and thread; just to get multiple threads on a single core you'd need at the very least <boost> in C++. Not trying to start anything, just genuinely curious.

On 8/31/2020 at 1:24 PM, kspnerd122 said:

What I mean is, graphics should be able to be turned down.

You'll be able to turn the graphics down, and there will likely even be mods that further "optimize" the game by replacing the default textures with much lower-resolution ones. But as said previously, it's highly unlikely that the bottleneck in KSP2 will be graphical.

Link to comment
Share on other sites

16 hours ago, Incarnation of Chaos said:

I've seen this said multiple times, and it's always struck me as a bit weird. What's "default" C++? Are we just limited to #include <iostream>, <cmath>, etc.? Because if that's the case then I wouldn't be surprised, since it would only run on a single core and thread; just to get multiple threads on a single core you'd need at the very least <boost> in C++. Not trying to start anything, just genuinely curious.

You'll be able to turn the graphics down, and there will likely even be mods that further "optimize" the game by replacing the default textures with much lower-resolution ones. But as said previously, it's highly unlikely that the bottleneck in KSP2 will be graphical.

Default C++ means that if you compare the same algorithm in gcc-compiled C++ code, Burst-compiled code should be at least as fast, and in some cases (in games, many cases) faster than the C++.

Non-default C++ would be, e.g., manually SIMD-optimized code (very low-level in C++), or code compiled with Clang/LLVM (Burst is based on Clang/LLVM). Burst tries to optimize the code automatically for SIMD; to allow this, you use only a limited subset of C# in a Burst job, which enables further optimization.
There are also some benchmark cases where gcc/C++ is faster, but those don't benefit from SIMD, and games benefit greatly from SIMD optimizations.

Here are some benchmarks: https://github.com/nxrighthere/BurstBenchmarks (though they're missing SIMD benchmarks).

Link to comment
Share on other sites

1 hour ago, runner78 said:

Here are some benchmarks:

The immediate upshot of which being: Mono/JIT is ludicrously slow pretty much all the time, and old-school C compiled with GCC still beats the pants off needlessly verbose C#, even when the latter is restricted to a subset and compiled with the latest shiny toys like Burst/IL2CPP. :sticktongue:
 

Link to comment
Share on other sites

1 hour ago, runner78 said:

Default C++ means that if you compare the same algorithm in gcc-compiled C++ code, Burst-compiled code should be at least as fast, and in some cases (in games, many cases) faster than the C++.

Non-default C++ would be, e.g., manually SIMD-optimized code (very low-level in C++), or code compiled with Clang/LLVM (Burst is based on Clang/LLVM). Burst tries to optimize the code automatically for SIMD; to allow this, you use only a limited subset of C# in a Burst job, which enables further optimization.
There are also some benchmark cases where gcc/C++ is faster, but those don't benefit from SIMD, and games benefit greatly from SIMD optimizations.

Here are some benchmarks: https://github.com/nxrighthere/BurstBenchmarks (though they're missing SIMD benchmarks).

...What?

What does the GCC compiler have to do with any of this? What I was saying is that C++ requires the programmer to import specific libraries and use them for threading support. So basically I was asking if you were comparing C++ without threading to C# with threading.

Are you saying that C# with the GCC compiler and the Burst libs automatically parallelizes the code?
Link to comment
Share on other sites


18 minutes ago, Incarnation of Chaos said:

...What?

What does the GCC compiler have to do with any of this? What I was saying is that C++ requires the programmer to import specific libraries and use them for threading support. So basically I was asking if you were comparing C++ without threading to C# with threading.

Are you saying that C# with the GCC compiler and the Burst libs automatically parallelizes the code?

I'm only talking about Burst, which has nothing to do with multithreading; that's what you use the Job System for. Burst-compiled jobs can run on the main thread, and all of these benchmarks are single-threaded.
Automatic SIMD optimization (also called auto-vectorization) has nothing to do with multithreading, but with the SIMD registers of the CPU (SSE, AVX). These can execute several identical arithmetic operations with one CPU instruction. That's particularly useful for position vectors: without SIMD, a 3D vector would take 3 instructions.
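
A tiny C# illustration of the difference (using System.Numerics for the sketch rather than Unity's math types; purely an example, not Burst output):

```csharp
using System;
using System.Numerics; // Vector3 here is lowered to SIMD instructions where the CPU supports them

class SimdSketch
{
    static void Main()
    {
        var position = new Vector3(1f, 2f, 3f);
        var velocity = new Vector3(0.1f, 0.2f, 0.3f);
        float dt = 0.02f;

        // Scalar version: three separate multiply-adds, one per component.
        var scalar = new Vector3(
            position.X + velocity.X * dt,
            position.Y + velocity.Y * dt,
            position.Z + velocity.Z * dt);

        // SIMD version: the JIT can emit a few SSE/AVX instructions that
        // process all the lanes at once instead of one component at a time.
        Vector3 simd = position + velocity * dt;

        Console.WriteLine($"scalar: {scalar}, simd: {simd}");
    }
}
```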

45 minutes ago, steve_v said:

The immediate upshot of which being: Mono/JIT is ludicrously slow pretty much all the time, and old-school C compiled with GCC still beats the pants off needlessly verbose C#, even when the latter is restricted to a subset and compiled with the latest shiny toys like Burst/IL2CPP.

The old Mono is slow; that was one of the reasons Unity developed Burst in the first place. Sooner or later Unity will switch from Mono to .NET 5+, which comes with RyuJIT and is a lot faster than Mono.

Those benchmarks don't reflect any game scenarios. As I said, SIMD is missing from them, so Burst can't show its power there.

Edited by runner78
Link to comment
Share on other sites

1 minute ago, runner78 said:

I'm only talking about Burst, which has nothing to do with multithreading; that's what you use the Job System for. Burst-compiled jobs can run on the main thread, and all of these benchmarks are single-threaded.
Automatic SIMD optimization (also called auto-vectorization) has nothing to do with multithreading, but with the SIMD registers of the CPU (SSE, AVX). These can execute several identical arithmetic operations with one CPU instruction. That's particularly useful for position vectors: without SIMD, a 3D vector would take 3 instructions.

The old Mono is slow; that was one of the reasons Unity developed Burst in the first place. Sooner or later Unity will switch from Mono to .NET 5+, which comes with RyuJIT and is a lot faster than Mono.

Those benchmarks don't reflect any game scenarios. As I said, SIMD is missing from them, so Burst can't show its power there.

Yeah, I knew about SIMD, but not how it related to Burst. I figured I had a fundamental misunderstanding somewhere in the pipeline, and you cleared it up pretty well. Cheers.

Link to comment
Share on other sites
