CPU/GPU & PhysX


brusura

Hi

While my station keeps on growing, the framerate keeps on dropping, nothing new here :D. But I was trying to mess around with the option in the Nvidia control panel to force the GPU to take care of the simulation (as I recall, Unity relies on Nvidia PhysX... right?).

But forcing the CPU or the GPU to take care of the PhysX calculations doesn't seem to change the outcome at all!

I have a quad core @ 2.5 GHz, and as you can see it looks like KSP is using more than one core. I was also expecting 25% CPU, not 46%, for a single-threaded application. Anyway, these results are the same even with the GPU set as the PhysX processor, except that the CPU reaches 52% while the GPU stays the same (it should have been the opposite):

http://i41.tinypic.com/apdvew.png

So what are your experiences with this?


Hi, I really hope the next patch will optimize the game.

Multi-threading and GPU PhysX are major rewrites, not optimisations. The latter may not be appropriate for KSP anyway; the GPU isn't just a faster CPU, and it has downsides as well.


Alright then, no PhysX support. And looking at the dev blog, the next patch will bring optimization only to the database, so it will speed up loading times, not the simulation. So nothing yet, cronos1082.


Multi-threading and GPU PhysX are major rewrites, not optimisations. The latter may not be appropriate for KSP anyway; the GPU isn't just a faster CPU, and it has downsides as well.

I disagree; GPUs are much better suited to physics simulation than a general-purpose CPU.


I disagree; GPUs are much better suited to physics simulation than a general-purpose CPU.

Not necessarily. Pushing gameplay physics to the GPU means you have to communicate it back to the CPU for every rendered frame, which can be quite slow. It also means the GPU requirements will go up, since the GPU will have to run a (depending on your vessel) very demanding workload, which means the same people who now complain because they have a slow CPU will complain because their GPU is too weak.

Another problem is the lack of a standardized approach. Use PhysX, and Radeon users are left out. Use OpenCL, and users with older cards are left out. Meaning: you have to do the same work three times, without even reaching all customers (and we haven't even talked about Mac and Linux; some driver has weird quirks? Good luck with the hate mail). It's just plain wishful thinking to say that Squad can implement what AAA studios (say Crytek and DICE, since they write their own engines) have not done, because it is just not feasible as of today due to the splintered landscape that is gaming PCs... or "laptops"... or whatever.

The best course would be for Unity to adopt a multithreading-enabled physics solver (PhysX supports this too, as do other libraries such as Havok or Bullet). That would balance CPU and GPU demands out, especially since GPU demands will probably rise due to new features such as clouds, etc.

Also keep in mind that most of the "10x speedup over CPU" or "awesome effects" claims were just inflated marketing talk by Nvidia, since they deliberately limited CPU PhysX to ancient x87 code and single threading to manufacture a marketing advantage for their cards.

Edited by jfx

You mean x86, right? You are right, jfx, not "necessarily", but the types of calculation involved (mostly the floating-point operations used in 3D math and physics) are processed faster by the GPU anyway.


No, I meant x87, the floating-point instruction set of the 8087 coprocessor for the original 8086 (!). Using this ancient instruction set for performance-heavy single-precision floating-point math is like driving your Porsche using only the first two gears. SSE2 is fairly standard by now; every 64-bit-capable CPU is guaranteed to have it, starting with the Pentium 4 and Athlon 64, the sole exception being the Atom.

Main source, with conclusive benchmark and profiling results:

http://www.realworldtech.com/physx87/4/

using two cores for physics could easily yield a 2X performance gain. Combined with the benefits of vectorized SSE over x87, it is easy to see how a proper multi-core implementation using 2-3 cores could match the gains of PhysX on a GPU.

Note that utilizing AVX or AVX2 could yield an even larger speedup; they doubled the width of the vector units again, to 256 bits.

And a somewhat lame excuse from an Nvidia rep:

http://www.itproportal.com/2010/07/08/nvidia-were-not-hobbling-cpu-physx/


No, he meant x87, which is the floating-point coprocessor instruction set for the x86. They later added more efficient vector math instructions like MMX and SSE, but the x87 instructions and floating-point stack are still there for backwards compatibility. While you can't use MMX and SSE for every kind of calculation, they are good at the same kinds of things as the GPU, just a bit less efficiently.

While restricting the CPU code to x87 would increase the disparity between CPU and GPGPU, the GPU is still much faster at easily parallelizable calculations because it can run more of them at the same time. Moving from their custom physics engine to a GPU-based engine isn't a simple thing to do, though, and if they have to shuffle a lot more data out to the graphics card per frame than they currently do, they might only break even on the switch to the GPU. Memory is slow.


As I understand it, KSP wouldn't benefit from PhysX too much anyway. It's good for oddball stuff like cloth simulations and particles, but not much else, it seems.

There was a dev post this week showing preliminary optimization in the form of memory usage and scene loading.


I'm pretty sure that Nvidia PhysX will work on GPUs.

Nvidia PhysX does work on GPUs with CUDA, but Unity uses an older version of PhysX that doesn't support GPUs, multithreading, or even the old Ageia PhysX cards.


  • 1 year later...
It also means the GPU requirements will go up, since the GPU will have to run a (depending on your vessel) very demanding workload, which means the same people who now complain because they have a slow CPU will complain because their GPU is too weak.

I don't mean to be rude or anything, but that's an extremely stupid argument that I see thrown around a lot. No one cares about the people with genuinely [slow] systems. Saying that they shouldn't bother to fix a serious problem just because some people playing on their grandmother's old Win 2000 machine won't be able to play it at a faster frame rate is an absolutely [censored] argument. Those people will have horrible frame rates regardless, to which you tell them, "Your PC is a piece of [not very nice]. Deal with it." Why would you argue for a worse gameplay experience only because "the same people who now complain because they have a slow CPU will complain because their GPU is too weak"? I have one of the fastest CPUs on earth and it's still not playable if I put too many parts on my ship.

Also, I don't know your background, and I don't know a particularly large amount about running physics on a GPU, but I can tell you that is bull. Nvidia and others have had extremely complicated water simulations running in real time on GPUs with no problem; doing KSP-like physics on a GPU would be child's play. As for transfer speed from CPU to GPU, a PCIe 2.0 x16 slot (which is what my GPU is using) has 8 GB/s of bandwidth in both directions simultaneously. So I'm going to have to call bull on that one too.

Next, here is some actual information on the benefits of GPU acceleration. http://blogs.nvidia.com/blog/2010/06/23/gpus-are-only-up-to-14-times-faster-than-cpus-says-intel/

And I would like to point out that there is a reason why supercomputers are now being built with GPUs instead of just CPUs.

Lastly, use OpenCL. Only people with awful/old GPUs wouldn't be able to support it, and there would be no need to drop the current system anyway; just keep it as a fallback for the people who have crap systems.

CUDA is Nvidia-only; Mantle is, as of now, AMD-only. OpenCL might not be perfect on every card, but it is the best alternative I see. I don't know how they have physics implemented at the moment, i.e. whether it is custom code or through Unity itself, but if it is custom code then it should theoretically just be a matter of translating it to OpenCL and debugging any problems. It would probably also require threading to be implemented, unless OpenCL handles that already... I don't know.
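For a rough idea of what that translation would look like (a sketch I wrote for illustration, not anyone's actual code): in OpenCL C you write the per-part update as a kernel, and the runtime launches one work-item per part across the GPU.

```c
/* Illustrative OpenCL kernel: one work-item integrates one part,
 * so all parts are updated in parallel. Semi-implicit Euler step
 * with constant gravity, purely as a sketch. */
__kernel void integrate_parts(__global float4 *pos,
                              __global float4 *vel,
                              const float dt)
{
    size_t i = get_global_id(0);
    float4 gravity = (float4)(0.0f, -9.81f, 0.0f, 0.0f);
    vel[i] += gravity * dt;
    pos[i] += vel[i] * dt;
}
```

The kernel itself is the easy part; the host-side plumbing (contexts, queues, buffer uploads and readbacks every frame) is where the complexity and the sync costs discussed above live. Threading inside the kernel is handled by the runtime, so that part at least comes for free.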

I won't claim that any of this is easy to achieve for a smaller studio, but I'm just trying to point out that it is completely possible, and they should put some actual effort into achieving it, because the game is a lot less fun when you can't have much more than two rockets docked or a few hundred parts.


Careful with the language there, Superfish1000. I've edited out some of the words, as you'll have noticed, and the forum software does a good job of censoring the rest. At the same time, necro posts are frowned on unless they bring something substantially new to the table, and I'll be closing this thread as it's 15 months old.

Keep it kid-friendly, and do be civil. As you're new, I'd recommend reading the forum rules.

Other than that, welcome and have fun.

Edited by technicalfool

Guest
This topic is now closed to further replies.