
Thread: Unity 5

  1. #61
    Senior Rocket Scientist LethalDose's Avatar
    Join Date
    Nov 2013
    Location
    Kerbifornia
    Posts
    1,207
    Blog Entries
    1
    Quote Originally Posted by Sethnizzle View Post
    Any possibility of getting a dev or two to comment on their opinions of Unity 5, what it could mean for KSP, and what the plans are, if any, for upgrading? I'd be interested in reading about their thoughts.
    I wouldn't hold your breath for a dev comment here in the forum. They rarely post here.

    You may want to watch their blogs, though, for their thoughts on the matter. If they do decide to change, I suspect it'll be a major decision that will be announced.
    "Orbiting is just falling and missing the ground." -Kerrman Oberth

  2. #62
    Spacecraft Engineer DaRocketCat's Avatar
    Join Date
    Jul 2012
    Location
    San Francisco, CA
    Posts
    238
    Yeah, the devs almost never post on the forums anymore, but they often mention new versions of Unity and their possible effects in dev blogs.
    There was a big buzz back when the devs were upgrading KSP to Unity 4.
    Dell XPS 15 9530: i7-4702HQ - 16GB 1600 MHz RAM - GT 750M w/ 2GB GDDR5 - 512GB SSD - 3200x1800 15.6in screen

  3. #63
    It would be easier to optimize the existing game than to make it a 64-bit application, and the results would be better as well, since the optimized game would run on both 32- and 64-bit machines.

    It is entirely possible (but a lot of work) to get KSP to load assets much more efficiently than it currently does. This is just talking about memory, not the other benefits of 64-bit operation.

    The simplest wins are optimizations of the current assets (I saw a single 12 MB texture in the Squad folder the other day), loading of compressed textures (DXT), better loading and unloading of planet textures (you're never going to see two planets at the same time, so why are they all loaded in memory?), and level-of-detail meshes for parts. I'm sure there are many more that the developers are working on, but these alone would provide a substantial performance boost on 32-bit machines and are possible right now...
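The texture point above can be put in rough numbers. This is a sketch of the memory math only; the 1024x1024 size and the ~33% mipmap overhead are assumed illustrative figures, not measurements from KSP itself:

```python
# Rough memory math for the texture optimizations mentioned above.
# Sizes and mip overhead are illustrative assumptions, not KSP measurements.

def rgba_size_mb(width, height, mip_overhead=4/3):
    """Uncompressed 32-bit RGBA size in MB, with ~33% extra for mipmaps."""
    return width * height * 4 * mip_overhead / (1024 * 1024)

def dxt5_size_mb(width, height, mip_overhead=4/3):
    """DXT5 stores 16 bytes per 4x4 pixel block -> 1 byte/pixel (4:1 vs RGBA)."""
    return width * height * 1 * mip_overhead / (1024 * 1024)

w = h = 1024
print(f"RGBA 1024x1024: {rgba_size_mb(w, h):.2f} MB")  # ~5.33 MB
print(f"DXT5 1024x1024: {dxt5_size_mb(w, h):.2f} MB")  # ~1.33 MB
```

A flat 4:1 saving like this, applied across every part and planet texture, is why compressed loading alone could make a real dent in a 32-bit memory budget.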

  4. #64
    Quote Originally Posted by guest91111 View Post
    Does this mean nVidia users with a reasonable GPU (GeForce GTX 460) can run 1,000-part craft at more than 5-10 fps?
    Unfortunately not; so far Unity only seems to implement the CPU branch of PhysX, not the GPU acceleration.

    That said, the CPU implementation of PhysX 3.3 reported for Unity 5 is significantly better than the CPU branch of PhysX 2.8.x in the Unity 4 series.

    Quote Originally Posted by Pierre Terdiman
    So, as far as PhysX is concerned, 3.2 has an issue with memory usage, 2.8.4 has an issue with performance. I guess the conclusion is clear: switch to 3.3. It's just better. On average, PhysX 3.3/PCM is 7X faster than PhysX 2.8.4 and 4.7X faster than Bullet here. That's quite a lot.


    Dig through the whole Coder Corner article for more comparisons; it seems promising.
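Note that a 7x physics speedup doesn't translate into 7x fps, because only part of each frame is physics. A back-of-envelope estimate (the 60% physics share of frame time is an assumed number for illustration, not a measured KSP profile):

```python
# Back-of-envelope: how a 7x physics speedup affects frame time.
# The 60% physics share is an assumed figure, not a measured KSP profile.

def new_fps(old_fps, physics_share, speedup):
    """Amdahl-style estimate: only the physics share of the frame shrinks."""
    frame = 1.0 / old_fps
    physics = frame * physics_share
    other = frame - physics
    return 1.0 / (other + physics / speedup)

# A physics-bound large craft crawling at 10 fps:
print(f"{new_fps(10, 0.6, 7.0):.1f} fps")  # ~20.6 fps
```

So even under these optimistic assumptions, the gain is a healthy doubling rather than a 7x jump; the non-physics part of the frame becomes the new bottleneck.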
    "When you are studying any matter, or considering any philosophy, ask yourself only: What are the facts, and what is the truth that the facts bear out. Never let yourself be diverted, either by what you wish to believe, or what you think could have beneficent social effects if it were believed; but look only and solely at what are the facts." Bertrand Russell

    [KOSMOS Career mode TechTree Integration Config] for KOSMOS for use with Sarbian's Module Manager

  5. #65
    assistant kraken tamer. Nemrav's Avatar
    Join Date
    Oct 2013
    Location
    0,0,0 local co-ordinates
    Posts
    372
    I'm sure there will be a Unity 5 update WHEN it comes out... But as for all this talk about off-loading physics to the GPU, for me it's a big no-no. Doesn't anybody here have a bad enough GPU as it is? Sure, it would help some people, but what I would like is the inverse: offloading GPU work to the CPU...
    Plz, take a guess at what I changed in this signature >: ) .
    "after tylo, the mun is the hardest place to land." -Nemrav
    RIP - my cancelled content mega-thread, which even caught the eye of Harvestr : http://forum.kerbalspaceprogram.com/...celled-content

  6. #66
    Spacecraft Engineer qromodynmc's Avatar
    Join Date
    Oct 2013
    Location
    Turkey/Adana
    Posts
    242
    It's too early to get excited, but it's still good news. Also, I have an nVidia GPU, so good news indeed...

  7. #67
    Quote Originally Posted by guest91111 View Post
    Does this mean nVidia users with a reasonable GPU (GeForce GTX 460) can run 1,000-part craft at more than 5-10 fps?
    Meh, just get any i5 CPU and you've achieved this feat very handily.

  8. #68
    Quote Originally Posted by _Aramchek_ View Post
    Meh, just get any i5 CPU and you've achieved this feat very handily.
    I wish it were that easy. I'm running this game on an i7-2630QM.

  9. #69
    Junior Rocket Detonator
    Join Date
    Feb 2014
    Location
    United States
    Posts
    17
    This announcement has given me hope for KSP-64!

    ...eventually, haha

  10. #70
    Quote Originally Posted by Streetwind View Post
    Thankfully, they don't have to. No matter how great the CPU algorithm gets, it will never ever come anywhere near the performance of even a lower grade graphics processor. Physics calculations are very much parallel for the most part, which means that the best* thing you can do is throw massive amounts of stupid, low-performing cores at it. CPUs, on the other hand, feature a very small amount of incredibly sophisticated high-end cores. They're built for the complete opposite kind of code.
    I think you're missing a very important point here, though: other graphics card companies being able to compete with nVidia. By keeping the algorithm out in the open in the public CPU implementation, "the GPU" doesn't have to mean just "nVidia's GPU". A competing graphics card maker can implement a driver that runs PhysX code well on their own hardware. That would be much harder if PhysX had been implemented as "use nVidia or it will suck". Your response reads as if nVidia's only competition will ever come from CPU makers, but a good, graphics-card-agnostic implementation opens the door for a competitor to build their own GPU layer for PhysX. That would have been far harder if a game programmer adopting PhysX guaranteed that only an nVidia GPU could run it well.

    Giving it a good CPU-only implementation requires designing it in a GPU-agnostic way that lets a GPU-enabled driver be a drop-in replacement. That helps any potential competitors in the future and avoids artificial market lock-in.

    Which is why I'm glad nVidia is doing it this way. They could have decided to behave more like Microsoft typically does, using their current market dominance to try to ensure eternal future market dominance purely through the need for backward compatibility.
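The quoted point about physics being "very much parallel" can be illustrated with a toy Amdahl-style throughput model. The core counts, per-core speeds, and 99.9% parallel fraction below are made-up illustrative numbers, not real hardware figures:

```python
# Toy throughput model: for a highly parallel workload, many slow cores
# can beat a few fast ones. All numbers are made-up illustrations.

def throughput(cores, per_core_speed, parallel_fraction):
    """Amdahl-style: the serial part runs on one core, the parallel
    part is split evenly across all cores."""
    serial = (1 - parallel_fraction) / per_core_speed
    parallel = parallel_fraction / (cores * per_core_speed)
    return 1.0 / (serial + parallel)

# Few fast cores (CPU-like) vs many slow cores (GPU-like):
cpu = throughput(cores=4, per_core_speed=10.0, parallel_fraction=0.999)
gpu = throughput(cores=512, per_core_speed=1.0, parallel_fraction=0.999)
print(f"CPU-like: {cpu:.1f}, GPU-like: {gpu:.1f}")  # GPU-like wins ~8.5x
```

Drop the parallel fraction to, say, 95% and the few fast cores win again, which is exactly why a physics engine's performance story depends so much on how parallel its solver really is.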
