
How long before a performance increase?


Motokid600


I've seen this spewed all over the place, and that is absolutely, most definitely, completely, entirely 99% false. First, the human eye doesn't work on an FPS system, and neither does the brain. If it did, we'd see the spokes on a wheel hub while driving down the highway instead of a blur, or a solid mass with the calipers behind said "solid mass". The brain makes a "best guess" at the situation and blurs things together if necessary so it "makes sense" and reduces strain. Second, it depends on the specific individual (which is where that 1% is) and what they're used to seeing on a monitor (yes, real life is different from a refresh rate on a computer monitor). Third, put two monitors side by side, one running at 60Hz and another at 120Hz, and do anything - even just running Calculator on your desktop and dragging it around - and you will see a difference in the response you get. The 120 will look smoother. This is, of course, assuming that the video card is set up to work at 120Hz.

I don't believe the original poster is arguing that the brain does work on an FPS system. He is arguing that there is a threshold for frame rates below which the beta movement and phi phenomenon that create apparent motion become noticeably distracting. Besides, there's no guarantee that we would see perfectly still images even if our brains did see in FPS, because there are other factors in motion blur such as exposure time, ISO, and aperture size. Examination of 24FPS film stills shows examples of motion blur created by slower film speeds and longer exposure times.

Before I bought my 42" TV about a year and a half ago, I stood in Walmart, Best Buy, Future Shop, and other electronics stores, watching a wall of TVs for HOURS (in total - and yes, I literally stood and watched commercials for a good 30-45 minutes at a time at different stores, sometimes the same store on a different week) to decide on what model I wanted. SPECIFICALLY, I was looking to see if there was ANY difference in frame rates, or at least what the difference would look like - just to see if there was a difference between 60 and 120, as well as picture quality between manufacturers and models - and THEN went up close to look at specific capabilities. I noticed the difference even BEFORE I looked at the individual capabilities of each TV. Standing about 20 feet back, I could see that the 60Hz TV was a bit more "jerky" between refreshes. What was showing on the wall of TVs was just fed through a video repeater, so it wasn't like I was looking at two different signals. Now, there are a few stipulations that I don't have the answer to, and someone can come bite me in the rear for...

1> First, was the repeater working at 120Hz? Doesn't matter, as it was a stream at whatever the capabilities of the repeater were AT MINIMUM. So if the repeater output at 60Hz, the 120s displayed at 60 and may have "flushed" the two different images together. Point is, I could see a difference.

2> Were the TVs themselves set up for 120? One would like to assume that the TVs come out of the box at their top rated speeds, not at less. Even if there was a chance some weren't, not all of them across several walls of TVs would have been configured that way.

3> Were they hooked up via HDMI, component, or composite? I COULD see this making a significant difference, especially in the composite case, but this isn't the 90s and not many TVs put out on display are run off composite cables anymore - ESPECIALLY to show off their high-end capabilities. (Composite = a single cable carrying the combined video signal; component = the signal split across three cables.)

Now, if you asked the guy NEXT to me - who was wondering WTF I was doing - whether he could see a difference, he MIGHT have said no. Why? With absolutely no disrespect, his brain may not have been wired to SEE the difference, or his brain didn't understand HOW to tell the difference, or he just wasn't paying attention to that kind of detail. Even with all that said, my seeing that there was a difference AT ALL indicates that either I'm special, like my mommy and daddy told me, or the whole FPS thing is just an excuse for bad performance.

It's good that you took the time to look at TVs in a variety of places/times. There are a lot of factors that can affect your viewing experience, including artificial light (which also flickers, at a rate set by the AC supply frequency), video feed, and TV type. However, you are confusing the TV's flicker rate with its frame rate. Frame rates are constant and decided by the video being shown. Films traditionally run at 24 FPS, NTSC runs at 30, PAL runs at 25. Your TV will always change frames at the rate specified by the source material. In addition to the source-decided frame rate, TVs also have a flicker rate to minimize eye strain and promote beta movement and the phi phenomenon. While the two are related, all TVs show their source video at the prescribed rate, but they flicker at rates decided by the maker/model. I don't know if this is true for all TVs, but mine has its flicker rate hard-wired (though I don't know what the flicker rate is on this model).
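To make the frame-rate versus flicker/refresh-rate distinction concrete, here is a toy sketch (my own illustration, assuming a simple sample-and-hold model, not anything from this thread) of how a fixed-rate source maps onto a panel's refresh cycle:

```python
from collections import Counter

def refreshes_per_frame(panel_hz, source_fps):
    """Over one second, count how many panel refreshes show each source
    frame, assuming the panel holds the most recent frame (sample-and-hold)."""
    counts = Counter(r * source_fps // panel_hz for r in range(panel_hz))
    return [counts[f] for f in sorted(counts)]

print(refreshes_per_frame(120, 24))  # [5, 5, 5, ...] -> even cadence
print(refreshes_per_frame(60, 24))   # [3, 2, 3, 2, ...] -> 3:2 pulldown
```

A 120Hz panel divides evenly into 24fps film, while a 60Hz panel has to alternate 3 and 2 refreshes per frame (3:2 pulldown), which is one source of visible judder.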

If you met someone who is unable to see the difference between 60Hz and 120Hz televisions, it means that they do not perceive motion the same way you do, not that their brain wiring is inferior. That said, lesions in particular portions of the brain do inhibit the ability to see motion, and such patients describe seeing life as a series of still images.

Out of curiosity, what was the flicker rate of the TV you bought?


You're comparing apples to mangos. TV shows and movies are generally recorded with a camera; in a movie at 24fps, a single frame is exposed for up to about 1/24th of a second. Fast motions become blurry, as you can easily see for yourself if you have a photo camera with variable exposure time. TV shows recorded at 60 fps tend to look very sterile compared to movies, because 1/60th of a second almost eliminates this natural motion blur.

As pontiac above me has already mentioned, your eyes work in a similar way: fast movements appear as a blur to you (very subjective, however, since this varies per person and with alertness (adrenaline, alcohol, etc.) levels).

In a video game, however, each frame is a snapshot of the game scene without temporal extent - it has an infinitesimally short exposure time, if you will. If you show 24 of these, you will easily see individual frames, as opposed to a 24 fps movie scene.
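That snapshot-versus-exposure difference can be sketched in a few lines (a toy 1D model of my own, not anything from a real engine or camera):

```python
import numpy as np

WIDTH = 64
SPEED = 5  # the dot crosses the "screen" 5 times per second

def scene(t):
    """A single bright dot on a 1D screen; its position depends on time t."""
    img = np.zeros(WIDTH)
    img[int((t * SPEED % 1.0) * (WIDTH - 1))] = 1.0
    return img

# Game-style frame: one instantaneous sample -> a single sharp pixel.
game_frame = scene(0.5)

# Film-style frame: integrate over a 1/48 s exposure (180-degree shutter
# at 24 fps) by averaging sub-samples -> the dot smears into a streak.
t0, exposure, n = 0.5, 1 / 48, 32
film_frame = np.mean([scene(t0 + exposure * i / n) for i in range(n)], axis=0)

print(np.count_nonzero(game_frame), "lit pixel(s) in the game-style frame")  # 1
print(np.count_nonzero(film_frame), "lit pixel(s) in the film-style frame")  # ~8
```

The instantaneous sample lights a single pixel, while the averaged exposure smears the same dot across several - the natural motion blur described above.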

There are of course attempts to introduce motion blur in games, but at this point that's more to mask awful console graphics than anything else, and it looks terrible.

Hope that cleared something up.

Actually, film cameras control frame exposure by adjusting the shutter angle. The standard shutter angle is 180 degrees, making the exposure time about half the frame duration. Higher angles correspond to longer exposures. E.g., I approximate a 180-degree shutter by setting my frame rate to 24 fps and my shutter speed to 1/50 for most shoots, but if I want to recreate a stop-motion effect, I decrease the shutter angle by shortening the exposure time (yes, I am a professional videographer).
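For anyone who wants that relationship spelled out: exposure time = (shutter angle / 360) x frame duration. A minimal sketch:

```python
def exposure_time(shutter_angle_deg, fps):
    """Exposure time in seconds for a rotary shutter:
    (angle / 360) * (1 / fps)."""
    return (shutter_angle_deg / 360.0) / fps

print(exposure_time(180, 24))  # 1/48 s (~0.0208) -> close to the 1/50 used above
print(exposure_time(90, 24))   # 1/96 s -> shorter exposure, stop-motion staccato
print(exposure_time(360, 24))  # 1/24 s -> maximum motion blur at 24 fps
```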

Actually, only some TV shows are recorded at 60i, i.e. interlaced frames. The shows look sterile because each frame is only half an image, not a full image. 60i cameras are effectively recording 30fps worth of half-images, but quality is lost because interlacing was designed as a workaround for image buffering when video was first created. It also allows lower-end cameras to fake high frame rates. Many HD TVs now sew the fields together to recreate the originally intended 30fps, but at lowered quality. The sterility of TV shows shot on 60i cameras is known as the Soap Opera Effect and is generally seen in productions with limited budgets. High-end productions generally use(d) film or cameras capable of recording at the requisite progressive (real) frame rates of 30 or 25.
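To make the field/frame point concrete, here is a toy sketch of "weaving" two interlaced fields back into one progressive frame (my own illustration of the idea, not a real deinterlacer):

```python
import numpy as np

HEIGHT, WIDTH = 8, 8
full_image = np.arange(HEIGHT * WIDTH).reshape(HEIGHT, WIDTH)

# A 60i camera captures half-images: even rows in one field, odd rows in
# the next, 1/60 s apart -> effectively 30 full frames' worth per second.
field_even = full_image[0::2]  # rows 0, 2, 4, 6
field_odd  = full_image[1::2]  # rows 1, 3, 5, 7

# "Weaving" sews the two fields back into one progressive frame.
woven = np.empty_like(full_image)
woven[0::2] = field_even
woven[1::2] = field_odd

assert (woven == full_image).all()  # static scene -> perfect reconstruction
```

Weaving is only lossless for a static scene; anything that moves in the 1/60 s between fields produces combing artifacts that real deinterlacers have to hide, which is part of the quality loss described above.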


My reply wasn't for the OP. I don't think I actually read the OP, as in post #1; I went to the "most recently read" link, saw this, then decided to spew my thoughts (again - sorry everyone for the wall of text today... I'm bored at work?). The post was in regards to the claim that we see fluid motion at 20-25fps. I'm not blaming Meraco. Don't think anything less of him/her, or of anyone for that matter. I don't care enough about them to warrant the skip of a heartbeat of annoyance (sorry, but it's true, and it isn't meant to be mean or rude). What irks me is that this kind of "knowledge" is out there, and it's flat-out false. I call on the eyeball-science degree masters to define 'fluid', define 'average', and define the sample set tested to get the results of either 'fact', not to mention what devices and mechanisms were used to test this 'capability' and visual comprehension and speed.

The environmental conditions of all the TVs per store were the same, meaning that I'm not comparing the TVs I saw in Walmart to the TVs I saw at Best Buy - and I understand you're not suggesting that. I obviously can't do that comparison objectively because they're two different physical environments and I'm not trained that well. Although I did notice that all the trims were black... anyways. The baseline comparison of the TVs was subject to the exact environment where the TVs were placed. For instance, Best Buy had two physically different environments for their TVs: one was a "dark room" with nothing but incandescent lighting (or what seemed to be incandescent, anyways), while their other models were sitting out on the main showroom floor under fluorescent lighting. Looking specifically at how well the image flowed - vsync to vsync, or flicker to flicker, or frame change to frame change - I could definitely tell that there was a better, smoother transition between images at the higher rate of screen updates. There were a few oddballs, but that happens when you do an analysis of anything.

As for seeing life as a series of still images, I have absolutely no question that kind of thing happens. Going from true v-sync vision to frame-by-frame? It just makes me appreciate even more that I'm healthy and CAN appreciate my 120Hz TV. ;)

As for the unit itself: network capable; streams from a Windows share, albeit rather pitifully (wondering if it's SMB or the TV); quad HDMI in, one hooked up to the PS3, another to hook up my PC whenever I want to play PC games on it; dual component in; I THINK it also has composite in as well; no S-Video; VGA in, no DVI. 42". $1200 a year and a half ago, taxes in, with a 3-year warranty.


Modders don't usually deal with UNITY if we don't have to; the only commonly used things that really belong to Unity are MonoBehaviour, Texture2D, and the pointless WWW... (Err, right, and the GUI graphics... actually Unity /is/ used a bit more than I initially thought...)
Pretty much everything is a subclass of MonoBehaviour - Part and PartModule subclass it - so I don't get this "Modders don't usually deal with UNITY if we don't have to" you speak of. Modders might not know they are dealing with Unity, but they are.

A game focused on design, with design limits - you could say maybe it wasn't designed too well?

There is no game without limits.

And real life rocket science is all about dealing with real life physical limits.

It would be nice, though, if in KSP the physical limit on rocket size were implemented in some other way than a framerate drop.

Maybe rockets that are 'too large' should just collapse during launch, but then struts are a means of cheating around the size limit.


Real rocket scientists wouldn't use ancient technology when they have modern tech available. That's what Unity is compared to some other engines. Ancient tech.

You do know the adage "If it hasn't flown before it won't fly", right? That one was at the root of how SpaceX are revolutionizing launcher design - by actually using technology developed within the last 40 years. So yes, rocket scientists do actually use old technology instead of newer, it seems.

Meanwhile if I turn my programmer's eye (which has training in things like graphical rendering and physics simulation) to the problem of KSP, I make the following judgements (with the disclaimer that I have not read the source, have only dabbled with Unity itself, and do not work for Squad):

Memory: KSP uses a lot of memory, and being saddled with Unity's restrictions, is limited in what it has available. This hurts performance and can indeed crash the game altogether. Add to this a suspicion of memory leaks here and there, and a game explicitly designed to have strong mod support being limited by its memory usage, and this is a significant problem area. A solution, at least for some users, would be to find a way to support 64-bit architecture and thereby expand the available memory. This, however, depends on the Unity engine, which, despite 64-bit support existing on every last one of its targeted platforms (Windows, Mac, Linux, iOS, Android, PS3 and XBox 360), does not offer it. For some reason. It's quite inexplicable to me.
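For scale, here is the arithmetic behind that 32-bit ceiling (my own illustration, not anything from Squad or Unity):

```python
import struct

# A 32-bit process can address at most 2^32 bytes = 4 GiB, and in practice
# the OS reserves a chunk of that, leaving roughly 2-3 GiB for the game --
# textures, part models, and every installed mod all have to fit in there.
print(2**32 / 2**30, "GiB theoretical ceiling for a 32-bit process")

# For comparison, check the pointer width of the current interpreter:
print(struct.calcsize("P") * 8, "-bit pointers in this Python process")
```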

Graphics: I don't really consider this a huge problem in and of itself. It's not the graphics of this game that will bring modern GPUs to their knees. Both DirectX and OpenGL are able to highly parallelize graphics computation, and can thus show KSP's level of graphics handily with any modern graphics card. Even high parts-count ships and bases ought to work here - the poly count and shader complexity just does not get that high, unless you use some seriously jacked-up mod parts or custom rendering mods.

Physics: The real problem child IMO. Unity's built-in PhysX engine is terrible. It is unsuited to any kind of high-fidelity physics, is prone to bad errors, has flawed colliders and intersection handling, and the list goes on. Especially bad is that it underperforms on non-nVidia computers, as PhysX targets nVidia hardware explicitly, but does not support ATi GPUs for physics processing, for example. But there are options: Bullet is a much superior physics engine which is apparently the target of an integration project with Unity - this would greatly improve performance, especially for non-nVidia users (like myself). Other physics engines exist, and could probably be integrated with Unity to boost its performance on physics tasks - which appears to be the real bottleneck for many-part ships.

CPU architecture support: Unity does not support multi-core CPUs. This exacerbates the above-mentioned problems with physics for non-nVidia users: not only do they not get the excess power of the GPU exploited, their processor is underemployed as well. I take heart at the mention of partial multi-core support by Squad, but really, this is something Unity ought to have in the first place. After all, again, every last one of their target platforms supports it. In fact, I'd expect Unity to horrendously underperform on e.g. the PS3 with its Cell architecture (8 cores supported by an underpowered control core), or on dual- and quad-core mobile units, also a major Unity focus.

In short: Squad can help themselves a fair bit by using a different physics engine, such as Bullet, and/or fixing any memory leaks in the game, and by hacking in partial multi-core support. But for some of the issues, it's the Unity devs who need to get their thumbs out of their butts and stop targeting 2001's technology. Until they do, it would appear we are stuck with certain issues - particularly the memory problem.


Interesting post. I'd like to add that Unity's PhysX implementation has hardware handling disabled, so even with a GTX you do not get hardware physics calculations (I have a GTX 480 myself and could verify this).


Physics: The real problem child IMO. Unity's built-in PhysX engine is terrible. It is unsuited to any kind of high-fidelity physics, is prone to bad errors, has flawed colliders and intersection handling, and the list goes on. Especially bad is that it underperforms on non-nVidia computers, as PhysX targets nVidia hardware explicitly, but does not support ATi GPUs for physics processing, for example. But there are options: Bullet is a much superior physics engine which is apparently the target of an integration project with Unity - this would greatly improve performance, especially for non-nVidia users (like myself). Other physics engines exist, and could probably be integrated with Unity to boost its performance on physics tasks - which appears to be the real bottleneck for many-part ships.

Unity's version of PhysX runs exclusively on the CPU (on a single core of the CPU, to be more specific). Nvidia users get the same ****ty performance as AMD users.

I must say GPU physics is what we are looking for, because even mid-range GPUs can do vector calculations very easily with their SIMD units.

For example, the GTX 480 can crunch hundreds of calculations at a time (when optimized) while a CPU can do a dozen, and this more than makes up for the GPU's slower clock speed. That also takes load off the CPU, so it doesn't have as many calculations left to do, and one core can still be sufficient.

With GTX cards, for example, this could represent a five- to tenfold increase in fps, and the larger the ship, the greater the improvement in fps.
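As a rough sanity check of that claim, here are back-of-envelope peak-throughput numbers (figures pulled from public spec sheets and rounded, so treat them as ballpark only):

```python
# Peak single-precision throughput, very roughly (an FMA counts as 2 flops):
gtx480_flops = 480 * 1.4e9 * 2  # 480 CUDA cores @ ~1.4 GHz shader clock
cpu_flops    = 4 * 3.4e9 * 8    # 4 cores @ 3.4 GHz, 8 flops/cycle with AVX

print(f"GTX 480:       ~{gtx480_flops / 1e12:.2f} TFLOPS")  # ~1.34 TFLOPS
print(f"Quad-core CPU: ~{cpu_flops / 1e9:.0f} GFLOPS")      # ~109 GFLOPS
print(f"Ratio:         ~{gtx480_flops / cpu_flops:.0f}x")   # ~12x on paper
```

Peak numbers ignore memory bandwidth and transfer overhead, which is exactly the catch raised just below.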

This is true for cosmetic stuff (breaking glass, animated cloth, small flying chunks of a destroyed wall which despawn after a couple of seconds) but not for physics calculations which actually impact gameplay itself. If the ship physics were calculated on the GPU, the results would have to be sent back and forth between the GPU and CPU every frame - which is horribly slow and inefficient.
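A rough cost model makes that round-trip point visible (all figures here are assumptions - PCIe latency and per-part state sizes vary a lot):

```python
# Ballpark assumptions: ~20 us fixed latency per synchronous PCIe transfer,
# ~8 GB/s bandwidth (PCIe 2.0 x16), 500 parts with ~100 bytes of state each.
lat, bw, n, size = 20e-6, 8e9, 500, 100
frame_budget = 1 / 60  # ~16.7 ms at 60 fps

batched    = 2 * (lat + n * size / bw)  # one upload + one readback per frame
per_object = 2 * n * lat                # naive: one round trip per part

print(f"batched:    {batched * 1e3:.2f} ms of a {frame_budget * 1e3:.1f} ms frame")
print(f"per-object: {per_object * 1e3:.1f} ms -> the frame budget is blown")
```

Batched transfers look cheap on paper; the real damage comes from synchronization stalls (the CPU blocking on GPU results before it can run game logic) and from any design that does many small transfers per step.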

That being said, another, better physics engine like Bullet or Havok can take advantage of multiple CPU cores (while already being faster on a single one). Take a look at the Havok site or watch the new Frostbite engine trailer to see what they get out of it.


Unity's version of PhysX runs exclusively on the CPU (on a single core of the CPU, to be more specific). Nvidia users get the same ****ty performance as AMD users.

Yeah +1000

On the one hand, I think that using PhysX is ****** because ATi users can't benefit from it, but it's even worse because, on the other hand, they blocked the hardware acceleration it gets from nVidia. It's like banging your head on the wall. Twice.

Other physics engines that can be integrated into Unity, with various prices and capabilities, do exist and would be preferable to Unity's PhysX.


This is true for cosmetic stuff (breaking glass, animated cloth, small flying chunks of a destroyed wall which despawn after a couple of seconds) but not for physics calculations which actually impact gameplay itself. If the ship physics were calculated on the GPU, the results would have to be sent back and forth between the GPU and CPU every frame - which is horribly slow and inefficient.

That being said, another, better physics engine like Bullet or Havok can take advantage of multiple CPU cores (while already being faster on a single one). Take a look at the Havok site or watch the new Frostbite engine trailer to see what they get out of it.

Efficient solutions for GPU calculations do exist - think of OpenCL. (I know that isn't physics-specific, but it is a full-fledged framework.)

EDIT: oh, I get what you meant.

You underestimate the bus bandwidth ;)


In short: Squad can help themselves a fair bit by using a different physics engine, such as Bullet, and/or fixing any memory leaks in the game, and by hacking in partial multi-core support. But for some of the issues, it's the Unity devs who need to get their thumbs out of their butts and stop targeting 2001's technology. Until they do, it would appear we are stuck with certain issues - particularly the memory problem.

Is it even possible to migrate the project to a different technology at this late stage of development? Surely that would lead to Duke Nukem Forever problems.

There's a fairly large fraction of this forum who have no idea how video games are made and work. And even among the people who do have a grasp of that, remember that this is SQUAD's first video game project. Opting for Unity was sensible, even if it leaves KSP lacking. For KSP2, they can have more freedom. Maybe even do a kickstarter.

Any game dealing with hundreds of colliding parts is going to run like a rhino, certainly, but KSP does seem to run on the slower side of the possible speeds. Compare Red Faction Guerrilla or (even better) Wildebeest Games' "Detonate".


CPU architecture support: Unity does not support multi-core CPUs. This exacerbates the above-mentioned problems with physics for non-nVidia users: not only do they not get the excess power of the GPU exploited, their processor is underemployed as well. I take heart at the mention of partial multi-core support by Squad, but really, this is something Unity ought to have in the first place. After all, again, every last one of their target platforms supports it. In fact, I'd expect Unity to horrendously underperform on e.g. the PS3 with its Cell architecture (8 cores supported by an underpowered control core), or on dual- and quad-core mobile units, also a major Unity focus.

This is actually wrong as well: Unity has supported multi-core CPUs for a while, as has KSP (if you check your release notes).


Is it even possible to migrate the project to a different technology at this late stage of development? Surely that would lead to Duke Nukem Forever problems.

A move to Bullet wouldn't actually change /too/ much; it's just a bit of a job to add it to Unity (Rawbots has done this, for example). Unity itself is pretty flexible, and the change most likely wouldn't cause too many problems.

There is a difference between supporting something and utilizing it.

Also, the stuff in the patch notes (texture loading or something) was not noticeable at all. Au contraire - the game currently takes longer than ever to load for me.


There is a difference between supporting something and utilizing it.

Also, the stuff in the patch notes (texture loading or something) was not noticeable at all. Au contraire - the game currently takes longer than ever to load for me.

It's actually using them all (you'll find a performance hit happens if you set a CPU affinity); the problem is purely down to PhysX and the fact that Unity's version is so damn old it's still single-threaded and unoptimized (hell, it uses x87 instructions, something so defunct even Intel tries to deny it exists).
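If you want to reproduce that affinity experiment yourself, a sketch along these lines works (it uses the psutil library; the process name "KSP.exe" is an assumption - check your own task list - and cpu_affinity() is not available on macOS):

```python
import psutil

# Pin a running KSP process to a single core, then compare the in-game
# frame rate before and after. A clear drop means the game was spreading
# work across cores; no change means the hot path was single-threaded.
for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "KSP.exe":  # assumed name; adjust to your install
        print("affinity before:", proc.cpu_affinity())
        proc.cpu_affinity([0])  # restrict the process to core 0
        print("affinity after: ", proc.cpu_affinity())
```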


KSP is not really multithreaded. If you look at it, you can see that the physics calculations are single-threaded.

There is one thread for physics and another for sound effects, and there are a lot of sync problems already!

LOL, x87! It made me giggle! That's wasting bandwidth! ;)


LOL, x87! It made me giggle! That's wasting bandwidth! ;)

It's depressingly true. On another note, the Rawbots team have reported a 4-10x performance increase (depending on the complexity of the scene) just by switching to Bullet physics.


Sorry about derailing it again (the thread is pointless anyway), but regarding the fps topic, I remember some pictures; there was also an online test, but I forget where it was.

http://i.minus.com/iKuvI6jaWm9cy.gif (I decided not to overload the page with big GIFs)

http://i.minus.com/iRShSUFLPSeAa.gif

One picture of each pair is at 30 fps and the other at 60 (by design, at least - I don't know how well browsers maintain the frame rate of GIFs; they may have different frame rates anyway). Can you tell which is which?


The issue is not the GPU or the RAM, as long as you have more than 1 GB of RAM.

On my cheap laptop, KSP takes ~600 MB of RAM on Kerbin, and 53% of my CPU.

The big issue is the CPU.

First step is to update your graphics card ;)
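For anyone who wants to check their own numbers the same way, a quick sketch (psutil again; the process name "KSP.exe" is an assumption):

```python
import psutil

# Snapshot KSP's resident memory and CPU usage. Note that cpu_percent()
# is measured against a single core, so it can exceed 100% on multi-core
# machines when more than one thread is busy.
for proc in psutil.process_iter(["name"]):
    if proc.info["name"] == "KSP.exe":  # assumed name; adjust to your install
        rss_mb = proc.memory_info().rss / 2**20
        cpu = proc.cpu_percent(interval=1.0)  # sampled over 1 second
        print(f"{rss_mb:.0f} MB resident, {cpu:.0f}% CPU")
```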


Dang, I don't try anything over about 250 parts on my laptop, and I know that'll be laggy. I generally don't play on my desktop, but even there I rarely go for more than about 400 parts on the biggest ships/stations, and that is a 3570 OC'd to 4GHz (4.2GHz single-core) compared to the i5-3317U in my laptop (1.7GHz, 2.6/2.4 turbo).

Granted, I'd love to be able to throw together ships of unlimited size, but I rarely feel that limited by the roughly 250-part ceiling I stick with. I can make some pretty big launchers and some decent-sized stations, bases, and interplanetary ships. I certainly wouldn't mind doubling the number, though (I doubt I'd run into any limits then).


The issue is not the GPU or the RAM, as long as you have more than 1 GB of RAM.

On my cheap laptop, KSP takes ~600 MB of RAM on Kerbin, and 53% of my CPU.

The big issue is the CPU.

First step is to update your graphics card ;)

That is odd, because I noticed a difference even with 6 gigs of system RAM compared to 4.

And of course the CPU is an issue with this game...


There's always someone on this forum telling you how he gets "perfectly fine" performance on his toaster. The thing is: some people consider five-minute loading times and three fps at maximum deltaT with bare-minimum details to be perfectly fine.
