
Am I right?


TheDestroyer111


Am I right that KSP can allocate RAM to itself only once (on startup), but can give memory back to other applications as many times as they request it? If not, then why does the game develop a pulsating FPS drop, as if it had run out of RAM, after I open some stuff on the internet with KSP in the background, and why do the drops continue even after I close every application other than KSP? (If this problem doesn't exist on Windows, does it exist on a Mac? It happens mainly on my Mac; my other computer runs Windows, is very powerful, and very rarely drops below 60 FPS in KSP anyway.)


KSP can request additional memory from the operating system whenever needed, but before asking the OS for more RAM it first checks whether some of the memory it has already allocated holds data that is no longer needed, and reuses that memory if possible. This process is called "garbage collection", and it sadly causes a noticeable load spike.

How often this garbage collection happens depends on how much "new" memory the game requires over time. Game developers typically try to keep this amount as small as possible, and the KSP developers are currently working on exactly that issue. Another possible source of garbage is mods (if a mod's author doesn't keep garbage avoidance in mind...), so when comparing your Windows and Mac installations, make sure to use the same mods on both.
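To make that concrete, here is a minimal stand-alone C# sketch (illustrative only, not KSP or Unity code) that contrasts a loop allocating a fresh string on every pass with one reusing a single buffer, and counts how many gen-0 garbage collections each version triggers:

```csharp
using System;
using System.Text;

class GcChurnDemo
{
    static void Main()
    {
        int before = GC.CollectionCount(0);

        // Allocating a fresh object every iteration creates garbage
        // that the collector must periodically stop and clean up.
        for (int i = 0; i < 1_000_000; i++)
        {
            string s = "frame " + i;          // new string each pass
        }
        Console.WriteLine($"gen-0 collections while allocating: {GC.CollectionCount(0) - before}");

        before = GC.CollectionCount(0);

        // Reusing one buffer produces almost no garbage at all.
        var sb = new StringBuilder();
        for (int i = 0; i < 1_000_000; i++)
        {
            sb.Clear();
            sb.Append("frame ").Append(i);    // same buffer each pass
        }
        Console.WriteLine($"gen-0 collections while reusing:    {GC.CollectionCount(0) - before}");
    }
}
```

The exact counts vary by runtime and GC settings, but the allocating loop typically triggers many collections while the reusing loop triggers few or none; each of those collections is a small load spike of the kind described above.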

A long explanation of the whole process can be found in the Unity Engine's Manual page on Understanding Automatic Memory Management.

I would not expect KSP (or any other program) to care about the RAM usage of other applications, and other applications cannot demand memory back from KSP (not counting the operating system's out-of-memory killer, but if that were running, you'd notice it far more drastically than as stuttering...).

There is one thing, though, that could cause similar stuttering, and that's paging. It doesn't have much to do with KSP (except that KSP uses lots of RAM...), but simply with physical RAM getting full. Once all open applications together require more RAM than is physically available, the OS will start moving data that hasn't been accessed in a while from RAM to the HDD. If a process then wants to access that data, it has to be copied back from disk to RAM, and the disk access can hurt performance (when paging happens excessively, it's colloquially called "swapping hell" for a reason...). In your case, what could be happening is that KSP plus whatever you do in the background causes data to be paged out, and that data is later copied back to RAM in small portions whenever something accesses those portions.


From what I understand, programs don't allocate "physical memory", just memory. The OS manages memory in pages and swaps them between physical RAM and the page file (stored on the hard drive). When you start a program, it requests memory. The OS grants it, and if there is not enough physical memory, it moves pages of other, less-used programs out to the drive to free physical memory for the new program. But if a program doesn't actually use its memory, those pages will quickly be paged out as well.

On my last computer, on Win XP and also on Win 7, I had this issue. Sometimes my computer would slow down quite harshly. As the machine had 8 GB, I disabled the pagefile, and the slowdowns vanished. Only one game (I don't recall which one) had trouble because there was no pagefile (a later patch fixed that bug).

Now I'm on Win 10 with an SSD as the primary disk (OS, software and games), and I don't have any problem with the pagefile. Maybe it's due to the better chipset, to Win 10's memory management, to my 16 GB of RAM, or to the SSD. I don't know.


3 minutes ago, Warzouz said:

From what I understand, programs don't allocate "physical memory", just memory. [...]

You are aware that Windows will still page things out even with the pagefile disabled? The performance increase you saw with it disabled is likely because Windows no longer had to write those pages to disk. Personally, I've found it best to set the pagefile to a 0 MB minimum with a system-managed maximum; it's basically free protection against out-of-memory crashes.


1 hour ago, Red Iron Crown said:

You are aware that Windows will still page things out even with the pagefile disabled? [...] I've found it best to set the pagefile to a 0 MB minimum with a system-managed maximum [...]

Yes, but it won't write pages to the drive (which is very slow compared to RAM). Writing pages to an SSD may be much faster, though. Maybe that's why I don't tweak this option any more.

Even with 8 GB I never had out-of-memory problems.

As for your option of letting the system handle it (with a 0 minimum), I think that's the default on Windows 10 now. But back on Win XP (and maybe Win 7), even with that option, the page file would quickly grow even when there was free physical memory, as if the system were writing old pages to disk despite plenty of physical RAM being free.

To be clear: my XP computer, which was upgraded to Win 7 at some point (I don't remember when), had 4 GB at first. It was quite slow. I increased the RAM to 8 GB. But the computer was still quite slow and did a lot of hard drive access even when doing almost nothing, which bothered me. I noticed that bringing an old task back to the front also came with slowdowns and drive access, even though there was plenty of free physical RAM.

I turned off the pagefile. The slowdowns disappeared and disk access became much rarer.

Under Win 10 I haven't noticed the same issue, so I haven't tweaked this option. As I have an SSD, the issue may still be present but unnoticeable.

 

Edited by Warzouz

2 hours ago, Red Iron Crown said:

Personally, I've found it best to set the pagefile to a 0 MB minimum with a system-managed maximum; it's basically free protection against out-of-memory crashes.

A piece of advice that has circulated in the past: "set the minimum and maximum pagefile sizes to the same value."

The idea behind such advice:  Doing that forces the system to allocate a single, fixed-size file right when it's set up, which typically results in one contiguous file that isn't fragmented (scattered around the disk).  Since the size is fixed, all the reads and writes during normal operation only ever overwrite existing blocks of the file; they never deallocate or re-allocate blocks (the size doesn't change), so the file stays nice and unfragmented.  A variable-size pagefile, by contrast, could become fragmented over time, which was said to make a significant difference to performance.

However... it was a lot of years ago that I formed the knee-jerk habit of "always make a constant-size pagefile" for good OS hygiene.  There's been a lot of water under the bridge since then, and Windows has been through many major version upgrades, so I have no idea whether that bit of wisdom is still relevant or not.  I've also seen conflicting opinions on whether this ever actually helped all that much in the first place.  Also, with the advent of SSDs, which don't have the "seek times" that physical platters do, file fragmentation might not matter anymore.

So I could be completely out of date and incorrect on this.  :)  Just something to be aware of, on the off chance that it might still be relevant in some cases.

I tried a bit of rummaging around to see whether the above advice is still pertinent, but I'm not seeing anything conclusive one way or the other, and nothing super recent.  So, take the idea with however large a grain of salt you may prefer.  :)


Fragmentation of the pagefile never mattered much anyway, since it is written and read in 4 kB chunks at random locations within the file (occasionally stringing together enough of them to reach a few MB). The only issue with a large maximum size is that if the OS has to enlarge the pagefile, the system can slow to a crawl while resizing it on a spinning disk; but consider that it is doing this instead of crashing, and the extra space consumed is recovered on the next restart. There is basically no downside to allowing the system to grow the pagefile; it's free insurance against crashing.

Of course, if you are hitting the pagefile regularly then really you should attack the root of the problem and add more RAM.

 


1 hour ago, Red Iron Crown said:

Of course, if you are hitting the pagefile regularly then really you should attack the root of the problem and add more RAM.

It was more of an issue back when computers had a tiny fraction of the RAM they do now.  Hard drives got big before RAM did.  :)

(Thus my caveat about "maybe it's not relevant anymore")

 


File fragmentation was only ever an issue on traditional HDDs.  Fragmentation meant files were stored on the platters in such a way that the drive's read arm had to seek further out to reach the data, or worse, had to move inward for part of it and outward for the rest.  This increased seek time.

On an SSD, fragmentation causes no such issue because the drive is accessed directly, like RAM, and in fact fragmentation (of a sort) is beneficial to the life of the solid-state chips.  Solid-state storage endures a limited number of writes and erases (but not reads), so modern operating systems (or possibly the drive's own controller, I'm not entirely sure) automatically use a technique called wear leveling.  This spreads writes evenly across all storage locations of the drive, because writing to the same location over and over would quickly wear out that chip while the rest of the drive's chips were still intact, leaving the drive unusable because of the one exhausted chip.  Wear leveling is itself a form of purposeful, targeted fragmentation.  With wear leveling, your drive will probably last longer than you need it to, since you will probably want a bigger one eventually.  Furthermore, attempting to defragment an SSD causes extra wear and can thus actually harm the drive.

So, back to the pagefile.  Setting its size to be fixed doesn't matter on an SSD.  The drive and/or OS will wear-level anyway, so at some point you are actually using the whole drive for the pagefile; you just can't see it, because it happens behind the scenes and the OS presents it as the same storage space the whole time, even though it isn't.  Furthermore, the drives are so blazingly fast that the reads and writes take hardly any time at all.

There is some argument that a pagefile on an SSD is a bad thing because it is written and erased so many times compared to traditional file storage.  With wear leveling in place, though, it likely won't matter: the drive is going to outlast your need for it, or some other piece of circuitry will fail before the solid-state chips do.

Edited by Alshain

7 hours ago, TheDestroyer111 said:

Am I right that KSP can allocate RAM to itself only once (on startup), but can give memory back to other applications as many times as they request it? [...]

I don't know the internals of Unity, C# and KSP, but I would assume that memory is allocated each time an object, method or function requests it. While KSP is running, memory usage usually varies (and mostly not towards lower numbers), so some internals do request memory and free it again; the source of so much anger these days :-)

Example: on a scene change from flight view to the VAB, the editor might request memory for the representation of all the parts, and the graphics system, which has its own memory, might also request some for the buffers it uses for textures and such. When leaving the VAB, that memory is freed again (that's where we usually hold our breath). When entering vessel view, other objects need memory: props, the physics engine, etc. So memory usage "breathes" with the objects in use.

An FPS drop might have causes other than memory usage and may even lie outside of KSP. On Linux I didn't notice any drops even after hours of play.

 

 


15 minutes ago, Diche Bach said:

Glad to see this thread. My modded build gets pretty unplayable after a few hours, especially if I spend a couple of hours designing big complex ships in the VAB. Glad to hear that Squad is looking at ways to reduce RAM usage.

They aren't looking to reduce RAM usage; they are looking to reduce the amount of garbage collection and other overhead.  That can mean reduced RAM usage as a byproduct, but it can actually mean more RAM usage as well.  Programmers can choose when their program requests memory from the system; we call this "scope".  It used to be taught that variables should be declared (storage space requested) in as narrow a scope as possible, so that the moment the function or subroutine terminated, the memory would be freed.

However, in modern languages with garbage collection it can often be the other way around.  We have lots of memory but GC is slow, so the idea now is to declare frequently used variables (like loop counters, which are used all the time) at a higher scope so they never get handed over to the garbage collector; the flip side is that they stay allocated in memory persistently.  Therefore, more RAM usage.

It's kind of like the computer programming version of recycling.  Instead of making a new variable (or worse, instantiating a new object), we reset the existing one and use it again.  You do have to be careful, though, that two subroutines are not using the same variable at the same time.

People who moved from ANSI C and C++ to C# really have to fight the urge to declare variables at a narrow scope :wink:
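A minimal C# sketch of that recycling pattern (made-up names, not KSP code): the first method does it the old "narrow scope" way and feeds the garbage collector on every call, while the second hoists the list to class scope and recycles it, trading persistently allocated RAM for less garbage.

```csharp
using System.Collections.Generic;

class PeakCounter
{
    // Narrow scope: a new list is allocated on every call and becomes
    // garbage the moment the method returns.
    public int CountAllocating(IEnumerable<double> samples)
    {
        var peaks = new List<double>();        // fresh allocation each call
        foreach (double s in samples)
            if (s > 0) peaks.Add(s);
        return peaks.Count;
    }

    // Wider scope: one list lives as long as the object does and is
    // reset and reused, so repeated calls generate no new garbage,
    // at the cost of the list's memory staying allocated permanently.
    private readonly List<double> _peaks = new List<double>();

    public int CountReusing(IEnumerable<double> samples)
    {
        _peaks.Clear();                        // recycle instead of reallocate
        foreach (double s in samples)
            if (s > 0) _peaks.Add(s);
        return _peaks.Count;
    }
}
```

Note the caveat above: the reusing version is no longer safe to call from two places at the same time, because both would be working with the same _peaks list.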

Edited by Alshain

1 hour ago, Green Baron said:

I don't know the internals of Unity, C# and KSP, but I would assume that memory is allocated each time an object, method or function requests it.

Unity, C#, whatever -- in the end, it's either C, or a language someone wrote in C, or a language in a language in a language someone wrote in C.  At its very basics, memory is the same for everything.

Ignoring details, a program gets an undifferentiated blob of memory from point A to point B which it can use however it pleases.  It can also ask the operating system to extend the length between A and B.

C manages this block of memory kind of like you would manage blocks on a hard drive -- giving sections to objects and remembering which sections aren't used.  It doesn't give them back to the operating system when it's done with them; it just keeps a record for itself of which parts of its assigned blob aren't occupied.  This is called the heap.

In short:  Once a program gets memory, it doesn't ever give it back.  The OS can page it to disk if it has to, but can't force it to not exist.

If lots of tiny program objects are created and deleted frequently, the heap can fragment.  This has nothing to do with hard drive fragmentation except that it's the same kind of problem -- there might be thousands of 64-byte chunks scattered around, but to find a 65-byte one, the program might have to ask the OS for more.  It just looks like the program is using more and more memory for no reason, when the wasted memory is really the too-small-to-use filler between the objects still in use.  It's a major culprit of slowly bloating memory use, especially in high-level languages.
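For the curious, the same effect is easy to provoke from C#, because .NET's large object heap is swept but (by default) never compacted. A small sketch, assuming .NET Core 3.0 or later for GC.GetGCMemoryInfo():

```csharp
using System;
using System.Collections.Generic;

class LohFragmentationSketch
{
    static void Main()
    {
        // Arrays over ~85,000 bytes land on the large object heap (LOH),
        // which the collector frees but, by default, does not compact.
        var blocks = new List<byte[]>();
        for (int i = 0; i < 1000; i++)
            blocks.Add(new byte[100_000]);

        // Free every second block, leaving ~100 kB holes between survivors.
        for (int i = 0; i < blocks.Count; i += 2)
            blocks[i] = null;
        GC.Collect();

        // FragmentedBytes counts exactly the "too-small-to-use filler"
        // described above: memory the process still holds, but that no
        // live object occupies.
        Console.WriteLine($"fragmented bytes: {GC.GetGCMemoryInfo().FragmentedBytes:N0}");
        GC.KeepAlive(blocks);
    }
}
```

On a typical run this reports tens of megabytes of holes, even though the surviving objects together need only half the space the process is holding on to.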

Edited by Corona688

The reference compiler for C# (called 'Roslyn') is written in C# and bootstrapped from previous versions of itself.  So no, it's not written in C; the project is open source.  The earliest version was likely compiled with a compiler written in C or C++, but bear in mind that by the time a compiler is used, it is machine language (that's what a compiler produces, after all).  So the language of the compiler that originally compiled a compiler for a new language is irrelevant: it's all machine language in the end.

(if you completely understood that last sentence, kudos to you!)

 

Edited by Alshain

7 minutes ago, Alshain said:

The reference compiler for C# (called 'Roslyn') is written in C# and bootstrapped from previous versions of itself.  So, no it's not written in C.

It still deals with the same memory interface, a "block" from A to B, probably managed by its own private heap.  That's not a programming language artifact, it's just the way things work.


12 minutes ago, Corona688 said:

It still deals with the same memory interface, a "block" from A to B, probably managed by its own private heap.  That's not a programming language artifact, it's just the way things work.

It is.  However, the great power of memory-managed, garbage-collected languages is that you don't have to do it yourself.  That makes it hard to fragment the heap, and memory is released to the system.  It is CPU intensive, however.  There are also exceptions the garbage collector cannot touch that can cause heap fragmentation; the only one I know offhand is the read and write buffers on a socket.  Typically you want to pool socket objects rather than make new ones.
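The reason sockets are the classic example is that the runtime pins a buffer while I/O is in flight, so the collector cannot move it, and lots of short-lived pinned buffers punch holes in the heap. A minimal sketch of the pooling idea (hypothetical names, not any real library's API; newer runtimes ship System.Buffers.ArrayPool&lt;T&gt; for exactly this):

```csharp
using System.Collections.Concurrent;

// Hand out preallocated buffers and take them back, instead of
// new-ing a byte[] for every socket operation. Keeping the set of
// (potentially pinned) buffers small and stable avoids scattering
// fresh pinned arrays across the heap.
class BufferPool
{
    private readonly ConcurrentBag<byte[]> _pool = new ConcurrentBag<byte[]>();
    private readonly int _bufferSize;

    public BufferPool(int bufferSize, int prewarm)
    {
        _bufferSize = bufferSize;
        for (int i = 0; i < prewarm; i++)
            _pool.Add(new byte[bufferSize]);
    }

    // Reuse a pooled buffer when one is available; allocate only
    // when the pool runs dry.
    public byte[] Rent() => _pool.TryTake(out var buf) ? buf : new byte[_bufferSize];

    public void Return(byte[] buf) => _pool.Add(buf);
}
```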

Edited by Alshain

21 minutes ago, Alshain said:

It is.  However, the great power of memory-managed, garbage-collected languages is that you don't have to do it yourself.  That makes it hard to fragment the heap

Garbage-collected languages are among the worst offenders; Java, for instance, is infamous for it.  Garbage collection is great for the programmer, but it means lots of tiny allocations and deallocations for simple tasks.  Fragmentation was inevitable anyway, but it happens faster.

Tracking what's in use isn't sufficient to avoid fragmentation; the collector has to actually move memory that's in use.  If it does that, I can certainly believe it'd be CPU intensive!

Quote

and memory is released to the system.

This I really, really doubt, because in most circumstances it shouldn't.  Frequently allocating and freeing memory at the segment level encourages fragmentation in the OS itself, and is generally a waste of time anyway: if the program is still busy doing things, it's going to need the memory right back.  That's why the heap segment is used this way.

It probably can't, either, unless it suddenly finds itself with a few hundred surplus megabytes all stuck together at the very end of the heap segment.

Edited by Corona688

Memory is released to (and requested from) the kernel, which has the last word over the resources and the algorithms to manage them. And yeah, in C++ one is strongly encouraged to keep scopes as small as possible :-)

 

I wasn't aware that the C# compiler also makes machine code; I thought it produced some sort of p-code that has to be interpreted. Interesting... :-)

 


Aaaaannddd another question got turned into a crazy discussion about everything and nothing xD

So, do I understand correctly that my case of pulsing FPS drops is caused by paging, meaning that when RAM runs out, some data is moved to temporary disk space, and lag spikes appear when KSP has to move that data from disk back into normal RAM?


7 minutes ago, TheDestroyer111 said:

Aaaaannddd another question got turned into a crazy discussion about everything and nothing xD

Sorry, but you opened with a very technical question :wink:

9 minutes ago, TheDestroyer111 said:

So, do I understand correctly that my case of pulsing FPS drops is caused by paging, meaning that when RAM runs out, some data is moved to temporary disk space, and lag spikes appear when KSP has to move that data from disk back into normal RAM?

It's one possibility; it honestly could be a lot of things.  Visual effects like engine smoke can cause lag spikes as well, and part count can cause physics lag.  What are your system specs?

 


5 minutes ago, Alshain said:

Sorry, but you opened with a very technical question :wink:

It's one possibility; it honestly could be a lot of things.  Visual effects like engine smoke can cause lag spikes as well, and part count can cause physics lag.  What are your system specs?

 

Just like Alshain typically does, he didn't read the OP at all. Read the OP.

As a reminder, the specific type of lag I mean pulses between 5 and 15 FPS once every few seconds (AFAIK this kind of pulsing is typically caused by running out of RAM). It never appears right after starting the game; it appears when I open things in programs other than KSP. The thing is, this lag does NOT stop when I close all the other programs after it has appeared.


18 minutes ago, Green Baron said:

I wasn't aware that the C# compiler also makes machine code; I thought it produced some sort of p-code that has to be interpreted. Interesting... :-)

It can do both.  The C# compiler normally produces intermediate code (CIL) that a runtime such as .NET or Mono turns into machine code just in time, and ahead-of-time compilation to native code exists as well; the C# specification does not require any specific runtime.


3 hours ago, Alshain said:

They aren't looking to reduce RAM usage; they are looking to reduce the amount of garbage collection and other overhead. [...] People who moved from ANSI C and C++ to C# really have to fight the urge to declare variables at a narrow scope :wink:

Ah, so C# does suck just as bad as Java then eh!? :sticktongue:

I don't pretend to have mastered pointers and data structures yet, but I'd much rather struggle to learn and use them effectively, i.e., write in C++.

Skyrim, Stellaris, Jagged Alliance 2, Fallout 4: all built on engines written in C++, and all perform well with enormous numbers of mods (120 to 200 separate plugins, as long as they aren't mutually exclusive). KSP, not so much... indeed, KSP seems to handle heavily modded builds worse than Minecraft does.

So am I being too simplistic or not quite getting anything right here?


1 minute ago, TheDestroyer111 said:

Just like Alshain typically does, he didn't read the OP at all. Read the OP.

As a reminder, the specific type of lag I mean pulses between 5 and 15 FPS once every few seconds [...]

Ok, well, everything I mentioned could cause that, but since you are being rude, you are on your own.

