K^2

  1. I would prefer each SoI to have its own coordinate system, with the attitude indicator simply changing on SoI change, just like the altitude indicator does now.
  2. Yes, a little. The actual semiconductor physics of it is complex - I spent a year on a condensed matter track before switching to particle theory, and most of it is still bizarre to me - but in terms of practical consequence, what you have control over is the clock frequency, voltage, and temperature of the die. The thing you actually care about for performance is the clock frequency. The number of instructions performed by the CPU, outside any time wasted waiting on memory or whatever, is a multiple of the CPU clock frequency. (You can also overclock memory and the GPU, with their own limitations and caveats.) The problem is that there is a limit to how fast transistor states can switch, so if you push the clock speed too high, you end up with errors. You can compensate for that by increasing the voltage applied - kind of the equivalent of throwing switches harder to get them to move faster to a new position - but there's a limit to that before you risk physical damage to the circuits. (Again, not unlike physical switches.) Finally, cooling the die helps to reduce thermal bounce, which both allows you to go a bit higher on the voltage and helps a given voltage settle the transistors faster. So if you can cool the die more, you technically can get away with even higher clock speeds, which is why you see all the records set with liquid nitrogen setups. That really only helps you squeeze out the last drops of performance, though, so just a good cooling setup is adequate for most uses.

But where it comes back to power consumption is that both increasing the voltage and increasing the clock speed increase the amount of power consumed by the chip. And 100% of the power consumed becomes heat, so even if you aren't trying to drop the temperature down, overclocking usually requires a good heat management setup just to prevent overheating.

And laptops are disadvantaged on both of these. You have a limited amount of energy available to power the CPU - certainly on batteries, but even when plugged into a wall, that portable power supply can only do so much - and you are limited on the thermals as well. A laptop just doesn't have room to fit a good cooling setup. Because of that, laptop CPUs are usually designed to be more power efficient and less performant, but they do usually come with multiple power modes. You need very little CPU power to browse the internet, and being able to throttle down is important for conserving battery power. What usually happens is that the clock speeds will go down, voltage might adjust down as well, and some cores might become disabled to make the batteries last as long as possible, with the CPU only kicking into high gear when you're playing a game or something. Even with a laptop, the manufacturers will usually stay on the safe side of keeping the thing reliable, so you can usually squeeze a bit more performance out of the system, especially if you're plugged into the wall. Whether it's easy to configure will depend on the motherboard and CPU used. I have been able to overclock some laptops a little bit to make games run better, but there is really not a lot of room to work with before you start running into heat throttling - that is, the CPU detecting that it's overheating and starting to drop the clock speeds. (If it continues, it may shut down entirely.) So if you want to tinker with performance, you really ought to be doing that on a custom-built desktop actually designed to take the punishment.
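To put rough numbers on the voltage/clock trade-off: dynamic switching power scales roughly as capacitance times voltage squared times frequency, so bumping both the clock and the voltage compounds quickly. A minimal sketch with made-up illustrative numbers - nothing here corresponds to a real CPU:
```python
# Rough dynamic power model: P ~ C * V^2 * f.
# c_eff, voltages, and clocks below are illustrative placeholders, not chip specs.
def dynamic_power(c_eff, voltage, frequency):
    return c_eff * voltage**2 * frequency

stock = dynamic_power(c_eff=1.7e-8, voltage=1.20, frequency=3.6e9)
overclocked = dynamic_power(c_eff=1.7e-8, voltage=1.35, frequency=4.4e9)

print(f"stock:       {stock:6.1f} W")
print(f"overclocked: {overclocked:6.1f} W ({overclocked / stock:.2f}x the heat to dissipate)")
```
A ~20% clock bump with the matching voltage bump ends up costing roughly 50% more power in this toy model, and all of it has to leave the laptop as heat.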
  3. Oh, man, layers upon layers. Optimization is a huge topic. Very broadly speaking, if we limit it to games, you are looking at one of the following categories.

1. Algorithm improvements. Sometimes you can just rearrange how operations are performed and make the code faster. This can literally mean reducing the number of instructions that have to be executed, reducing time lost to memory latency, or reducing time spent interacting with the OS or hardware. Usually, you don't see big wins in something like this after the game is released, but every once in a while there's a stupid mistake that somebody missed that makes a huge difference.

2. Threading optimizations. Sort of related to the above, but when you are working with multiple threads running in parallel, you are sometimes losing time on threads having to wait for each other. So by simply rearranging when operations are performed you can sometimes get huge performance wins. Again, usually you get that out of the way before the game is released, but sometimes improvements like that can come in after release. A particular case is if the code was originally optimized for a very specific core count (*cough*consoles*cough*) but was later re-optimized to cover a broader range of possible CPUs.

3. Removing unnecessary code. Things like writing logs can really slow performance down, and sometimes that's accidentally left in the final game. Finding and removing that stuff helps, and it's more common than you'd think.

4. Engine/library/driver improvements. Especially if you're using a 3rd party engine like Unreal or Unity, just because you're done working on the game doesn't mean they're done improving the engine. Sometimes it makes sense to switch to a new version of the engine, and sometimes it runs a lot better. (Also, sometimes worse, but that's what you get for relying on 3rd party software sometimes.) Likewise, an update to something like a graphics driver might fix something your game has been relying on, in which case it's a welcome surprise of better performance. It's rare, but it happens.

5. Hardware improvements. Just because your hardware didn't change doesn't mean the code wasn't updated to make better use of hardware capabilities you already have. This could be done with an explicit change to the game code or be picked up as part of the engine or library updates as in the previous point. In either case, you end up with your hardware better utilized, giving you better performance.

6. Code optimization. If the computer executed the code exactly the way it's written by the programmer, things would run at least ten times slower than they do. With a modern compiler, code is first converted into some sort of internal representation, with the compiler removing anything that's found to be unnecessary and simplifying some loops and function calls. Then the representation is converted into machine code for the particular target architecture, and the compiler removes redundancies, shifts things around to make better use of registers, and may even rearrange the order of instructions to make them fit better into the pipeline. When the CPU executes instructions, it will also convert them into microcode and potentially re-arrange them to improve execution. Now, programmers have very little control over any of that, if any. But updates to the compiler and associated libraries can result in better code produced by simply recompiling the project. Likewise, the way your CPU converts instructions into microcode is subject to firmware updates.

Some of the optimizations also have to be enabled, and again, you'd be surprised how often games ship with some of the code unoptimized. Obviously, if it was a global disable, somebody would notice, but a few unoptimized functions in a core loop can really slow the game down. There are tools that let you examine what's going on with the code. You can look at time spent on specific function calls, how busy individual cores on the CPU are, when various calls to OS and hardware are made, how much time has been spent waiting on other resources, and so on. But it's still a lot all at once, so learning how to improve your own code and how to look for problems in code handled by someone else is a huge part of being a games programmer.
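To make the profiling point concrete, here is a minimal sketch using Python's built-in cProfile. It's a toy compared to the profilers used on game codebases, and the functions are made up for illustration, but the workflow is the same: measure where the time actually goes before deciding what to change.
```python
# Minimal profiling sketch with Python's built-in cProfile.
# The functions are stand-ins; the point is measuring which calls eat the time.
import cProfile
import pstats


def expensive_math(n):
    # Deliberately naive: recomputes the same sum on every call.
    return sum(i * i for i in range(n))


def game_frame():
    # Pretend per-frame work that calls the same helper far too often.
    total = 0
    for _ in range(100):
        total += expensive_math(10_000)
    return total


profiler = cProfile.Profile()
profiler.enable()
for _ in range(10):
    game_frame()
profiler.disable()

# Sort by cumulative time to see which call chains dominate the run.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```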
  4. You can start losing surprisingly heavy elements if you dip close enough to the star early enough in the formation, through a combination of heat and gravity. I'll grant you that a planet in that situation is on its last legs before being swallowed outright, and it would take a miraculously well-timed boost from something else falling into the star, but out of all the stars in all the galaxies... Unless that something else got ejected or absorbed into the primary. Again, we have no idea what sort of dynamics that system could have been going through before settling down. A merging binary star system, for example, can produce an absolutely wild distribution of planets and compositions, which will otherwise look tame and normal a few billion years after the stars merged. I wouldn't expect to encounter these kinds of systems very often, but if we give the developers the fiat of picking an interesting system to exemplify in the game, none of this goes outside plausible. I don't disagree. And it'd be nice to get a nod to that somewhere in the game, pointing out how bizarre it is for a planet to have these properties. I mean, obviously that's why the developers want it in the game. I just don't think it's a bad reason to put a planet like that in the game, so long as it merely stretches the limits of what's physically possible for a real world - after the 10x scale correction, of course. If it was just absolutely impossible, I would be against it too. Merely very, very unlikely is acceptable if it's in the name of good gameplay and an interesting place to explore. I mean, if we start getting technical about it, Laythe is already a good example of a world that's exceptionally unlikely to happen. But it's a fun moon to have in the game and I'm glad it's there.
  5. There are several processes that can strip a planet of lighter elements and prevent it from accumulating helium. The two most likely are spending a lot of time very close to the primary and then migrating out to its current location, potentially by redirecting another planet into the star in the process, or a close encounter with a gas giant early in the formation. Both of these can result in a planet that's basically the core of a gas giant with a thin crust of rocky material on top and an insubstantial atmosphere for that heavy of a planet. Since a larger fraction of the planet's mass is due to its larger iron core, the average density is also a lot higher. There was a bit of a discussion of an exoplanet candidate with similar apparent characteristics a while back. Though perhaps still not quite as dense as Ovin, it shows that such events do happen in star system formation.
  6. You have to land on it without crashing first. At 4G, unless the atmosphere is as thick as Eve's, parachutes aren't going to do much good. And I strongly expect it to have a very weak atmosphere based on the images. That planet is going to be a graveyard of broken ships.
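For a sense of scale, terminal velocity under a canopy goes as sqrt(2mg / (rho * Cd * A)), so quadrupling gravity while cutting atmospheric density by an order of magnitude pushes touchdown speeds from gentle to fatal. A minimal sketch with made-up lander and atmosphere numbers, not actual KSP2 values:
```python
# Why parachutes struggle at 4g with a thin atmosphere.
# Terminal velocity under a canopy: v = sqrt(2*m*g / (rho*Cd*A)).
# All numbers below are illustrative placeholders.
from math import sqrt

def terminal_velocity(mass, g, rho, cd, area):
    return sqrt(2 * mass * g / (rho * cd * area))

m, cd, area = 5000.0, 1.5, 300.0                              # 5 t lander, big canopy
kerbin_like = terminal_velocity(m, g=9.81,     rho=1.2, cd=cd, area=area)
heavy_thin  = terminal_velocity(m, g=4 * 9.81, rho=0.1, cd=cd, area=area)

print(f"1g, sea-level-ish air: {kerbin_like:5.1f} m/s")       # ~13 m/s, survivable
print(f"4g, thin atmosphere:   {heavy_thin:5.1f} m/s")        # ~93 m/s, a crater
```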
  7. Are you just trying to build a chain? Or do you need at least one tower visible from every point on Earth? Even in the latter case, you can come up with a space-filling pattern that's more efficient. Think a branching-out, fractal-like structure. Simply filling all space with a grid of towers is not the optimal solution if you're trying to build fewer towers.
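For reference on the "visible from every point" version, line-of-sight range to a tower of height h is roughly sqrt(2*R*h), which sets how much area a single tower can cover before you even argue about the pattern. A minimal sketch with an assumed tower height; the count below ignores terrain, refraction, and overlap, so it's only a lower bound:
```python
# Rough line-of-sight coverage of a single tower: d ~ sqrt(2*R*h).
# Tower height is an assumption, not from the original discussion.
from math import sqrt, pi

R_EARTH = 6.371e6          # m
tower_height = 300.0       # m, assumed

horizon = sqrt(2 * R_EARTH * tower_height)    # ~62 km visibility radius
coverage = pi * horizon**2                    # area one tower can see
earth_surface = 4 * pi * R_EARTH**2

print(f"visibility radius: {horizon / 1e3:.0f} km")
print(f"lower bound on towers for full coverage: {earth_surface / coverage:,.0f}")
```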
  8. Wasn't there something about the Rask/Rusk situation being re-worked because there were problems with n-body? It might just be patched conics throughout the game.
  9. Oh, absolutely. What I'm talking about is stuff that's been built directly into the game executable. An actual example from a recent game: we had to generate convex hulls for some geo for a feature involving physics particles for FX. It was a handful of objects with low vertex counts, so it was cheap enough to do the build on demand. But because the library generating the convex hulls was under GPL (or a similar license), the shared code containing that library would not get compiled in for the release version. Which meant we had to move the generation of collision hulls to the builder, and now we had to track the dependencies to make sure that the collision hulls are built for all the right objects and not for all the objects, and then platform-specific dependencies became a factor, and it was all in all a couple of weeks of work just to get the build process for these sorted, whereas the initial proof-of-concept on-demand loading was done in an afternoon.

And yeah, in principle, the entire builder could be released as an open-source project with all of its libraries, separately from the game. (Edit: Or as you point out, still as a proprietary tool, but with open-source libraries that contain the actual GPL code delivered separately.) That's not going to happen for that particular game, because that's a whole other can of worms that the studio doesn't want to deal with, but it's not because of any legal restrictions.

My point, however, wasn't about situations where tools are intentionally released by the game developers to be used by modding communities. It was entirely about the situations where tools get left in because they weren't worth the hassle to remove. In the above example, if we could ship the game with that hull generator, we would have. There just was no reason to pull it out of the game other than licensing. And there are plenty of games out there with dev features left in the game, either disabled or simply unused. And when you are trying to ship a game that is easy to mod, sometimes all you have to do is absolutely nothing. If you already have build tools integrated into your executable, you just ship it as is.
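For what it's worth, the on-demand version of that hull generation really is only a few lines if you're allowed to link the library. A minimal sketch using scipy's ConvexHull (a Qhull wrapper) as a stand-in for whatever library an actual engine would use; the input geometry is just random points for illustration:
```python
# On-demand convex hull generation, sketched with scipy's Qhull wrapper.
# The "mesh" here is a random point cloud standing in for real FX geometry.
import numpy as np
from scipy.spatial import ConvexHull

def build_collision_hull(vertices):
    hull = ConvexHull(vertices)
    # hull.vertices: indices of input points that lie on the hull.
    # hull.simplices: triangular facets, indexed into the input array.
    return hull

mesh_vertices = np.random.default_rng(0).normal(size=(50, 3))
hull = build_collision_hull(mesh_vertices)
print(f"{len(mesh_vertices)} input verts -> {len(hull.vertices)} hull verts, "
      f"{len(hull.simplices)} triangles")
```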
  10. In a tl;dr way, yes. But if you want a bit more detail, as far as it relates to games and game data, streaming is the ability to load data from the hard drive directly into RAM without involving the CPU for anything beyond kicking off the process. And yeah, what that lets you do in practice is load assets when you need them, because you don't have to stop the game to handle the loading. The game engine notices that it's missing some asset data, issues a request, and just keeps ignoring any objects needing these assets until the assets are loaded. This can lead to pop-in if you are very aggressive about it, but it can also be entirely seamless if you have a bit of headroom on RAM and have different level-of-detail versions of your assets that can be loaded separately. A good use case in a game like KSP is not loading high resolution terrain textures until you are in the SoI of the planet that needs them. Possibly even loading only select biomes for the highest-quality versions. This might cause things to look blurry for a few frames when you are switching between ships, but you get a lot of flexibility out of it that usually makes it worth it.

One caveat relevant here is that any data you might want to stream has to be in the exact format you want to use in the game. Like, you wouldn't stream a JPEG image, because you have to decode JPEG before you can display it, and if your CPU has to spend time decoding the data, that often leads to choppy frame rates or freezes. Obviously, you can handle a few operations like that, and you usually have to, and a lot of hardware these days, notably all major consoles, supports some sort of compression in streaming. You can think of the data as sitting in ZIP files on the HDD/SSD and getting decompressed by a dedicated chip as it's being loaded, so again, no CPU use necessary. But otherwise, the data needs to be ready for use.

And this is where it gets a little tricky with modding. If the game expects data that has been processed, and you just added fresh, unprocessed data by installing a mod, something somewhere needs to figure out that this happened and prepare the data before it can be streamed. The way KSP mods work, some of that is done by the modding SDK you use to create modded parts, but a lot of data has to be built by the game. There are a whole bunch of ways to handle it. The simplest is to not worry about it until you actually try to load the data, and then do the processing if it's needed. Yes, that will definitely cause a freeze in the game, but if that only happens once per install of a particular mod, that's not a huge deal. A more complete solution is generating a dependency graph and checking it when the game starts. For a game like KSP2, that shouldn't be too hard, especially since you have a part list upfront and you can just check that all the data you need for every part is available when you start the game, without having to actually load everything.
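The "issue a request and keep ignoring the object" part can be sketched pretty compactly. This is a toy version that uses a background thread in place of the DMA/decompression hardware a real engine would lean on; the asset name and timings are made up:
```python
# Toy streaming pattern: return a miss, kick off the load, keep the game running.
import threading
import time

class StreamingAssets:
    def __init__(self):
        self._loaded = {}          # asset name -> data
        self._pending = set()
        self._lock = threading.Lock()

    def get(self, name):
        """Return the asset if resident, else start a load and return None
        so the caller can skip or placeholder it this frame."""
        with self._lock:
            if name in self._loaded:
                return self._loaded[name]
            if name not in self._pending:
                self._pending.add(name)
                threading.Thread(target=self._load, args=(name,), daemon=True).start()
        return None

    def _load(self, name):
        time.sleep(0.1)            # pretend I/O latency
        with self._lock:
            self._loaded[name] = f"<{name} bytes>"   # stand-in for the disk read
            self._pending.discard(name)

assets = StreamingAssets()
while assets.get("duna_biome_textures") is None:
    pass                           # frames keep running; the object just isn't drawn yet
print("streamed in:", assets.get("duna_biome_textures"))
```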
  11. It doesn't have to be a single blob for everything - there are a lot of implementation options here - but that's the gist, yeah. Most of that data takes up considerable CPU time to prepare and very little disk space to store. Just cache it and invalidate the cache if the source file was updated. Side note, I'd be shocked if there is no way to make the processing a lot faster either, but if you cache it, it's kind of a moot point. No reason to over-optimize something that will only happen once for a lot of players, and only once per mod install for almost everyone else.

I have encountered cases where build tools get intentionally cut from released game binaries because they contained some libraries with infectious GPL or similar licensing. Basically, if they were to release the game with these libraries, they'd have to publish source for the entire game. But I've also worked on projects where the tools are literally there in the published game, only with some editing features turned off, and the only reason people can't mod the game easily is because the source format and directory structure aren't publicly shared. In general, though, developers need a reason to disable these features, and because that's work, they usually leave all or huge chunks of them in. It's usually easier to ship a game with build tools than without.

For KSP2, baking part info in some shape or form is going to be necessary. And caching baked files is going to be necessary for streaming. I think they're pretty much forced to write the tools we'd want for much faster loading. And because the developers don't want to wait through unnecessary build times either, they're pretty much guaranteed to auto-detect source file changes and build stuff on demand. Basically, all of the features we want. All Intercept has to do to make the game easily moddable and reduce loading times for everyone is just not turn off that feature when they ship.

That said, if for some reason PD or Intercept go back on modding and try to lock us out of critical tools, I'm here to help build replacement tools. Unless PD forces Intercept to install some sort of cheat detection, I don't think we have to resort to anything that would be a forum violation to share. In the worst case scenario, I still think modding is going to be worth it, even if it ends up against the forum's ToS and all discussion of mods for KSP2 has to move elsewhere. I hope that doesn't happen, but I'm confident that modding will be a big part of KSP2 either way.
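A minimal sketch of that cache-and-invalidate pattern, using file modification times; the paths and the bake callable are placeholders, not anything from KSP2:
```python
# Rebuild the baked binary only when the source markup is newer than the cache.
import os
import pickle

def load_part(cfg_path, cache_path, bake):
    """Return baked part data, rebuilding the cache only when it's stale."""
    if (os.path.exists(cache_path)
            and os.path.getmtime(cache_path) >= os.path.getmtime(cfg_path)):
        with open(cache_path, "rb") as f:
            return pickle.load(f)          # fast path: cache is up to date
    baked = bake(cfg_path)                 # slow path: parse and process the text
    with open(cache_path, "wb") as f:
        pickle.dump(baked, f)
    return baked
```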
  12. Fair. I'm oversimplifying. "Compiling" might be a better term here. What I mean is taking the text data and turning it into the in-game representation - everything but the actual memory allocations, which, of course, have to take place at runtime. All of the operations in building the internal data only really need to be done once. Except that this is exactly how it works on pretty much every major game project, and people do edit text files that get built exactly once. Caching of binary builds is a standard practice in the games industry. If I edit a scene file, whether I did that with a dedicated tool or simply opened it up in notepad and changed markup data, the game will know that the binaries are out of date and rebuild them. Yes, when we ship games, we often only build the binary cache, and many games ship with the build tools stripped out, but there are enough games out there that maintain the dev-environment behavior of checking for new/modified markup files and will rebuild the parts of the cache that are out of date - often specifically to be more mod-friendly.
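And the "compile" step itself doesn't need to be anything exotic for the flat parts of the markup. A deliberately simplified sketch that turns key = value lines into a dict ready to be dumped into that binary cache; it handles nothing like the full nested cfg grammar, it's just to show the shape of the step:
```python
# Toy "bake" step: KSP-style "key = value" text into a plain dict.
def bake_part_cfg(text):
    part = {}
    for line in text.splitlines():
        line = line.strip()
        if "=" in line:
            key, _, value = line.partition("=")
            part[key.strip()] = value.strip()
    return part

sample = """
PART
{
    name = fuelTank_demo
    mass = 0.5
    cost = 800
}
"""
print(bake_part_cfg(sample))
# {'name': 'fuelTank_demo', 'mass': '0.5', 'cost': '800'}
```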
  13. You are showing "Part loading" as 70% of texture loading time. That's not at all insignificant. And all of it is from parsing configs for individual files. Yes, the main config parsing doesn't take long, but the parts loading is still a parsing issue. When loading a few kB of text takes 70% of what it takes to load all the multi-MB textures of the parts, there's a huge problem. And fixing this would reduce loading time by about a quarter based on what you're showing here. Which is not insignificant at all.
  14. Parsing of the part configs takes shockingly long, and that can definitely be fixed. But yeah, given the way KSP2 is being built, there is a lot of data that will have to be streamed already. They might as well recycle some of that tech for the mods.
  15. It's never that simple. It's possible to intentionally preserve some degree of backwards compatibility, but you are usually handicapping yourself by doing so. There are better places to spend developer resources. If the mods are good and useful, someone will port them over.