Everything posted by Starman4308

  1. I'd definitely prefer no advanced alien life. It would be hard to do properly, and if done incorrectly it would take away a bit from the appeal IMO. Microscopic life would be great to see in places, and you could definitely do the simpler forms of multicellular life (e.g. sponges, plants, tube worms at deep-sea vents) without too much trouble. I'd definitely like to see it in places which might prod people into learning something about Earth's history or exobiological concepts.
  2. Maybe now those compute hours can go to actual scientific projects rather than to a search premised on aliens screaming constantly in all directions. I'm not sorry to see SETI@Home go: it's a profound waste of computing resources and electricity when we have other issues, like global warming, to worry about. The modest radio emissions of a sapient species get drowned out pretty quickly by background radio noise; unless they're right next door or already know we're here, we wouldn't pick it up.
  3. I'd be curious to know the answer. I suspect rockets have several effects distinguishing them from aircraft: 1) they put out an awful lot of propellant in a short time, and probably dump out much more water per second than comparable jetliners; 2) they swiftly leave aircraft in the dust, and at 1 km/s, even if you're dumping more water per second, it's spread over a longer distance; 3) at a high enough altitude, I suspect the contrails simply disappear, as there ceases to be enough atmosphere to meaningfully condense the water.
  4. The intake air drain error has indeed been fixed in 1.9.1. I'm unsure about the KAL issue, but that's a very niche thing in any event.
  5. As mentioned: it doesn't matter. So long as you're unable to break the rate-limiting step (rigid-body physics on the biggest vessel in physics range) into smaller chunks, the job scheduler isn't going to help you. If rigid-body physics takes 90% of the main thread's time, then you can at best obtain an 11% (10/9) speedup by pushing everything else off onto other threads; it doesn't matter how many cores you throw at it, you're limited by how fast you can get through the bottleneck. I'll also point out that vectorization is a less-flexible variant of parallelization: if you can't split the work over multiple threads, you're not going to be able to vectorize it either. Much like with thread-based parallelization, there is no such thing as a magic wand of vectorization. All those speedups you see developers boasting about were obtained on calculations amenable to parallelization and vectorization, problems where the work is loosely coupled and the data largely independent. Some problems just don't work that way, and for those the only solution is to put them on the fastest CPU core around.
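To put a number on that ceiling, here's a minimal Amdahl's-law sketch. The 90% figure is the hypothetical from the post above, not a measured KSP profile.

```python
# Amdahl's law: overall speedup when only part of the frame parallelizes.
# serial_fraction = share of frame time stuck in the non-parallel rigid-body
# step; n_cores = how many cores the remaining work is spread across.
def amdahl_speedup(serial_fraction: float, n_cores: int) -> float:
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_cores)

for cores in (2, 4, 8, 64):
    print(f"{cores:2d} cores -> {amdahl_speedup(0.9, cores):.3f}x speedup")
# Never exceeds 1/0.9 ~= 1.11x, no matter how many cores you add.
```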
  6. Technically speaking, you're falling into the same trap as the OP: assuming your opinion is representative of everybody's. Assuming I substitute "early access" for "quick unplayable garbage", I can definitely point to the OP as a counterexample. Granted, he/she likely doesn't want quick unplayable garbage either; he/she just has a different idea of how we should get there. To actually get a good idea, you'd need to conduct a poll, and even then you're going to be biased by where the poll is hosted. Post the poll here? You're biased towards the opinions of forumgoers. What I can say is that I'm one of those people who's wary of early access these days, and KSP 2 is not, IMO, a good fit for EA: the general ideas are already there ("KSP, but better"), it just needs good execution, and a lot of player feedback from KSP 1 is already going into KSP 2.
  7. There is no such thing as a magic wand of parallelization. Having the world's best job scheduler doesn't matter a whit when there's one big chunky non-parallelizable thread bottlenecking the whole thing. All that does for you is let you balance the load across CPU cores... after you've figured out how to break up the rate-limiting step into separable chunks. A job scheduler just manages parallel tasks, it doesn't help you figure out how to break up a problem into multiple independent chunks that can be executed separately. It would be like expecting multiple bakers to bake a cake more quickly. Sure, some things can be done in parallel (ingredient prep, etc), but at the end of the day, you still need to stick it in the oven, and no number of bakers is going to make that happen any faster. LATE EDIT: I don't mean to sound negative about DOTS or Unity's new job system. I'm sure they're good at what they do, but what they do is not "make KSP faster". KSP has for a long time been bottlenecked by the rigid-body physics involved, something few people in game design have paid attention to because very few video games have to deal with arbitrary player-constructed multi-hundred-part assemblies, and the nature of the problem does not lend itself to easy optimization or parallelization.
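As a toy illustration of the point, with made-up timings rather than an actual KSP frame breakdown: hand a thread pool one big unsplittable job plus a handful of small ones, and the frame still takes as long as the big job.

```python
# A stand-in "job scheduler": a thread pool running whatever tasks it is given.
# If one task (the coupled rigid-body solve) can't be split, the pool merely
# overlaps the small side jobs with it; total frame time stays ~0.9 s.
import time
from concurrent.futures import ThreadPoolExecutor

def rigid_body_solve():
    time.sleep(0.9)          # the unsplittable bottleneck (simulated)

def side_job():
    time.sleep(0.05)         # aero, thermal, GFX prep, etc. (simulated)

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    tasks = [pool.submit(rigid_body_solve)] + [pool.submit(side_job) for _ in range(4)]
    for t in tasks:
        t.result()
print(f"frame took {time.perf_counter() - start:.2f} s")   # ~0.90, not 0.90 / 8
```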
  8. KSP does not use GPU-accelerated PhysX for its rigid-body physics (Unity's built-in physics is CPU-side PhysX), and GPU physics would be poorly suited to it anyway. GPUs are good at lots of relatively simple, independent or loosely-coupled calculations (e.g. particle GFX), but the sort of rigid-body physics we're talking about is tightly coupled and difficult to parallelize, which makes it much better suited to the CPU than the GPU. GPU-accelerated PhysX is also, you know, proprietary NVIDIA software, and isn't about to run on AMD or Intel GPUs.
  9. I'm under the impression that the style of rigid-body physics KSP employs is particularly nasty if you want to parallelize and optimize it, as it requires self-consistent solution of a series of mutually interacting constraints. What I suspect KSP 2 will be doing is much smarter than trying to optimize the rigid-body physics engine: I suspect they're going in for some form of parts welding. If it keeps the total rigid-body-element count in the 100-200 range, that should keep the rigid-body physics from being the limiting factor for KSP 2 performance. From there, you can keep the per-vessel physics single-threaded, reducing the chances of bugs and the difficulty of implementation. Parallelization efforts can be focused on more loosely coupled items like "have each vessel's physics on a separate thread" and "have a thread for the on-rails vessels".
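A minimal sketch of that division of labor, with entirely hypothetical names and data structures; this is my guess at a sensible split, not anything confirmed about KSP 2's architecture.

```python
# Keep each loaded vessel's tightly coupled rigid-body solve on a single
# worker, and batch the cheap on-rails propagation separately.
from concurrent.futures import ProcessPoolExecutor

def step_vessel_physics(vessel_state: dict, dt: float = 0.02) -> dict:
    # placeholder for a single-threaded rigid-body solve for ONE vessel
    vessel_state["t"] = vessel_state.get("t", 0.0) + dt
    return vessel_state

def step_on_rails(orbits: list, dt: float = 0.02) -> list:
    # placeholder for Keplerian propagation of unloaded vessels (trivially cheap)
    return orbits

def step_world(loaded_vessels: list, on_rails_orbits: list) -> tuple:
    with ProcessPoolExecutor() as pool:
        # vessels are only loosely coupled to each other, so this parallelizes cleanly
        new_states = list(pool.map(step_vessel_physics, loaded_vessels))
    return new_states, step_on_rails(on_rails_orbits)

if __name__ == "__main__":
    print(step_world([{"name": "Station"}, {"name": "Lander"}], ["orbit1", "orbit2"]))
```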
  10. You are quite incorrect in saying that we "all" want to play it soon. I'd personally rather wait and play a decently polished game. KSP 1 suffers badly from having gone into early access too soon, and the scars of that are still evident in its mess of spaghetti code and incoherent design.
  11. In my field specifically, Particle Mesh Ewald is an excellent O(n*ln(n)) approximation of long-range electrostatics in molecular simulation, and I know octree approaches are common in astronomical simulations as well. They're approximate, but in many circumstances you can approximate N-body problems in O(n*ln(n)) or even O(n) time. The specific problem with KSP-style rigid-body physics is a combination of self-consistency (making parallelization hard) and nobody having worked out an easy approximation for the system of constraints involved. There might be academic work on similar systems, but it sure hasn't percolated down into game engines.
  12. The primary limiting factor in how KSP looks is the art assets, and that suffers from the same problem that has plagued KSP for a long time: bandages over bandages over bandages over placeholders created by people with no expertise in game development. From what I've seen, I'm inclined to believe that Unity games can look good with talented artists at the helm, and if KSP 2 doesn't look that great, it's likely because their budget didn't stretch to hiring lots of excellent artists. That's been a substantial divide between indie and AAA for a while now: while indie games tend to have more freedom to be creative, AAA studios have the budget to hire legions of top-tier artists and the graphics programmers to bring their work to the screen with minimal GPU effort. A low-budget game has to get by on creativity, because you're never going to out-polish Call of Duty 5770912: Incrementally Prettier Explosions. The primary limiting factor in how KSP performs is the rigid-body physics engine. The best publicly available rigid-body physics engine is likely tucked away in some scientific or engineering package, and certainly not in a game engine. While I still haven't gotten confirmation, I'm given to suspect it's an O(n^2) problem with current approaches, which means it's always going to be a monster. Even ignoring the fact that rigid-body physics is hard to parallelize, O(n^2) is a bad regime to be in: quadrupling your performance only doubles the number of parts you can add at the same calculation speed. What KSP 2 is likely doing is sidestepping the problem with either static or dynamic part welding, treating assemblages of parts as a single "part" in the rigid-body physics engine. If the total number of rigid-body elements is kept under ~100, even a sloppy, inefficient rigid-body physics engine is likely able to keep pace. Now, I suspect Unity's rigid-body physics engine is reasonably good, but this goes to show one thing: working around the limitations of an engine can sometimes be far more productive than trying to optimize or switch engines. KSP 1 hasn't done that, likely out of a combination of difficulty (see: bandages on top of bandages leading to fragile spaghetti code) and the probability of breaking savegames. KSP 2 has the opportunity and funding to take a step back and say "we're going to do this right, and that means dealing with this rigid-body physics mess the right way from the start". Switching game engines won't fix the rigid-body physics problem. At best you get marginal improvements (which I find unlikely: almost nobody in game design has to deal with large rigid-body assemblies). At worst, you discover that Unreal has a less efficient rigid-body engine... because, much like the developers of Unity, they never saw it as a priority; who would be nuts enough to design a game around assemblies of hundreds of linked parts? EDIT: You know, there's a real parallel to scientific computing here. "Man, these physics are brutal, and my senior Ph.D. student needs to write his/her thesis soon. Can we make some approximations to make it run faster?"
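To make the O(n^2) arithmetic concrete, here's a tiny sketch; the quadratic cost model is my stated suspicion above, not a measured profile of any engine.

```python
# If the per-frame cost of the constraint solve scales as n^2 in part count,
# a 4x faster solver only buys 2x the parts at the same frame time, and
# welding 400 parts down to ~100 rigid-body elements is a 16x saving.
def relative_cost(n_parts: int, n_baseline: int = 100) -> float:
    return (n_parts / n_baseline) ** 2

for n in (100, 200, 400, 800):
    print(f"{n:4d} parts -> {relative_cost(n):5.1f}x the cost of 100 parts")
# 200 parts -> 4x, 400 parts -> 16x, 800 parts -> 64x.
```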
  13. I'd also appreciate an open beta, though there's one substantial problem with that: DRM, or the lack thereof. While I haven't looked extensively, most of the open betas I'm familiar with are for online-only games, which simply don't work if you lack the appropriate client. KSP 2, though, will be substantially single-player with no DRM, making it more likely people just keep playing their open-beta copies. That said, if a studio can afford to do so, I suspect it's much healthier overall for it to work with a limited number of testers (such as a "closed" early access/beta) and produce a good rendition of the game from the moment of public release, rather than contend with huge amounts of feedback "noise" pushing short-term priorities. With a closed beta, you can justify taking several months to refactor core elements of the game and substantially change how it plays. The moment you go public, even early-access public, you start to get serious bad rep for breaking the game for existing customers, and that's a trap KSP 1 fell into: an inability to do more than tweak the game, because anything bigger would break existing saves and playstyles.
  14. I haven't read the complete thread, but I Have Opinions (TM). First, I largely concur with the Anti-If Campaign: https://francescocirillo.com/pages/anti-if-campaign. Branching makes things difficult, and ideally there should be a minimum of code branches. Second, there are plenty of ways to incorporate options without explicit if/else branching in the code. Let's take notional life support: one could implement an "if life support enabled, draw LS supplies". Or... one could have the "disable life support" button merely set LS consumption rates to zero. Configure constants, not behavior, and you're much less prone to weird bugs. There will be a few places where you need either branching or polymorphism (e.g. disabling CommNet occlusion might use a "DummyOcclusionChecker" implementation of an "OcclusionChecker" interface). Third, I see little problem with trying to balance for one "intended" set of options while giving players the ability to go up and down in difficulty from there. Finally, all this should be taken on a case-by-case basis, trying to balance game flexibility with the chance of odd side effects. Basic good software practices apply: if you're six hours in, have modifications in a dozen different files, and still aren't finished implementing a feature, think very hard about whether the feature is worth it.
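A rough sketch of both of those ideas; the names (LifeSupport, DummyOcclusionChecker, etc.) are purely illustrative, not actual KSP code.

```python
# "Configure constants, not behavior": turning life support off just zeroes
# the consumption rate, so the update loop needs no 'if life_support_enabled'.
class LifeSupport:
    def __init__(self, kerbals: int, rate_per_kerbal: float):
        self.kerbals = kerbals
        self.rate = rate_per_kerbal          # 0.0 when the option is disabled

    def consume(self, supplies: float, dt: float) -> float:
        return max(0.0, supplies - self.kerbals * self.rate * dt)

ls_on = LifeSupport(kerbals=3, rate_per_kerbal=0.001)
ls_off = LifeSupport(kerbals=3, rate_per_kerbal=0.0)     # the "disabled" setting

# Null-object polymorphism for the few spots that genuinely differ in behavior:
# disabling CommNet occlusion swaps in a checker that always answers "clear".
class BodyOcclusionChecker:
    def __init__(self, blocked_pairs: set):
        self.blocked_pairs = blocked_pairs   # stand-in for real geometry tests

    def is_occluded(self, a, b) -> bool:
        return (a, b) in self.blocked_pairs

class DummyOcclusionChecker:
    def is_occluded(self, a, b) -> bool:
        return False                         # occlusion "off", no if/else upstream
```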
  15. I do hope nobody was hurt! It's some nice footage, even from a dashcam. <Slight soapboxing here> This helps illustrate why we should put more effort into detecting Earth-crossing asteroids, particularly those inside Earth's orbit (where it's hard for amateurs to detect them). Not only are some events (e.g. Chelyabinsk) dangerous and utterly preventable, but the less dangerous events could easily serve as a tourist attraction; it would've been neat to have good cameras set up and ready to go to catch this one.
  16. The primary alternative is ISRU: In-Situ Resource Utilization, ranging from fairly plausible concepts (the Sabatier reaction, CO reverse fuel cells) to not-yet-known-to-be-feasible ones (the Bussard ramjet). Other than that, you're stuck relying on external magnetic fields (very weak until you get very close to large objects), photon drives (which require inordinate amounts of power), or stored reaction mass (which loops right back around to the rocket equation).
  17. I'm not sure if RSS in particular is updated for 1.9. I do know that the full RO/RP-1 package is currently best-supported on 1.7.3, with 1.8.x support being relatively experimental.
  18. Realism on that scale is a very niche thing, and I suspect very few would buy it... and yet I personally haven't played KSP without RO/RP-1 for years. It would also make the learning cliff even more vertical and introduce even more code complexity; it wouldn't work out very well.
  19. It would warm up; the magnitude depends on what it interacts with. If nothing else, the cosmic microwave background radiation should warm it up to ~2.7 K. That would not in any noticeable way affect the results of it impacting Earth (assuming it even manages to interact).
  20. One possible issue is that while 24 FPS works decently for some types of media, video games and other real-time 3D-rendered scenes work poorly at 24 FPS. With physical cameras, objects in motion naturally have a bit of blur to them because the shutter is open for a finite time. With video games, the effective shutter speed is essentially zero: no blur is baked in, so the illusion of movement can break down at low framerates. While I don't have experience in video rendering, I suspect the right thing to do is to capture at a higher framerate and then post-process, blending several captured KSP frames into each output video frame.
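Something like this naive frame-averaging pass is what I have in mind. It's only a sketch: it assumes the capture framerate is an integer multiple of the output framerate and that the frames are already loaded as arrays.

```python
import numpy as np

def blend_frames(frames: np.ndarray, factor: int) -> np.ndarray:
    """Average every `factor` consecutive frames into one output frame.

    frames: (N, H, W, 3) uint8 array; returns (N // factor, H, W, 3) uint8.
    """
    n = (frames.shape[0] // factor) * factor
    grouped = frames[:n].reshape(-1, factor, *frames.shape[1:]).astype(np.float32)
    return grouped.mean(axis=1).astype(np.uint8)

# e.g. 120 captured frames at 120 FPS -> 24 motion-blurred frames at 24 FPS
captured = np.random.randint(0, 256, (120, 180, 320, 3), dtype=np.uint8)
print(blend_frames(captured, factor=5).shape)   # (24, 180, 320, 3)
```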
  21. By "interaction unlikely"... how unlikely? A 10 meter sphere contains an awful lot of protons, electrons, and neutrons, so even if the probability of one particle interacting with the Earth is low, there's an awful lot of particles to be doing the interacting.
  22. According to Atomic Rockets, you need 2.9*10^32 J of energy to completely destroy Earth: specifically, to reduce it to bits of gravel and move the pieces to infinity. I'll assume a density of 2500 kg/m^3; with a 5 m radius, this means a mass of about 1,309,000 kg. Using the non-relativistic kinetic energy equation (KE = 1/2 * m * v^2), it would need to be traveling at 2.1*10^13 m/s, or about 70,000c. Substantially less than what the OP posted. I'm substantially less familiar with relativistic physics, so I may have done the math incorrectly, but I think the required velocity using the relativistic kinetic energy equation works out to about c*(1 - 8*10^-20), i.e. indistinguishable from c for any practical purpose. All of this, of course, assumes perfect conversion of the impactor's kinetic energy into overcoming Earth's binding energy, but even given some inefficiencies, I suspect New York City is toast.
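For anyone who wants to check the arithmetic, here it is spelled out; the 2.9*10^32 J figure is the Atomic Rockets number quoted above, and the density is my assumed 2500 kg/m^3.

```python
import math

E = 2.9e32                  # J, energy to disperse Earth (Atomic Rockets figure)
density, radius = 2500.0, 5.0
c = 2.998e8                 # m/s

mass = density * (4.0 / 3.0) * math.pi * radius**3        # ~1.31e6 kg

# Non-relativistic: KE = 1/2 m v^2  ->  v = sqrt(2 KE / m)
v_newton = math.sqrt(2.0 * E / mass)
print(f"Newtonian: v = {v_newton:.2e} m/s = {v_newton / c:,.0f} c")   # ~70,000 c

# Relativistic: KE = (gamma - 1) m c^2  ->  gamma = 1 + KE / (m c^2)
gamma = 1.0 + E / (mass * c**2)                            # ~2.5e9
# For gamma >> 1, 1 - v/c ~= 1 / (2 gamma^2); too small to compute directly in float64
print(f"Relativistic: gamma = {gamma:.2e}, v = c * (1 - {1.0 / (2.0 * gamma**2):.1e})")
```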
  23. I'm wondering if what you're seeing is actually just a numerical error caused by discrete time steps. Let's say you have a rocket engine with an exhaust velocity of 3000 m/s and a full:empty ratio of 2:1; the rocket equation gives you 2,079 m/s out of that. Now, let's say you have a mass driver that accelerates half of your ship to 3000 m/s: you impart -3000 m/s to that chunk, and since the masses are equal, you impart +3000 m/s to your ship. You've ejected the same mass out of your ship at the same relative velocity... but gained nearly 50% more dV. All else equal, throwing large chunks of mass out the back is preferable to a steady stream, because with a rocket, some of your impulse is "wasted" accelerating the propellant about to go through the nozzle, whereas with a mass driver it all comes out at once. How does this apply to drain valves with crazy drain rates? Well, if you drain your entire tank in half a second at the default 0.04 sec/tick, you are throwing the mass of your tank out the back in 12.5 discrete chunks, like a mass driver, rather than as a steady stream. There's a substantial time-discretization error that gets worse the faster you drain your tanks.
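A quick simulation of that discretization effect, under the idealized assumption above that each discrete chunk is ejected at 3,000 m/s relative to the ship's pre-ejection velocity (how closely this matches what the game actually does per tick is my guess):

```python
import math

v_e = 3000.0              # m/s, velocity imparted to each ejected chunk
m_dry, m_prop = 1.0, 1.0  # full:empty = 2:1

def delta_v(n_chunks: int) -> float:
    """Drain the propellant in n equal chunks, conserving momentum each time."""
    mass, dv = m_dry + m_prop, 0.0
    chunk = m_prop / n_chunks
    for _ in range(n_chunks):
        dv += v_e * chunk / (mass - chunk)   # kick to the ship from one chunk
        mass -= chunk
    return dv

for n in (1, 5, 12, 100, 10_000):
    print(f"{n:6d} chunks: {delta_v(n):7.1f} m/s")
print(f"rocket equation (continuous): {v_e * math.log(2):.1f} m/s")   # ~2079 m/s
```

With a single chunk you get the full 3,000 m/s; by a dozen or so chunks it's already within a few percent of the rocket-equation figure, which fits the "worse the faster you drain" picture.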
  24. Please read and comprehend my posts. I have at no point stated that drain valves operating on intake air is a good thing; on the contrary, I have repeatedly expressed that it is a bug and needs to be fixed. Other than that bug, though, I don't see any huge issues. The drain-valves-to-orbit thing has clearly been primarily a result of intake air shenanigans that were unintentional on Squad's part from the start.