Everything posted by K^2
-
Concern about the level of terrain detail in KSP 2
K^2 replied to wpetula's topic in Prelaunch KSP2 Discussion
I don't think performance is the problem. An absolute potato will not be able to run the game's physics at anything like a reasonable framerate anyway, and since people tend to upgrade graphics cards before CPUs, and the type of procedural placement Intercept uses typically runs as a compute shader on the GPU, I can't imagine this being a problem. For a concrete example, we've been running large open-world jungles on a PS4 Pro at a view distance that would be totally fine for KSP2, without significant impact on the frame rate. I mean, it'd drop below 30 occasionally, but that was before the final optimization round. And we know KSP2 isn't coming to 8th-gen consoles, at least for now. So this just isn't the limiting factor.

The problem is that building large worlds that look neither empty nor cluttered is hard work, and doing this for dozens of worlds, keeping all of them unique, is particularly taxing. The terrain tech we were seeing was still being tweaked half a year ago, which doesn't leave the artists a lot of time to really make these worlds work. I don't know how good the final product will be, but I don't think what we're seeing right now is it. And yeah, that does mean that worlds might end up feeling a bit barren during early access and, honestly, possibly even a bit into the release. I don't think they're going to be anywhere near as bad as the planets in KSP were, but you shouldn't expect AAA hand-crafted game levels of detail on every patch of land on every planet. That's just not possible to achieve. KSP2 will definitely fall somewhere in between, and hopefully high enough to keep exploration interesting. We'll just have to wait and see, though.
-
I like this canon. Kerbals as a forgotten/lost artificially created civilization exploring space, searching for their origins.
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
Mars has strong winds with very sudden gusts - low pressure means that wind speed variations take longer to dissipate, and the harsh day/night temperature swings generate a lot of pressure gradients. A craft's response to shifting air can only be so rapid. That isn't just a matter of precise measurement; you are inherently limited by aerodynamics in how fast you can respond and correct your position. Anything close enough for the blade wash to start clearing sand is also a collision hazard. It might be a warranted risk in some situations, but it's not something that would likely be attempted if it can at all be avoided.
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
As someone who once flew a DJI drone into the wing of a foam-body RC plane, I can sort of confirm. (I know that's technically a very different scenario.)
-
Do Wormholes Break the First Law of Thermodynamics?
K^2 replied to RocketFire9's topic in Science & Spaceflight
Honestly, what's happening inside the black hole's horizon is kind of a philosophical question. It's a bit like, "What happened before the big bang?" Our physics is fully disconnected from whatever's on the other side. The mathematical model that matches up with what's going on outside of a rotating black hole is the Kerr metric. If we pretend that it also describes reality inside the black hole, then the singularity of a rotating black hole is ring-shaped - precisely a ring of zero thickness, rather than a donut (see the formula below) - and it contains much of the black hole's angular momentum. The rest is in the "rotating" energy of the gravitational field itself. The interior solution is unstable from the perspective of an exterior observer, however, and has a lot of non-physical properties, like permitting time travel. So, you know, take all of that with a grain of salt. From our perspective, there's an ergosphere, an event horizon just below it, and whatever's inside has a lot of mass and a lot of angular momentum, and that's all that matters to the physical world in our universe.

Edit: Though there's an interesting caveat. Unlike the discussion of time-before-the-big-bang, theoretically speaking, with a sufficiently massive black hole, such as the one at the center of our galaxy, you could travel into the interior and find out what it looks like and how it works. If the Kerr metric is at all a reliable guide, after passing through both the outer and inner ergospheres, and provided your craft survived the radiation and had enough fuel to perform the necessary maneuvers, you could get past the inner event horizon and emerge in some sort of interior space that should be safe to navigate and that, from your local perspective, extends to infinity. That space is yours to explore, and you can learn all the secrets of what lies within the black hole. But there is absolutely no way to come back to normal space to let us all know, nor send a message, nor in any way influence what's going on in our universe. So maybe, instead, I should be comparing it to the afterlife, or something like it. It's a place you can go to, but nobody can come back from, and so we can't possibly know what's there.
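For reference, the standard statement of that ring singularity, in the Boyer–Lindquist form you'd find in any GR text, with a = J/(Mc) the spin parameter:

```latex
% The Kerr curvature singularity sits where
\Sigma \equiv r^2 + a^2\cos^2\theta = 0
\quad\Longrightarrow\quad r = 0,\ \theta = \tfrac{\pi}{2},
% which in Kerr-Schild Cartesian coordinates is the ring
x^2 + y^2 = a^2, \qquad z = 0 .
```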
-
So long as we get to ignore the clouds, we can just precompute all of this, including the difference between (1, 2, 3) and (4). If your camera is on the ground, you absolutely can't - and that's the problem. If you're above the clouds, though, I would submit that nobody's going to notice the difference. So long as you still do cloud shadows for direct light, the lighting difference due to weather is going to be too subtle when viewed from above the cloud layers. There might be edge cases where you're not far above the clouds and looking at the terrain below through a cloud break, but I don't think these are common enough or obvious enough to justify a far more expensive technique just to get them right.

We only need the dynamic skybox for the first 10km. From there up, we no longer need to recompute it. It's literally always the same and depends only on one angle, because there are no clouds anymore. We can bake all of the light info into a 2D texture.

So how do you pick up the orange hue on one side of a mountain and gray on the other during sunset/sunrise? And how is it better than getting all of that, plus the correct cloud illumination, when viewed from the ground? As far as I can tell, our approaches above the clouds are identical, except I want to use look-up textures instead of doing ray casts. (Because I'm not sure why we're doing ray casts if we're ignoring clouds.) I have a feeling we're about to go to the Shadertoy round.
-
Oh, I'm talking strictly about surface rendering, not an orbital ship in flight passing through the penumbra. I don't think there's a general approach that solves all situations without RT. But if we separate this into three cases - surface when the camera is below the clouds, surface + clouds when the camera is above, and anything above the cloud layer - we can attack it gracefully.

When the camera is above the cloud layer, I would honestly just skip the scatter from clouds to ground. Most of the areas that would be affected aren't going to be particularly visible because of the clouds in the way. So the sky light for the planet when the camera is above the clouds can be pre-computed. Scale intensity by distance from the star to account for elliptical orbits, and otherwise, the light distribution is perfectly consistent when taken relative to the star's position. (This does break down with multiple stars, but I assume all relevant planets will have one star we can treat as the dominant effect, with only a direct contribution from the other.)

For a camera below the clouds, the cloud effect is significant, and you'll want to do the actual honest computation by factoring clouds into your scatter. The transition can be tricky, but I think blending is fine. Likewise, aircraft can use the surface lighting. It's not exactly right, but I think it's close enough that nobody will notice the difference. You just have to be careful about aircraft actually flying through a cloud.

Finally, for anything located very high in the atmosphere or outside of it, you have some options. You can still do the pre-computed approach by utilizing the cylindrical symmetry, meaning you can still get away with a 2D texture that takes an angle and altitude. (Beyond a certain altitude this light stops changing and depends on the angle only, so you can clamp the texture. There's a quick sketch of this kind of lookup below.)

Point taken, but if you're viewing things from the surface at high enough warp, you can start cutting a lot of corners. It does need to be handled to avoid very weird artifacts, though. And per the above, I never considered this for other situations.

Yeah, but that doesn't help you with rendering surface illumination. If I'm standing on the surface and looking at the terrain, every point in front of me has the sky above it. Even without a single pixel of sky on screen, each pixel of the terrain still needs to know how much light it receives, and even ignoring occlusion, that means casting a scatter ray in every possible direction towards the sky, because the clouds can be in any direction - and that way lies madness. Yes, you can cast just a few rays from every point and then denoise the picture. But that still means doing 5-6 samples of the scatterer for every pixel on the screen to get decent results, and you absolutely must do this every frame. This is not cheaper than building a low-resolution skybox once. So yeah, if you always had a perfectly crystal-clear sky, you could just have a lookup map for each planet and do deferred. But the moment you have weather, you basically have to start thinking about it as a dynamic cube map. I don't see a way to step around that.
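To make that "one angle plus altitude" lookup concrete, here's a minimal Python sketch of how such a pre-baked table could be built and sampled. The names (`bake_sky_lut`, `scatter_model`) and the resolutions are illustrative assumptions, not anything from the actual game; in a real engine this would be a 2D texture sampled in a shader with hardware filtering:

```python
import numpy as np

# Hypothetical pre-baked sky-light table: rows = sun elevation angle,
# columns = camera altitude. Baked once offline, looked up at runtime.
N_ANGLE, N_ALT = 128, 64
ATMOSPHERE_TOP = 70_000.0  # m; above this the sky light stops changing

def bake_sky_lut(scatter_model):
    """Fill the table once from some scattering model (assumed callable)."""
    lut = np.zeros((N_ANGLE, N_ALT, 3))  # RGB radiance
    for i in range(N_ANGLE):
        sun_elev = np.pi * i / (N_ANGLE - 1) - np.pi / 2  # -90..+90 deg
        for j in range(N_ALT):
            alt = ATMOSPHERE_TOP * j / (N_ALT - 1)
            lut[i, j] = scatter_model(sun_elev, alt)
    return lut

def sample_sky(lut, sun_elev, altitude):
    """Runtime lookup: clamp altitude (cylindrical symmetry means the
    angle is all that matters up high), then bilinearly interpolate."""
    u = (sun_elev + np.pi / 2) / np.pi * (N_ANGLE - 1)
    v = min(altitude, ATMOSPHERE_TOP) / ATMOSPHERE_TOP * (N_ALT - 1)
    i0, j0 = int(u), int(v)
    i1, j1 = min(i0 + 1, N_ANGLE - 1), min(j0 + 1, N_ALT - 1)
    fu, fv = u - i0, v - j0
    low = lut[i0, j0] * (1 - fu) + lut[i1, j0] * fu
    high = lut[i0, j1] * (1 - fu) + lut[i1, j1] * fu
    return low * (1 - fv) + high * fv
```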
-
Nah, I'm not talking about indirect lighting from the terrain. There are, actually, tricks for recalculating that dynamically for outdoor environments too, but that's getting into way more complex interactions. It would take us way off topic, so if you want to chat about that, feel free to DM me.

The idea is that the info a light probe gives you is how much light you're getting from every direction. To get good skybox illumination, you want the light to be coming from every direction as well. So if you take the skybox and convert the illumination information from it into spherical harmonics, you can use it as one special light probe that's common to all of your scene geometry. In fact, you don't have to use actual GI with this - you can literally just have this one "light probe" generated from the skybox, and use it to get soft lighting that's dynamic with the time of day. And while computing all of the light probes for a scene is very, very expensive, computing just one from a given cube map over multiple frames is almost free.

That's precisely the gap we're trying to close here. You can do the sky as just a screen-space post-effect, of course, but then you don't get any of the other benefits. You can, instead, compute that scattering process as a cube map, ignoring screen space entirely, which, again, you only need to do once every so often, since the sun and the clouds aren't moving all that fast. On average, you end up spending about the same amount of time doing either, because you're doing ray casts on more pixels but over more frames, so this doesn't increase rendering costs. You can then use the result directly with your PBR as a cube map, of course, but then you have to at least bake a diffuse environment map from it, which is pretty expensive. (O(n² ln n) if you use the FFT and the convolution theorem.) Alternatively, you can compute spherical harmonics from that cube map (O(kn²), where k is the number of harmonics) and feed them to the PBR as if they were a light probe. It's basically the same computational cost, but now you have better specular highlights on metallic surfaces, support for anisotropic materials if you want it, etc. (There's a sketch of that projection below.)

There are extra steps you'd have to do if you wanted sky occlusion to work well, which involve actual light probes and, again, a lot more math. But then you actually do have to compute light probes, which, as you point out, is a problem for a planet-sized world. The conservative approach is to simply ignore occlusion for scattered light. You take the horizon at exactly 0° elevation (so you only really have half of a scatter cube map for the sky) and only do occlusion for direct light from the sun using cascaded shadows as normal.
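Here's a minimal sketch of that O(kn²) projection in Python, for the first nine SH coefficients (the L0–L2 bands that Ramamoorthi and Hanrahan showed are enough for diffuse lighting). I'm using an equirectangular map instead of a cube map purely to keep the texel-direction bookkeeping short; the idea is identical:

```python
import numpy as np

def sh_basis(d):
    """First nine real spherical-harmonic basis functions (bands L0..L2),
    evaluated at unit direction d = (x, y, z)."""
    x, y, z = d
    return np.array([
        0.282095,                    # Y00
        0.488603 * y,                # Y1-1
        0.488603 * z,                # Y10
        0.488603 * x,                # Y11
        1.092548 * x * y,            # Y2-2
        1.092548 * y * z,            # Y2-1
        0.315392 * (3 * z * z - 1),  # Y20
        1.092548 * x * z,            # Y21
        0.546274 * (x * x - y * y),  # Y22
    ])

def project_env_to_sh(env):
    """Project an equirectangular HxWx3 radiance map onto 9 SH coeffs.
    O(k * n^2) in the map size; can be spread across multiple frames."""
    h, w, _ = env.shape
    coeffs = np.zeros((9, 3))
    for row in range(h):
        theta = np.pi * (row + 0.5) / h                       # polar angle
        d_omega = (np.pi / h) * (2 * np.pi / w) * np.sin(theta)  # solid angle
        for col in range(w):
            phi = 2 * np.pi * (col + 0.5) / w
            d = (np.sin(theta) * np.cos(phi),
                 np.sin(theta) * np.sin(phi),
                 np.cos(theta))
            coeffs += np.outer(sh_basis(d), env[row, col]) * d_omega
    return coeffs  # feed these to the materials as a "light probe"
```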
-
Steam stream images - KSC Beta Gameplay Timelapse
K^2 replied to Vl3d's topic in Prelaunch KSP2 Discussion
The little yellow and green flashes above the VAB and astronaut complex(?) that you see during the night are just stars reflecting in the lakes. The surface normal appears to be a bit wavy, resulting in these stars reflecting only in brief flashes, but you can clearly match them up against bright stars on the skybox.

The big purple (and occasionally green) flashes to the right, above the dish - which I've only seen during the day, though maybe I missed some - are definitely some sort of rendering artifact. You can tell it's not something that showed up during capture, because on some of the brighter flashes you can see them reflecting in the water, resulting in a big blob. That is consistent with the rough but shiny surface of the ocean picking up a very, very bright object. These might actually be point sources of exceptional intensity that show up as blobs due to bloom or a similar filter. In either case, the fact that the blobs look different in reflection tells me that this had to happen during rendering, because if it were a capture artifact, it wouldn't matter where the flash happened in relation to the topography.
-
I mean, it can heavily depend on implementation, but as far as modern techniques that give a good, realistic look go, yes. And that's what I'm seeing in all of the other examples we have from other planets, so I expect this to be hooked up for Kerbin and its structures. Sorry - hazard of talking about something you work on professionally. I try to correct for it, but as XKCD points out, it's hard to do effectively. Always happy to clarify anything specific if you're interested, though.
-
Not to mention that it's just a simpler way to implement sky light in general, once you have everything else set up. I don't know if there's an easy way to hook it up in Unity (the slightly harder way is with a custom shader node), but the standard approach in custom engines is that you generally already have a system to light a surface from a light probe, and you use that.

A light probe encodes the directional information about light at a point, and it's usually stored as spherical harmonics. The advantage is that you can usually get light quality almost as good as from a full cube map while only storing a dozen or so "pixels" (a bit more for specular probes), which allows you to place the probes in a grid around your map. That's the standard way of doing global illumination (GI). The materials then have a pre-computed response for each spherical harmonic, which means that the shader just has to take the light harmonics from the nearest probe, rotate them to local coordinates, taking normals and binormals into account, multiply them by the response function, use a lookup table to get the value for that response pointed towards the camera, and return that value. It's a hell of a lot of math, but because it's all pre-computed and pre-tabulated, the actual runtime is blazing fast, and you get indirect illumination of your surfaces in most realistic conditions.

So what you do for scattered sky light is a single pass computing spherical harmonics from the sky cube map. Then you pass these to the material shader as if they were one of the light probes, and you let the material do its thing. It's basically zero overhead, since the update of the spherical harmonics for the sky can be spread across multiple frames. If you have a unified system that has clouds baked into it, you'll even get occlusion from them. And the best part is that spherical harmonics preserve the directionality of light. So in a sunset/sunrise situation, the side of a building facing the sun will be washed in auburn orange, while the side away from the sun will be bluish gray, exactly as you'd expect in that sort of scene, and it takes almost zero work once you have the skybox computed. (There's a sketch of that evaluation below.)

Unity has a light probe system, and it has material nodes designed to work with these. What I don't know is whether you can create fake light probe data to pass to the shader, so that you can just recycle these material nodes and make them work with your custom skybox. If not, you'll have to do the work of building that shader node by hand. That said, from the footage from other planets, it's clear that the work has already been done, whichever way they ended up doing it. So now it's literally just a matter of updating the materials used on Kerbin and in structures, including the KSC.

The point here is that if you don't do the above, you still have to account for sunlight, and if you just do direct illumination from the Sun, it looks like crap. Well, by modern standards - that's what people used to do back in 2000, and it was fine then. Since at least the mid-2000s, people started adding an environment map to the scene. You'd usually bake two of these. One would be just a cube map of the light sources typical for the scene - for indoors, it might have some fake lights on it, and for outdoors it'd typically be just your skybox. You'd use it as-is for your specular light. Then for diffuse, you'd literally pre-compute the convolution with a cosine function, to get that dot product factored in from every possible angle. Now, instead of doing anything fancy, your entire environment light is just two texture lookups. Finally, if you wanted a day/night cycle, you'd have three of these - day, night, and dusk/dawn maps - that you rotate with the movement of the Sun and blend together as time moves from morning to day, to evening, to night. Cheap, reasonably believable, and easy to implement. I think KSP did something similar.

Problem is, the moment you have clouds, rain, or any other weather effects, guess what - you have to add a new map variation for each of these. And it never looks quite right with fancy materials, especially once you start adding shinier-looking surfaces. And if you already have GI working, it's not easier at all to implement! So you might as well go for proper scatter light on everything and have a single set of materials that accounts for it all.

tl;dr: You have to do work to get sky light illumination either way, and if you've already set up scatter illumination for other worlds, you get it for free for all bodies, even those without an atmosphere at all! So there is zero reason not to.
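To complete the picture from the SH side, a hedged sketch of the per-pixel evaluation, reusing `sh_basis` from the projection sketch earlier. The per-band scale factors are Ramamoorthi and Hanrahan's cosine-convolution constants; the building example at the bottom is hypothetical:

```python
import numpy as np

# Cosine-lobe convolution factors per SH band: convolving the sky light
# with max(cos, 0) in SH space is just a per-band scale.
A_HAT = np.array([np.pi,
                  2 * np.pi / 3, 2 * np.pi / 3, 2 * np.pi / 3,
                  np.pi / 4, np.pi / 4, np.pi / 4, np.pi / 4, np.pi / 4])

def irradiance(coeffs, normal):
    """Diffuse irradiance at a surface with this unit normal, from the
    9x3 SH coefficients of the sky. Nine multiply-adds per channel -
    this is the 'blazing fast' runtime part."""
    basis = sh_basis(normal)  # defined in the projection sketch above
    return (A_HAT[:, None] * coeffs * basis[:, None]).sum(axis=0)

# Hypothetical sunset example: SH lighting two sides of a building.
# east_wall = irradiance(sky_sh, (1.0, 0.0, 0.0))   # facing the sun: orange
# west_wall = irradiance(sky_sh, (-1.0, 0.0, 0.0))  # away: bluish gray
```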
-
Pretty sure it's this. The KSC likely uses materials that were implemented before the scattering was added, and the artists simply haven't gotten around to updating them. It'll get fixed sometime either before or during the early access.
-
For Questions That Don't Merit Their Own Thread
K^2 replied to Skyler4856's topic in Science & Spaceflight
What would that do? Keep in mind that for a spherical tank, the field inside a charged shell will be zero, making absolutely no impact on the hydrogen contained within. (There's a quick Gauss's law sketch below.) And even for an elongated tank, you aren't going to get much of a field inside for the same reason, the charges on the opposite side partially screening the effect. So you might get a tiny density increase due to polarization near the tank walls, and you'd have to put in an enormous amount of charge to cause even that. I'm very skeptical. Unless you have a different mechanism in mind.

Why is that cat full of explosives? Do I need to call animal protection services?
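The zero-field claim above is just Gauss's law plus the shell's spherical symmetry; as a sketch:

```latex
% Gauss's law over any closed surface S drawn inside the charged shell:
\oint_S \vec{E}\cdot d\vec{A} = \frac{Q_{\text{enc}}}{\varepsilon_0},
\qquad Q_{\text{enc}} = 0,
% and by the spherical symmetry of the shell,
\vec{E} = 0 \ \text{everywhere inside.}
```
-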
Generally, "Beta" is just a stage in development. At a minimum, it's just a calendar date. In practice, there are going to be associated builds, but the default isn't even a closed beta - the default is that these builds stay purely internal. They are still marked as "beta content," because they can be used by the marketing team, but it doesn't mean that these were ever intended to be played by anyone outside of Intercept or Private Division. The reason that people think about open and closed beta when they hear "Beta Content," is because that's what people usually see. You'll see a lot more footage and screenshots from open betas, some from closed betas, and only whatever marketing lets out from the internal beta builds. So people end up thinking of "beta" as open by default, or closed if necessary, and not at all about the internal builds that never get shared. The reality is the exact opposite. We're really just seeing a lot more WIP footage from Intercept than most other companies. That's why we saw pre-alpha and alpha footage, which a lot of companies don't release at all, and quite a bit of beta footage now that we're getting close to early access. None of it means that the beta builds have ever been intended to go to anyone who isn't an employee of Intercept or Private Division.
-
Steam stream images - KSC Beta Gameplay Timelapse
K^2 replied to Vl3d's topic in Prelaunch KSP2 Discussion
I'm not a graphics engineer, but as an engine engineer I had to work with the graphics team a lot and dabble in rendering myself. What we're seeing in this timelapse is 80% there. The atmospheric scattering that produces the sky change is a big chunk of the work done. The clouds look very WIP. Based on the atmospherics used, I think I recognize the technique, and the clouds should be hooked into that system for a unified look. This can look as good as Red Dead Redemption 2's atmospherics, because I'm pretty sure it's the same approach. I don't think we'll have quite as much variation, as that's a lot of work and Intercept is a small team, but the stuff that is supported should look good. Likewise, the terrain illumination from the skybox is just not hooked up. It's not a lot of work, but it's additional work that you normally do after you're happy with how your skybox looks. Since this is a custom skybox, it's not just going to be a checkbox in Unity, so the tech artists, apparently, still need to go through the game's materials and update them to use scattered light from the sky. I have no doubt that it's on the list of things to do.

tl;dr: this is clearly unfinished, and I don't think it represents what the game will look like when it's done. I expect both the terrain and the clouds to be picking up sky light with all the beautiful colors.
-
As above, if we didn't have early access, this would be nearly a guarantee - both because it can help generate day-one hype and because these people can usually provide good, constructive feedback. It's fairly standard practice both to send out preview codes for a beta build and to even fly a few creators to the studio so that the user research team can get direct feedback. For a smaller studio like Intercept, this might be done via their publisher, so it'd be Private Division's user research team, and it could be either Intercept's or PD's offices. But for early access, I'm not as sure. Some form of all of this could still happen. I'm sure there will be additional rounds of interviews, but whether any of this is done prior to the EA rollout is unclear. They might just send game keys to the creators on day one and ask them for feedback later. Or even do the budget version, which is to watch their YouTube channels for early reviews. Again, it all depends on what marketing strategy PD is going for. It's clear that they're treating this as a very serious title with a lot of money riding on it, so there will be a major advertising push, but it might not happen until the console version is ready, by which point many of us, and all of the relevant creators, will have been playing early access on PC for months.
-
That part hasn't been confirmed. We know that Lua was at least intended to be the scripting language for the game's missions (tutorials?) - seriously enough that it was a requirement in the job description for a game designer. Given the timing, I don't think it's likely this has changed, and a Lua engine is very likely in the game in some form. We also know that at least one engineer has tinkered with Lua bindings for modding. Given that Lua-driven mods would be a lot easier to share for multiplayer or through services like Steam Workshop, and that most of the work to enable this has already been done by Intercept, it's very reasonable to expect that it will be possible to create components with custom Lua scripts, but it's not confirmed. We have been told that compiled C# plugins will still be a viable option. That hasn't been reiterated recently, however, and many other decisions about the game have changed, so we don't know the exact status of these with certainty. Other than that, I suspect the pipeline won't be that much different from KSP's. Distributing a Unity scene as an SDK for mods is still the easiest way to let people mod your Unity game.
-
Doesn't seem like a fair comparison. Korolev was an engineer who actually understood the technical challenges. When he was being overly optimistic about something, it's because he believed (even if to a fault) in his own ability to solve any problem. That's very different from just promising the Moon (literally) to investors and then driving actual engineers to stress breakdowns to deliver it, whether or not it's actually remotely possible. Yes, sometimes it really is possible, and you have the right people in place to make it happen. But if you keep doing it, those people won't stick around.
-
Arecibo message (split from The Ksp mun Easter egg)
K^2 replied to JoeSchmuckatelli's topic in Science & Spaceflight
*Does a polite curtsy.* Thank you, kind sir, but I'm just going with the mood set by Pthigrivi and tstein.
-
Yup, for the early 60s, a 410T lander to get 10T back up sounds about right for what would be optimistic, before we had more detailed information about surface and atmospheric conditions. Considering the N-1 for the job was... a bit presumptuous, though. But that also sounds about right for Korolev and his team. I guess you need a bit of that optimism and arrogance to push something as ambitious as a space program in the 60s.
-
Arecibo message (split from The Ksp mun Easter egg)
K^2 replied to JoeSchmuckatelli's topic in Science & Spaceflight
That's what berserker probes are for. If you're a species capable of building AI that can manage an interstellar mission, resource gathering, and replication at the destination, and able to launch something at a meaningful fraction of the speed of light, wiping out a civilization that's just starting out with space exploration is fairly straightforward. Yeah, it might give us as much as a few hundred years to develop our tech, but if by the time we get to it the Oort cloud is filled with replicating murder bots, we're going to have a hard time.
-
We knew quite a bit about Venus' atmosphere in the first half of the 20th century. Even the most optimistic view of it would have made a climb out of the Venusian atmosphere considerably harder than out of Earth's, meaning you'd need to land at least a three-stage rocket with 60s tech. Vostok's core stage alone was 100T. We got the capability to lift 100T to LEO with the Saturn V, first flown in 1967. In 1965, we got data from Venera 3 telling us that the ascent vehicle would have to be a lot bigger. With the atmospheric data from Mariner and the early Venera probes, the "optimistic" picture got considerably worse. It went from "this would be very hard" to "pretty much impossible."

If you're still dubious, I'd recommend that you sit down and sketch out what a mission might look like based on the most optimistic understanding of Venus in each decade of the 20th century and what was foreseeably available at the time, to see if you come up with something you'd think would be worth someone's effort to turn into an official document. The information we got made the most optimistic version of the mission more complex faster than we gained the flight capabilities or technologies that would have made it easier, and that would have been very obvious to the engineers of the time. Even today, looking at what we have now, a Venusian surface landing mission would be an absolute nightmare, dwarfing every other undertaking we are seriously considering. I don't think anyone is seriously considering actually flying something like this. The closest thing we might be interested in is a high-altitude balloon visit, and even that would be an exceptionally challenging mission.
-
Then I would point to the fate of the Venera probes as to why nobody considered landing a crewed mission.
-
I'm just referring to the Venera missions, with Venera 3 through 6 entering the atmosphere and Venera 7 through 14 surviving to landing and returning surface data.
-
Are we counting Soviet programs? Because there were a few. The US seems to have been much more interested in a Mars landing, and only used Venus to practice fly-bys, due to the shorter mission times with otherwise similar complexity to a Mars mission. That includes uncrewed missions that were flown and crewed missions that were only planned.