
What map projection does KSP2 use?


Recommended Posts

This is more of a technical issue, but one I'm concerned about. KSP1 uses a rectangular (equirectangular) projection, which is pretty bad: it causes terrain glitches on a number of bodies at the 90°N and 90°S latitudes, with the terrain getting increasingly distorted as you approach those points.

To illustrate:

YELKvrc.png

You can see that near the equator each face is close to a square, and it gets increasingly distorted as it reaches the poles. That makes it so that the west-east mapping becomes much more squeezed, such that there is much greater resolution in one direction than in the other, and at the very top and bottom it switches from quads to triangles, resulting in even more distortion since that is difficult to map to an image.

But if instead they used a cubic projection:

https://imgur.com/s4o5onU.png

They could now just map as if they were making skyboxes. It takes some more effort to implement, but the end result is much less distorted.
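To put rough numbers on the squeeze, here's a minimal sketch (not anything from the game's code; the Kerbin-like 600 km radius and the 4096×2048 map size are just example values) showing how an equirectangular heightmap's east-west resolution collapses near the poles while the north-south spacing stays constant:

```python
import math

def texel_ground_spacing(lat_deg, radius_m=600_000, map_w=4096, map_h=2048):
    """Ground distance covered by one texel of an equirectangular heightmap.

    Returns (east_west_m, north_south_m). North-south spacing is constant,
    but east-west spacing shrinks with cos(latitude), so a texel near the
    pole covers almost no east-west distance at all.
    """
    lat = math.radians(lat_deg)
    east_west = (2 * math.pi * radius_m / map_w) * math.cos(lat)
    north_south = math.pi * radius_m / map_h
    return east_west, north_south

for lat in (0, 45, 80, 89.9):
    ew, ns = texel_ground_spacing(lat)
    print(f"lat {lat:5.1f}°: {ew:7.1f} m/texel east-west, {ns:7.1f} m/texel north-south")
```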

Link to comment
Share on other sites

1 hour ago, Jack Mcslay said:

This is more of a technical issue, but one I'm concerned about. KSP1 uses a rectangular (equirectangular) projection, which is pretty bad: it causes terrain glitches on a number of bodies at the 90°N and 90°S latitudes, with the terrain getting increasingly distorted as you approach those points.

To illustrate:

YELKvrc.png

You can see that near the equator each face is close to a square, and it gets increasingly distorted as it reaches the poles. That makes it so that the west-east mapping becomes much more squeezed, such that there is much greater resolution in one direction than in the other, and at the very top and bottom it switches from quads to triangles, resulting in even more distortion since that is difficult to map to an image.

But if instead they used a cubic projection:

https://imgur.com/s4o5onU.png

They could now just map as if they were making skyboxes. It takes some more effort to implement, but the end result is much less distorted.

I do not know. There are some images of some KSP 2 planets and moons, and to me they did not look as distorted as KSP 1's. Still, I am not sure.

Edited by PlutoISaPlanet
Link to comment
Share on other sites

Interesting idea.

I don't actually know, but I would think that whatever projection method they use (if any) to get the basic landscape they can manually edit the 'undesirable' bits.

Likewise with adding the nice 'finishing touches'. The limit, of course, is the amount of time they can allocate to it, and it is doing those 'little details' that can easily use up a LOT of time, if you let it.

Edited by pandaman
Link to comment
Share on other sites

4 minutes ago, pandaman said:

I don't actually know, but I would think that whatever projection method they use (if any) to get the basic landscape they can manually edit the 'undesirable' bits.

Using cubic projection sounds like a good way to reduce the manual labour of smoothing out the flaws of rectangular projection.

Link to comment
Share on other sites

48 minutes ago, pandaman said:

Interesting idea.

I don't actually know, but I would think that whatever projection method they use (if any) to get the basic landscape they can manually edit the 'undesirable' bits.

Likewise with adding the nice 'finishing touches'. The limit, of course, is the amount of time they can allocate to it, and it is doing those 'little details' that can easily use up a LOT of time, if you let it.

KSP has a certain degree of procedural generation; while the primary features (biome locations, major craters, canyons, mountains) are predefined, minor geographic features such as small craters and hills are procedurally generated on some of the bodies. I suspect that procedural generation is the culprit behind bodies such as Duna and the Mun having streaked patterns at the poles, converging at the 90° latitudes and creating terrain so erratic that reaching the poles by rover is difficult to impossible.

Therefore, assuming KSP2 will implement procedural generation too, editing the distortion out by hand is not an option.

Edited by Jack Mcslay
Link to comment
Share on other sites

The problem with mapping onto a sphere is that a sphere has a topological charge of +2, while a flat plane has a topological charge of 0. That means that no matter what you do, there will be distortion, and it has to be concentrated somewhere. (Closely related to the Hairy Ball Theorem.) [snip]

Worse, if you are dealing with a mesh, the distortion has to be concentrated in discrete points. And now you are playing a game of balancing it out. The good thing about polar projection is that all of the topological charge is concentrated at the poles at +1 each. That's a lot of distortion, but it's all in a place you least care about. Who even wants to go to the poles? There's nothing there but ice! Of course, in KSP, we have other planets to visit, and they might actually have exciting poles.

Cubic projection has 8 points on which the charge is concentrated. These are your valence 3 verts on a quad mesh. A valence of 3 on a quad mesh always gives you a +1/4 charge, so this adds up to +2 total on the quad sphere. For anyone doing 3D graphics, btw, yes, that will always hold if you take a closed shape that can be "inflated" into a sphere-ish shape. So when you're making a quad mesh for a character, if you take all your valence 3 verts and subtract all your valence 5 verts, you'll get 8, because that +2 charge has to go somewhere. And it's always +1/4 for every valence less than 4 and -1/4 for every valence over 4. And that's why they talk about mesh topology.

Which, naturally, brings us to triangular meshes, where neutral valence is 6, and anything with valence 5 gives you +1/6 charge. And, of course, the shape we all know and love with exactly 12 valence 5 vertices - the icosahedron. And just like you can take a cube and inflate it into a cube sphere, you can take an icosahedron and inflate it into an aptly named icosphere.
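As a quick sanity check of that bookkeeping (nothing KSP-specific, just counting vertex valences the way described above), a cube treated as a quad mesh and an icosahedron treated as a triangle mesh both add up to +2:

```python
def quad_mesh_charge(valences):
    # +1/4 for each step below valence 4, -1/4 for each step above (quad mesh)
    return sum((4 - v) / 4 for v in valences)

def tri_mesh_charge(valences):
    # +1/6 for each step below valence 6, -1/6 for each step above (triangle mesh)
    return sum((6 - v) / 6 for v in valences)

cube = [3] * 8           # a cube: 8 corners, 3 edges meeting at each
icosahedron = [5] * 12   # an icosahedron: 12 vertices, 5 edges meeting at each

print(quad_mesh_charge(cube))         # 2.0
print(tri_mesh_charge(icosahedron))   # 2.0 -- the +2 always has to go somewhere
```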

The advantage of an icosphere is that it distributes the distortion the most evenly. The disadvantage is that the mapping is all over the place and it won't pack into a rectangular texture very well. You also have the distortion show up in 12 locations total. Two of them are trivially shoved to the poles, but that still leaves you with 10 in the subtropics.

With the cubic map, there is a lot of distortion around the 8 points, and shoving any 2 into polar regions kind of defeats the simplicity of cubic mapping, so you end up getting stuck with four in each hemisphere, not far enough north to avoid some visual artifacts. But it is way, way easier to implement efficiently than the icosphere, which is why cubic maps show up all over the place. One of the neat examples is Space Engineers. Their voxel planets are cube-mapped. It's even easier to see in their Medieval Engineers game, which is set on a voxel planet for no reason other than to show off, and you actually get a world map, which is broken down into 6 "square" regions.

On net, I think either a cube sphere or an icosphere would be an improvement, and I guess the cube would be easier, as much as I prefer the elegance of the icosphere.
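For reference, this is roughly what "inflating" a cube into a cube sphere looks like. A minimal sketch: the grid resolution and face labels are made up for illustration, and real implementations usually warp (u, v) first to even out the quad sizes.

```python
import numpy as np

def cube_face_to_sphere(face, n=16):
    """Project an n x n grid of points on one cube face onto the unit sphere.

    face: one of '+x', '-x', '+y', '-y', '+z', '-z'. Returns an (n, n, 3)
    array of unit vectors. Simply normalizing each cube point is the most
    basic "inflation"; fancier variants warp (u, v) first so the resulting
    quads end up closer to equal area.
    """
    u, v = np.meshgrid(np.linspace(-1.0, 1.0, n), np.linspace(-1.0, 1.0, n))
    one = np.ones_like(u)
    faces = {
        '+x': (one, u, v), '-x': (-one, u, v),
        '+y': (u, one, v), '-y': (u, -one, v),
        '+z': (u, v, one), '-z': (u, v, -one),
    }
    pts = np.stack(faces[face], axis=-1)
    return pts / np.linalg.norm(pts, axis=-1, keepdims=True)

# Six calls like this give the full cube sphere; the seams meet at the 8 corners.
verts = cube_face_to_sphere('+z', n=4)
print(verts.shape)  # (4, 4, 3): every entry is a point on the unit sphere
```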

 

Edited by Vanamonde
Link to comment
Share on other sites

45 minutes ago, K^2 said:

With the cubic map, there is a lot of distortion around the 8 points, and shoving any 2 into polar regions kind of defeats the simplicity of cubic mapping, so you end up getting stuck with four in each hemisphere, not far enough north to avoid some visual artifacts. But it is way, way easier to implement efficiently than the icosphere, which is why cubic maps show up all over the place. One of the neat examples is Space Engineers. Their voxel planets are cube-mapped. It's even easier to see in their Medieval Engineers game, which is set on a voxel planet for no reason other than to show off, and you actually get a world map, which is broken down into 6 "square" regions.

Well, do you think an icosphere would increase the need for processing power by much? It is using the Unity engine. I don't KNOW this, but I would guess the icosphere would give more maps to render in the same given area, whereas using the cube would mean fewer overall. I do understand the maps in the cubic case would use more memory, but it would be one load and done, whereas triangular maps could potentially load and unload far more often, leading to lag.

 

Just thinking aloud really; I've never attempted to use triangular maps beyond simple 3D movie applications, and you know that is no benchmark for gaming.

 

Link to comment
Share on other sites

1 hour ago, Dientus said:

Well, do you think an icosphere would increase the need for processing power by much? It is using the Unity engine.

A bit, but most of it is going to be in shader, so it being Unity won't matter. It's going to be native code running on the GPU, and as we know with KSP, graphics is where you have some breathing room with these kinds of games. You do need to build terrain collision, which does reside in system memory, but I would still advise building the geometry with compute shader and then exporting it from VRAM to system RAM. Since terrain collision generation isn't done every frame, having to move data from GPU to CPU isn't going to be a bottleneck even on systems with discrete graphics. And Unity does provide a way to do all of this.

Link to comment
Share on other sites

1 hour ago, K^2 said:

Who even wants to go to the poles? There's nothing there but ice!

There are a number of potential good uses for the poles:

  • There's science to be obtained there on most bodies
  • On bodies with sufficient axial tilt you could position a lander at one pole and get months of perpetual daylight, then move to the other pole for the other half of the year
  • On bodies with very long solar days, a rover driving near one of the poles could travel relatively short distances at a time and remain in daylight
  • We're now going to be able to build launch sites, so having some of them in polar locations can be useful
  • Bodies covered in liquid often have frozen poles, making them massive flats that are handy for landing
  • Or they might be the opposite and create interesting landing challenges, such as Vall's 7000 m peaks at the poles (which are broken due to KSP's glitchy poles)
  • Atmospheric bodies have lower temperatures at the poles, making them useful if you need to dissipate a lot of heat
Link to comment
Share on other sites

9 minutes ago, K^2 said:

A bit, but most of it is going to be in shader, so it being Unity won't matter. It's going to be native code running on the GPU, and as we know with KSP, graphics is where you have some breathing room with these kinds of games. You do need to build terrain collision, which does reside in system memory, but I would still advise building the geometry with compute shader and then exporting it from VRAM to system RAM. Since terrain collision generation isn't done every frame, having to move data from GPU to CPU isn't going to be a bottleneck even on systems with discrete graphics. And Unity does provide a way to do all of this.

The way I see it, an icosphere would increase performance: first, because the heightmaps would necessarily be composed of triangles instead of quads, and triangles deform much more smoothly than quads, needing fewer polygons for the same amount of detail; second, because by reducing distortion you also reduce the amount of data to be processed at the distorted locations, needing a less robust routine to deal with it, or maybe no special handling at all since the distortion would be small enough to ignore.

Link to comment
Share on other sites

17 minutes ago, Jack Mcslay said:

The way I see it, an icosphere would increase performance: first, because the heightmaps would necessarily be composed of triangles instead of quads, and triangles deform much more smoothly than quads, needing fewer polygons for the same amount of detail; second, because by reducing distortion you also reduce the amount of data to be processed at the distorted locations, needing a less robust routine to deal with it, or maybe no special handling at all since the distortion would be small enough to ignore.

It's not all that straightforward. The actual terrain you are going to render is going to have sub-meter resolution to look good. Even at Kerbin size, you aren't going to have a world texture at that resolution. So you are always going to have a texture defining coarse features, with some material tiling and possibly procedurals defining local geometry and texturing of terrain. Granted, when rendered from great altitude, an icosphere makes it way, way easier to just have a few subdivisions, slap a texture over it, and have a nice-looking planet. But I'd argue that this is also when you care the least both about distortion and about the rendering time. Rendering a planet from high orbit is cheap. Lighting and atmosphere can get expensive, but that's all done at the pixel stage and, hopefully, entirely deferred. So no connection to the mapping.

Mapping is going to matter primarily when you are on the ground or flying low over terrain. And a quad mesh can actually simplify a lot of that. Yeah, doing a great job of fixing up the valence 3 corners is going to be hard, and might create overhead, but if you are ok with them being a little meh, maybe even design your maps to conceal it - like, if it's oceans at all 8 corners on Kerbin, and maybe some crater features on other planets, then maybe you're ok with that, and you want to go with a much simpler, more performant approach. On the other hand, yes, icosphere will make things way more uniform, and you won't have to try and hide the seams. But it will require more complex math for things like LoD tiling and smoothing as well as texture lookups for coarse features.

I'm with you overall. I do think the icosphere gives a more elegant, more consistent solution, and it can be more performant at the same quality. But I don't think it's going to be a performance win if Intercept is happy to hide some seams with creative design. Especially if they don't make much improvement on the underlying terrain tech compared to KSP. In a perfect world, it'd be all virtual textures, hardware tessellation, and compute shaders for collisions. In reality, we are probably still going to see just mesh patches both for rendered terrain and collisions. And that's still a lot easier to do with a square(ish) grid.
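To illustrate why LoD tiling is so straightforward on a square(ish) grid, here is a toy quadtree subdivision for one flattened cube face. The split threshold and depth limit are arbitrary illustration values, not anything from KSP or Intercept:

```python
def subdivide(cx, cy, size, cam_x, cam_y, max_depth, depth=0, out=None):
    """Collect the square terrain patches to render for one (flattened) cube face.

    A patch splits into four children whenever the camera is closer than
    1.5x the patch size, so detail piles up under the camera. Everything is
    in face-local 2D units; a real implementation would project to the sphere.
    """
    if out is None:
        out = []
    dist = max(abs(cam_x - cx), abs(cam_y - cy))
    if depth < max_depth and dist < 1.5 * size:
        half = size / 2
        for ox in (-half / 2, half / 2):
            for oy in (-half / 2, half / 2):
                subdivide(cx + ox, cy + oy, half, cam_x, cam_y, max_depth, depth + 1, out)
    else:
        out.append((cx, cy, size, depth))
    return out

patches = subdivide(0.0, 0.0, 2.0, cam_x=0.3, cam_y=-0.2, max_depth=6)
print(len(patches), "patches; finest depth reached:", max(p[3] for p in patches))
```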

Link to comment
Share on other sites

On 4/15/2021 at 8:57 PM, K^2 said:

The problem with mapping onto a sphere is that a sphere has a topological charge of +2, while a flat plane has a topological charge of 0. That means that no matter what you do, there will be distortion, and it has to be concentrated somewhere. (Closely related to the Hairy Ball Theorem.) [snip]

Why not just use an unwrapped cube instead of a rectangle for the map?

Link to comment
Share on other sites

Is it possible to combine methods? I.e. use UV mapping for most of the sphere but change to cubic or icosphere mapping for the poles?

I assume it is possible but I imagine the complexity of making the textures becomes much higher as a result. 

Link to comment
Share on other sites

10 hours ago, Bej Kerman said:

Why not just use an unwrapped cube instead of a rectangle for the map?

Oh, it doesn't matter what shape on 2D you are mapping to - just that it's a 2D shape. The problem is that to have a nice mapping, you need open sets to map into open sets. It's possible to show that it's impossible to do so for a sphere with a single projection. At an absolute best, you can map all but one point with a single mapping. An unwrapped cube effectively has 6 different mappings - one for each face. Which is perfectly fine. But it still means you'll have trouble at points where multiple mappings come together. For a cube, that's your 8 original vertices of the cube.

Sorry that I'm being a bit hand-wavy about what a "nice mapping" means and why open sets matter. If you want to chase this down, the relevant topic is differential geometry (or differential topology). You'll want at least a minimal intro to topology for that, for which you'll want an intro to real analysis and just a touch of measure theory. And you can probably get through both if you have a solid background in calculus. I'm sorry if it's coming out as a bit pretentious, but differential geometry is one of these subjects where math gets so abstract that it's only worthwhile to either just state some results as facts or, if you really want to understand it, to go through and learn the background. If there's a way to give simplified explanations, I don't know it. Maybe I'm just not good enough at the subject, but it's the best I can do.

Edit: I think I have an illustration for where things go wrong, though. Imagine that you have a sphere. Imagine that you are texturing it by mapping the sphere to a cube, then unwrapping the cube to a 2D texture. Now, picture that you start drawing a straight line on the sphere (technically, a great circle). What happens to that line as it crosses from one cube face to another on your texture? Well, the line will probably not look straight, which isn't a big deal. But when it crosses from one cube face to another, it's going to get a sharp kink. The orientation of the tangent at that point becomes discontinuous. In general, a smooth curve on the sphere isn't guaranteed to be a smooth curve on the texture and vice versa. Which isn't a disaster, but it can lead to artifacts if you aren't careful. And that's a big part of why, even though you are using a single texture for your unwrapped cube, you really have to treat the 6 regions corresponding to the faces as distinct mappings.
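Here is a rough way to see that kink numerically. The face-selection convention below is just one common cube-mapping layout (exact signs vary between engines), and the great circle was picked so it crosses from one face onto a neighbouring one:

```python
import numpy as np

def cubemap_face_uv(d):
    """Map a unit direction to (face index, u, v); one common cube-map convention."""
    x, y, z = d
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:      # +X or -X face dominates
        face = 0 if x > 0 else 1
        u, v = (-z if x > 0 else z) / ax, y / ax
    elif ay >= az:                 # +Y or -Y face
        face = 2 if y > 0 else 3
        u, v = x / ay, (-z if y > 0 else z) / ay
    else:                          # +Z or -Z face
        face = 4 if z > 0 else 5
        u, v = (x if z > 0 else -x) / az, y / az
    return face, u, v

# Walk along a great circle that crosses from the +X face onto the +Z face.
prev_face = None
for t in np.linspace(0.6, 0.95, 8):
    d = np.array([np.cos(t), 0.3, np.sin(t)])
    d /= np.linalg.norm(d)
    face, u, v = cubemap_face_uv(d)
    if prev_face is not None and face != prev_face:
        print("-- crossed a seam: the (u, v) track kinks here --")
    print(f"t={t:.2f}  face={face}  u={u:+.3f}  v={v:+.3f}")
    prev_face = face
```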

Edited by K^2
Link to comment
Share on other sites

On 4/15/2021 at 10:50 PM, Dientus said:

Well, do you think an icosphere would increase the need for processing power by much? It is using the Unity engine

With modern Unity (Jobs + Burst + the new Mesh API) it shouldn't be a problem on the CPU either. I saw an example of a coin animation running at 60 fps (in the editor) where a new mesh with over 100,000 quads was created every frame.

Link to comment
Share on other sites
