eggrobin

[WIP][1.5.1, 1.6.1, 1.7.x] Principia—version Ferrari, released 2019-08-01—n-Body and Extended Body Gravitation, axial tilt


Indeed, no GR here.

GR + KMP would melt my brain.... "Has he been here? Is he going to be here? Will he be here, as I'm seeing him now, if I travel to the future, even though he has already been here from my point of view?"


Aiiee, wait, what, Jeb is here, but he's also there .. but .. ?!?

My brain would explode, but that's actually pretty close to what having a two-year-old in the house feels like, so I guess I'm used to it. :D


By the way, I have coded up a data-parallel version in F# of the NDSolve algorithm you (OP) referenced. I haven't run any real performance tests, in particular in a KSP context, but I could probably use test cases equivalent to yours to see if there are any gains from a straightforward parallel-array solution. I'm using .NET Async<T> and F# Array.Parallel, so I'm not sure about the amount of marshalling and such going on...

Thanks again for all the excellent references, my continued reading has made me something of a H. Yoshida fan :D

This one had an interesting bit about softening: Symplectic integrators and their application to dynamical astronomy (Kinos-h-i-ta, H., Yoshida, H., & Nakai, H.)

(forum does not like romaji)

I'll try putting some tests together and see if I can integrate (uh, the software kind) your KSP stuff with my solution. I'm not expecting all that much, but I've purposely written it so as to be easily convertible to OpenCL kernels. I have doubts about using the .NET thread pool inside the Unity process; it might just degenerate to context swapping...

Edited by SSR Kermit
forum context

Aiiee, wait, what, Jeb is here, but he's also there .. but .. ?!?

My brain would explode, but that's actually pretty close to what having a two-year-old in the house feels like, so I guess I'm used to it. :D

You could accelerate to relativistic speeds, then slow down and go back. If you calculated your trajectory correctly, your two-year-old would be old enough not to make your brain explode, while you would have aged significantly less than he has.

By the way, I have coded up a data-parallel version in F# of the NDSolve algorithm you (OP) referenced. I haven't run any real performance tests, in particular in a KSP context, but I could probably use test cases equivalent to yours to see if there are any gains from a straightforward parallel-array solution. I'm using .NET Async<T> and F# Array.Parallel, so I'm not sure about the amount of marshalling and such going on...

A data-parallel integrator? That sounds interesting! Could you put up the source on GitHub?

Thanks again for all the excellent references, my continued reading has made me something of a H. Yoshida fan :D

You're welcome! I should get around to writing a second explanatory post at some point... There is a lot of overlap with my answers to your questions, so that much work is already done.

This one had an interesting bit about softening: Symplectic integrators and their application to dynamical astronomy (Kinos-h-i-ta, H., Yoshida, H., & Nakai, H.)

(forum does not like romaji)

You can add a cyrillic character: Kinoshіta.

Since you're the category theorist in the room, what do you think of the way the Geometry assembly (now called CTSGeometry) is structured? I'll be rewriting it in C++/CLI to integrate (this word is too polysemic) physical quantities, but nothing fundamental should change.

Edited by eggrobin

A data-parallel integrator? That sounds interesting! Could you put up the source on GitHub?

I will be putting it up eventually, but it's an undocumented hard-coded hacking bonanza as it is :)

Data parallel in this context is pretty simplistic, it's simply the observation that integration is partitioned with a single dependency across the computational front; it's independent in the number of objects and components. The relevant part looks like this:

while (t < tmax) do
    let ΔqStage = ref (Array.zeroCreate hypDim)
    let ΔpStage = ref (Array.zeroCreate hypDim)
    for i = 1 to stages do
        let f = force(ks.kBodies())
        // closures (over immutable references to internal structures)
        let ap = fun ix e ->
            let e' = e + Δt * b.[i-1] * f.[ix]
            p.[ix] <- pPrev.[ix] + e'
            e'
        let aq = fun ix e ->
            let e' = e + Δt * a.[i-1] * p.[ix]
            q.[ix] <- qPrev.[ix] + e'
            e'
        // note the dependency of q on p; this is the computational front
        ΔpStage := Array.Parallel.mapi ap !ΔpStage
        ΔqStage := Array.Parallel.mapi aq !ΔqStage
    // parallel compensated summation, independent in the "phase space hypervector",
    // i.e. for every component
    let sp = fun ix e ->
        let Δp = e + pErr.[ix]
        p.[ix] <- pPrev.[ix] + Δp
        pErr.[ix] <- (pPrev.[ix] - p.[ix]) + Δp
        pPrev.[ix] <- p.[ix]
    let sq = fun ix e ->
        let Δq = e + qErr.[ix]
        q.[ix] <- qPrev.[ix] + Δq
        qErr.[ix] <- (qPrev.[ix] - q.[ix]) + Δq
        qPrev.[ix] <- q.[ix]
    [ async { Array.Parallel.iteri sp !ΔpStage };
      async { Array.Parallel.iteri sq !ΔqStage }
    ] |> Async.Parallel |> Async.Ignore |> Async.RunSynchronously
    // time calculations (not async to avoid ref clutter, little gain)
    δt <- Δt + tErr
    t <- t + δt
    tErr <- (tPrev - t) + δt
    tPrev <- t

That being said, simplistic as it is, profiling reveals that 30% of the CPU time is spent in parallel computations (force calculations being the major contributor to the rest), so there is a significant gain in speed at the expense of eating lots of RAM. It's really a great fit for GPGPU! Using octrees and FMM, the force calculations can be parallelized in a similar fashion, but then we're talking tree codes!

My next step is to formalize the above to a polymorphic parallel vector/matrix/array library to clean up the syntax and hide as much as possible of the asynchronous primitives.

Since you're the category theorist in the room, what do you think of the way the Geometry assembly (now called CTSGeometry) is structured? I'll be rewriting it in C++/CLI to integrate (this word is too polysemic) physical quantities, but nothing fundamental should change.

I'm mostly impressed you have the patience for that working in C# :D

Honestly, I don't know much about the interface towards Unity/KSP, so I can't say whether the structure is suitable or not... From a general standpoint it's what I would expect, and that's usually a good sign with code.

I think any implementation of abstract algebra will only show its efficacy when you start using it (eh, that's another of those "did I just say that") -- I usually do that in typed functional languages, since metaprogramming is more or less inherent to that style. Much of what you are doing would probably be a lot more concise in an ML-style language (Haskell, F#/OCaml). The extreme is Haskell's type classes, which let you embed categories directly in the category of types -- operators (e.g. ∧) get their expected properties through simple instance declarations of the underlying types with minimal boilerplate coding. Once you've coded the type classes, of course, but that's a lot less typing than OOP classes!

F# measurement types work pretty well, but not so much when you do things like n^(3/2)... are they the same as the ones accessible from C++, I wonder?

Edited by SSR Kermit

I will be putting it up eventually, but it's an undocumented hard-coded hacking bonanza as it is :)

Data parallel in this context is pretty simplistic, it's simply the observation that integration is partitioned with a single dependency across the computational front; it's independent in the number of objects and components. The relevant part looks like this:

while (t < tmax) do
    let ΔqStage = ref (Array.zeroCreate hypDim)
    let ΔpStage = ref (Array.zeroCreate hypDim)
    for i = 1 to stages do
        let f = force(ks.kBodies())
        // closures (over immutable references to internal structures)
        let ap = fun ix e ->
            let e' = e + Δt * b.[i-1] * f.[ix]
            p.[ix] <- pPrev.[ix] + e'
            e'
        let aq = fun ix e ->
            let e' = e + Δt * a.[i-1] * p.[ix]
            q.[ix] <- qPrev.[ix] + e'
            e'
        // note the dependency of q on p; this is the computational front
        ΔpStage := Array.Parallel.mapi ap !ΔpStage
        ΔqStage := Array.Parallel.mapi aq !ΔqStage
    // parallel compensated summation, independent in the "phase space hypervector",
    // i.e. for every component
    let sp = fun ix e ->
        let Δp = e + pErr.[ix]
        p.[ix] <- pPrev.[ix] + Δp
        pErr.[ix] <- (pPrev.[ix] - p.[ix]) + Δp
        pPrev.[ix] <- p.[ix]
    let sq = fun ix e ->
        let Δq = e + qErr.[ix]
        q.[ix] <- qPrev.[ix] + Δq
        qErr.[ix] <- (qPrev.[ix] - q.[ix]) + Δq
        qPrev.[ix] <- q.[ix]
    [ async { Array.Parallel.iteri sp !ΔpStage };
      async { Array.Parallel.iteri sq !ΔqStage }
    ] |> Async.Parallel |> Async.Ignore |> Async.RunSynchronously
    // time calculations (not async to avoid ref clutter, little gain)
    δt <- Δt + tErr
    t <- t + δt
    tErr <- (tPrev - t) + δt
    tPrev <- t

That being said, simplistic as it is, profiling reveals that 30% of the CPU time is spent in parallel computations (force calculations being the major contributor to the rest), so there is a significant gain in speed at the expense of eating lots of RAM. It's really a great fit for GPGPU! Using octrees and FMM, the force calculations can be parallelized in a similar fashion, but then we're talking tree codes!

Neat! Force computation really takes most of the time here (it's a division and a square root for every pair of bodies), so if you can parallelise that, things speed up significantly.

I'm mostly impressed you have the patience for that working in C# :D

I'm translating it to C++ now. At least I'll have header files so I can have an idea of how my types work without skimming over dozens of lines of implementation. I miss the package specifications of Ada...

I think any implementation of abstract algebra will only show its efficacy when you start using it (eh, that's another of those "did I just say that") -- I usually do that in typed functional languages, since metaprogramming is more or less inherent to that style. Much of what you are doing would probably be a lot more concise in an ML-style language (Haskell, F#/OCaml). The extreme is Haskell's type classes, which let you embed categories directly in the category of types -- operators (e.g. ∧) get their expected properties through simple instance declarations of the underlying types with minimal boilerplate coding. Once you've coded the type classes, of course, but that's a lot less typing than OOP classes!

F# measurement types work pretty well, but not so much when you do things like n^(3/2)... are they the same as the ones accessible from C++, I wonder?

I'm not very familiar with the languages you mention. It's nice that there is some support for units in F#, though it's more focused on units than on quantities; it's not quite coordinate-free enough :).

C++ doesn't really have native support for that, but you can make your own and it's really powerful. You can take a look at my physical quantities here; note that this can't be done in C# (or other CLR languages) because I need templates:

https://github.com/eggrobin/Principia/blob/master/PhysicalQuantities/PhysicalQuantities.h

https://github.com/eggrobin/Principia/blob/master/PhysicalQuantities/NamedQuantities.h

https://github.com/eggrobin/Principia/blob/master/PhysicalQuantities/Units.h

https://github.com/eggrobin/Principia/blob/master/PhysicalQuantities/Constants.h

Usage (surprisingly intuitive):

https://github.com/eggrobin/Principia/blob/master/PhysicalQuantities/StaticTest.cpp

Of course, this really needs to be wrapped inside some affine spaces for stuff like absolute temperature/pressure/anything, but as elements of 1-dimensional vector spaces (multiplication of physical quantities is a tensor product, see [Tao 2012]) they work well.

EDIT:

I looked up type classes, and C++ templates are sort-of-kind-of related to that (actually type classes are constraints on template parameters). They wanted to add that to C++11, but it didn't work out; maybe it'll come in a future standard.

Note that this would not add something fundamental on top of templates, it would just make specifications and compiler errors easier to read (there's concept maps and axioms on top of that, but it's another story); if you look at a C++ template, say


template<typename T> void Foo(T);

you can then call Foo(x) for all x whose type has all the operations used by the implementation of Foo, i.e., whose type is in the type class corresponding to these operations. So C++ has type classes, they're just implicit. I guess you could use static asserts to make them more explicit, but that's ugly.

EDIT:

I forgot to mention that my implementation of physical quantities is strongly influenced by [Kenniston 2002], with some differences in design (Dimensionless and double are not the same thing, there is a Unit type).

Edited by eggrobin

Will you add something like this? ;-)


const Unit<Length> Meter = Metre;

This is not the standard spelling as defined by the BIPM. :) I might add it in a separate AmericanSpellings.h file though.

Also, did I seriously write

const T Foo;

everywhere? This should be

T const Foo;

otherwise I'll go insane if I have to deal with pointers... *Opens Visual Studio*


I miss the package specifications of Ada...

I'm looking at your indicated birth date and in that context that statement doesn't make a whole lot of sense :P

Most universities in the west had stopped teaching Ada by 2000, but I had picked it up as a preparation for Uni the years prior. I miss Ada too, but at least I have VHDL....

It's nice that there is some support for units in F#, though it's more focused on units than on quantities; it's not quite coordinate-free enough :).

Which is precisely why I didn't use them in my code; I had to cast everything anyway in the end.

That looks more or less exactly like what I was going for, but I found F#'s units of measure lacking. I would only want Dimensionless to be polymorphic over primitive and integral (there's that word again) types with simple values, such as complex numbers, but I think that would be difficult in a language without type inference; at the least it would make parameters unwieldy with massive template tags, and that kind of defeats the purpose.

I looked up type classes, and C++ templates are sort-of-kind-of related to that (actually type classes are constraints on template parameters). They wanted to add that to C++11, but it didn't work out; maybe it'll come in a future standard.

Note that this would not add something fundamental on top of templates, it would just make specifications and compiler errors easier to read (there's concept maps and axioms on top of that, but it's another story); if you look at a C++ template.

There are some important differences, the relationship between the proposed C++11 Concepts and Haskell-style type classes is something like a catamorphism, which in this case roughly says you can express one in the other under a simple reduction/expansion (a 'fold'). The following illustrates it rather nicely:

Haskell:

class Functor f where
    fmap :: (a -> b) -> f a -> f b

instance Functor List where
    fmap f Nil = Nil
    fmap f (Cons a as) = Cons (f a) (fmap f as)

C++11 Draft:

concept Functor<template<typename> class F> {
    template<typename A, typename B>
    function1<F<A>, F<B>> fmap(function1<A, B>);
}

template<typename T>
class List {
    /* some implementation */
};

concept_map Functor<List> {
    template<typename A, typename B>
    function1<List<A>, List<B>> fmap(function1<A, B> f) { /* some more implementation */ }
};

concept TypeConstructor<typename F> {
    template<typename T>
    class Rebind;
};

concept Functor<TypeConstructor F> {
    template<typename A, typename B>
    function1<F::Rebind<A>, F::Rebind<B>> fmap(function1<A, B>);
}

The C++ version has some technical issues with genericity, but even if it worked perfectly I think you can see why I prefer the inferred version in Haskell (which uses all models to simplify unification, where C++ only considers models when the parameter is evaluated; i.e. inference vs. checking).

EDIT: I have heard claims of correspondence between Concepts and type classes, but to accept that I would have to qualify it as "correspondence under catamorphism", which is a statement at most as strong as correlation... Purely a formal standpoint, but I think it captures the structural difference in an important way. As is to be expected from someone into category theory, I suppose.

EDIT2: OMG, I just realised how incredibly meta that was of me, giving fmap as an example of typing fold in higher kinds (I just described 'fmap fmap fmap' on functors!)

Edited by SSR Kermit

I'm looking at your indicated birth date and in that context that statement doesn't make a whole lot of sense :P

As mentioned a while ago on this thread:

That's the second language I learned (after VB6 :P).

I actually have a copy of Programmer en Ada 95 (a translation of John Barnes's Programming in Ada 95, I didn't speak English at the time) dated 2002.

This may be related to the fact that my father was chairman of the ARG until 2007 (he's actually the one who proposed Ada.Numerics.π (Ada Issue 388) in order to force tools to correctly support Unicode identifiers).

That looks more or less exactly like what I was going for but found the measurements of F# to be lacking in. I would only want Dimensionless to be polymorphic over primitive and integral (there's that word again) types with simple values, such as complex numbers, but I think that would be difficult in a language without type inference; at least it would make parameters unwieldy with massive template tags and that kind of defeats the purpose.

Actually, I might want to allow homemade quads as the underlying type in the future, as mentioned earlier on the thread.

I'll see whether I want to make that a template parameter (I don't think it would be that unwieldy, boost does that for its quantities library, though it's admittedly not a paragon of legibility) or go with quads everywhere. For the moment I'm fine with double.

There are some important differences, the relationship between the proposed C++11 Concepts and Haskell-style type classes is something like a catamorphism, which in this case roughly says you can express one in the other under a simple reduction/expansion (a 'fold').

It's not really surprising that Haskell is better at category theory than C++ :). Also, how many kinds of morphisms are there in category theory? Let me make up one: erythromorphism: a morphism written by a teaching assistant.

Edited by eggrobin

As mentioned a while ago on this thread:

I actually have a copy of Programmer en Ada 95 (a translation of John Barnes's Programming in Ada 95, I didn't speak English at the time) dated 2002.

This may be related to the fact that my father was chairman of the ARG at the time (he's actually the one who proposed Ada.Numerics.π (Ada Issue 388) in order to force tools to correctly support Unicode identifiers).

Now it does make sense, including the Unicode bit! I thought I had read the entire thread (as I have a habit of doing: see my post count for the consequence), but I must have skimmed parts.

Also, how many kinds of morphisms are there in category theory? Let me make up one: erythromorphism: a morphism written by a teaching assistant.

There are at least as many morphisms in the category of categories as there are morphisms in the category of syntax (where all morphisms are automorphisms), so that's א3 or something ridiculous like that. Not that it means anything, even in the abstract sense :S

so that's א3 or something ridiculous like that. Not that it means anything, even in the abstract sense :S

It took me a while to figure out that this said alef 3 rather than 3^alef. Bidirectional text is fun...

You should use ALEF SYMBOL (U+2135) rather than HEBREW LETTER ALEF (U+05D0).

ℵ3

I would insert a right-to-left override just for the fun of it, but the fora turn that into a *. :)


Was playing around with this last night. I built it from your GitHub source on Linux, as getting everything needed to do it on Windows looked like a huge inconvenience, besides being expensive. Outside of the stated problem of having to switch back and forth between Keplerian and Newtonian physics for burns, I think it's very playable. The only repeatable bug I ran into: if you're not in a stable orbit and you switch to Newtonian physics and accelerate time, you instantly blow up, throwing pieces everywhere at a rather incredible speed. But as I am just playing with source code and not anything released, I have no clue whether any of the functions were working as planned; I just wanted to tell you I played with it and had fun :).

Also, I almost forgot: I saw little to no difference in the game's speed, though I didn't have very many flights in progress, so I'm not seeing a big performance hit with just the planets and about 10 flights in progress.

Edited by toril
performance update

This is very cool, I'm looking forward to seeing where this goes. I suppose I should try and answer this question:

It should be able to predict, but there are serious caveats. If the vehicle enters in a stable aerodynamic configuration, the results should be predicted fairly accurately, though there might be some errors due to initial transient effects on atmospheric entry. However, if the vehicle is unstable, the results will be completely inaccurate; unstable aerodynamic configurations tend to follow highly chaotic motion, which makes them essentially unpredictable due to how much minor errors can affect the trajectory. This means that there needs to be a way to determine whether the predicted trajectory is that of a stable or unstable vehicle; I suspect the best way is to search for sudden changes in the orientation of the vehicle and color-code everything after that to designate the degree of chaos in the trajectory. If that also accounts for stability induced by a large amount of spin-stabilization, it would provide a fairly accurate way to estimate the effects of aerobraking.

As for the time to do this... it will likely be slow, since you'd have to simulate it part-by-part for best results. A faster (but less accurate) solution would be to calculate the vehicle's properties as best you can at the hypersonic limit and use those, which works for Mach numbers > ~5 or so. This will work for most aerobraking in stock KSP and will certainly work for RSS.

Alternatively, given that this is a game, you could just use a random number generator to pump a chaotic oscillator and feed that into the third integral.


No, I think that won't do. Principia is all about precise simulation and realism; the "this is a game" mentality doesn't really apply here. I think that Ferram's idea of calculating aerodynamics for high Mach numbers would do; the model shouldn't diverge much from this at lower Mach numbers if the vehicle is simple (say, a reentering capsule), and if it's not (a spaceplane), you probably have some degree of aerodynamic control at that point anyway.


I uh, yes. Hmm... I think my brain leaked out of my nose, reading the OP. I salute your maths, and will now run very, very far away.


ialdabaoth: The "this is a game" consideration is not absent from my thoughts in general, but ferram4 was answering a question about predicting what the result of aerobraking will be or what the landing site after reentry will be. I'm not in charge of simulating low-altitude atmospheric flight (ferram4 does that :)), so if I want to predict it I have to be accurate, I can't just make something up.

I uh, yes. Hmm... I think my brain leaked out of my nose.

You might want to consult a physician about that, I don't think that is entirely normal. :)


Classic case of sudden algebra exposure. :) I had that too, upon returning to my Uni after 3 weeks of sick leave... The only way to avoid that is to build up tolerance by starting with something easy (basic Boolean logic is a good start) and working your way up. :)


Well, since FAR can already chart L/D ratios as a function of AoA and speed, could you try plotting three trajectories beginning at the entry interface? "MAX LIFT UP" / "ZERO LIFT" / "MAX LIFT DOWN", maybe with dotted/red lines if instability or excessive loads are detected. This assumes that the craft is controlled all the way, of course, but I don't think that predictions of uncontrolled reentries of unstable things would be of much interest.

No, no, better yet: plot a TARGET ELLIPSE on the planet! a red dot at the zero lift IP and the boundary at the maximum reachable distances (max crossrange, max/min downrange and mixes). Doesn't have to be realtime, one update every 10 seconds would be fine. It would be <robbaz>GLORIOUS</robbaz>


thorfinn:

Lots of good ideas here!

Doesn't have to be realtime, one update every 10 seconds would be fine.

This is very true, it also applies to most of the dynamic integration (for medium-acceleration burns, updating the predicted trajectory every couple of seconds rather than every few frames would save a lot of resources and still look good I think).

It's been a while since I've done a

Status update:

The Quantities library (C++) is pretty much done and tested.

I am currently porting the C# Geometry assembly mentioned previously (now called CTSGeometry) to C++ in order to enable its use with these quantities.


Ferram says that future versions of FAR could dump tables of aerodynamic coefficients vs. various parameters to a text file; for now this happens at "compile" time in the VAB, and I've asked him whether the tables could be recreated in flight on demand.

Do you think that could be enough to integrate flight trajectories under the simplifying assumptions made above (i.e. following zero lift, maximum L/D, or some other extremal point)?

Will N-body allow for small ships to orbit a far-out asteroid?

The question is irrelevant. Remember the formula for speed (in a circular orbit)?

v = √(GM/r)

Plug in the numbers for a 5000-ton asteroid (and that's a fairly big one, as I understand) and you'll get GM = 3.337×10⁻⁴ m³/s². Now, assume a close orbit of, say, 100 m; your orbital speed will be 0.0018 m/s. One sneeze will give you escape velocity, basically.

