
[WIP][1.8.1, 1.9.1, 1.10.1, 1.11.0–2, 1.12.2–5] Principia—version Kronecker, released 2024-11-01—n-Body and Extended Body Gravitation


eggrobin


Why can't you calculate acceleration mechanically? I mean, calculate the vessel's PMI, then get all engines' thrust vectors in local space (i.e. vessel coordinates) and calculate both rotational and translational accelerations? It doesn't sound too complicated to me... It won't be that simple for predicting maneuvers, though, since you have to take into account what the control system would do (i.e. thrust vectoring, CMGs, RCS), but again you could just assume neutral control along the maneuver vector (like the existing system does).


Why can't you calculate acceleration mechanically? I mean, calculate the vessel's PMI, then get all engines' thrust vectors in local space (i.e. vessel coordinates) and calculate both rotational and translational accelerations? It doesn't sound too complicated to me... It won't be that simple for predicting maneuvers, though, since you have to take into account what the control system would do (i.e. thrust vectoring, CMGs, RCS), but again you could just assume neutral control along the maneuver vector (like the existing system does).

What's a PMI? EDIT: Oh, it's just the eigenvalues (the principal moments) of the inertia symmetric bilinear form. Engineers have weird naming conventions when it comes to mathematics. :P

My original intention for off-rails was to derive the proper acceleration of the vessel from first principles by talking to the engines, as you say, but as discussed earlier, there are lots of sources of proper acceleration: besides engines, you have decouplers, collisions/getting out and pushing, engine exhaust, explosions, etc. Computing the acceleration from first principles therefore entails knowing everything that can exert thrust; this would break compatibility and would effectively lead me to writing my own physics engine. Since that is definitely beyond the scope of this mod, I have to grab the acceleration from Unity. As Unity is bad at physics (and the KSP layer is pretty lousy too), I will have to postprocess things.
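
To make "postprocess" a bit more concrete, here is roughly what I have in mind, as a hedged sketch with made-up types and names (not the mod's actual code): finite-difference the velocity reported by the physics engine over one tick and subtract the gravitational (geometric) acceleration that we compute ourselves from the point masses; whatever is left is attributed to proper acceleration.

struct V3 {
  public double X, Y, Z;
  public V3(double x, double y, double z) { X = x; Y = y; Z = z; }
  public static V3 operator -(V3 a, V3 b) { return new V3(a.X - b.X, a.Y - b.Y, a.Z - b.Z); }
  public static V3 operator /(V3 a, double s) { return new V3(a.X / s, a.Y / s, a.Z / s); }
}

static class ProperAccelerationEstimator {
  // previousVelocity and currentVelocity are inertial velocities one physics tick apart;
  // gravitationalAcceleration is Σᵢ G Mᵢ (rᵢ − r)/|rᵢ − r|³, which we evaluate ourselves.
  public static V3 Estimate(V3 previousVelocity, V3 currentVelocity,
                            V3 gravitationalAcceleration, double dt) {
    V3 totalAcceleration = (currentVelocity - previousVelocity) / dt;
    return totalAcceleration - gravitationalAcceleration;
  }
}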

Edited by eggrobin

Status update:

I have successfully implemented the integration of proper acceleration for the active vessel when off-rails.

Further experimentation on proper acceleration has led me to the following conclusions:

  • There is a bias in proper acceleration coming from some improperly initialised variable in KSP. Indeed, when loading a vessel in LKO, I observe a strong bias in proper acceleration (~20 mm s⁻²). This bias is observed independently of the way proper acceleration is computed (differentiating position twice, differentiating any of the numerous velocities, etc.) and geometric accelerations have been checked from first principles (the difference in geometric acceleration depending on the point of the vessel it is computed at is negligible). The bias is reduced to below 1 mm s⁻² when warping then coming out of warp. It should be noted that the strong bias is not seen in Vessel.perturbation, but Vessel.perturbation consistently has a bias of 4 mm s⁻². As I have attempted to compute the proper acceleration in many different ways and all were consistent with each other and inconsistent with Vessel.perturbation, I assume Vessel.perturbation is mostly nonsense.
  • Accelerations below 1 mm s⁻² are biased, unavoidable, unusable in stock, and should be clamped to 0. The acceleration from low-thrust engines will have to be fetched by hand.
  • It had previously been mentioned that spinning the ship accelerates it. If spinning the ship with angular velocity ω produces a phantom acceleration a, then spinning it with angular velocity −ω produces a phantom acceleration −a. The direction of a does not seem to bear any relation to ω.
    EDIT: It seems a is either prograde or retrograde.

The number of astrophysicists here seems to have doubled recently.

Edited by eggrobin

How is the numerical integration handled as far as units? Is the numerical integration done directly on the in-game units, or is there a unit conversion done to allow for speedier calculations? Are you tracking energy and momentum conservation, and are you finding that the SPRK is handling orbits well? I see you have referenced the Chambers paper; have you looked at possibly using the Bulirsch–Stoer algorithm for handling close encounters with other bodies, aka inside SoI? In my experience the Bulirsch–Stoer symplectic integrator introduces less error when close to other bodies than an RK integrator, but the downside is that it can be slow because of step-size prediction.


Welcome to the fora!

Caveat lector: I am neither an astrophysicist nor a numerical analyst; I am an undergraduate student in mathematics whose only experience in numerics stems from taking a couple of courses on numerical mathematics, so the statements below reflect my incompetence.

How is the numerical integration handled as far as units? Is the numerical integration done directly on the in-game units, or is there a unit conversion done to allow for speedier calculations?

I fail to understand how unit conversion would speed up the calculations. They are currently done in an inertial reference frame whose origin lies near the Sun. The units are those used by KSP (m for q, m/s for v). Should I switch to furlongs and furlongs per fortnight? :P The reference frame being nonrotating, it is necessary to rotate the result of the integration to use it in KSP, whose reference frame sometimes rotates.
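
Concretely, that conversion is just a rotation. A minimal sketch (reusing the V3 struct from the earlier sketch and assuming, for illustration, a frame rotating at a constant rate Ω about the z axis; this is not the KSP API):

static class FrameConversion {
  // Express a position integrated in the nonrotating frame in a frame that has rotated
  // by an angle Ωt about the z axis since the epoch, i.e. apply the rotation R(−Ωt).
  public static V3 ToRotatingFrame(V3 inertialPosition, double angularRate, double t) {
    double angle = angularRate * t;
    double c = System.Math.Cos(angle), s = System.Math.Sin(angle);
    return new V3(c * inertialPosition.X + s * inertialPosition.Y,
                  -s * inertialPosition.X + c * inertialPosition.Y,
                  inertialPosition.Z);
  }
}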

Are you tracking energy and momentum conservation, and are you finding that the SPRK is handling orbits well?

My experiments on simulating the Alternis Kerbol system with a satellite in LKO seemed to show that it performed well as far as errors in H, p and L are concerned (the results were similar for reasonably larger timesteps; fifth-order convergence was observed). In the current prototype, the timestep is such that the error should be no more than a few unit roundoffs anyway.
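
For reference, these are the quantities whose relative drift I monitor over a run; a sketch with a hypothetical Body type (not the mod's code):

struct Body { public double Mass; public double QX, QY, QZ, VX, VY, VZ; }

static class ConservationMonitor {
  // Total energy H = T + V, linear momentum p and angular momentum L about the origin,
  // for point masses under Newtonian gravity.
  public static void ConservedQuantities(Body[] b, double G,
                                         out double H, out double[] p, out double[] L) {
    H = 0;
    p = new double[3];
    L = new double[3];
    for (int i = 0; i < b.Length; ++i) {
      double v2 = b[i].VX * b[i].VX + b[i].VY * b[i].VY + b[i].VZ * b[i].VZ;
      H += 0.5 * b[i].Mass * v2;
      p[0] += b[i].Mass * b[i].VX;
      p[1] += b[i].Mass * b[i].VY;
      p[2] += b[i].Mass * b[i].VZ;
      L[0] += b[i].Mass * (b[i].QY * b[i].VZ - b[i].QZ * b[i].VY);
      L[1] += b[i].Mass * (b[i].QZ * b[i].VX - b[i].QX * b[i].VZ);
      L[2] += b[i].Mass * (b[i].QX * b[i].VY - b[i].QY * b[i].VX);
      for (int j = i + 1; j < b.Length; ++j) {
        double dx = b[i].QX - b[j].QX, dy = b[i].QY - b[j].QY, dz = b[i].QZ - b[j].QZ;
        H -= G * b[i].Mass * b[j].Mass / System.Math.Sqrt(dx * dx + dy * dy + dz * dz);
      }
    }
  }
}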

I see you have referenced the Chambers paper; have you looked at possibly using the Bulirsch–Stoer algorithm for handling close encounters with other bodies, aka inside SoI? In my experience the Bulirsch–Stoer symplectic integrator introduces less error when close to other bodies than an RK integrator, but the downside is that it can be slow because of step-size prediction.

I have not implemented the smarter integrators yet, but this will be the next step after fixing a few bugs with the moving around of off-rails vessels and a bit of refactoring (the last thing I want is my codebase turning into a mess of spaghetti code before it is even playable).

I'll start by implementing the Saha & Tremaine stuff (individual timesteps and separation of the Hamiltonian into H = (T + V_Sun) + V_interaction rather than H = T + V), generalising it to higher orders, and then I'll see whether it is good enough at the timesteps I will use.
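
Schematically (glossing over the heliocentric-coordinate subtleties of the actual Saha & Tremaine scheme), the splitting groups the terms as

\[
  H = \underbrace{\sum_i \frac{\lVert \mathbf{p}_i \rVert^2}{2 m_i}
      - \sum_i \frac{G M_\odot m_i}{\lVert \mathbf{q}_i - \mathbf{q}_\odot \rVert}}_{T + V_\text{Sun}}
      \underbrace{\; - \sum_{i<j} \frac{G m_i m_j}{\lVert \mathbf{q}_i - \mathbf{q}_j \rVert}}_{V_\text{interaction}},
\]

the point being that V_interaction is smaller than V_Sun by roughly the planet-to-star mass ratio, so the commutator terms in the error of the splitting are correspondingly smaller than for the naive T + V separation.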

If the Saha & Tremaine integrator is not good enough (I suspect this will only be a problem for the dynamic integration; the main integration is almost solved by the current SPRK integrator already, so I'll be able to use comparatively small timesteps with Saha & Tremaine), I'll look into the Chambers paper.

As for the choice of close-encounter integrator, I think I'll go with the usual method of implementing a few integrators and seeing how they behave in-game.

Edited by eggrobin

I started working on some more advanced orbital simulations a few weeks ago. It's nothing quite so ambitious or well-organized as this project and I don't work on it often, so I'm excited to see where this goes. Best of luck to you, eggrobin! I'll be keeping an eye on this.


Sorry if this has been brought up before, I haven't read the whole thread. I just saw that you were writing your own integrators and wanted to make sure you were aware of the work done by Sverre Aarseth, specifically the NBODY libraries. NBODY6 has a version that makes use of the GPU for the calculations. I haven't been able to locate NBODY7, of which the Wikipedia page speaks, though. Good luck!


Sorry if this has been brought up before, I haven't read the whole thread. I just saw that you were writing your own integrators and wanted to make sure you were aware of the work done by Sverre Aarseth, specifically the NBODY libraries. NBODY6 has a version that makes use of the GPU for the calculations. I haven't been able to locate NBODY7, of which the Wikipedia page speaks, though. Good luck!

Well, I don't really feel like interfacing with FORTRAN; I want to add some effects beyond point-mass gravitation (thrust, drag, etc.), and writing the integrators is the fun part. It seems it is the easy part too (it's just maths, physics and numerics; the rest involves writing untestable code to interface with an undocumented, buggy and ill-designed API). Speaking of which,

Status update:

It turns out I have trouble properly setting the position when off-rails (it is hard to find out where the reference frame actually is).

However, there is some nicer news: I seem to have found the source of the phantom acceleration bias (not the one arising from rotation, but the bias that is removed by timewarping). The floating origin sometimes floats several km away from the ship, so that's probably just floating-point inaccuracies (the usual Kraken). If that hypothesis turns out to be true, this particular acceleration bias will be easy to fix: just reset the floating origin often enough.

Edited by eggrobin

The floating origin sometimes floats several km away from the ship, so that's probably just floating-point inaccuracies (the usual Kraken).

I don't know much about these things and this might be unrelated to what you are doing, but I'm pretty sure I remember Harvester saying the floating origin is 'reset' to the position of the ship every 6 km or so (so within that 6 km range the ship does move relative to the floating origin).


I don't know much about these things and this might be unrelated to what you are doing, but I'm pretty sure I remember Harvester saying the floating origin is 'reset' to the position of the ship every 6 km or so (so within that 6 km range the ship does move relative to the floating origin).

If that is the case (and experiments show it could be the case) it's pretty silly. They cut physics off at 2.5 km from the ship due in part to floating point inaccuracies, but they allow the ship to be 6 km away from the origin. Frankly, there is no good reason for the ship not to always be at the origin.
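
For scale, a plain .NET illustration (nothing KSP-specific): Unity's physics works in single precision, and the spacing between adjacent floats 6 km from the origin is about half a millimetre, which is exactly the kind of granularity that shows up as phantom accelerations.

using System;

class FloatSpacing {
  static void Main() {
    float x = 6000f;
    int bits = BitConverter.ToInt32(BitConverter.GetBytes(x), 0);
    float next = BitConverter.ToSingle(BitConverter.GetBytes(bits + 1), 0);
    // Prints about 0.000488: roughly half a millimetre of position resolution at 6 km.
    Console.WriteLine(next - x);
  }
}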


They cut physics off at 2.5 km from the ship due in part to floating point inaccuracies[..]

I highly doubt that this 2.5 km had anything to do with floating-point inaccuracies. They had to choose an arbitrary distance to free up system resources (physics, part logic & rendering), and 2.5–3 km seems far enough to make sure the player isn't interacting with the other ship anymore.

Btw, there is a public property "threshold" in FloatingOrigin, so you might want to test whether a smaller value would improve accuracy.


I highly doubt that this 2.5 km had anything to do with floating-point inaccuracies. They had to choose an arbitrary distance to free up system resources (physics, part logic & rendering), and 2.5–3 km seems far enough to make sure the player isn't interacting with the other ship anymore.

Btw, there is a public property "threshold" in FloatingOrigin, so you might want to test whether a smaller value would improve accuracy.

Good points. There's also setOffset or whatever it's called, which would enable me to manually set the origin to the ship's position as often as possible (since all the physics takes place within a sphere, the only sensible location for the origin is at the center of this sphere).

Anyway, my main concern at the moment is setting the position correctly when off-rails. What I'm doing at the moment differs significantly from Unity's calculations (I'm diverging from Unity by 1 m after 10 s, so something's wrong).


Frankly, there is no good reason for the ship not to always be at the origin.

Krakensbane was a patch of an already existing system; I suspect they would have had to rewrite lots of things from scratch to keep the ship always at the origin.

Also, remember that it cuts in only above a certain speed. I don't know where it's defined; I tried to get into the KSP code a couple of times and never had the patience to really manage anything useful.

Oh, by the way, since you're rewriting dozens of things: if you happen to stumble on a way to rewrite the ASAS logic, please tell us (and tell Ferram before anyone else).


Regarding the Krakensbane activation speed, it seems to be around 700 m/s (as seen while playing with krakendrives: they all fail quite spectacularly when we use them to slow down below this speed).

One way to verify whether the Krakensbane code affects your calculations would be to see if you still experience the variation at orbital speeds under 700 m/s.

Edited by sgt_flyer

The Krakensbane code is managed in the Krakensbane class, and if you can grab a reference to the instance using some type of "find gameobject" function you might be able to set the velocity cutoff through the public variable MaxV. By default, it's 750 m/s relative to whatever frame you're in (rotating or inertial). I know because FAR used to freak out after hitting that velocity shortly after Krakensbane was implemented.

For the FloatingOrigin class, I wonder if setting the "continuous" variable to true would have the same effect as constantly resetting the origin position, if it isn't set to true already. I do recall that passing 6km used to cause the ship to jump around due to the floating origin shifting if it was slightly deformed, but that doesn't seem to happen anymore, so I suspect the floating origin is always moving with the ship now.


The Krakensbane code is managed in the Krakensbane class, and if you can grab a reference to the instance using some type of "find gameobject" function you might be able to set the velocity cutoff through the public variable MaxV. By default, it's 750 m/s relative to whatever frame you're in (rotating or inertial). I know because FAR used to freak out after hitting that velocity shortly after Krakensbane was implemented.

For the FloatingOrigin class, I wonder if setting the "continuous" variable to true would have the same effect as constantly resetting the origin position, if it isn't set to true already. I do recall that passing 6km used to cause the ship to jump around due to the floating origin shifting if it was slightly deformed, but that doesn't seem to happen anymore, so I suspect the floating origin is always moving with the ship now.

Well, it is easy to check that the floating origin isn't with the ship: just trace the ship's worldpos. One easily sees that the origin tends to be a few km away until you timewarp (it is properly reset by timewarp). I'm not having any issues with Krakensbane at the moment.

It seems the planets aren't quite where I put them though and this (or something else) is causing problems when trying to set the vessel's position when off-rails. This is driving me ever so slightly insane. :confused:


Hey!

I'm working on something similar, but more from a computational point of view than providing new gameplay. I'm using a Hermite integrator (pumped by a 6th-order Yoshida), and I'm using octrees and their duals to cluster distant sources (i.e., FMM); getting that to run in real time on a beefy graphics card should be possible, albeit not trivial... I'm working with OpenCL and F#, but I'd be very interested in any kind of visualization features in-game if/when you are ready to share the source.

If you haven't read it, this book is awesome:

Sverre J. Aarseth, Gravitational N-Body Simulations, Cambridge University Press, 2003.

Aarseth is something of a legend among stellar dynamicists and has devoted a long career to the N-body problem.

EDIT: Found the source on GitHub, good stuff! I applaud your "daring" use of Unicode!

Edited by SSR Kermit

I'm working on something similar, but more from a computational point of view than providing new gameplay. I'm using a Hermite integrator (pumped by a 6th-order Yoshida), and I'm using octrees and their duals to cluster distant sources (i.e., FMM); getting that to run in real time on a beefy graphics card should be possible, albeit not trivial... I'm working with OpenCL and F#, but I'd be very interested in any kind of visualization features in-game if/when you are ready to share the source.

For KSP, where you have about 20 massive bodies, FMM is probably not worth it. Real time is completely trivial; it's the 100_000_000× speed-up that you need for dynamic trajectory predictions that might be an issue. I don't intend to do GPGPU.

I'm not familiar with Hermite integrators; what is the idea behind those?

Note that you can write plugins for KSP entirely in F# if you want (or in C++/CLI; you need wrappers in one of the other three languages for VB.NET because of case insensitivity). It seems people aren't quite aware of that; I'll probably write a PSA about it at some point.

While testing that, I discovered that asmi was right when conjecturing that Unity doesn't support mixed-mode DLLs, so if I want to use native code (which would be really nice, since you lose vectorisation when using the CLR) I'll have to use DllImports and mess with marshalling by hand. On the upside, with this approach you know where the thunks are.
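
For the curious, the DllImport side of that looks roughly like this; the library name, entry point and signature below are hypothetical, stand-ins for whatever the C++ side would actually export:

using System.Runtime.InteropServices;

static class NativeIntegrator {
  // Hypothetical native entry point; arrays of doubles are blittable, so they marshal
  // as plain pointers without copying.
  [DllImport("principia_native", CallingConvention = CallingConvention.Cdecl)]
  public static extern void AdvanceState(double[] q, double[] v, int dimension, double dt);
}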

I applaud your "daring" use of Unicode!

Welcome to the nineties! :) Admittedly some languages lagged behind, like Ada, which only got Unicode identifiers in Ada 2005, but it compensates by having the following in Ada.Numerics:

π : constant := Pi;

EDIT

Also, it's been a while since I've done a

Status update:

I have started doing some refactoring, since the spaghettiness of the code was getting on my nerves.

I have written strongly typed wrappers for the numerous reference frames spawned by KSP (direct vs. indirect, rotating vs. inertial, world vs. body-centric, etc.) as I have had numerous bugs due to a misplaced xzy, rotation, translation, scaling, inertial force etc.

Of course, since KSP has the brilliant idea of using both direct and indirect reference frames, I needed distinct types for vectors, bivectors and trivectors (basically I had to strongly type Grassmann algebras; there can be no cross product, only wedge products, commutators, and left and right actions of alternating forms, the spaces of alternating forms being identified with the corresponding spaces of multivectors through the inner product).
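
A toy version of the idea (much simplified, and not the actual wrappers): separate types for vectors and bivectors, with a wedge product instead of a cross product, so that the compiler rejects mixing them up.

struct Vector   { public double X, Y, Z; }
struct Bivector { public double YZ, ZX, XY; }  // components on e₂∧e₃, e₃∧e₁, e₁∧e₂

static class Grassmann {
  // The wedge of two vectors is a bivector; its components are those of the familiar
  // cross product, but the result lives in a different type, as it should.
  public static Bivector Wedge(Vector a, Vector b) {
    return new Bivector {
      YZ = a.Y * b.Z - a.Z * b.Y,
      ZX = a.Z * b.X - a.X * b.Z,
      XY = a.X * b.Y - a.Y * b.X
    };
  }
}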

I do not think I will implement strong typing for physical quantities yet (though I'd like to), since C# generics are not powerful enough for that; I would need C++ templates. I'll do that when I rewrite things in C++/CLI.

The next step is to implement my own versor, since Unity's Quaternion uses single-precision floats and KSP's QuaternionD is broken.

The rest should be more straightforward refactoring.

Edited by eggrobin

For KSP, where you have about 20 massive bodies, FMM is probably not worth it. Real time is completely trivial; it's the 100_000_000× speed-up that you need for dynamic trajectory predictions that might be an issue.

You're quite right about multipole not being worth it for stock KSP; not only are we dealing with low N, but the system is very flat and the rotational aspect completely dominates phase space. On that note, 2D/4D clustering using volumetric 3D space partitioning (octrees) is just silly. If we define a system of, say, 5 gas giants in the 0.5 < M_Jup < 6 range and put more than an Earth mass of moons in orbit around each, plus trojans and spartans at the L-points, and add a considerable sphere of scattered objects, well, then FMM might be good :cool: (one can dream, right?)

I don't intend to do GPGPU.

Most sane people would not intend to do that. However, in my case, I think I have strong reasons to; but that's what all crazies say. If I can realise the very simple Yoshida scheme, a 6th-order integrator using seven force computations, and parallelize it to the (time) performance of a three-force-computation scheme, it will definitely be worth it. yo6 is essentially the leapfrog called in a pre-3,1,3-corr pattern, which hints at a parallel model. This is what I'm currently working on, but I'm worried about portability despite (or perhaps due to) the grandiose claims by Apple, AMD and nVidia regarding OpenCL.
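
To illustrate the "leapfrog called in a pattern" point at a lower order, here is a sketch in C# using the well-known 4th-order triple-jump coefficients (the seven yo6 coefficients are in Yoshida 1990); for brevity it re-evaluates the force at both ends of each substep instead of reusing the shared evaluation:

using System;

delegate double[] Acceleration(double[] q);

static class Composition {
  // One kick-drift-kick leapfrog step on the state (q, v).
  public static void Leapfrog(double[] q, double[] v, Acceleration a, double dt) {
    double[] a0 = a(q);
    for (int i = 0; i < q.Length; ++i) v[i] += 0.5 * dt * a0[i];
    for (int i = 0; i < q.Length; ++i) q[i] += dt * v[i];
    double[] a1 = a(q);
    for (int i = 0; i < q.Length; ++i) v[i] += 0.5 * dt * a1[i];
  }

  // Yoshida's 4th-order triple jump: three leapfrog substeps, the middle one backwards.
  public static void Yoshida4(double[] q, double[] v, Acceleration a, double dt) {
    double cbrt2 = Math.Pow(2, 1.0 / 3.0);
    double w1 = 1 / (2 - cbrt2);
    double w0 = -cbrt2 / (2 - cbrt2);
    Leapfrog(q, v, a, w1 * dt);
    Leapfrog(q, v, a, w0 * dt);
    Leapfrog(q, v, a, w1 * dt);
  }
}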

I'm not familiar with Hermite integrators; what is the idea behind those?

A good explanation is in volume 2, chapter 11 of a lengthy tutorial on the n-body problem that Hut and Makino wrote. The book was never finished, which is a shame, because it's awesome as an introduction for people from other disciplines.

In short, it's a generalization of the leapfrog with nice properties (such as 4th-order behaviour). I'm not sure on this one, but I believe there is a way to describe all these "leapfrog in higher orders" algorithms in terms of partitioned RK schemes, although they quite elegantly escape dependencies on the vectorized representation. Again, that might hint at interesting things: perhaps a general concurrency scheme for a class of partitioned RK algorithms, which would be pretty awesome. It's interesting to note how symplectic schemes could arise from a separate branch of reasoning that doesn't use a Hamiltonian; Aarseth has more on the history in his seminal 2003 work. I've seen Yoshida referenced everywhere in the literature, so I guess I'd better read up on Lie algebras and Baker–Campbell–Hausdorff in order to understand his work...
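
For reference, one step of the standard 4th-order Hermite scheme, as I recall the Makino–Aarseth formulation (a and ȧ are the acceleration and jerk, both of which have closed forms for point-mass gravity): predict with a Taylor series, evaluate a₁ and ȧ₁ at the prediction, then correct:

\[
\begin{aligned}
  \mathbf{x}_\mathrm{p} &= \mathbf{x}_0 + \mathbf{v}_0 \,\Delta t + \tfrac{1}{2} \mathbf{a}_0 \,\Delta t^2 + \tfrac{1}{6} \dot{\mathbf{a}}_0 \,\Delta t^3, &
  \mathbf{v}_\mathrm{p} &= \mathbf{v}_0 + \mathbf{a}_0 \,\Delta t + \tfrac{1}{2} \dot{\mathbf{a}}_0 \,\Delta t^2, \\
  \mathbf{v}_1 &= \mathbf{v}_0 + \tfrac{1}{2} (\mathbf{a}_0 + \mathbf{a}_1) \,\Delta t + \tfrac{1}{12} (\dot{\mathbf{a}}_0 - \dot{\mathbf{a}}_1) \,\Delta t^2, &
  \mathbf{x}_1 &= \mathbf{x}_0 + \tfrac{1}{2} (\mathbf{v}_0 + \mathbf{v}_1) \,\Delta t + \tfrac{1}{12} (\mathbf{a}_0 - \mathbf{a}_1) \,\Delta t^2.
\end{aligned}
\]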

[snip]

[...]

Welcome to the nineties! :) Admittedly some languages lagged behind, like Ada, which only got Unicode identifiers in Ada 2005, but it compensates by having the following in Ada.Numerics:

π : constant := Pi;

I think you'd be surprised at how many companies enforce strict C89 lexical standards in 2014, even if you're coding something like C# in a fully UTF-8 environment. I've even seen source control systems that reject check-ins if you don't use strictly alphanumeric US ASCII extended by only precisely a set of characters from the host language (operators, etc.). And this in Europe! Inconceivable! :huh:

EDIT: To elaborate a bit for the forum and give some praise to Hut and Makino: I really do recommend anyone seriously interested in implementing n-body systems to read (and code along with) the narrative. I'm a computer scientist and amateur astronomer, and as such I found everything they had to say about software and development to be rather retro, even for 2007, but skipping those parts gave me a surprisingly detailed understanding of both the physics and math involved in stellar dynamics. I had a layman's conceptual understanding and I'd seen animations of such simulations, but after a week of reading/math/coding I was able to move on to Aarseth's rather compact style and managed to read some more recent papers on the subject, where I have to take the theoretical physics on good faith but can check the numerical bits with general mathematical tools (such as Taylor series and recurrence relations). If you do read the tutorial, do note that I'm one of those who really think there should be hats on those f's! Otherwise you are proving structural equality between E and v!

Check it out (like eggrobin said: welcome to the nineties!): http://www.artcompsci.org/kali/

Edited by SSR Kermit
Hut and Makino are great!

You're quite right about multipole not being worth it for stock KSP; not only are we dealing with low N, but the system is very flat and the rotational aspect completely dominates phase space. On that note, 2D/4D clustering using volumetric 3D space partitioning (octrees) is just silly. If we define a system of, say, 5 gas giants in the 0.5 < M_Jup < 6 range and put more than an Earth mass of moons in orbit around each, plus trojans and spartans at the L-points, and add a considerable sphere of scattered objects, well, then FMM might be good :cool: (one can dream, right?)

We'll have asteroids to deal with in 0.24, so that's not completely out of scope.

I believe there is a way to describe all these "leapfrog in higher orders" algorithms in terms of partitioned RK schemes,

"leapfrog in higher orders" is pretty much what SPRKs are about, though a stricter generalisation of leapfrog is an FSAL SPRK. They give some really nice explanations for the idea behind the SPRKs in this book though, rather than expressing them all as a bunch of coefficients. I think I'll add it to the recommended reading section, it looks pretty accessible.

I've seen Yoshida referenced everywhere in the literature, so I guess I'd better read up on Lie algebras and Baker–Campbell–Hausdorff in order to understand his work...

You'll have a hard time doing that, since papers about symplectic integrators assume you know what this is all about, and general treatments of the Baker–Campbell–Hausdorff formula fail to mention the relation to Hamiltonian mechanics. Here's a rough outline; you should be able to fill in the blanks from Wikipedia.

-- Notations:

M is the phase space, in this case ℝ^(3N) × (ℝ^(3N))*;

{ . , . } : C∞(M)² → C∞(M) is the Poisson bracket;

{f, . } := g ↦ {f, g}, i.e., {f, . } g = {f, g}, so that {f, . } is a linear endomorphism of the space C∞(M) of smooth functions of the phase space.

We want the solution z of z' = -{H, z} (Hamilton's equations, where z = (q, p) collects the generalised coordinates and momenta).

It is of course given by the exponential of the linear operator applied to the initial conditions, z = exp(-t {H, . }) z0, so that z(τ) = exp(-{τ H, . }) z0, where τ is the timestep.

We take smooth functions f(q, p) and g(q, p) in C∞(M) (H, z = id, L, p, the Lagrangian ℒ, T, V, are examples of such functions) and look at the commutator of the linear operators {f, . } and {g, . }:


[{f, . }, {g, . }] = {f, . } {g, . } - {g, . } {f, . }
= {f, {g, . }} - {g, {f, . }}
= {f, {g, . }} + {g, { . , f}} -- As the Poisson bracket is antisymmetric,
= - { . , {f, g}} -- As the Poisson bracket satisfies the Jacobi identity,
= {{f, g}, . } -- Antisymmetry.

It follows that the commutator on the operators of the form {f, . } works essentially like the Poisson bracket, so it will satisfy the Jacobi identity too. It is bilinear and alternating, and the {f, . }s form a vector space, so it is a Lie algebra.

Note that we actually started with the Lie algebra (C∞(M), { . , . }), whose Lie bracket is the Poisson bracket (a Poisson algebra; mathematicians have creative naming conventions). What we just did was take the adjoint representation (see the relevant Wikipedia article), thus getting a subalgebra of End(C∞(M)). The usual notation for the adjoint representation is ad f := {f, . }; I'll use this notation from now on, as the brackets are getting rather cumbersome.

Back to the problem at hand, finding exp(τ ad H). We have H = T + V (or some other partition of the Hamiltonian). We want to know how to write exp(τ ad(T + V)) = exp(τ ad T + τ ad V) as a function of exp(τ ad T) and exp(τ ad V), the separate evolutions. ad T and ad V don't commute, so it's not just the product. The Baker–Campbell–Hausdorff formula tells you just that: it says


log(exp(τ ad T) exp(τ ad V)) = τ ad T + τ ad V + 1/2 [τ ad T, τ ad V] + 1/12 [τ ad T, [τ ad T, τ ad V]] - 1/12 [τ ad V, [τ ad T, τ ad V]] + ...

I think this should enable you to read Yoshida's works.

I would recommend starting with Yoshida's Symplectic Integrators for Hamiltonian Systems: Basic Theory [Yoshida 1992] in order to see the general definitions and ideas; his 1990 paper Construction of higher order symplectic integrators focuses only on proving that such integrators exist, with no concern for efficiency, and only talks about even-order integrators. See the discussion with Wisdom at the end of [Yoshida 1992].

The main idea is the same as for Runge-Kutta (see my introduction linked in the OP) except instead of trying to match terms in the Taylor expansion of the solution, you try to match terms in the Baker-Campbell-Hausdorff formula: the convergence rate is for the Hamiltonian here. Of course you do that by composing symplectic maps so that you don't have an energy drift.
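
For instance, the humble leapfrog is the symmetric composition

\[
  e^{\frac{\tau}{2} \operatorname{ad} T} \, e^{\tau \operatorname{ad} V} \, e^{\frac{\tau}{2} \operatorname{ad} T}
  = \exp\bigl(\tau \operatorname{ad}(T + V) + O(\tau^3)\bigr),
\]

where the symmetry guarantees that only odd powers of τ appear in the error, and Baker–Campbell–Hausdorff shows that the leading error term is a combination of the double commutators [ad T, [ad T, ad V]] and [ad V, [ad T, ad V]]; higher-order schemes such as Yoshida's choose their substep coefficients so that these terms cancel too.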

I think you'd be surprised at how many companies enforce strict C89 lexical standards in 2014, even if you're coding something like C# in a fully utf-8 environment. I've even seen source control systems that reject check-ins if you don't use strictly alphanumeric US ASCII extended by only precisely a set of characters from the host language (operators etc). And this in Europe! Inconceivable! :huh:

We'll always have Mordac.

there should be hats on those f's!

Which f's?

Unrelated: while investigating formalisations of physical quantities and reference frames, I found this fascinating post by Terence Tao. It turns out that, deep down, formalising them works similarly. The post is pretty enlightening: it gives a clue as to how to formalise the fact that an average of temperatures makes sense, a difference of temperatures makes sense, a sum of temperature differences makes sense, but a sum of temperatures doesn't (or similarly, why a barycenter makes sense while a sum of positions doesn't).
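
To tie that back to the strong typing above, here is a toy version of the idea (made-up types, nothing to do with the actual code): temperatures form an affine space over temperature differences, so the operations that make sense are exactly the ones the compiler is told about.

struct TemperatureDifference {
  public double Kelvins;
  public static TemperatureDifference operator +(TemperatureDifference a,
                                                 TemperatureDifference b) {
    return new TemperatureDifference { Kelvins = a.Kelvins + b.Kelvins };
  }
}

struct Temperature {
  public double Kelvins;
  // Temperature + difference and temperature - temperature are meaningful...
  public static Temperature operator +(Temperature t, TemperatureDifference d) {
    return new Temperature { Kelvins = t.Kelvins + d.Kelvins };
  }
  public static TemperatureDifference operator -(Temperature a, Temperature b) {
    return new TemperatureDifference { Kelvins = a.Kelvins - b.Kelvins };
  }
  // ...but there is deliberately no operator +(Temperature, Temperature): a sum of
  // temperatures is meaningless, even though an average (a barycenter) is fine.
}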

Edited by eggrobin
signs.

I'm still in algebra and failing.

If the date of birth in your profile is accurate, I would be rather surprised if you understood the above post. :)

I would however wholeheartedly recommend reading the bits of Feynman linked in the OP, I first learned calculus there when I was about your age. His style is very pleasant and his explanations are quite intuitive.


We'll have asteroids to deal with in 0.24, so that's not completely out of scope.

I didn't know that. Cool!

We want to know how to write exp(τ ad(T + V)) = exp(τ ad T + τ ad V) as a function of exp(τ ad T) and exp(τ ad V), the separate evolutions. ad T and ad V don't commute, so it's not just the product. The Baker–Campbell–Hausdorff formula tells you just that: it says


log(exp(τ ad T) exp(τ ad V)) = τ ad T + τ ad V + 1/2 [τ ad T, τ ad V] + 1/12 [τ ad T, [τ ad T, τ ad V]] - 1/12 [τ ad V, [τ ad T, τ ad V]] + ...

I think this should enable you to read Yoshida's Construction of higher order symplectic integrators.

That made sense to me, thanks! So there really is a connection to derivations in algebras in all this; to me that's really cool. Coming from a background in category theory as applied in computer science, it makes sense for me to think in terms of Der (or indeed modules on rings). Wikipedia can be surprisingly helpful when it comes to natural sciences and abstract nonsense. This all does look like bad news for my proposed parallelization scheme, though; if my intuition serves me right here, then non-commuting ad T and ad V imply a concurrency barrier, i.e. the order of evaluation is critical to the choice of coefficients. If so, it would also explain why all parallel integrators I have found "in the wild" are strictly data parallel! (EDIT: Yup, equations 17 and 24–26 of the Yoshida article linked pretty much spell that out plainly!)

This is cool stuff, I'm learning tons!

Which f's?

f(E, V) = f(E)

There's a dialogue in the Kali Book about how this sloppy formula might mean something very different to a student of another field; indeed, to me that formula is saying something about morphisms and commuting diagrams that can only be true if the morphism mapping E and V is fully contained in the morphism mapping only E (the → direction) and f(E) contains no objects not in f(., V) (the ← direction). For example, in a programming language that has currying and first-class functions, there is exactly one function satisfying the previous: the identity function.

Edited by SSR Kermit
Yoshida is pretty smart
