eggrobin Posted March 5, 2014 (edited)

> category theory as applied

I didn't know this was a possibility.

> it makes sense for me to think in terms of Der (or indeed modules on rings).

I'm confused. ad x is indeed a derivation (it's actually by definition an inner derivation), but where do you see a module that's not a vector space?

> non-commutative ad T and ad V implies a concurrency barrier -- ie that would say that the order of evaluation is critical to the choice of coefficients.

If T and V commuted, life would be simpler; in particular the first order integrator would be exact. Also, I edited my post slightly since you replied, in order to explain the actual role of Baker-Campbell-Hausdorff here and in order to link to Yoshida. Wisdom makes some really good points in the discussion in [Yoshida 1992].

EDIT, answering the edit above:

> EDIT: Yup, Equations 17, 24-26 of the Yoshida article linked pretty much spell that out in plain!

Equations 24-26 only say that for the integrators constructed by Yoshida for his proof, which, as pointed out by Wisdom, are suboptimal (see [Atela & McLachlan 1992] for optimal integrators; see also the Mathematica documentation on SPRK methods for sources for their coefficients, up to order 10. I have not been able to find the Sofroniou & Spaletta paper outside a paywall).

Equation 17 on the other hand pretty much defines SPRKs, so you won't be able to parallelise too much. GPGPU might still be beneficial from a vectorisation standpoint (I'm not familiar with it though; just writing native code will already give you a nice speedup due to SSE).

Edited March 5, 2014 by eggrobin
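To make the serial dependency of equation 17 concrete, here is a minimal sketch of one SPRK step (illustrative only, not Principia's actual code; all the names are made up): each stage reads the positions and momenta written by the previous stage, so the stages must run in order, and only the per-body work inside a stage can be vectorised.

// Minimal illustrative sketch of one SPRK step; not Principia's code.
// Each stage reads the (q, p) written by the previous stage, so the
// stages are inherently sequential; only the inner loops over bodies
// and coordinates are amenable to vectorisation (SSE, GPGPU).
static void SprkStep(double[] q, double[] p, double dt,
                     double[] c, double[] d,
                     Func<double[], double[]> velocities,  // ∂T/∂p.
                     Func<double[], double[]> forces)      // -∂V/∂q.
{
  for (int stage = 0; stage < c.Length; ++stage)
  {
    double[] v = velocities(p);
    for (int i = 0; i < q.Length; ++i)
      q[i] += c[stage] * dt * v[i];  // Drift: needs the p from the last kick.
    double[] f = forces(q);
    for (int i = 0; i < p.Length; ++i)
      p[i] += d[stage] * dt * f[i];  // Kick: needs the q from this drift.
  }
}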
SSR Kermit Posted March 5, 2014 (edited)

> I didn't know this was a possibility.

Then you haven't experienced the joy of Monads! Apart from the category of types and CCCs though, yeah. Application outside the category of applicative domains is something of an oxymoron.

> I'm confused. ad x is indeed a derivation (it's actually by definition an inner derivation), but where do you see a module that's not a vector space?

Ah... that's a long-winded explanation; it starts with the extension of lambda calculus into an algebra and then adds simple types -- that is, the internal language of a cartesian closed category -- and concurrency extensions involving indeterminism generalise this further into an applicative semi-ring... Anyways, I tend to see the "vector bundle" as dependent vectors of concurrent, composable computations. Another example of an "application" of category theory, I suppose.

EDIT: Oh yeah, the Curry-Howard isomorphism (properly a correspondence, which is pretty amazing) is absolutely central to how I think about algebraic structure; that is, I think about structure as language.

> If T and V commuted, life would be simpler; in particular the first order integrator would be exact.

Wait, what? I'm missing something pretty fundamental here I think....

> Equation 17 on the other hand pretty much defines SPRKs, so you won't be able to parallelise too much.

Right; my goal was to do at least one level of task parallelisation on the 6th order Yoshida -- but now, having read just the introductory article, I see that all those have the same non-reducible (stateful) behaviour. That's very nice to know in such detail. Your brief explanation of BCH helped me see the connection.

Edited March 5, 2014 by SSR Kermit (Curry-Howard!)
eggrobin Posted March 5, 2014 (edited)

> Then you haven't experienced the joy of Monads! Apart from the category of types and CCCs though, yeah. Application outside the category of applicative domains is something of an oxymoron.

I have mostly seen category theory in algebraic topology (which of course has applications, such as proving that you can cut a ham sandwich in half or that you cannot comb a hairy sphere), so I wasn't aware of that.

> Wait, what? I'm missing something pretty fundamental here I think....

Well, if ad T and ad V commute, their exponentials commute (because you can apply the binomial theorem to the terms of the series):

exp(τ ad H) = exp(τ ad T) exp(τ ad V),

so (8) in [Yoshida 1992] is exact.

Another way to see that is that, as the exponentials commute, (16) is just

(∏ᵢ exp(cᵢ τ ad T)) (∏ᵢ exp(dᵢ τ ad V)) = exp(Σᵢ cᵢ τ ad T) exp(Σᵢ dᵢ τ ad V).

The integrator is of first order if and only if Σᵢ cᵢ = 1 and Σᵢ dᵢ = 1 (this is because they are actually the nodes of the underlying Runge-Kutta integrators, see my introduction), so any integrator that has 1st order has the highest order possible. As [Yoshida 1990] proves that you can have arbitrarily high orders, the 1st order integrator has arbitrarily high order, i.e., it is exact.

Note that this never actually happens: writing H = H₀ + H₁ in the general case as pointed out by Wisdom, ad H₀ and ad H₁ commute if and only if {H₀, H₁} = 0. This implies

H₀' = {H₀, H} = {H₀, H₀ + H₁} = {H₀, H₀} + {H₀, H₁} = {H₀, H₁} = 0

and similarly for H₁, so H₀ and H₁ are first integrals; in other words we are pretty much dealing with two separate Hamiltonian systems. In the usual case T(p) + V(q), we get that the forces are identically 0.

Edited March 5, 2014 by eggrobin (plural.)
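For reference, the standard Baker-Campbell-Hausdorff expansion makes this explicit (a textbook statement written here in LaTeX, not a quotation from Yoshida):

\exp(X)\,\exp(Y) = \exp\Bigl(X + Y + \tfrac{1}{2}[X,Y] + \tfrac{1}{12}[X,[X,Y]] - \tfrac{1}{12}[Y,[X,Y]] + \cdots\Bigr)

With X = τ ad T and Y = τ ad V, every error term of the splitting is a nested commutator of X and Y, so if [ad T, ad V] = 0 the series collapses to exp(X + Y) = exp(τ ad H) and the splitting is exact at every order.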
SSR Kermit Posted March 5, 2014

"You have much to learn, Grasshopper."

I see it in the equations; that really is fundamental. A good opportunity to step back and do some review of what I've learned the last few weeks. Thanks for the knowledge bombs!
rhoark Posted March 10, 2014

I'm using a class for physical quantities that's basically a tuple of quantity and unit that checks units at runtime. Even if I'd rather have it at compile time, it's essential to identify errors at the point they occur rather than wonder what causes some behavior downstream.

What is/will be the shape of the interface between Principia and vessels/parts for determining on-rails thrust and resource consumption?
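Presumably something along these lines -- a minimal sketch of such a runtime-checked quantity (illustrative only, not rhoark's actual class):

// Minimal sketch of a runtime-checked physical quantity: a value
// together with the exponents of the SI base units, checked on every
// addition so that dimension errors surface where they occur.
// Illustrative only; not rhoark's actual class.
struct Quantity
{
  public readonly double Value;
  public readonly int Metre, Kilogram, Second;  // Base-unit exponents.

  public Quantity(double value, int metre, int kilogram, int second)
  {
    Value = value; Metre = metre; Kilogram = kilogram; Second = second;
  }

  public static Quantity operator +(Quantity left, Quantity right)
  {
    if (left.Metre != right.Metre ||
        left.Kilogram != right.Kilogram ||
        left.Second != right.Second)
    {
      // Fail at the point of use, not downstream.
      throw new System.InvalidOperationException("Dimension mismatch.");
    }
    return new Quantity(left.Value + right.Value,
                        left.Metre, left.Kilogram, left.Second);
  }

  public static Quantity operator *(Quantity left, Quantity right)
  {
    // Multiplication is always legal; the exponents add.
    return new Quantity(left.Value * right.Value,
                        left.Metre + right.Metre,
                        left.Kilogram + right.Kilogram,
                        left.Second + right.Second);
  }
}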
SSR Kermit Posted March 10, 2014

A few questions about the code of the SPRK itself:

- The delegate NBodySystem.computeAccelerations takes a double that is calculated from the time plus a sum of the 'b' coefficients times the interval length -- however, the value is never used. Is that for future use?
- I'm not really clear on the use of samplingPeriod and samplingPhase; could you elaborate on how that works?
eggrobin Posted March 11, 2014 (edited)

> I'm using a class for physical quantities that's basically a tuple of quantity and unit that checks units at runtime. Even if I'd rather have it at compile time, it's essential to identify errors at the point they occur rather than wonder what causes some behavior downstream.

Runtime checks are not very useful, especially given the time it takes to start KSP; a wrapper and testing will alleviate that, but I fundamentally like things to be sound at compile time. Compile-time checks for quantities are a pain to code in C# though, so I'll probably do quantity checking in the future refactoring that involves rewriting stuff in C++/CLI. For the moment, reference frames are the greatest source of confusion (it's full of xzy and rotations and rotations conjugated by xzy), so I'll be compile-time checking those. See Geometry.cs for the current incomplete implementation. Forgetful functors are missing; I just coded them but I didn't push them, and I still need to write the left actions of most transformations.

EDIT: I didn't push permutations either.

> What is/will be the shape of the interface between Principia and vessels/parts for determining on-rails thrust and resource consumption?

I'm not sure about that, I'm still trying to make the core features work properly.

> The delegate NBodySystem.computeAccelerations takes a double that is calculated from the time plus a sum of the 'b' coefficients times the interval length -- however, the value is never used. Is that for future use?

You mean the 'c' coefficients, right? This value is the time. ComputeRightHandSide computes any RHS; it just turns out that the one we're dealing with here is autonomous. So it's not for future use, it's for generality.

> I'm not really clear on the use of samplingPeriod and samplingPhase; could you elaborate on how that works?

It was something I quickly hacked together for memory usage when running long-term simulations of Alternis Kerbol; I only store the state every samplingPeriod steps (and samplingPhase just tells me how long it's been since I sampled). At the moment I'm using that to plot the trajectories, but I really need to find a smarter way to sample.

Edited March 11, 2014 by eggrobin
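In other words, roughly the following (a simplified sketch of the sampling idea; SystemState and AdvanceOneStep are illustrative stand-ins, see the repository for the real code):

// Simplified sketch of the sampling logic; SystemState and
// AdvanceOneStep are illustrative stand-ins for the actual code.
List<SystemState> Integrate(SystemState state, int totalSteps,
                            int samplingPeriod)
{
  var storedStates = new List<SystemState>();
  int samplingPhase = 0;
  for (int step = 0; step < totalSteps; ++step)
  {
    state = AdvanceOneStep(state);
    if (samplingPhase == 0)
      storedStates.Add(state);  // Keep one state in every samplingPeriod.
    samplingPhase = (samplingPhase + 1) % samplingPeriod;
  }
  return storedStates;
}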
rhoark Posted March 11, 2014

I decided runtime checks were worth having after I spent 2 hours figuring out why my fuel was turning to ice (pressure in kPa where Pa was expected).
eggrobin Posted March 11, 2014

> I decided runtime checks were worth having after I spent 2 hours figuring out why my fuel was turning to ice (pressure in kPa where Pa was expected).

I had the same sort of experience with reference frames (planets have a tendency to go crazy when you forget a .xzy), hence the ~500 lines of reference frame management (+tests; this needs tests). For the moment I'm mostly dealing with distances and velocities, all in SI units. When I get to fuel and thrust management I guess I'll want strongly-typed physical quantities; writing things in C++/CLI seems cleaner than doing runtime checks.
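For a taste of why compile-time quantities are painful in C#: each dimension needs its own type, and every physically meaningful product or quotient must be written out by hand, since generics cannot compute dimensions. A toy sketch (illustrative only, not the actual implementation):

// Toy sketch of compile-time unit checking in C#; illustrative only.
// One type per dimension; mixing dimensions fails to compile, but
// every operator (Length / Time, etc.) has to be defined by hand.
struct Time
{
  public readonly double Seconds;
  public Time(double seconds) { Seconds = seconds; }
}

struct Speed
{
  public readonly double MetresPerSecond;
  public Speed(double metresPerSecond) { MetresPerSecond = metresPerSecond; }
}

struct Length
{
  public readonly double Metres;
  public Length(double metres) { Metres = metres; }
  public static Length operator +(Length left, Length right)
  {
    return new Length(left.Metres + right.Metres);
  }
  public static Speed operator /(Length numerator, Time denominator)
  {
    return new Speed(numerator.Metres / denominator.Seconds);
  }
}

// Length d = new Length(42); Time t = new Time(6);
// Speed v = d / t;       // Compiles.
// var nonsense = d + t;  // Does not compile: no operator +(Length, Time).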
KvickFlygarn87 Posted March 12, 2014

Hmm, any progress on making thrust work?
eggrobin Posted March 12, 2014 (edited)

> Hmm, any progress on making thrust work?

This will be the first thing on the list after the current refactoring. For the moment I need to write tests for the Geometry assembly, then wrap the things I need in Orbit, CelestialBody, Vessel, etc., so I have strongly-typed reference frames to work with. This will make it easier for me to understand what I'm writing; spending two hours tracking a missing rotation or xzy is no fun.

EDIT: I don't think I have mentioned it, but I think the jitter I was experiencing was due to the planets not quite being where they should be (inaccuracies in the Orbit code). If this is indeed the case, I can ignore it while in space (all vessels will twitch together), and at lower altitudes I can leave Unity in charge of the physics. Some of the things I was observing also came from confusion between reference frames, hence the current refactoring.

Edited March 13, 2014 by eggrobin
fusty Posted March 13, 2014

Hey eggrobin, thanks for including the readings to understand all this!
eggrobin Posted March 15, 2014 (edited)

> Hey eggrobin, thanks for including the readings to understand all this!

You're welcome! Feel free to point out any errors in my explanatory post or ask for clarifications.

After some pretty fruitful forays into the nebulous realm of particle effects, it's time for a status update:

The Geometry assembly seems to be mostly feature-complete, so I'll start writing tests tomorrow (Saturday) after changing a few access modifiers, replacing the ugly switch statements in Permutation by something nicer, and a few optimisations.

Here is an overview of its features; things are made rather more formal than is usual in physics (there are even a few categories, SSR Kermit will be happy) because the point is type-safety and convertibility between reference frames (some direct, some indirect).

All types are generic, with parameters deriving from ISpace. The parameters are tags indicating which reference frame the object belongs to or between which reference frames the transformation acts. Examples would be Local, World, MirroredWorld, Integrator, etc. I currently only support orthogonal transformations, so it cannot be scaled space. This might change in the future. From a mathematical standpoint, the tag A represents an affine space A on a three-dimensional real inner product space V. The types are immutable except for R3Element. The assembly is CLS-compliant.

- Scalar.cs: Scalars (double-precision floating point) are wrapped in the type Scalar so as not to risk accidental operations with stock weakly-typed quantities or single-precision floating point numbers.

- Sign.cs: A sign. We don't care about 0 since this is used for determinants of orthogonal matrices, so 0 is not specially treated (I think it's seen as negative).

- R3Element.cs: R3Element is basically my equivalent of KSP's Vector3d, a weakly typed three-dimensional real vector, on which all sorts of operations (including dot and cross product) are allowed without any regard for their invariance under transformations. It is the underlying data type for all other vectors, which have a readonly R3Element Coordinates field containing their coordinates.

- AffineSpace.cs: Point<A> is an element of the affine space A -- think of it as a position. You can't add them or scale them, but you can take the vector Vector<A>.ToFrom(p, q) from q to p, you can compute barycenters as Point<A>.Barycenter(q1, λ1, q2, λ2, ...), etc.

- Grassmann.cs:
  - Vector<A> is a vector in the vector space V acting on A. The operations of an inner product space are available.
  - Bivector<A> is a bivector in V∧V. Physicists call that a pseudovector. Examples include angular velocity. The operations of an inner product space are available.
  - Trivector<A> is a trivector in V∧V∧V. Physicists call that a pseudoscalar. Examples include the magnetic flux. I probably don't need this, so you can't do anything with it at the moment.

  The syntax for the inner products is Vector<A>.Inner(v, w), resp. Bivector<A>.Inner(v, w), etc. In addition to the inner product space operations, the wedge product of the Grassmann algebra,
  ∧: V × V → V∧V, ∧: V × V∧V → V∧V∧V, ∧: V∧V × V → V∧V∧V,
  is available. The syntax is v.Wedge(w). The wedge product of vectors is like the cross product of vectors, except that by keeping the result in the Grassmann algebra things behave nicely through the looking-glass.

  Moreover, since in an inner product space V∧V is canonically isomorphic to the Lie algebra so(3, V), bivectors can act as antisymmetric matrices on the left and on the right, B.ActOn(v), resp. v.ActedUponBy(B). This corresponds to the cross product of a vector with a pseudovector. They can also be exponentiated, yielding an element of the Lie group SO(3) of rotations of V, which is the type Rotation<A, A>. This is how you get a rotation from an angular velocity times a duration; the syntax for that, for instance, would be Bivector<A>.Exp(ω * t). Finally, you can take their Lie bracket, the commutator of matrices, which corresponds to the cross product of pseudovectors.

- OrthogonalMaps.cs: These are maps between vector spaces and their Grassmann algebras. The maps on the Grassmann algebras are uniquely defined by their action on the base spaces, so we'll look at them as maps between vector spaces.
  - OrthogonalMap<A, B> is a map between A and B (or rather between the vector spaces acting on the affine spaces A and B) in the category of inner product spaces.
  - Rotation<A, B> is a map in the category of oriented inner product spaces.
  - Permutation<A, B> is a map in the category of inner product spaces equipped with an unordered basis (it's a coordinate permutation).

  Since these are not implemented as matrices and have distinct representations (the rotation is a quaternion, the orthogonal map is a rotoinversion (sign and quaternion), and the permutation actually permutes the coordinates), converting a Permutation P to an OrthogonalTransformation is a nontrivial operation. This is done by evaluating P.Forget(), the forgetful functor to the category of inner product spaces.

- EuclideanMaps.cs: Similarly, this file contains the types for transformations between affine spaces on oriented inner product spaces (RigidTransformation) and between affine spaces on inner product spaces (EuclideanTransformation), with a forgetful functor from the first to the second.

- Maps.cs: This class contains the composition between maps. The syntax is Maps.Compose<A, B, C>(f, g) for the composition f∘g of two maps of the same type (i.e. in the same category; forgetful functors are not implicit) f: B→C and g: A→B.

Edited March 15, 2014 by eggrobin
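To give a flavour of how the tags work, here is a condensed sketch (illustrative; the real definitions in Geometry.cs carry many more operations): the frame tag is a phantom type parameter, so combining vectors from different reference frames simply does not compile.

// Condensed illustrative sketch of the frame tags; the real
// definitions in Geometry.cs carry many more operations.
interface ISpace {}
sealed class World : ISpace {}
sealed class Integrator : ISpace {}

struct R3Element
{
  public readonly double X, Y, Z;
  public R3Element(double x, double y, double z) { X = x; Y = y; Z = z; }
  public static R3Element operator +(R3Element a, R3Element b)
  {
    return new R3Element(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
  }
}

struct Vector<A> where A : ISpace
{
  public readonly R3Element Coordinates;
  public Vector(R3Element coordinates) { Coordinates = coordinates; }
  public static Vector<A> operator +(Vector<A> left, Vector<A> right)
  {
    // The tag A never exists at runtime; it only forces both operands
    // to come from the same reference frame.
    return new Vector<A>(left.Coordinates + right.Coordinates);
  }
}

// Vector<World> u; Vector<Integrator> v;
// u + v does not compile: the reference frames don't match.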
Aghanim Posted March 15, 2014

Wow. I don't know where to start. The amount of complicated maths like ∧: V × V → V∧V and (insert algebra here), with me being in the group of those who don't even know what a derivative means, means I cannot contribute at all. But if my intuition is correct, this means that we could have Lagrange points and low-energy transfers and the interplanetary transport network, right?
eggrobin Posted March 15, 2014 (edited)

> Wow. I don't know where to start. [...] means I cannot contribute at all.

This last update was just about the technicalities of how I will implement strongly typed reference frames (in KSP, sometimes the universe rotates around you, sometimes it moves with you, sometimes the y and z coordinates are flipped, i.e. you get mirrored positions; moreover, I also render things in weird reference frames), so you want to be sure that when you combine things from different bits of KSP you do it in a consistent way (for instance, you don't want to apply a mirrored velocity in a non-mirrored reference frame, otherwise you will go up instead of sideways). The right way to do that involves some interesting algebra, but this is only relevant to modders who have to deal with the reference frame weirdness of KSP.

> But if my intuition is correct, this means that we could have Lagrange points and low-energy transfers and the interplanetary transport network, right?

Well, the above post is just a lot of abstract nonsense, but yes, the point of the whole mod is to have realistic gravitation, so there will be Lagrange points, sun-synchronous orbits when I implement the quadrupole moment, etc. You can actually look at the kind of trajectories you will be dealing with once this becomes playable in the imgur album in the OP (note that some pictures are in rotating reference frames).

Edited March 15, 2014 by eggrobin
Guest Posted March 15, 2014

Why is it that you can explain Grassmann algebra in an interesting, entertaining manner, while my lecturer manages to make vector spaces sound like incomprehensible gibberish, calculating scalar products an arcane art, and trivial things like matrix operations utterly boring?
eggrobin Posted March 15, 2014 (edited)

> Why is it that you can explain Grassmann algebra in an interesting, entertaining manner, while my lecturer manages to make vector spaces sound like incomprehensible gibberish, calculating scalar products an arcane art, and trivial things like matrix operations utterly boring?

A wild guess: I actually have a practical reason for talking about these things, rather than saying "we are now going to talk about alternating multilinear maps on a finite-dimensional vector space over a field K with characteristic other than 2", which only really sounds exciting once you know why alternating maps are useful. It turns out a lot of this abstract nonsense and these algebraic structures can be used in order to strongly type stuff. Physicists usually don't bother with that and say "it's a vector, but beware, it behaves weirdly under reflection". Good luck implementing that safely...

Similar considerations apply to numerical integration: I found the numerical analysis course I took last year quite boring ("let's make up another integrator by extrapolating an integrator we have; let's now look at ten different ways to do adaptive timesteps; let's apply all of this to either a linear differential equation, the logistic equation or the two-body problem"), but having an actual problem to solve is fun.

Edited March 15, 2014 by eggrobin
NERVAfan Posted March 16, 2014

I can only understand 10% (optimistically) of this, but an N-body mod sounds very interesting!
DiEvAl Posted March 16, 2014 (edited)

Can you give a lower bound on when you get something playable? ;-)

Edited March 16, 2014 by DiEvAl
eggrobin Posted March 16, 2014 (edited)

> Can you give a lower bound on when you get something playable? ;-)

1 Planck time. A better answer to this question depends on several things:

- What you call playable. If you want to be able to see where your burn leads you in real time in the map view (dynamic integration), you will have to wait for me to implement Saha & Tremaine, so that would be at the very least a month. If you only require application of thrust, and if I were to set the refactoring aside (which I'm not likely to do, see the next point), I could get that done before the end of the coming week.

- What I prioritise in my development process. I tend to like building up big structures for type-safety so I don't waste time tracking a missing xzy or mismatched units afterwards, but this takes time. I believe the "quick and dirty" niche is already successfully filled by HoneyFox's Orbit Manipulator Series, so this approach makes sense.

- How much I get sidetracked by other things (I spent the second half of last week fixing and adding features to sarbian's SmokeScreen plugin, and I implemented physical quantities in C++/CLI today).

I think I'll use these physical quantities in Principia; I'll have to see whether I want to switch to C++/CLI altogether or somehow interface that with C#. I'm likely to switch to C++/CLI, since I intend to interface with native C++ eventually (I have established that mixed-mode is not possible, but having a codebase in C++/CLI and C++ seems saner than a mix of C#, C++/CLI and C++) and I needed C++/CLI in order to test the Geometry assembly anyway (you can look at the existing tests on the repository).

Edited March 17, 2014 by eggrobin
John FX Posted March 16, 2014

This is looking very nice. Take your time, I'm sure it will be worth it.
skykooler Posted March 17, 2014

If it's going to include native code, will there still be versions for those of us running OSX or Linux?
eggrobin Posted March 17, 2014

> If it's going to include native code, will there still be versions for those of us running OSX or Linux?

Yes. Native code is some time away though; at the moment the choice of language is mostly a question of deciding which language I least dislike.
ummwut Posted March 19, 2014

This is all Newtonian, so gravity propagates at an infinite speed? It seems like having gravity propagate at any lower speed would be problematic.
eggrobin Posted March 19, 2014

> This is all Newtonian, so gravity propagates at an infinite speed?

Indeed, no GR here.