Everything posted by SSR Kermit

  1. Yes you can. I've been playing like that for a long time. Either of the two packs is good enough, but my favourite is to skip all the wings from either pack and go with procedural wings for all lifting needs. Fuselage sections, intakes and tails are excellent in B9, but the big thing in either is the cockpits/pods. I've run successful operations using only spaceplanes (FAR, AJE) with only the cockpits, p-wings and cargo bays. In career mode you have to do some junk missions first to get reasonable parts, though. As for me, with a new release of B9 coming, I'm looking forward to playing KSP more seriously again. To be honest, B9 is more important to me than stock updates at this point, since I'm rather lukewarm about how science and contracts work.
  2. I only fly space planes. The only reason I launch a rocket is to get a payload larger than a hundred units into orbit for refueling my space planes. Personally, I find atmos/exatmos flying rather dull without at least FAR, Deadly Reentry and a _bunch_ of parts for aircraft, in particular cockpits (edit: CARGO BAYS). I spend a clear majority of my time playing KSP in the design phase. Generally, when a plane is done, it can perform extended missions in a reasonable amount of (real world) time. Transonic testing is usually what takes the most time-- getting the center of lift and center of mass in the right spots so that it flies well at mach 0.5 as well as mach 5 can take a lot of fiddling about with wing angles. Flying space planes definitely moves the focus to the craft creation, even if you don't go overboard with cost/performance like I tend to do.
  3. In space, kind of the same thing, no? I just don't think that will happen. Not even RTGs are produced anymore. Again, I just don't think anyone is going to pay for developing those techs. And they lose effectiveness outside the heliosphere unless you deploy lasers in a sun-polar orbit -- or accelerate out of the orbital plane. Maintaining those facilities for decades or centuries isn't likely to happen. That's my reasoning, at least. Do you see any reason why someone would invest all that over such a long period, or is it simply not going to happen unless it's nuclear?
  4. His brother? Hmm... then that's rather unethical. Could be, though; it wouldn't surprise me if the story gets changed a bit among researchers in the same field.
  5. Here's the application of the fingertip results: the McGowan Institute (2 million treatments). I guess you don't watch Oprah, or the news?
  6. *Sigh* Ok, Mr. Internet Tough Guy. Here's a good paper that gives a general overview: US National Library of Medicine. Here's the application of the fingertip results: McGowan Institute. I guess the 2 million people who have received ECM treatments are part of some conspiracy? EDIT: Besides, what does the Swedish Royal Academy have to do with this? The Nobel Prize?
  7. There is a special place in my heart for Voyager. Not the Janeway one, although I like her too, but Voyager 1 and Voyager 2 are missions that I have followed my entire (conscious) life. Voyager 2 is STILL doing science! I saw the launch of New Horizons on NASA TV with some other space geeks, and I'm super excited for the "Fastest Thing Ever" to shoot past Pluto into the Kuiper belt and towards the fascinating scattered disc, perhaps glimpsing some other trans-Neptunian objects. However, New Horizons will never overtake Voyager, due to the Voyagers' gravity assists around the giants.
     When will we go further and faster? What kind of spacecraft will overtake Voyager as "Really the Furthest and Fastest" and head for the stars? I personally believe the first interstellar precursors will use only current science but with some new tech; given the political climate, nuclear is unlikely in the coming century, so compact, long-term energy will have to use novel tech. Solar isn't an option, obviously. The orbital infrastructure for beamed power isn't likely to get its budget approved; I think it will be an autonomous mission. Probably ion engines. Solar sails are unproven, magnetohydrodynamic propulsion as well-- developing those techs is prohibitively expensive, given that it needs to happen (and likely fail a few times) in deep space.
     So I'm thinking: a conventional chemical rocket assembled and launched from LEO as a transfer stage to escape velocity, and an extended cruise phase with ion engines. I doubt it will really be designed to actually reach anything, but to study the interstellar medium some ways away from the heliosphere. I believe JAXA are the ones who will do it, based on their willingness to do amazing stuff (the quirks of the Hayabusa missions come to mind) and their growing expertise with ion engines. Also, they describe what they are doing as "pawa uppu ion engines" -- they're applying power-ups to their ion engines. My money's on them! What do you think?
  8. It's all over Google, Wikipedia and more. Regenerative medicine is the keyword. Well, I read a rather distressing article on the subject (I have it on paper, I'll try to find it and post a citation link, maybe it's online), and indeed Extremely Prolonged Youth is the implied consequence. That's because you have to "cure cancer" in order to make it work, and also be able to modify what's known as "programmed cell death". That could even mean reversing the aging process.
     The researchers had reservations against what "youth" meant-- we are talking something quite different from physical age. All your cells replace on a regular basis, so the difference lies in how many times the cells have divided. This has profound genetic meaning, but the arm won't grow out as a baby arm and then mature; it would be (mostly) as it was. The clock no longer matches the physical age. The stem cells are responding to the cells in the stump, and then in a chain reaction as it grows (I have had this explained to me, so I'm a bit fuzzy on that one). In programming terms, they were instanced with the same parameters as the stump cell, but there's a null pointer in its internal time variable.
     However, here is where the "end of the species" part comes in. Feed-forward in genetic systems can be beneficial on a small scale, but over many generations it usually leads to stagnation (there are technical reasons; the noise-to-signal ratio of useful genes increases. Plants are a notable exception, they can have truly enormous DNA). Sterility is common in biological systems, and so-called "over-fitting" in artificial evolution; they have lost the ability to evolve further, "stuck in their local minima". Sterility is a shot to the temple in evolutionary terms. That's it. Done. So unless those who wanted to live forever willingly sterilized themselves, there would be a problem.
     However, I take a different standpoint, mostly from a spiritual point of view. I think our DNA has protected itself from that kind of mutation early on in evolution. During the fractal phase of life, that evolutionary path should have been tested. It didn't work, or life in general would not die of age. While regenerative medicine will change surgery and internal medicine as we know it, I believe we will find any attempt to change the fundamentals of our evolution to fail-- it's not a compatible modification. We have to die, or we can't live to begin with-- not as human beings at least. Maybe as some kind of sapient crystalline sponge </nerdy reference drop>?
     And as for "conspiracies": where's the conspiracy? There are some pretty obvious conspiracies out there, so I'm rather disconcerted at how many people waste time on aliens. I mean, when individuals in governments make backroom deals with banks to rip off the people, that's pretty much the definition of conspiracy. There's been an awful lot of that the last hundred years. Yet. Aliens. Lizards. Orbital Mind Control Rays. PS. Don't get me wrong, I love tinfoils, but they don't much like me....
  9. That's not entirely correct. Regenerative medicine is a real thing, and all it takes are a few skin cells that are coaxed into stem cells. They can then become anything, including an entire arm that would grow directly out of the stump. The problem is that they have a tendency to continue to grow-- they become cancer cells. The cells also have screwed-up internal clocks-- they "don't know" what generation they are, so they "don't know" when they are supposed to die. The implications of that are unknown (yet). One researcher used the experimental procedure on himself; he had lost a fingertip and managed to grow it back without incident. He will of course be at greater risk of developing cancer (and metastases) for the rest of his life, but it shows that it works in principle (if you're very lucky; many lab animals were not). The risks of directly manipulating running genetic code (i.e., control genes) are profound. We're talking "end of the species" kind of risks. It's not going to be around any time soon on a large scale, but other retroviral engineering is already a fact. It's like something out of science fiction, but I know this is real from a first source (a bioinformatics researcher working with retroviral promoters, which are what special types of viruses use to inject their code into cells). EDIT: The news reports said it was a "patient", but from what I've heard this was a case of an individual researcher doing something absolutely crazy, but rather traditional in medicine (using oneself as a test subject), that ended up not killing him....
  10. Now it does make sense; including the unicode bit! I thought I did read the entire thread (as I have a habit of doing: see my post count for the consequence), but I must have skimmed parts. There are at least as many morphisms in the category of categories as there are morphisms in the category of syntax (where all morphisms are automorphisms), so that's ×3 or something ridiculous like that. Not that it means anything, even in the abstract sense :S
  11. I'm looking at your indicated birth date, and in that context that statement doesn't make a whole lot of sense. Most universities in the west had stopped teaching Ada by 2000, but I had picked it up as preparation for uni in the years prior. I miss Ada too, but at least I have VHDL.... Which is precisely why I didn't use them in my code; I had to cast everything anyway in the end. That looks more or less exactly like what I was going for, but found the measurement types of F# to be lacking for it. I would only want Dimensionless to be polymorphic over primitive and integral (there's that word again) types with simple values, such as complex numbers, but I think that would be difficult in a language without type inference; at least it would make parameters unwieldy with massive template tags, and that kind of defeats the purpose.
     There are some important differences: the relationship between the proposed C++11 Concepts and Haskell-style type classes is something like a catamorphism, which in this case roughly says you can express one in the other under a simple reduction/expansion (a 'fold'). The following illustrates it rather nicely:
     Haskell:
         class Functor f where
             fmap :: (a -> b) -> f a -> f b

         instance Functor List where
             fmap f Nil         = Nil
             fmap f (Cons a as) = Cons (f a) (fmap f as)
     C++11 Draft:
         concept Functor<template<typename> class F> {
             template<typename A, typename B>
             function1<F<A>, F<B>> fmap(function1<A,B>);
         };

         template<typename T> class List { /* some implementation */ };

         concept_map Functor<List> {
             template<typename A, typename B>
             function1<List<A>, List<B>> fmap(function1<A,B> f) { /* some more implementation */ }
         };

         concept TypeConstructor<typename F> {
             template<typename T> class Rebind;
         };

         concept Functor<TypeConstructor F> {
             template<typename A, typename B>
             function1<F::Rebind<A>, F::Rebind<B>> fmap(function1<A,B>);
         };
     The C++ version has some technical issues with genericity, but even if it worked perfectly I think you can see why I prefer the inferred version in Haskell (which uses all models to simplify unification, where C++ only considers models when the parameter is evaluated; ie inference vs checking).
     EDIT: I have heard claims of correspondence between Concepts and type classes, but to accept that I would have to qualify it as "correspondence under catamorphism", which is a statement at most as strong as correlation .... Purely a formal standpoint, but I think it captures the structural difference in an important way. As to be expected from someone into category theory, I suppose.
     EDIT2: OMG, I just realised how incredibly meta that was of me, giving fmap as an example of typing fold in higher kinds (I just described 'fmap fmap fmap' on functors!)
  12. I will be putting it up eventually, but it's an undocumented hard-coded hacking bonanza as it is. Data-parallel in this context is pretty simplistic: it's simply the observation that the integration is partitioned with a single dependency across the computational front; it's independent in the number of objects and components. The relevant part looks like this:
         while (t < tmax) do
             let ΔqStage = ref (Array.zeroCreate hypDim)
             let ΔpStage = ref (Array.zeroCreate hypDim)
             for i = 1 to stages do
                 let f = force(ks.kBodies())
                 // closures (over immutable references to internal structures)
                 let ap = fun ix e ->
                     let e' = e + Δt * b.[i-1] * f.[ix]
                     p.[ix] <- pPrev.[ix] + e'
                     e'
                 let aq = fun ix e ->
                     let e' = e + Δt * a.[i-1] * p.[ix]
                     q.[ix] <- qPrev.[ix] + e'
                     e'
                 // note the dependency of q on p; this is the computational front
                 ΔpStage := Array.Parallel.mapi ap !ΔpStage
                 ΔqStage := Array.Parallel.mapi aq !ΔqStage
             // Parallel compensated summation, independent in the "phase space hypervector";
             // ie. for every component
             let sp = fun ix e ->
                 let Δp = e + pErr.[ix]
                 p.[ix] <- pPrev.[ix] + Δp
                 pErr.[ix] <- (pPrev.[ix] - p.[ix]) + Δp
                 pPrev.[ix] <- p.[ix]
             let sq = fun ix e ->
                 let Δq = e + qErr.[ix]
                 q.[ix] <- qPrev.[ix] + Δq
                 qErr.[ix] <- (qPrev.[ix] - q.[ix]) + Δq
                 qPrev.[ix] <- q.[ix]
             [ async { Array.Parallel.iteri sp !ΔpStage };
               async { Array.Parallel.iteri sq !ΔqStage } ]
             |> Async.Parallel |> Async.Ignore |> Async.RunSynchronously
             // time calculations (not async to avoid ref clutter, little gain)
             δt <- Δt + tErr
             t <- t + δt
             tErr <- (tPrev - t) + δt
             tPrev <- t
     That being said, simplistic as it is, profiling reveals that 30% of the CPU time is spent in parallel computations (force calculations being the major contributor to the rest), so there is a significant gain in speed at the expense of eating lots of RAM. It's really a great fit for GPGPU! Using octrees and FMM, the force calculations can be parallelized in a similar fashion, but then we're talking tree codes! My next step is to formalize the above into a polymorphic parallel vector/matrix/array library to clean up the syntax and hide as much as possible of the asynchronous primitives.
     I'm mostly impressed you have the patience for that working in C#. Honestly, I don't know much about the interface towards Unity/KSP, so I can't say if the structure is suitable or not.... From a general standpoint it's what I would expect, and that's usually a good sign with code. I think any implementation of abstract algebra will only show its efficacy when you start using it (eh, that's another of those "did I just say that") -- I usually do that in typed functional languages, since metaprogramming is more or less inherent to that style. Much of what you are doing would probably be a lot more concise in an ML-style language (Haskell, F#/OCaml). The extreme is Haskell's type classes, which let you embed categories directly in the category of types-- operators (eg ∧) get their expected properties through simple instance declarations of the underlying types with minimal boilerplate coding. Once you've coded the type classes, of course, but that's a lot less typing than OOP classes! F# measurement types work pretty well, but not so much when you do things like n^(3/2)... are they the same as the ones accessible from C++, I wonder?
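     A quick aside on the "compensated summation" step above, for anyone skimming: it is the Kahan trick applied per phase-space component, so the rounding error of each addition is carried forward instead of thrown away. A minimal serial sketch of the same idea; the input array here is just an illustration of mine, not data from the integrator:
         // Kahan (compensated) summation: keep a running correction term so that
         // the low-order bits lost in each "big + small" addition are not discarded.
         let kahanSum (xs : float[]) =
             let mutable sum = 0.0
             let mutable err = 0.0                  // accumulated rounding error so far
             for x in xs do
                 let y = x - err                    // apply the stored correction
                 let t = sum + y                    // low-order bits of y may be lost here
                 err <- (t - sum) - y               // recover exactly what was lost
                 sum <- t
             sum

         // usage: 0.1 is not exactly representable, so the naive sum drifts visibly
         let xs = Array.create 10000000 0.1
         printfn "naive: %.8f  compensated: %.8f" (Array.sum xs) (kahanSum xs)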
  13. By the way, I have coded up a data-parallel version in F# of the NDSolve algorithm you (OP) referenced. I haven't run any real performance tests, in particular in a KSP context, but I could probably use equivalent test cases to those you have to see if there are any gains from a straightforward parallel array solution. I'm using .NET Async<T> and F# Array.Parallel, so I'm not sure about the amount of marshalling and such going on .... Thanks again for all the excellent references; my continued reading has made me something of a H. Yoshida fan. This one had an interesting bit about softening: Symplectic integrators and their application to dynamical astronomy (Kinos-h-i-ta, H., Yoshida, H., & Nakai, H.) (forum does not like romaji). I'll try putting some tests together and see if I can integrate (uh, the software kind) your KSP stuff with my solution. I'm not expecting all that much, but I've purposely written it so as to be easily convertible to OpenCL kernels. I have doubts about using the .NET thread pool inside the Unity process, it might just degenerate to context swapping...
  14. GR + KMP would melt my brain.... "Has he been here? Is he going to be here? Will he be here, as I'm seeing him now, if I travel to the future, even though he has already been here from my point of view?"
  15. Finally got my Minmus colony up and running! I've modded Extraplanetary Launchpads so that science labs generate rocket parts and construction facilities accumulate parts until they can build the vessel. This is because I don't currently use any resource mods. I'm going to be building a fleet of ships and colonize Jool; to start off, about 10-20 Kerbals in each SOI. Album: http://imgur.com/a/bsXcH
  16. A few questions about the code of the SPRK itself: The delegate NBodySystem.computeAccelerations takes a double that is calculated from the time plus a sum of the 'b' coefficients times the interval-length -- however the value is never used. Is that for future use? I'm not really clear on the use of samplingPeriod and samplingPhase, could you elaborate on how that works?
  17. I do know a bit about algorithms; it was a major part of my Computer Science master's. However, genetic programming isn't usually a part of university courses on algorithms (as opposed to genetic algorithms). GP is usually taught as part of Complex Adaptive Systems studies specifically dealing with evolutionary computing. Anyways, from a computational perspective (and to biologists taking the "gene-centric" viewpoint), the entire concept of a "species" is not well defined, so there isn't a specific "event" that creates one. Populations don't interbreed strictly based on genetics, but also due to social conditioning (or more correctly its phenotype)-- there are many "species" that can have viable offspring but don't, due to their mating behaviours. From the DNA perspective it becomes even more fuzzy, since the genes that make a fish blue are the same genes that make a flower blue, and the boundaries between some "species" are smaller than the individual variance within the population (weird animal/plant plankton). Then there's this: Ring species and Horizontal gene transfer. In systematic biology the concept is very useful, but not so much from the computational evolution perspective. See the Species Problem. However, if you're interested in computational structures that do spawn classes of other structures, check out Cellular automata.
  18. Absolutely! The easiest way to see this is to consider Brownian motion (or in 2D, a "random walk") in the search space. This "movement" is across a mathematical topology (landscape) where the peaks and valleys represent "high fitness" or "low fitness". This landscape changes with the physical environment; indeed, it's a fitness map of the environment. Brownian motion has a remarkable ability to cover the entire area it's moving in over time. If we get stuck plodding along in a valley, that means we are going extinct. Characteristics in those areas are not helping. However, the evolutionary search (random walk) will still cover all of the fitness map, given enough time. Whether or not the genes persist decides if that path, or "test", "succeeds" or "fails". Note that the global operation (evolution) is not optimizing for anything (it's random!). Benoit Mandelbrot describes random walks, I think it's in "The Fractal Geometry of Nature". Feynman explains the random walk in a physical context in the Feynman Lectures. John R. Koza is one of the great pioneers of the use of evolution for automatic problem solving, and the first volume of his seminal work "Genetic Programming: On the Programming of Computers by Means of Natural Selection" gives an excellent introduction to the computational aspect of evolution without going into the confusing complexities of biological evolution (which has itself evolved). Rather than DNA, the book uses a simple variant of Lisp. The other source for these ideas is the subject of bioinformatics, but that's a pretty dense and compact discipline dealing primarily with DNA itself -- and that is certainly not a simple reproductive selector. DNA does so many weird things like horizontal gene transfer, crossover, and then there's DNA-amylase, gene control (activation/deactivation) and so on.
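     To make the "random walk covers the whole landscape" point concrete, here is a tiny, self-contained F# sketch of mine (the grid size, reflecting boundaries and step rule are arbitrary choices, not taken from any of the cited sources): an unbiased walk on a finite grid visits every cell sooner or later.
         // Unbiased random walk on a small grid with clamped edges; it keeps stepping
         // until every cell has been visited at least once, then reports the cover time.
         let rng = System.Random()
         let n = 20
         let visited = Array2D.create n n false
         let mutable x = n / 2
         let mutable y = n / 2
         let mutable covered = 0
         let mutable steps = 0
         while covered < n * n do
             if not visited.[x, y] then
                 visited.[x, y] <- true
                 covered <- covered + 1
             // pick one of the four neighbours uniformly, clamping at the edges
             let dx, dy = [| (1, 0); (-1, 0); (0, 1); (0, -1) |].[rng.Next(4)]
             x <- min (n - 1) (max 0 (x + dx))
             y <- min (n - 1) (max 0 (y + dy))
             steps <- steps + 1
         printfn "visited all %d cells after %d steps" (n * n) steps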
  19. By its very nature, reproductive selection is a complete search method. Even the simplest form of evolution - one that only admits random mutation in cloned individuals - has a remarkable tendency to fill up the search space quickly. So, every characteristic that can arise eventually will arise (given infinite time, in the mathematics of it). An army of typing monkeys will never write King Lear, but generations of evolving monkeys eventually will. It's not so much that evolution has a course as it is "in these (changing) conditions, there are a number of possible ways to survive. Evolution makes sure all of them are tested."
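     The "evolving monkeys" claim is easy to demo. A minimal F# sketch of mutation-plus-selection on cloned strings, in the spirit of Dawkins' weasel program; the target text, mutation rate and brood size are arbitrary picks of mine:
         // Each generation: clone the parent 100 times with random point mutations,
         // keep the child that matches the target in the most positions.
         let target = "METHINKS IT IS LIKE A WEASEL"
         let alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "
         let rng = System.Random()

         let mutate rate (s : string) =
             s |> String.map (fun c ->
                 if rng.NextDouble() < rate then alphabet.[rng.Next(alphabet.Length)] else c)

         let fitness (s : string) =
             Seq.zip s target |> Seq.filter (fun (a, b) -> a = b) |> Seq.length

         let rec evolve gen parent =
             if parent = target then printfn "matched the target after %d generations" gen
             else
                 let best = Array.init 100 (fun _ -> mutate 0.05 parent) |> Array.maxBy fitness
                 evolve (gen + 1) (if fitness best > fitness parent then best else parent)

         // blind typing would essentially never hit a 28-character target;
         // cumulative selection gets there in a few hundred generations.
         evolve 0 (mutate 1.0 target)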
  20. At the height of Pluto's summer, the surface pressure reaches 0.30 Pa: less than 1/320000th of the surface pressure on the Earth. That fulfills the "least empty" classification of a laboratory vacuum (a "rough" vacuum). It's more like an unusually large exosphere, stretching about halfway to Charon according to the current model. I guess New Horizons will tell for sure. EDIT: Not that we really have so many examples of exospheres as to be able to say what is usual or not!
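     (Sanity check on that ratio, assuming standard sea-level pressure of 101,325 Pa: 101,325 / 0.30 ≈ 3.4 × 10^5, i.e. roughly 1/340,000 of Earth's surface pressure, so "less than 1/320,000th" holds.)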
  21. "You have much to learn, Grasshopper" I see it in the equations; that really is fundamental. A good opportunity to step back and do some review of what I've learned the last few weeks. Thanks for the knowledge bombs!
  22. Then you haven't experienced the joy of Monads! Apart from the category of types and CCCs though, yeah. Application outside the category of applicative domains is something of an oxymoron. Ah ... that's a long-winded explanation; it starts with the extension of lambda calculus into an algebra and then adds simple types -- that is the internal language of a cartesian closed category -- and concurrency extensions involving indeterminism generalise this further into an applicative semi-ring ... Anyways, I tend to see the "vector bundle" as dependent vectors of concurrent, composable computations. Another example of an "application" of category theory, I suppose. EDIT: Oh yeah, the Curry-Howard isomorphism (properly a correspondence, which is pretty amazing) is absolutely central to how I think about algebraic structure; that is, I think about structure as language. Wait, what? I'm missing something pretty fundamental here, I think.... Right; my goal was to do at least one level of task parallelisation on the 6th-order Yoshida-- but now, having read just the introductory article, I see that all those have the same non-reducible (stateful) behaviour. That's very nice to know in such detail. Your brief explanation of BCH helped me see the connection.
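     For anyone wondering what the "joy of Monads" buys you in practice, here is a minimal F# sketch of mine (a hand-rolled option builder, purely illustrative, not any particular library's API): the bind plumbing threads failure through a chain of composable computations so the call site never mentions it.
         // A tiny "maybe" monad as an F# computation expression.
         type MaybeBuilder() =
             member this.Bind(m, f) = Option.bind f m
             member this.Return(x) = Some x
         let maybe = MaybeBuilder()

         let tryDiv a b = if b = 0.0 then None else Some (a / b)

         // the second division fails, so the rest of the block is skipped and the
         // whole computation evaluates to None -- no explicit error handling needed
         let result = maybe {
             let! x = tryDiv 10.0 2.0
             let! y = tryDiv x 0.0
             return x + y }
         match result with
         | Some v -> printfn "got %f" v
         | None   -> printfn "the chain short-circuited"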
  23. I didn't know that. Cool! That made sense to me, thanks! So there really is a connection to derivations in algebras in all this; to me that's really cool. Coming from a background in category theory as applied in computer science, it makes sense for me to think in terms of Der (or indeed modules on rings). Wikipedia can be surprisingly helpful when it comes to natural sciences and abstract nonsense. This all does look like bad news for my proposed parallelization scheme, though; if my intuition serves me right here, then non-commuting ad T and ad V imply a concurrency barrier-- ie that would say that the order of evaluation is critical to the choice of coefficients. If so, it would also explain why all parallel integrators I have found "in the wild" are strictly data parallel! (EDIT: Yup, Equations 17, 24-26 of the Yoshida article linked pretty much spell that out plainly!) This is cool stuff, I'm learning tons! As for f(E, V) = f(E): there's a dialogue in the Kali Book about how this sloppy formula might mean something very different to a student of another field; indeed, to me that formula is saying something about morphisms and commuting diagrams that can only be true if the morphism mapping E and V is fully contained in the morphism mapping only E (the → direction) and f(E) contains no objects not in f(.,V) (the ← direction). For example, in a programming language that has currying and first-class functions, there is exactly one function satisfying the previous: the identity function.
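     A small F# sketch of that last remark about currying (the names are mine and purely illustrative): if f is curried and its type is fully parametric in V, it has no way to inspect V at all, so it can only hand the E part straight through.
         // The only total function of type 'E -> 'V -> 'E is the one that ignores V
         // and returns its first argument unchanged -- f(E, V) = f(E) made literal.
         let f (e : 'E) (_ : 'V) : 'E = e

         // Partially applying f gives, for each e, a constant function of V;
         // e -> f e is (up to the currying) just the identity on E.
         let onE v = f 3.14 v
         printfn "%f %f" (onE "ignored") (onE 42)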
  24. If Pluto is a planet, then so are Ceres (between Mars and Jupiter), Eris, Sedna, Makemake and a whole class of trans-Neptunian objects. It doesn't make much sense to extend the solar system out to thousands of AU and say that Sol has perhaps dozens of planets; should large periodic comets also be considered planets, then?