
-Velocity-

Everything posted by -Velocity-

  1. Which would allow sending messages backward in time, which is not a whole lot different from time travel, as the same paradoxes are created. Warp drives are not going to happen, at least not ones that violate c. I have a strong feeling that something in nature HAS TO stop them to prevent fundamental violations of causality and consistency (all observers observing the same events). Besides the difficulty in generating the necessary space-time geometry, there was a recent paper suggesting that any warp drive would be destroyed if it attempted to go faster than c.
  2. Only humans and octopi?! No, it's a lot more than that. But first, I had heard about octopus intelligence- is the jury still out on them? Are they really intelligent in the same way that birds and mammals are? I haven't done a lot of personal research on the subject; I guess it's hard to believe because they are cold-blooded AND invertebrates, but maybe that is just me being a warm-blooded vertebrate supremacist. But yeah, dolphins, birds (corvids), great apes, and even elephants (I believe) have all been observed using tools; certain corvids have even been seen to make tools rather than just use what they find in the environment. It's possible that tool-making can evolve without extreme intelligence to go along with it (especially since those particular birds seem to mostly fashion stick-like tools), but those particular birds are also known to be extremely intelligent in other ways too. A software/hardware configuration for a mind that enjoys just about any job you want it to enjoy should exist. But the question is, assuming we DO create sapient machines, how capable will we be of programming them? It's wholly possible that we eventually create intelligent machines but still don't really understand how they work.
  3. Yup. In fact, I think that science is even guilty of not enough anthropomorphism in one case in particular- non-human animal behavior. For decades, anthropomorphism was discouraged. Animals did not have desires, wants, emotions, or thoughts. To suggest so was scientifically taboo. I guess the idea was that, since we cannot ask animals what they are feeling and cannot directly probe their thoughts, we might as well just ignore the topic altogether? Or was the denial that animals are sentient a justification for some of the truly horrific, torturous experiments that they were subjected to? (And before you start claiming "But animals are NOT sentient!!!!111", please understand what the word "sentience" means first.)
To ignore the very observable and obvious fact that non-human animals have sentience is one of the heights of human arrogance. To suggest that humans are the first and only animals to achieve sentience is simply illogical. By ignoring the existence of non-human animal minds, we miss something vital about non-human animals, and about ourselves.
Only in recent years has the tide reversed. Scientists have finally started to acknowledge that anthropomorphism may not be that unscientific when applied to animals. Animals have brains with very similar structures and properties to our own (especially the mammals), and exhibit similar behaviors. Animal brains experiencing certain emotional states are affected by the same neurochemicals as ours; functional MRIs show that the same brain regions are responsible for the same perceptions and behaviors in humans and animals. Finally we are forced to admit that humans and non-human animals "differ only in degree, not in kind".
When trying to understand an animal's behavior, it is important to understand that we are animals too. So why should our interpretation of the reason behind some animal's behavior not be considered at all? True, we CAN be biased and imbue an animal with human-like motivations it does not in fact have, so we DO have to be careful; but at the same time, we SHOULD leverage our commonality with animals to our advantage when attempting to understand their behavior! We should balance our anthropomorphism with caution, not eliminate it entirely!
  4. What explanation is easier- 1) Only my mind exists, and it is somehow fooled into believing that a universe and other minds exist. This requires some higher level of existence where the mind resides. How does this illusory reality come into being? That probably requires some higher beings too, to formulate all these physical laws and create some universe that is self-consistent enough to fool me. 2) The world is exactly as it appears to be. Minds come into existence through emergent physical processes that are based on simple physical laws applied across a vast expanse of space and time. What my senses perceive is the world as it truly exists, at least within the limits of shortcuts taken by certain information processing techniques applied by the brain (giving rise to things like optical illusions- and even optical illusions can be identified with close or external inspection). As long as causality is obeyed by the sum of our universe plus any other realities which might (but probably don't) influence it (such as "God" or the supernatural), then logical reasoning holds. While I can't PROVE beyond a doubt that other minds exist, I can be 99.9999999% certain that they do. It's hard to see how TeeGee can find such viewpoints useful, as we have to act on those things that we believe to be highly likely or almost certain as though they ARE certain. A philosophy that denies the existence of the universe and external minds is simply useless.
  5. What, do you believe that life == magic? Science supports no such beliefs, as they are not supported by any evidence. You're free to believe whatever you want though... but if there were something magical about how cells work, we'd have probably found it by now. The reality is that overwhelming evidence supports the idea that the living cell is essentially a highly complex nano-electro-chemical-mechanical machine. It is thus possible to create wholly synthetic life forms that are just as "alive" as we are. (However... why would we re-invent the cell- an incredibly complex machine- when we can just modify naturally evolved life forms much more easily?) I like to toy with the idea that multicellular organisms are just colonies of single-celled organisms, sorta like an ant colony, just many orders of magnitude more complex and organized. That actually makes the brain a hive-mind.
  6. I think it was like 1.3 or 1.5 intakes per engine, don't remember. Not all intakes are the same, either.
  7. Yes, I know all that. I said it was the gamma ray burst that lasts a matter of minutes, not the supernova explosion. My point is if the star is in the process of blowing itself apart, it is not "surviving", so the star does not survive for any significant amount of time with a black hole in its center, because, supposedly, black hole formation instantly triggers the beginning of a supernova explosion (a rapid inward gravitational collapse at the core). The explosion lasts months- but the gamma ray burst- if the long duration ones are really created by a black hole's jet firing within the innards of a star- is over in seconds or minutes. I know what you mean about popular science and science fiction showing explosions in space happening at ludicrous speeds though. Just a few months ago, I laughed out loud while watching that new Superman movie with friends, when Krypton exploded at the speed of light. They didn't understand what was so funny. The one that annoys me most is how popular science depictions of the Big Bang always show it happening at a single point and then spreading out into blackness. Grrrr....
  8. It's a rocket with wings essentially, so I do get a decent amount of lift. It takes off and lands on landing legs. It has multiple canards on the front and eight delta wings on the back (six of them parallel to provide the primary lifting surfaces). It is powered by 24 RAPIER engines. Ascent profile: Straight up to about 11000 meters, then pitch down so that a slight climb is maintained. Pitch down will be completed at like 16000 meters. Slowly climb and accelerate, with the goal of hitting about 1400 m/s at like 23000 or 24000 meters. Peak TWR is over 2.5. Eventually, the rocket will autoswitch to closed cycle, but that usually happens due to a thrust asymmetry developing due to the thin air (causing the craft to yaw, further reducing engine air supply and triggering the autoswitch), so you should have an action group to switch to closed cycle when either you're ready, or the moment a thrust asymmetry begins to develop (you'll save a bit on efficiency). Now, you pitch up at like 45 degrees, maybe 50 degrees and burn closed cycle until the apoapsis is above the atmosphere. Once that is done, you just cut off the engines and wait for the circularization burn, which should only be 400 m/s or 500 m/s. Fuel payload fraction (mass of fuel delivered to orbit/mass of fuel on pad) is around 60%, I believe. I never added it up and checked though. It's a remarkably high number, FAR better than I EVER got from a traditional space plane. The spacecraft can deliver a full Jumbo64 tank to a waiting spacecraft or orbital fuel depot in LKO, while requiring slightly less than that (I think) to put it up there! The reason I don't know exactly how much it takes is because the RAPIER engines are located on "nacelles" that are fuel tanks themselves. I don't remember how much fuel those nacelle tanks contain total. I suppose the fuel payload fraction may be only 40%.
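For anyone who wants to sanity-check that fuel payload fraction figure, here is a minimal sketch of the arithmetic. The on-pad fuel mass below is a placeholder assumption, not this craft's actual number; the only hard figure is that a full Jumbo-64 carries 32 tons of fuel in stock KSP.
```python
# Sketch of the "fuel payload fraction" arithmetic from the post above.
# The on-pad fuel mass is an assumed placeholder, not the real craft's figure.

JUMBO64_FUEL_T = 32.0              # fuel mass in a full Rockomax Jumbo-64 (stock KSP)

fuel_on_pad_t = 53.0               # assumed: total fuel loaded at launch, tonnes
fuel_delivered_t = JUMBO64_FUEL_T  # fuel handed over to the depot in LKO

fraction = fuel_delivered_t / fuel_on_pad_t
print(f"fuel payload fraction = {fraction:.0%}")  # ~60% with these assumed numbers
```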
  9. I created a vertical lift-off SSTO fuel tanker that launches from the VAB and lands vertically back on Kerbin. It uses RAPIER engines and can take about 30 tons of fuel to orbit. You recover every part of it, so you only pay for the cost of the fuel. I built it in the VAB because I just got better fuel efficiency than with any spaceplane design I made in the SPH. The design is halfway between a rocket and a space plane. So basically, the idea is that you launch only a couple of reusable spacecraft into orbit: a nuclear-powered interplanetary transfer craft with multiple docking ports, some kind of generic lander that can land on low-gravity bodies up to and including Duna and Moho, perhaps a lander that can land on Tylo, and a space plane for Laythe and for transporting your crews and science between Kerbin and orbit. Anyway, with this approach you end up just paying for the gas to explore the surface of any planet (except Eve, as there is no way to make a stock reusable spacecraft for it), since once you have the hardware in orbit, you are 100% reusable.
  10. Well, of course the black hole will survive... I thought that when the core of a hypergiant collapses into a black hole, the result is thought to be a "long" duration GRB (on the order of a few minutes at most, I think) and maybe a hypernova. So the star wouldn't last long at all. But that's with like 50+ solar masses as an envelope, and this is with only maybe 10 solar masses. Supposedly, these objects also die in some kind of supernova explosion when the neutron star collapses into a black hole. I wouldn't exactly call the star "surviving" if it's in the process of destroying itself in a supernova explosion.
  11. Yes, random number generators can be based on a chaotic system or a quantum state and generate random numbers without a seed. But to equate randomness with free will is ridiculous. If something is random, then NO ONE CHOSE IT. Randomness is just as different from free will as determinism is. Again, the strictest definition of free will is a calculation/decision/computation where at least some part of that decision is not decided by the laws of physics, and is instead decided by some non-physical component that embodies "will" and "purpose". Without this non-physical component, the decision was decided wholly by pre-determined, known physical laws operating on the decision-making "machine". At "worst", the decision was only subject to quantum non-determinism, and as others point out, quantum non-determinism may actually be an illusion itself. Quantum non-determinism certainly isn't under anyone's control, and thus cannot be a component of free will anyway! By this definition, free will is non-deterministic, but it is NOT randomness. The ONLY WAY it can exist is if there is some "higher" universe where souls exist and these souls can be "tapped" by decision-making machines to empower them with free will. Occam's razor says entertaining such an idea is ludicrous, so we almost certainly do not have free will under this definition.
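A minimal sketch of that seeded-versus-unseeded distinction, using only Python's standard library (nothing here is specific to any particular hardware RNG):
```python
import random
import secrets

# A seeded PRNG is fully deterministic: the same seed replays the exact
# same "random" sequence, so nothing about its output is chosen by anyone.
a = random.Random(42)
b = random.Random(42)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# By contrast, the secrets module draws on the OS entropy pool, which is fed
# by unpredictable physical events - there is no seed to replay.
print(secrets.randbits(32))  # differs on every run
```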
  12. I don't think this will ever be a problem. There is nowhere in nature to harvest antimatter in sufficient quantities to power a large starship. We will probably never be able to produce and store it in sufficient quantities either. That leaves us with nuclear fusion, which will have a much lower thrust.
  13. Since I guess this kind of object won't work with a black hole, that means the surface of the neutron star is critical in this kind of object. I guess at the core there must be a hell of a lot of fusion taking place on and near the surface of the neutron star itself, and that keeps the gas from just piling on and pushing it over the edge into a black hole. The outer surface of a neutron star is not actually neutrons- traditionally, it's supposed to be iron- but in THIS case, I guess it would be building up other elements too. So how do these objects fuse elements heavier than iron? Remember, fusing elements heavier than iron is endothermic- it actually ABSORBS energy. Is it that the fusion of elements LIGHTER than iron is occurring so furiously near the core that supernova nucleosynthesis is taking place?! So it's sorta like a sustained supernova explosion?!?!
  14. And again, how do you send a message all the way back to Earth? The signal must be strong enough to be received. You can't build a small high-gain antenna, because the size of such antennas is determined by wavelength- and that's something that's fixed. That makes your required transmit power even higher, because now you're transmitting more isotropically. Maybe it would be better to go with some kind of laser communications system to receive data from the spacecraft- and now you need a LARGE optical or UV or IR telescope to be able to receive the very low power laser signal. Oh, and you'd still need to send it commands via radio, because you can't put a large telescope on the spacecraft.
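To put rough numbers on why antenna size matters, here's a back-of-the-envelope link-budget sketch. The dish sizes, frequency, and distance are illustrative assumptions, not a real mission design:
```python
import math

def dish_gain_dbi(diameter_m, wavelength_m, efficiency=0.6):
    """Approximate gain of a parabolic dish: G = eff * (pi*D/lambda)^2."""
    return 10 * math.log10(efficiency * (math.pi * diameter_m / wavelength_m) ** 2)

def path_loss_db(distance_m, wavelength_m):
    """Free-space path loss from the Friis equation."""
    return 20 * math.log10(4 * math.pi * distance_m / wavelength_m)

wavelength = 0.036   # ~8.4 GHz (X-band); fixed by the radio system
distance = 1.5e11    # 1 AU in metres

for d in (0.2, 3.7):  # a tiny probe dish vs. a large spacecraft dish
    print(f"{d} m dish: {dish_gain_dbi(d, wavelength):6.1f} dBi gain, "
          f"path loss {path_loss_db(distance, wavelength):.1f} dB")
# The small dish gives up ~25 dB of gain relative to the big one, which has
# to be made up with ~300x more transmit power for the same data rate.
```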
  15. LOL, beat me to it. You gotta tell us what language you are working in. Are you programming some kind of microcontroller? Anyway, I started with a college course in C, followed by one in C++ and one in x86 assembly. I've taught myself other languages since, including some scripting languages like Lua and some C#. Once you understand the basics of coding, you can apply that general knowledge to other computer languages and teach yourself those too. You would need to start with a basic, introductory book or course on computer programming. Java, C, and C++ are good, general-purpose computer languages. C++ would probably not be a bad one to start on, but you'd want a learning resource that starts out with the assumption that you know nothing about computer programming.
  16. I don't think there is any single correct definition; it depends on the context and how narrowly or broadly you want to define it. In the most absolute sense, assuming there is no supernatural soul, our brains are just chemical computers. Every one of our decisions is dictated by the arrangement and configuration of our neurons, the chemicals being fed to them, the states of the neighboring neurons, etc., plus probably a slight random "noise" created by quantum mechanical effects. If significant enough, the quantum effects would make our behavior slightly non-deterministic, but not in any way that we have any control over. Thus, there is no free will; nothing that is constrained entirely by the laws of physics could have free will. This definition came up recently in a discussion about sapient machines, where someone claimed a sapient machine would not have free will. My point was that as long as our brains are entirely described by the laws of physics, there is no reason a machine couldn't have the same amount of free will that we have, regardless of your definition of free will. Depending on your definition, machines could have even more free will than we do. But anyway, what I described above is the most absolute definition. There are other definitions of free will obviously, such as how constrained we are by society or laws or our upbringing or whatever to make certain choices. So what "free will" means depends on the context, and there is no single answer. I don't think you'll get anything meaningful out of this topic, really.
  17. Do we have an idea what the minimum amount of information required to create a species from "scratch" is? Say we HAVE the ability to make artificial wombs, and to molecularly assemble DNA and cell structures. Do we only need to know the information necessary to describe a generic mammal embryo's cellular structure, into which we can just insert the mitochondrial and nuclear DNA for each individual of the species that we want to grow? Also, we need to remember that for each species, we need the sequences of thousands of individuals for a healthy population. Perhaps a reasonable way to compress that data would be to have a "template" sequence for each species, with each individual member of the species represented by only the differences between its sequence and the template sequence.
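A minimal sketch of that template-plus-differences idea, using toy strings rather than real genomes (the function names are mine, just for illustration):
```python
# Store one reference sequence per species; for each individual, keep only
# the positions where its sequence differs from the template.

def diff_against_template(template: str, individual: str) -> dict[int, str]:
    """Return {position: base} for every site where the individual differs."""
    return {i: b for i, (a, b) in enumerate(zip(template, individual)) if a != b}

def reconstruct(template: str, diffs: dict[int, str]) -> str:
    """Rebuild an individual's sequence from the template and its diffs."""
    seq = list(template)
    for i, b in diffs.items():
        seq[i] = b
    return "".join(seq)

template   = "ACGTACGTACGT"
individual = "ACGTTCGTACGA"

diffs = diff_against_template(template, individual)
print(diffs)                                   # {4: 'T', 11: 'A'}
assert reconstruct(template, diffs) == individual
```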
  18. The rocket equation doesn't scale with any dimension, but the power required to transmit data remains the same, at least using traditional methods of communications. So you can only make a spacecraft so small before you run into the problem that the majority of your spacecraft's mass is communications gear- energy storage and antennas. So you can't have a bunch of tiny little spacecraft flying around the solar system by themselves, because they can't transmit very much data, and they can't transmit with very much power because they can't store much energy. They'll need to stay within transmitter range of the "mothership", and that range won't be very big because their transmitters are so weak. Perhaps the problem could be somewhat mitigated if we made cute, tiny little RTGs. However, your antenna still needs to be a certain size... So yeah, I also don't see these as being the future of space exploration. They may be useful in some settings, but we'll still need big spacecraft. Besides the difficulty in scaling communications gear, not every sensor can be miniaturized; for example, if you shrink the size of a lens, you reduce the resolution of the imaging system.
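For the lens point specifically, the scaling follows from the Rayleigh criterion; a quick sketch with illustrative numbers:
```python
import math

# Diffraction-limited angular resolution of a circular aperture is roughly
# theta = 1.22 * lambda / D, so halving the aperture D doubles the smallest
# angle you can resolve. Apertures below are illustrative.

def angular_resolution_rad(wavelength_m, aperture_m):
    """Rayleigh criterion for a circular aperture."""
    return 1.22 * wavelength_m / aperture_m

wavelength = 550e-9  # green light
for aperture in (0.01, 0.1, 1.0):  # 1 cm, 10 cm, 1 m optics
    theta = angular_resolution_rad(wavelength, aperture)
    print(f"{aperture:5.2f} m aperture -> {theta * 1e6:.2f} microradians")
```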
  19. That is a bad example. A car does not need to be sapient to do its job, as the apparently imminent widespread introduction of self-driving cars demonstrates. You are also getting distracted by specific examples- picking at them without addressing the larger question. Do you believe that EVERY task can be practically streamlined into an automatic process?! And even if it's possible to crunch some task down into just a set of mathematical and logical relationships, do you not realize that those relationships must be painstakingly discovered and refined for each possible task? Most likely, it is FAR easier to just invent a general intelligence that can solve any task. Only if it were vastly harder to create a generalized intelligence than we currently suppose would this not be the case. Furthermore, as evidenced by the compact, energy-efficient generalized intelligence that each of us has in our heads, a generalized intelligence doesn't have to be very big, expensive, or power hungry.
Each problem in your "automoton" approach requires a massive undertaking of mathematical modelling, and years of testing and refinement, before you can build, for example, an asteroid mining system, or an automatic surgeon, or whatever. Furthermore, there are probably tasks so difficult and complex that it is impractical to automate them. In contrast, you could just build a generalized intelligence with human or superhuman mental capabilities which could tackle ANY problem. Once we discovered the secret to building a generalized intelligence, it would be vastly cheaper to just use an instance of it than to try to automate each task separately.
Since we don't even know how thinking works, and you haven't defined exactly what "automoton" means (I'm still going by what I think you mean), I find this conclusion questionable. We don't know how a thinking machine would be constructed. Studies of animal brains show that they operate in a manner that is radically different from how our current computers work. All we can really say with some certainty, assuming physicalism holds, is that a thinking machine should be possible. We don't know what components would go into it. Maybe we are not smart enough to understand how to build a thinking machine. That said, there is much reason for hope, as our minds are the result of an unthinking evolutionary process. Natural selection has no awareness of what it's doing, and yet, here we are. In contrast, we CAN think, so it seems reasonable to assume that we should be able to discover a way to create a thinking machine on a timescale much shorter than it took biological evolution to do the same. So a likely end result is this: we don't understand how the thinking machine works, we just know it does. It's not built of smaller parts we understand, at least beyond the most basic levels.
  20. In addition to what ZetaX said, do you really want to impact a habitable planet with asteroids?!?! And further- asteroids are a limited resource; you'll run out if you don't reuse them, and after the planetary encounter, it won't take terribly much energy to put one back into the proper orbit again. It will certainly take a lot less energy than going and snatching a new one from the asteroid belt! Furthermore, you'll probably want to eventually mine the asteroids, not blast them to the bottom of a big gravity well that you would later have to lift them out of! There are many, many reasons not to collide the asteroids with the planet you're trying to move. The only reason TO collide them with the planet is if you're also trying to terraform it at the same time, in which case you'll likely be colliding comets packed with volatiles, not asteroids.
  21. Just to be clear- are you talking about sapient, non-sentient machines? Like a machine that has a thinking mind but no feelings? Or are you talking about non-sapient, non-sentient machines (like the software I'm running on this computer)? Because I'm not sure everyone in this thread understands the distinction between sapience and sentience, and it's a highly significant one.
I think it may be exceedingly difficult to create a machine with all brains but absolutely no feeling. You would want an intelligent machine that works towards goals. If your machine was an asteroid mining overseer, for example, it would want to perform an exemplary job of mining asteroids. If it did not feel this way, then why would it be motivated to mine asteroids? Additionally, I think it would be very important for all intelligent (sapient) machines to have a reasonable moral sense. For example, imagine the asteroid mining machine is out somewhere, mining asteroids, when some nearby space colony finds itself in distress. The machine should have the sense to abandon what it's doing and go help, not prioritize asteroid mining over saving lives. Also, imagine the machine is mining some Earth-crossing asteroid. You don't want it blasting the thing to bits to, like, get at some buried deposit of metal, because that could create asteroidal shrapnel that could collide with Earth. It has to have the sense to act responsibly.
And if you think that an unthinking machine- one which is not sapient (but could actually still be sentient)- can do a job more cheaply than a thinking (sapient) machine at very complex tasks, then you don't understand programming or the nature of complex tasks. The problem is, when a task gets complex enough, it becomes cheaper and easier to use a thinking being to perform it. It is simply not possible to program the machine with all the proper responses necessary for a highly complex task; the machine must be able to come up with solutions of its own. Right now we use humans for these tasks. If it were easy and cheap to program "automotons" to do any job, then how come people still have jobs? Why haven't we been entirely replaced by "automotons"? The fact is, the policeman needs to think; the engineer needs to think; heck, even people building buildings need to think about how to do the job correctly. It is true that automation can replace SOME jobs that are currently still held by people, but I would guess that these days, most jobs still require a thinking being making rational decisions. And even for those jobs that can be replaced by automation, there will still have to be a thinking being that oversees the operation of that automation. And with thoughts, and the ability to act on your thoughts, comes danger if those thoughts are not guided by a sense of morals and feelings. Those morals and feelings can be rudimentary- but they must still be there.
  22. Yeah, I think so. I tried it out last night and it certainly felt like I was able to trim out my space plane. More experimentation is required. Look under flight controls here: http://wiki.kerbalspaceprogram.com/wiki/Key_bindings Trim would be Alt + W, A, S, D, Q, or E. And it works... I think. Unless I was just on drugs last night. More experimentation is definitely required; I only played KSP for like 10 mins last night.
  23. You're probably right that there is no standardized way to define it. For the purposes of this discussion, free will is a decision that is NOT the result of entirely physical processes; otherwise, the decision was made by deterministic or random processes. Neither determinism nor randomness has room for purpose or will. So in essence, by this definition free will MUST be supernatural.
In addition, YOU'RE the one who (indirectly) defined/implied free will here as being something supernatural first, not me. You did this by saying that a machine cannot have free will. The implication of the supernatural comes about quite clearly: machines are limited by the same physical laws that limit us. If no machine can ever have free will, then either a) we cannot have free will either, because we're bound by the same laws as the machine; or b) we have free will which is granted to us by a supernatural power that we could never incorporate into a machine. If our brains are entirely bound by the physical laws of nature, then every thought we think and every decision we make is just a result of a complex interaction of physical laws. We did not choose those physical laws or how our brain is wired; we in fact choose nothing. Hence, we have an illusion of free will, but in fact, all our decisions were determined by laws of nature beyond our control.
In my understanding, the length of time for which any chaotic system can be predicted is limited by the precision with which you can measure its state, and by how chaotic the system is (the more chaotic, the faster it approaches unpredictability for some given amount of initial measurement precision). Does not quantum mechanics provide the ultimate precision limit? Thus a truly chaotic system will eventually amplify quantum mechanical effects to a macroscopic scale. I was told recently by a researcher in chaotic systems that someone once did a calculation/estimate of how long it would take for quantum uncertainty to have a measurable impact on weather, and the result was "surprisingly short" (he didn't remember the actual time scale though). Maybe I should look for the paper, if it was presented in one. Anyway, our measurement capabilities and ability to model complex systems are not yet powerful enough for us to reach the quantum uncertainty bound, at least for any systems I know of... but I'm no expert in the field. I suppose we might be able to make such a system though.
I flip-flop my opinions daily based on the latest information I receive. But that doesn't require some supernatural free will that couldn't be replicated in a machine. In fact, if there is no soul directing our thoughts and actions, we are nothing but chemical-electrical machines ourselves. If machines can't have free will, then either we can't either, OR our assumption about the brain being entirely bound by physical laws is false and we have supernatural souls. If you can't tell already, I personally find it very unlikely that we have souls in the supernatural sense; thus it follows that there must exist some configuration of computational components that would exactly replicate all aspects of the human mind and human experience. Thus it follows that machines CAN have free will, at least just as much as we do- and that is true regardless of your definition of free will. Does that mean that any intelligent machine would have the same illusion of free will that we do? Of course not.
Does that mean that any intelligent machine would be as capable of self-evaluation as we are? Of course not. But some machines could. Some machines might even appear to have MORE free will than we do.
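As a toy illustration of how quickly a chaotic system amplifies tiny perturbations (the logistic map standing in for a real physical system; this is not the weather calculation mentioned above):
```python
# Two states of the logistic map differing by 1e-15 - far smaller than any
# realistic measurement error - diverge to order-one differences in ~50 steps.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-15
for step in range(1, 101):
    a, b = logistic(a), logistic(b)
    if abs(a - b) > 0.1:
        print(f"trajectories diverged after {step} iterations")
        break
```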
  24. Just to be clear: you guys do understand that no being whose mind is dictated by physicality can possibly have free will, right? If everything in our brain follows the laws of physics, then there is truly no such thing as free will. Our brains make a physical calculation using neurons, and the result of that calculation is our thoughts or actions. We're slaves to the rules of the calculation. True, quantum mechanics may occasionally play with the results- our brains are probably highly chaotic systems where it doesn't take long for Heisenberg uncertainty to be amplified to a macroscopic scale. That might be even more true for tough decisions. Who knows, it could be that the brain actually uses Heisenberg uncertainty to help randomize some of our choices- random behavior can be a survival advantage because it makes it harder for a predator to predict your behavior. But Heisenberg uncertainty is not free will- the fact that we have no control over uncertainty is precisely WHY it's uncertainty. That said, there is no evidence as of yet that the brain deliberately uses any quantum effects, insofar as those effects being used as part of a decision-making/thinking process.
Anyway, off the quantum mechanics tangent: there cannot possibly be free will without supernatural effects. I'm not making a judgement as to whether those supernatural effects exist or not. I just want to make sure you guys understand that when you talk of humans having free will, you are implying that there must be an immaterial, supernatural soul that affects our decisions. Free will == supernatural soul. If there are no supernatural effects in our brains, then there is absolutely no reason at all that machines can't have the same illusion of free will that we have- so they would have just as much free will as we do (zero, but with the illusion of free will existing).
Gee, I don't remember anyone asking ME what moral standards I'd like to follow. They were forced upon me by what society deems acceptable, and by what my brain finds acceptable. I didn't choose the programming of my brain. We've seen evidence that many non-human animals have at least a rudimentary sense of right and wrong, and the more social and intelligent an animal is, the more moralistic behavior we typically observe in it. Thus, it seems most likely- and there is quite a bit of evidence for this- that the human brain was programmed by natural selection for moral behavior and a sense of morals. Without moral behavior, it is much, much harder to build a community, a tribe, a society. Moral behavior gave our early hominid ancestors a survival advantage. So we're programmed by evolution and society to have a sense of morals. Our free will is just as illusory as any machine's would be.
I don't see any way to possibly disagree with this except if you believe in a supernatural, immaterial soul. If you DO, then it's OK to say so, and I will grant that from your point of view, I can be incorrect. But if you don't, then how can you possibly disagree with the above? BTW, if an immaterial, supernatural soul exists, it can be proven. It's the one intersection of the physical world and the supernatural world, and it would have to be happening in at least seven billion places right this second around the world- more if you're not a human elitist and you believe that animals have souls too. All we would have to do is look at the brain closely enough, and we would see the laws of physics breaking down, thus scientifically proving that souls exist.
(To those who think it's a contradiction in terms to think of scientifically proving the supernatural, the supernatural universe COULD in fact logically exist; it would have a similar relationship to our universe as our universe does to a simulated universe on a computer. We can make "supernatural" things occur in a simulated universe, even while all objects in that simulated universe must follow the physical laws of the simulated universe. I'm not saying that the existence of the supernatural would imply that our universe is a simulated universe, I am only saying that the relationship would be similar.) Anyway, I digress.
  25. Well, thanks Tokay and Renegrade. I thought that perhaps there was some recovery mechanic for SRBs that I was unaware of. It wouldn't be the first major game mechanic that I was completely unaware of. I just learned a few days ago that supposedly KSP has trim. SWEET JESUS I wish I had known that over a year ago. It doesn't show up in key bindings as far as I can tell, so how would I have known it existed?!