Everything posted by SomeGuy12
-
For space colonies, how thick does the shielding need to be?
SomeGuy12 replied to SomeGuy12's topic in Science & Spaceflight
The way I'm describing it, each cylinder rides inside a series of ring tracks. If you use steel instead of water, the inner ring tracks are attached to several meters of metal, so there is essentially no chance they will fail (magnet failures are possible, but there are passive designs where the magnets are just coils of wire that only need to be intact to work).

The transition car - essentially a maglev car riding a set of rails between the two colony disks - works by driving itself faster and faster (probably using a linear motor embedded in the track it rides on) until it matches speed with one disk or the other. It then extends a gantry to the hatch on that disk. The gantry would be designed to break away if it were ever extended toward the wrong disk. Passengers board and sit in special chairs that rotate so they are lying on their backs with respect to the G forces. Amusingly, the car could accelerate harder when its relative speed is lower, keeping the gravity experienced by the passengers at a steady 1.5 G or so; the seats would rotate in two dimensions (able to turn 180 degrees and also adjust their tilt with respect to the floor).

Furthermore, you can use the track and the outer ring supporting it as part of the structure. All the pieces of the colony press down on the maglev track, and that load becomes hoop stress in the outer, stationary ring. So the radiation shielding is also part of the structure, yet it adds no spin-induced strain, because it does not spin.
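As a rough sanity check on the transit-car idea, here is a minimal sketch of how long the car would take to re-match speed with a counter-rotating disk. The disk radius, 1 g rim gravity, and 0.5 g tangential acceleration budget are my own illustrative assumptions, not numbers from the post:

```python
import math

g = 9.81                            # m/s^2
radius = 500.0                      # assumed disk radius, m
rim_speed = math.sqrt(g * radius)   # rim speed needed for 1 g of spin gravity

# Counter-rotating disks: the car must change its speed by twice the rim speed
# to go from matching one disk to matching the other.
delta_v = 2 * rim_speed

accel = 0.5 * g                     # assumed tangential acceleration of the car
transit_time = delta_v / accel

print(f"rim speed        : {rim_speed:6.1f} m/s")
print(f"speed change     : {delta_v:6.1f} m/s")
print(f"time to re-match : {transit_time:6.1f} s")   # ~29 s for these numbers
```
-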
For space colonies, how thick does the shielding need to be?
SomeGuy12 replied to SomeGuy12's topic in Science & Spaceflight
The problem with the endless "hollow out an asteroid" cliche is that all that rock could be melted down and used in a controlled manner. Why depend on a lumpy, random scattering of rock for your shielding? Why leave all the rare elements in the asteroid unharvested? It doesn't make sense. Better to eat the asteroid and build exactly what you want. The point of using water instead of ice is that the algae in the water would be part of the colony's life support system, saving on farming mass elsewhere. It doesn't have to be algae, either; we could probably genetically engineer small aquatic plants that taste like apples or steak or something. -
I just had a bit of inspiration. I'm not certain how thick the shielding needs to be, but the problem with the mass of the shielding - say it's a meter of lead - is that it enormously stresses the centrifuge wheel. You have to make the cables and structural supports able to survive the apparent "weight" of a meter-thick layer of lead all the way around. Terrible idea.

Instead, you build a gigantic can out of whatever the shielding material is. One logical choice would be glass -> water -> glass, with the water full of algae. The whole can doesn't spin. There's an inner can, and it has tracks inside it - maglev tracks, specifically. The colony itself is a rotating disk that is much, much lighter - as light as possible - and spins, with the maglev tracks providing an occasional nudge. The linear motors in the maglev track are under computer control, and you dump any angular momentum into the inner colony disk, keeping the outer can completely non-rotating.

You'd actually use 2 colony disks that counter-rotate. That way you don't have to consume propellant to spin the colony up or down (the entire colony has a net angular momentum of zero). You would need some kind of maglev transfer car at the junction between the disks: you board at a hatch, the car decelerates, and then it spins up the other direction inside a vacuum transit track located between the 2 disks.

So, how thick does the water layer need to be? I'm guessing it needs to be thick enough that a column of it weighs as much as Earth's atmosphere - the column that produces 1 atm of pressure - so there is as much shielding mass overhead as on Earth. That works out to roughly 10 meters (about 33 feet) of water. For this reason, you'd want to make the colony as big as possible to reduce the relative proportion of shielding mass.
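A quick back-of-the-envelope check of that thickness, from the 1 atm column argument in the post (plain hydrostatics, nothing assumed beyond standard constants):

```python
ATM = 101325.0        # Pa, standard atmosphere
RHO_WATER = 1000.0    # kg/m^3
G = 9.81              # m/s^2

# Depth of water whose weight per unit area equals one atmosphere:
# P = rho * g * h  ->  h = P / (rho * g)
depth_m = ATM / (RHO_WATER * G)
depth_ft = depth_m / 0.3048

print(f"water column matching 1 atm: {depth_m:.1f} m ({depth_ft:.0f} ft)")
# ~10.3 m, about 34 ft - the same ballpark as the figure in the post
```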
-
Another factor that kills solar-thermal is really simple: the sun moves across the sky if the array is on Earth. Mirrors have a focal point that depends on the position of the light source and the mirror angle, so you have to install electric motors and sensors on every single mirror so they can be moved under computer control. This is expensive and complicated, and the motors will break. PV panels, you just screw down onto a rack and forget about them. If you use microinverters, failures and shading of individual panels don't affect the rest of the array, and microinverters now cost about $0.50 a watt, the same as the larger string inverters (for a few years, microinverters were more expensive). So it's basically foolproof if you use PV.

The same consideration applies to solar PV in space. Early satellites, and even some modern satellites, just slap solar cells onto every face of the spacecraft: no matter how it's oriented, it gets some power, keeping it alive and able to accept commands. Even the big arrays with motors to keep them oriented are much simpler than solar thermal would be.

I think the OP has a point, actually. If you were doing large-scale space-based solar, solar thermal might be the way to go. Per kilogram launched, Stirling + solar thermal could deliver more power, since mirrors are far lighter than solar PV and Stirling heat engines can be fairly efficient. Then again, the Stirling engine itself may be far heavier than just using efficient PV everywhere - high-end double- and triple-junction PV runs 31-44% efficient, which is probably lighter and more efficient than a Stirling engine of the same mass. No, what you'd do for space-based solar is use mirrors to concentrate sunlight onto a smaller array of double/triple-junction PV. This might end up lighter, as the mirrors could be very thin and light, and it's also easier to shed heat from a concentrated panel because blackbody radiation is proportional to T^4.
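To put a number on the T^4 point, here is a minimal sketch of the equilibrium temperature a concentrated panel settles at if it sheds its waste heat purely by blackbody radiation. The 10x concentration ratio, 30% cell efficiency, 0.9 emissivity, and two radiating faces are all illustrative assumptions, not figures from the post:

```python
SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W/m^2/K^4
SOLAR = 1361.0        # W/m^2, solar constant near Earth
CONCENTRATION = 10.0  # assumed mirror concentration ratio
EFFICIENCY = 0.30     # assumed cell efficiency
EMISSIVITY = 0.9      # assumed panel emissivity
FACES = 2             # radiating from front and back

# Waste heat per m^2 of cell that must be radiated away:
waste = CONCENTRATION * SOLAR * (1.0 - EFFICIENCY)

# Equilibrium: waste = FACES * emissivity * sigma * T^4
T = (waste / (FACES * EMISSIVITY * SIGMA)) ** 0.25
print(f"equilibrium panel temperature: {T:.0f} K ({T - 273.15:.0f} C)")
# hotter panels dump heat far more effectively, which is the T^4 advantage
```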
-
A solution to the magnetosphere problem of colonizing mars
SomeGuy12 replied to Clockwork13's topic in Science & Spaceflight
Are you trolling in your response? The "gunpowder rocket" analogy was to point out another stupid and pointless mental exercise. Nobody is ever going to build gunpowder rockets able to reach the Moon except for kicks, and nobody is ever going to terraform a planet except for kicks. It's a horrible solution to the problem. I pointed out that "for kicks" is the only reason to terraform a planet, such as in some future world where eccentric people can own entire planets. I'm not saying that will ever happen.

A self-replicating factory is a container full of machinery that can duplicate all of the machinery in the container. The human body is such a factory. Equipment made by a self-replicating factory is very close to "free", because you only have to pay to construct the first factory; after that it can copy itself without human labor, gathering its own raw materials, trillions of times. I'm saying that the human body is so limited that a realistic outcome is that future people won't use bodies at all. -
Replacing Orion SM fuel with Cryogenic fuels
SomeGuy12 replied to fredinno's topic in Science & Spaceflight
The reason for liquid hypergolic fuels that stay liquid at ambient temperature in orbit is what your rocket engine ends up looking like. There are 2 tanks, one containing fuel, the other oxidizer, both pressurized with helium. There's a valve coming from each tank, each with a servo motor on it, and both feed into a rocket nozzle. You want to fire the engine? Send power to both valves. Turn it off? Cut the power, and springs close the valves. This is about as simple as it can possibly be - well, not quite, a single tank with a single valve is simpler (monopropellant), but there is less performance. You can wait for years between engine burns. You can lose power to the capsule and still do the burn if you ever restore it. It's reliable and will get you home.

Sure, cryogenics have better performance, but the complexity is much greater: more complex tanks, a vacuum insulation layer that can fail, valves that have to be made of high-quality materials, and you need a cryocooler, a heat radiator, and an always-on power source to keep the liquid cold. Or you constantly lose propellant the longer you wait. That's a negative no competent aerospace engineer is going to sign off on. -
A solution to the magnetosphere problem of colonizing mars
SomeGuy12 replied to Clockwork13's topic in Science & Spaceflight
Why does anyone waste their time with such silly ideas? Why not plan a way to get to the Moon using a gunpowder rocket? Nobody is ever going to terraform Mars, or any other planet, unless it's some eccentric rich owner of an entire planet who does it for kicks. Instead, we will invent some kind of self-replicating factory - look in the mirror if you need proof it's possible to build one - and we'll convert planets to forms that are more useful to us.

For present-day humanity, that form would be a bunch of spinning stations. Using the matter found in Mars, you could build spinning stations with many, many times the surface area of Mars, housing thousands of times more people (or giving the people you house far more living space). Inside each station - probably shaped like a spinning wheel with carbon nanotube cables as spokes - there's no air leakage, the distance from the sun is set to minimize radiation exposure, the whole station has engines and can move, and the environment is perfectly controlled for the inhabitants. Future successors to humanity would probably be able to "live" in solid-state chunks of computing machinery and wouldn't need anything as inefficient as a station, just chunks of computing machinery positioned an optimal distance from the sun. -
Space Warfare - How would the ships be built/designed?
SomeGuy12 replied to Sanguine's topic in Science & Spaceflight
Interstellar fighting is impossible, right? Assume that post-Singularity, within about a century from now, technology is within a few percent of as good as it will ever get (things are limited by physics). If that is the case, then the defenders of a star system get all the rocks of that star system to make into defensive warships. The attackers have to use an ultra-high-Isp, extremely low-thrust engine to cross interstellar distances (while the defenders get to use high-acceleration fusion or antimatter-boosted engines that are far less fuel efficient). The attackers also emit a flare - their drive produces petawatts of gamma rays - so they are seen coming the entire time. -
Space Warfare - How would the ships be built/designed?
SomeGuy12 replied to Sanguine's topic in Science & Spaceflight
Why not have guided kinetic rounds? Basically, the round would have G-force-hardened electronics, similar to what's used in guided artillery rounds today. Railguns/coilguns impose a lot more acceleration than modern artillery, but fortunately you don't need any servo motors or guidance fins. Instead, you'd use simple thrusters: sticks of solid propellant in metal tubes containing electrodes, with g-force-resistant connections to the interior electronics.

The rounds would probably work like this. There would be photocell sensors facing backwards on the round, and the launching ship sends a coded series of communication laser pulses to each round in flight. The launching ship programs the round just-in-time with a one-time pad as it is loaded into the chamber (if you really want to be secure from hacking, you generate the pad from a sensor picking up radiation from a sealed radioactive source right before you fire), so there is no feasible way for the adversary to send their own commands. The light sensors face away from the enemy, so you can't jam them. (The whole shell would probably be studded with light sensors, all of them looking for update packets; if an update packet passes the one-time-pad security checks, it gets accepted.) These update packets order the shell to make maneuvers that correct for tiny inaccuracies in the launching railgun/coilgun and for any movements made by the target ship. Solid rocket motors can be hugely powerful if shaped the right way, so you could easily counter a several-G evasion burn with your own mirror burn.

Rounds leaving a railgun are going to be really hot, glowing in infrared, and will flare like a star when they perform correction burns. So the counter to this type of weapon is to shine a laser on the incoming round, igniting the solid rocket propellant and sending it tumbling out of control. Of course, the counter to a laser weapon is to shine a laser of a different frequency onto the main mirror of the enemy laser turret, whose coatings are only designed to reflect the turret's own band - you can probably counter a laser with a much weaker laser than the one you are countering.

Effectively, missiles and kinetic missiles do have a limited range, because the engine in the missile has a low Isp. In the case of a solid-rocket-boosted railgun round, the Isp is only about 200 s, while the monster fusion drives of the future could have Isps from 10k to 300k. So if you fire from too far away, the target ship can simply dodge if it can burn for long enough. For instance, if you weigh your kinetic round down with 50% solid rocket fuel (does it really matter whether you hit a target with tungsten or with solid rocket propellant if the impact velocity is 10-20 km/s?), your shell has a dV of about 1359 m/s. If the ship you fired at tries to dodge using a gas-injection aneutronic fusion engine that can do 1/10 of a G (apparently this is not fantasy and such a thing is actually possible: if you can contain plasma hot enough for aneutronic fusion with a reasonably light apparatus, the engine looks like a power reactor with a deliberate small flaw in the containment that lets a stream of high-velocity plasma jet out, but not a big enough hole to stop the reaction; you'd inject hydrogen into this stream as an "afterburner" that boosts thrust at the expense of Isp), then after roughly 1359 seconds the victim ship has dodged.

If the kinetic rounds have a closing velocity of 10 km/s, the maximum range is about 13,000 km - 26,000 km if you can launch at 20 km/s. The trouble with boosting the launch velocity is that every time you double it, the massive bank of capacitors driving the railgun, or the big honking stack of magnets driving the coilgun, has to roughly quadruple in mass (stored energy scales with velocity squared), so there are diminishing returns.

Why bother with kinetics if you can use lasers instead? One reason is bang per buck: if you can only fit, say, 100 metric tons of weapons into the payload section of your warship, you might do a heck of a lot more damage with kinetics than with lasers of the same mass. You also get a lot more range - with 5-meter-class laser mirrors you only have a "beam range" of about 1000-2000 km, while you could open up with kinetic missiles at vast distances, at least against targets that lack the dV to run or the point defense to stop them. You'd also carry counter-laser lasers: small, lower-power solid-state lasers designed to mar the mirror surfaces of the enemy ship's laser turrets, doubling as point defense.

One final note: I don't see blinding as an effective strategy. Over the years, electronics and sensors have gotten smaller and smaller. I think it's reasonable to expect that by the time we have the technology for space warships (if we ever build them), we'd be able to stud their surfaces with distributed sensors and electronics. Rather than using lenses, the hull itself would have a thin surface layer with regions sensitive to visible light and infrared; the (backup) sensors would be printed onto the hull, along with the electronics to integrate all the incoming light into a coherent image (similar to a light-field camera). These sensors would probably be sensitive enough to spot enemy warships with active engines (from the massive flares their engines give off) and the stars, enough that you could at least keep shooting and/or get home if you survive the engagement. Not that anyone would plan to return home - why carry enough fuel to get back? A more realistic space warship would probably be disposable, using autonomous systems or some kind of AI instead of a crew. An AI could get itself home by streaming its "mind-state" file off the ship during the battle, so the AIs never "die" and learn from their experiences when they are defeated.
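A quick check of the numbers in that example, using the ordinary rocket equation. The Isp of 200 s, the 50% propellant fraction, and the 1/10 g dodge burn are from the post; everything else is standard:

```python
import math

G0 = 9.81              # m/s^2, standard gravity
ISP = 200.0            # s, solid rocket motor (from the post)
PROP_FRACTION = 0.5    # half the round's mass is propellant (from the post)

# Tsiolkovsky rocket equation: dv = Isp * g0 * ln(m0 / m_final)
dv = ISP * G0 * math.log(1.0 / (1.0 - PROP_FRACTION))
print(f"shell delta-v: {dv:.0f} m/s")            # ~1360 m/s

# If the target can match that delta-v in time t, rounds launched from farther
# than closing_speed * t can be dodged.
target_accel = 0.1 * G0                          # 1/10 g dodge burn (from the post)
dodge_time = dv / target_accel                   # ~1400 s
for closing_kps in (10, 20):
    max_range_km = closing_kps * dodge_time
    print(f"closing at {closing_kps} km/s -> effective range ~{max_range_km:,.0f} km")
```
-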
Angel, frankly, you can argue either direction. The speed of human transit jumped enormously from 1900 to 1960 - from horses and trains to the early jet age. Due to nasty little laws of physics, it's inconvenient to go faster than sound: it consumes too much fuel, makes too much noise, and costs too much to build an aircraft capable of doing it. On the other hand, yeah, 1960 computers were experimental toys and few existed - engineers used slide rules - and look where we are now. And computers provide a way to bypass jet speed limits. We aren't quite there yet, but telepresence signals travel at lightspeed; if you could do every task remotely just as well as locally, it wouldn't matter that it takes hours to fly there. Anyways, arguing aside, the common factor during the last few thousand years of progress is that human brains are, at best, very slightly better than the average brain in the time of the Romans. If we can eliminate human brains as the bottleneck, well, we ain't seen nothing yet. It is physically possible to tear apart planets into self-replicating machinery, at least the solid portions, and probably possible to build antimatter-fueled starships.
-
I can, sorta. I have a degree in a related field and took a medical school neuroscience course. As far as I know, at all times, at synapses everywhere, the cells are active and slowly updating their states. However, the truly "active" neurons - those actually firing off action potentials - are only about 10% of the brain at any given time. With that said, if you were building an artificial equivalent, a correct emulation requires you to mathematically calculate state updates for every neuron in the machine; you could just update the more quiescent ones less often. This is why I think that designing massive multi-layer custom chips with dedicated circuitry for each and every neuron - IBM's approach - is the only practical way. CPUs/GPUs are too inefficient and not meant for this.
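A minimal sketch of the kind of per-neuron state update being described, using a simple leaky integrate-and-fire model. The model choice, time constants, thresholds, and input currents below are my own illustrative assumptions, not anything from the post:

```python
from dataclasses import dataclass

@dataclass
class Neuron:
    v: float = 0.0          # membrane potential (arbitrary units)
    threshold: float = 1.0  # firing threshold
    tau: float = 20.0       # leak time constant, ms

def update(n: Neuron, input_current: float, dt: float) -> bool:
    """Advance one neuron's state by dt milliseconds; return True if it fires."""
    # Leaky integrate-and-fire: the potential decays toward zero and integrates input.
    n.v += dt * (-n.v / n.tau + input_current)
    if n.v >= n.threshold:
        n.v = 0.0           # reset after a spike
        return True
    return False

# Quiescent neurons (little input) can be stepped with a coarse dt, active ones
# with a fine dt - the "update the quiet ones less often" idea from the post.
quiet, busy = Neuron(), Neuron()
for step in range(100):
    update(quiet, input_current=0.001, dt=1.0)    # coarse updates, never fires
    for _ in range(10):
        update(busy, input_current=0.08, dt=0.1)  # fine updates, fires regularly
```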
-
No, you didn't do the most cursory research or napkin math (yes, this is a childish tit-for-tat). The reason I say that is simple: open a neuroscience book and find out what happens when an action potential hits a synapse. Think about it for a bit. TLDR, a 1-clock solution exists: pre-calculate the threshold and arm drive gates with an enable signal on the inputs that will push the neuron over that threshold, so that in one clock the incoming signal arrives and the drive gates set the flip-flops for the next synapse over.

What I'm describing is using circuits that run at the same speed as current silicon, but in a massive, massive pile - either stacked in 3D into gigantic multi-layer cubes, or filling a building. The limiting factor becomes the speed of light between different simulated neurons. Do your own research, since you won't believe my findings: what is the longest possible wire run inside the brain? What is the propagation speed of the fibers crossing the corpus callosum? Google "hollow core optical fiber". You'll discover that if the ultimate bottleneck is the speed of light, you could build a system that is a million-fold or more faster. It ultimately depends on how dense the resulting cube of brain-emulating circuitry is; a 1-meter cube would be a lot easier to cool and power than a 10-centimeter cube. In the nearer term, before we talk about theoretical limits, you could much more easily get a 10,000x speedup. You'd need a machine the size of a building, and the hollow-core optical fiber runs could stretch to kilometers. This is something present-day humanity could build if the budget were there and we knew the pattern of the neural circuits (by, say, scanning a brain).

I'm not saying you couldn't do even better by taking that same pile of brain-emulating circuitry and building a machine with the same technology organized in some far more efficient, non-human way. I'm disagreeing with your premise that we have to do that, or that it would be pointless to emulate human minds. We might not be able to build a sentient machine from fully synthetic algorithms if the people designing it are mere humans, limited by human-scale thought speeds and memory. A sentient machine is thousands of separate subsystems, all of them capable of learning and evolving, all of them capable of affecting each other. How would you get a system that complex to even work without crashing or freezing, if it's impossible to test or debug systems that constantly change their behavior based on learning input? What if, say, 1 million of the world's best people can't make it work? (For one thing, there's a logarithmic relationship between throwing people at a problem and getting results: the more people you add, the less efficient each marginal person becomes, and there may even be a point where adding another person to a design team reduces performance.) Copying a known-good design from the human mind sounds to me like a lower-risk project. Once you have it working and running at higher speeds, you have your new super-intelligent best friends work out a way to build fully synthetic AIs. I think a human being who thinks 10k+ times faster and has instant mental access to tools like databases of knowledge and calculation routines would be a de facto superintelligence.

Even if you disagree that a 10,000x speedup is possible, do you concede that if you could build such a machine, it would be superintelligent? One final assumptions check: you do know I'm talking about using current large-scale silicon fabrication techniques to make custom hardware circuits that duplicate brain function. I'm not talking about a processor; I'm talking about a design where every synapse gets dedicated hardware to calculate the voltages and threshold at that synapse. You cannot meaningfully compare what I'm describing to, say, charts of calculations per second for Intel processors over time. Processors solve problems that are serial - step N2 depends on the answer from step N1. A brain simulation is inherently parallel: each synapse calculates its voltage separately from all the others. Our limits for silicon mean we cannot make circuits solve serial problems any faster at present - faster switching rates make too much heat, and the propagation delays are too long.
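A rough version of the napkin math being pointed to. The brain path length (~0.15 m) and axon conduction speed (~50 m/s for fast myelinated fibers) are my own ballpark assumptions, as is the 100 m fiber run for a building-sized machine; none of these numbers come from the post:

```python
C = 3.0e8            # m/s, speed of light in vacuum
FIBER_FACTOR = 0.99  # hollow-core fiber propagates close to vacuum light speed
AXON_SPEED = 50.0    # m/s, fast myelinated axon (assumed ballpark)
BRAIN_PATH = 0.15    # m, longest signal run inside a brain (assumed)
MACHINE_PATH = 100.0 # m, longest fiber run in a building-sized emulator (assumed)

brain_latency = BRAIN_PATH / AXON_SPEED                 # ~3 ms
machine_latency = MACHINE_PATH / (FIBER_FACTOR * C)     # ~0.3 microseconds

print(f"brain signal latency   : {brain_latency * 1e3:.1f} ms")
print(f"machine signal latency : {machine_latency * 1e6:.2f} us")
print(f"latency ratio (speedup ceiling): {brain_latency / machine_latency:,.0f}x")
# comes out to roughly 9,000x - the same order as the 10,000x figure in the post
```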
-
Concept: you build a gigantic superconducting quench gun on the Moon, and another at the surface of Mars or on one of its moons. The guns fire BB-sized iron pellets. The spacecraft is essentially a line of superconducting magnets, and it uses those magnets to decelerate incoming pellets - energizing each coil just after a pellet passes, the reverse of a coilgun's timing. High thrust: ultimately, the spacecraft is just a stack of gigantic superconducting magnets chasing the perpetually present "iron bar" formed by a continuous stream of pellets. I think 1 G can be done easily.

Zero mass loss: if the spacecraft accumulates the energy from each pellet as it decelerates it to rest relative to the ship, it can fling the pellets back to the moon that launched them. And if there's a pellet launcher at both ends of the journey, it ends up being a net-zero momentum change and net-zero mass lost from the moon. So you could do this for millions of years, sending large passenger vehicles back and forth, and never run out of iron for propellant. What's the rocket equation? Since there's no propellant carried onboard, it's immune to those logarithms.

What do you think? This rough scheme sounds like what a "civilized" inner solar system would look like, where people routinely reach other planets in a week or so - although there would be periods when Mars is on the other side of the Sun where the "track would be closed" and no trains would run.
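A minimal sketch of the momentum bookkeeping for the pellet stream. The ship mass, pellet mass, and pellet approach speed below are all my own illustrative assumptions, not figures from the post:

```python
G = 9.81               # m/s^2
SHIP_MASS = 200_000.0  # kg, assumed passenger vehicle
PELLET_MASS = 0.001    # kg, roughly a BB-sized iron pellet (assumed)
REL_SPEED = 50_000.0   # m/s, pellet speed relative to the ship (assumed)

# Thrust from catching a stream of pellets: F = mdot * v_rel
required_thrust = SHIP_MASS * G                 # for a 1 g ride
mdot = required_thrust / REL_SPEED              # kg/s of pellets to absorb
pellets_per_second = mdot / PELLET_MASS

# Power the ship's coils must soak up (kinetic energy of the incoming stream)
power_gw = 0.5 * mdot * REL_SPEED**2 / 1e9

print(f"pellet mass flow : {mdot:.1f} kg/s ({pellets_per_second:,.0f} pellets/s)")
print(f"absorbed power   : {power_gw:.1f} GW")
```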
-
Sigh. A lot of posts in this thread were not thought through.

1. Nobody credible talks about the Singularity going on forever. A better model is a step function: a sudden, vertical increase in technology from (1) what we got to with human minds over a few thousand years, to (2) the limits of what can be developed using systematic experimentation and engineering. By "systematic experimentation", I mean something like a laboratory the size of a continent, trying each and every possibility for useful physical phenomena. Systematic engineering means starting with the problem description, designing solutions, measuring their efficiency, and systematically investigating the most promising possibilities. Why would it happen this way? Because biological brains are stupid, ludicrously so. Reasonable estimates for how much better you could do using existing silicon fabrication technology are speedups of thousands to millions of times. You only need a small speedup to bootstrap/step intelligence forward to physical limits. Example: Google's neural net tools are versatile enough to help you write code and design better neural net tools. You use the tools to design an improved tool, which you use to... You probably hit a wall eventually, limited by the computer chips and memory the tools run on. But if the tools are now versatile enough to help you design faster chips and denser memory, then... Eventually your designs get so complex that human designers are simply lost, but now the tools are helping to overcome that too, translating hideous billion-transistor functional blocks into something that makes sense, and helping with verification and testing... And so on, until eventually the human designers are no longer part of the loop, because they are the bottleneck.

2. Nobody has a "choice" whether to participate or not. The beings who plug themselves into a global computer network, upgrade their minds, and make themselves immortal will be unstoppable. Adapt or die: either join them or you will be so obsolete that you may as well be dead.

- - - Updated - - -

Well, you either didn't think it through or you're not sure at all. You can get a million-fold speedup. How much smarter would you be if you thought the same way you do right now, but had 1 million times the time to think things through? (And a virtual environment to "live" in, with simulation tools and computers as fast as you are.)
-
Crops on Mars (minor "The Martian" spoilers)
SomeGuy12 replied to peadar1987's topic in Science & Spaceflight
Got a cite? You're saying that, to the best of our knowledge, the ordinary rocks of Mars are lethal immediately, and not just a long-term exposure hazard. I am skeptical. On Earth, even if you worked in an asbestos plant handling the most dangerous form without protection, it takes more than 10-20 years to develop lung cancer. For that matter, if we can colonize Mars, maybe we can grow a replacement lung... -
The OP's budget is too low to even consider fancy designs. He should stick with ball bearings and some kind of simple barrel, like a PVC tube. If he takes some courses or reads some electrical engineering textbooks, builds some simpler devices and gets them working, and gets a budget more like 5 grand, then he can consider doing things a better way.
-
Firefox pre-fetching pages, wasting bandwidth
SomeGuy12 replied to LordFerret's topic in Science & Spaceflight
Actually, a few years ago the major USA home ISPs (AT&T, Comcast) instituted monthly bandwidth limits of around 150-250 gigabytes per month. It didn't escape anyone's notice that both companies offer TV service, and that television service - which consumes digital bandwidth from essentially the same pool as internet bandwidth - is not limited, while Hulu or Netflix viewing counts against the cap. I think it's unmistakably an abuse of monopoly power and a perfect example of why internet service ultimately must be regulated like a utility, the same as water or electricity. TLDR: internet is now an essential service, just as important as having electricity, and these data caps are not rooted in actual ISP costs - it would be like the power company selling you the first 1000 kilowatt-hours for 10 cents each, then charging $1/kWh for the next 100. Totally arbitrary and in no way related to the cost of producing the energy. Anyways, prefetching webpages won't consume enough bandwidth to matter, but binge-watching Netflix 4K shows might... -
Yeah. I don't see a problem with a small probe that tries to use this drive. It will be possible to tell within a few days to a few months, max, whether it works or not, and the signal will be unmistakable. If the thing gains tens of meters per second of dV that wasn't in the fuel tanks, there's not going to be anything the skeptics can say. If it doesn't budge at all, there's nothing the supporters can say. Understanding the theory is beside the point - if it actually works, you can set fire to most physics textbooks and start over. I'm sort of imagining the post-event physics textbooks: "Conservation of Momentum *" ... (* only a suggestion, and not the law).
-
But if de-existence follows existence, then existence must have followed de-existence, or it couldn't have existed to be burned out in the first place... ugh. Anyways, yeah, it's totally possible that our universe is going to die, that it's just a temporary bubble, etc. But one popular theory is that something is making a constant stream of fresh universes; we just can't see the new ones because they are their own bubbles. You know, the death of the universe isn't the problem - it's our own personal deaths that are the problem. The universe probably has trillions of years of usable life left; even after the stars stop, you'd be able to pick up their corpses and make them into mini black holes that provide Hawking radiation for power or something. If you could survive your own personal death - get your brain cryogenically frozen when you die, have future people reconstruct you and eventually move you to a computer - you actually could be around a trillion years hence. And if your new brain weren't dependent purely on squishy cells but used computer chips, it could run a million or a billion times faster. So your subjective time could stretch by a factor of a million to a billion, multiplied by the billions or trillions of years the universe has left. A million times a trillion is a pretty big number. Practically eternity...
-
So, uh, since the universe seems to be on a rapid path to tearing itself apart and losing all its energy, how did the universe form in the first place? Why is there something rather than nothing? If there's something now, and there was nothing before, then something can come from nothing. Weird. Nothing we understand about reality makes the slightest bit of sense viewed in that light. Something coming from nothing is paradoxical; "there was always something" is paradoxical; reality is impossible. If we're real and here now, how did all this come to be? There should have always been nothing. (And no, religion doesn't explain anything.)
-
On the bright side, if 2 billion years has reduced the total energy output of the universe by half, we just might have a solution to the Fermi Paradox. Current evidence suggests it took life about 3 billion years to evolve its way up to us. If 3 billion years ago the universe was more than twice as bright - meaning more than twice as hot, with twice the radiation, etc. - and at that point life could still evolve here on Earth, out on the edge of the galaxy (where there's less radiation), then the reason we aren't up to our kneecaps in aliens is that any alien sentients are only just now evolving on the edges of their respective galaxies. It's self-apparent that life has a tough time getting started in a hot, high-radiation environment: the kinds of simple molecules that get started first are too simple to have any kind of self-repair. If the environment is too hot, the molecules are destroyed faster than they can crudely self-replicate, so they never get a foothold and a chance to evolve more advanced ways of doing things. Basically, life has to crawl and slither before it can walk and fly. Living machines that work in high-radiation environments are probably possible, but they would need internal data-integrity checks and self-repair systems that fix themselves as fast as they break - hugely complicated systems, too sophisticated to arise by chance without a few billion years of evolution behind them.
-
That complex relationship is this: any of these engines sends a continuous stream of mass out the back in a line. The mass times the velocity of that stream equals the momentum change of the spacecraft, also known as impulse. So the effect of the engine that you care about is just:

impulse = mass_propellant * velocity_exhaust

The energy you need to accelerate the propellant to that velocity is:

Energy_required = (1/efficiency_engine) * (1/2 * mass_propellant * velocity_exhaust^2)

See the problem? Say you have 2 electric engines, both 50% efficient, one with double the Isp of the other. Double the Isp means double the exhaust velocity, so for the same impulse you need half the propellant mass at twice the velocity:

Energy_required = (1/0.5) * 1/2 * (mass_propellant/2) * (2*velocity_exhaust)^2

which works out to double the energy for the same impulse. That's all it is. By this same equation, a dual-stage 4-grid thruster, with about 3 times the Isp, requires about 3 times the energy for the same change in velocity. Currently that energy has to come from solar panels, and since you need 3 times the energy but your panels haven't gotten any larger, you accelerate 3 times slower with the more efficient engine. As you know, the Dawn probe's ion engine is already crap for thrust; cutting it back by another factor of 3 (or even 6) isn't helping you any.

Solutions: if it's an inner-planets probe, you could launch on a bigger rocket from the ground and pack in a much larger solar panel - if half the mass of the probe is solar panel, the power-to-weight ratio is much better. If it's outer planets, you need high-power nuclear. None of this "SAFE-400" crap; I suspect the quoted 100 kilowatts per 500 kilograms is not an adequate power-to-mass ratio. You'd need something really pushing the limits, with minimal to no shielding to weigh you down. Anyways, if you had such a reactor, some of the older electric thruster designs like MPD would have been fine.
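A small sketch of that trade-off in numbers. The 50% efficiency and the roughly 3x Isp ratio come from the post; the specific Isp values are illustrative assumptions:

```python
G0 = 9.81  # m/s^2

def power_per_newton(isp_s: float, efficiency: float = 0.5) -> float:
    """Electrical power (W) needed per newton of thrust at a given Isp."""
    v_exhaust = isp_s * G0
    # Thrust F = mdot * v, so per newton of thrust mdot = 1 / v.
    # Jet power = 1/2 * mdot * v^2 = v / 2, divided by the electrical efficiency.
    return 0.5 * v_exhaust / efficiency

for isp in (3000, 9000):  # e.g. a Dawn-class ion engine vs a ~3x-Isp gridded thruster
    print(f"Isp {isp:5d} s -> {power_per_newton(isp)/1000:.1f} kW per newton of thrust")
# triple the Isp, triple the power for the same thrust - the relationship in the post
```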
-
Streetwind, it's not a big deal. For electric propulsion to work, you need to fly either nuclear-powered probes or probes with very large solar panels. It also has to be a mission where the huge delta-V is actually useful, such as an outer-planets probe that does more than a flyby and instead enters orbit around the planet in question. To pull that off, you'd need the probe to be nuclear powered with a real reactor onboard, not a mere RTG. You need legitimately high power-to-weight: a compact, lightweight reactor that is still safe to launch (maybe you'd dump the fissile material's packaging and shield after launch so you don't have to carry the weight on the journey), a lightweight heat engine with a high power-to-weight ratio, and probably a liquid-tin droplet radiator or some other high-performance, high-temperature coolant radiator. Compared to solving all those technical problems, the electric thruster is the easy part. Some of the previous electric thrusters would have been fine - you just need an appropriate power source.
-
Sounds like this is the ticket (ELF). It pretty much meets my checklist: high power (I assume you would upgrade those coils to superconducting wire on a flyable version), long burn life (no or minimal electrode erosion), any propellant you want, high Isp... this is how to do it. The Lorentz-force thrusters that use electrodes are currently among the most powerful and efficient thrusters available, but they suffer greatly from erosion; this seems to fix that. And I guess using solids would be annoying, but if you could use water as propellant, that would be great: electrolyze the water into hydrogen and oxygen, store them in gas bottles, configure the ELF for one or the other, run on pure hydrogen or pure oxygen, then switch once the temporary feed tank is empty.