Everything posted by SomeGuy12

  1. No, but you're out at Ceres, having landed on it. You'd like to be able to stuff whatever you find lying about into your nuclear-electric rocket to improve your delta-V budget for getting home (or to the next asteroid). It might be supplemental at first - you'd have the minimum amount of fuel to get back, but if you can find more, you can get back on a faster trajectory. Eventually, picking up fuel at the destination would be part of the mission plan. (a rough sketch of the payoff follows below)
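To see why scavenged propellant matters, here is a minimal Tsiolkovsky-equation sketch. The Isp, dry mass, and propellant figures are made-up illustrative values, not mission numbers:

    import math

    def delta_v(isp_s, m_dry, m_propellant):
        """Tsiolkovsky: delta-v (m/s) gained from burning m_propellant (kg)."""
        return isp_s * 9.81 * math.log((m_dry + m_propellant) / m_dry)

    # Nuclear-electric tug, Isp ~ 5000 s (illustrative), 20 t dry mass:
    home_leg = delta_v(5000, 20e3, 5e3)            # only the reserve propellant
    topped_up = delta_v(5000, 20e3, 5e3 + 10e3)    # plus 10 t scavenged at Ceres
    print(f"{home_leg/1000:.1f} km/s -> {topped_up/1000:.1f} km/s after refueling")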
  2. Out of the available electric engine designs, why is xenon gas such a popular propellant? It's a little hard to come by. Second, are some of these designs more fuel-agnostic than others? An ideal engine would be high thrust, high ISP, lightweight, and long burn life, and it would let you throw any old rocks you found out in space into the input hopper. I guess you'd convert the rocks to plasma first, then separate them by atomic weight. Then, with this atomic-weight-separated propellant, you'd set your magnets or electric fields or voltages for a propellant with that particular atomic weight, and use it until you run out, then switch to the next one. Basic physics says it shouldn't matter what the propellant is, just what your desired ISP is... (sketch below)
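A rough sketch of that physics, assuming an idealized gridded ion thruster and singly charged ions; the 1500 V grid voltage and the propellant list are arbitrary examples:

    # Ideal exhaust velocity and ISP for a singly charged ion accelerated
    # through a grid voltage V, as a function of atomic mass.
    # Illustrative only -- real thrusters have divergence and charge losses.

    G0 = 9.81          # standard gravity, m/s^2
    E = 1.602e-19      # elementary charge, C
    AMU = 1.661e-27    # atomic mass unit, kg

    def ion_engine(atomic_mass_amu, grid_voltage):
        """Exhaust velocity (m/s) and ISP (s): energy qV -> (1/2) m v^2."""
        m = atomic_mass_amu * AMU
        v_e = (2 * E * grid_voltage / m) ** 0.5
        return v_e, v_e / G0

    # Same voltage, different propellants: heavier ions come out slower,
    # so you retune the voltage to hit a target ISP.
    for name, mass in [("xenon", 131.3), ("iron", 55.8), ("silicon", 28.1)]:
        v_e, isp = ion_engine(mass, 1500)   # 1500 V is an arbitrary example
        print(f"{name:8s}  v_e = {v_e/1000:5.1f} km/s   ISP = {isp:6.0f} s")

Read backwards, the same relation is the tuning knob described in the post: pick the ISP you want, and solve for the voltage given whatever atomic mass the hopper fed you.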
  3. The basic premise sounds solid. You can get ISPs and thrust-to-weight ratios far beyond anything you can carry on a rocket without using nuclear energy. It's straightforward present tech to build phased-array antennas for the microwave transmitter, so you can focus the beam without mirrors and lenses. Microwaves are a longer wavelength, though. What kind of energy losses do you face between the ground and the rocket? I read the white paper: 55% energy loss at launch, 88% during the circularization burn.
  4. Jouni, ribosomes and the metabolic pathways in an E. coli have finite complexity, despite being capable of producing products accurate to the atomic level. (yes, peptide chains are atomically exact - their permanent bonds are fixed, though the folding bonds are a bit loose) This demonstrates the feasibility of something that is the exact opposite of what you have argued: it is feasible to make a small, simple machine that can make any product it has a pattern for, and those products can become millions of separate functional systems. This is the ultimate promise of nanotechnology. The production machine would be very simple; the products would not be - just as in biology, where the molecular pathways that can make every possible amino acid, plus a universal assembler to string them together, fit easily into a single cell and occupy only a small portion of its codebase and internal systems. For practical and technical reasons, this is probably not how a nanoscale factory (frequently referred to as a "productive nanosystem" or "nanoforge") would actually be constructed, but you could build it this way.
  5. The problem is that it's a quickdraw contest/a who-can-be-the-biggest-jerk contest. Whoever creates the "techno-plague" first will have the biggest impact on future events in the galaxy. By choosing not to do this, you are choosing a suboptimal method for survival. (you'd attach copies of yourselves to the machinery, so the farther the plague spreads, the farther you personally spread) The same argument applies to draining a significant fraction of the resources of a star, if needed, to build a really, really big rocket that can hit a higher fraction of c. If you even need to do that - there is probably an efficient and elegant method that gets you to 0.9 c and only requires eating a few asteroids to build the equipment. An antimatter rocket with a big enough mass ratio can do this in theory. (a rough sketch below)
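For what "big enough mass ratio" means, here is a minimal sketch of the relativistic rocket equation. The perfect photon-rocket exhaust (v_e = c) is an idealized assumption, the mass ratios tried are arbitrary, and stopping at the destination multiplies the required ratio again:

    import math

    C = 2.998e8  # speed of light, m/s

    def final_speed(mass_ratio, v_exhaust):
        """Relativistic rocket equation: v/c = tanh((v_e/c) * ln(mass_ratio))."""
        return math.tanh((v_exhaust / C) * math.log(mass_ratio))

    # Perfect photon rocket (v_e = c): atanh(0.9) ~= 1.47, so a mass ratio
    # of only e^1.47 ~= 4.4 reaches 0.9 c -- "in theory" indeed.
    for R in (2, 4.4, 10, 50):
        print(f"mass ratio {R:5.1f} -> {final_speed(R, C):.3f} c")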
  6. Andrew, the reason your idea will not work is that a probe setup that doesn't care about the rules would outbreed/outcompete one that does. Turning whole star systems into more probes, faster probes, or much bigger probes, armed with weapons, is a better strategy than nibbling a few asteroids and waiting. That's the lowest common denominator, and it's what you would expect to see. The fact that we exist at all strongly suggests that alien intelligences capable of building such things are very, very, very far away from us. (not-in-our-galaxy far)
  7. Aren't we arguing about the size of the apple now? We could:

1. Package several autonomous systems into a single probe. By sharing subsystems (you need a minimum amount of mass to absorb the gamma rays produced by an antimatter-fired starship engine, so a bigger engine is more efficient), you save on total mass. Once it arrives at the destination, the autonomous probes split up...kind of like cancer does when it colonizes the body...so that a single adverse event or failure won't stop the overall effort.

2. Once we consume an entire star's worth of retrievable solid matter (presumably any star in our local group will have at least an earth-mass or more worth of solid rocks captured around it), we have an awful lot of resources for launching the next set of "probes" to a star not yet colonized. Once you are converting earth-masses or more worth of rocks into machinery, you have an awful lot of it.

This, by the way, means that a realistic probe might in fact have incredibly sophisticated software systems - equivalent to cramming an artificial neural network emulating the mind states of 100 people or more into it. If you had the equivalent of the minds of 100 people, you would be able to do very intelligent experimentation and engineering. This would probably fit into 100 kilograms or less of computing machinery, assuming you had 3-dimensional, nanostructured hardware. (the brain weighs 1.3 kilograms, and obviously you don't need many of the brain's systems for this; also, you could make equivalent circuits with a lot less mass because they don't have to be alive and self-repairing, nor use proteins...)

And the reason it agrees with your theory above is that when this probe encounters an obstacle it lacks the programming to overcome, it can extract information from the environment and use that, combined with base programming (knowledge about the laws of physics, prior techniques for similar problems, etc.), to craft a new solution. It's not a closed system - information is both entering and leaving. For that matter, isn't a human brain, like the one you are using to read this message, a device that destroys information on a colossal scale? The hugely complex state your mind is in right now is only transient, and only a tiny fraction of the information will be stored.
  8. Ok, so we can build self-modifying algorithms right this second that start simple and become far more complex than their original source code. Practically all of the neural-net AI algorithms give you this result: there is some simple code that defines the ANN, the rules used to evaluate a given ANN configuration's score, and the generational breeding. Now, all the examples are fed a dataset, and you would argue that the information is coming from the dataset. The complexity of the input dataset is greater than that of the resulting neural network that solves it - a Mario AI I saw only had a few hundred neurons, while the source code to Mario itself, plus the structure of the computer chip that runs Mario, is probably much more complex. More complex examples, like an AI that recognizes dog breeds, obviously use a far more complex input dataset.

Assuming you can build a probe that eats rocks and designs itself new equipment to eat rocks better, maybe the information to do so is coming from the laws of physics of the real world? That is, the probe has a test chamber, and it builds physical rock-eating robots. It tests how well the robots eat rocks, and promotes the ones that have higher scores. (a sketch of that loop follows below) That "test chamber" contains complex pieces of rock, is governed by whatever mechanism calculates the answers to the laws of physics, and the laws themselves create complex and difficult-to-predict results that are all data input to this hypothetical machine of yours.

I think your arguments are correct for an error-free, closed digital system that has no information inputs of any kind. However, a von Neumann probe is not such a system - which is where you went wrong - because its sensors inject additional information, such as a sensor that measures the rock-eating performance of a probe subsystem. You may argue that you want to treat it as a spherical cow that doesn't require input matter and energy, but that would be simplifying it inappropriately.

Sorry if I seem so set in my position when I don't know the details of the theory you're using. Most people don't. Does this theory have any practical applications? As far as I am aware, the absolute state of the art in useful computer science - stuff that is only marginally useful because it is so new - is various forms of artificial neural network, where recent results have gotten much better because of the use of GPUs and more sophisticated rules for the ANN that borrow tricks from nature.
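A minimal sketch of the mutate-test-promote loop described above. Here `mutate` and `score` are hypothetical stand-ins for the probe's fabricator and its instrumented test chamber, and the toy problem at the bottom is purely illustrative:

    import random

    def evolve(seed_design, mutate, score, generations=100, pop=20, keep=5):
        """Generic ratchet: breed variants, score them against the
        environment, promote the best. The information enters via `score`."""
        population = [seed_design]
        for _ in range(generations):
            # breed: fill the population with mutated copies of survivors
            while len(population) < pop:
                population.append(mutate(random.choice(population[:keep])))
            # test: the environment (here, the scoring function) ranks designs
            population.sort(key=score, reverse=True)
            population = population[:keep]
        return population[0]

    # Toy stand-in: "designs" are numbers, the environment rewards closeness to pi
    best = evolve(0.0,
                  mutate=lambda d: d + random.gauss(0, 0.1),
                  score=lambda d: -abs(d - 3.14159))
    print(best)

The point of the toy is where the information enters: nothing in the loop knows the target, it only ever sees scores coming back from the environment.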
  9. I was as well. You don't know in advance what you're trying to build; however, the non-deterministic algorithm was tested at the host star before you left, with a large number of different starting seeds, until you found one that resulted in something meeting your design constraints. (it isn't an exact match to your blueprint - it's basically a form of lossy compression) Umm, a non-deterministic algorithm seeded with a starting number that arrives at the desired result every time...what have we done here? (sketch below) The entropy definition I'm using is the physics one - the probe is a physical object. As you gain information stored in your memory banks and more tiny parts, as the probe gets larger with more systems, entropy decreases, because the order of the many atoms crammed into your probe matters, and that order has been forced into a low-entropy configuration.
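A minimal sketch of that seed trick: the "growth" process is stochastic in form, but a pseudorandom generator makes any given seed replay identically, so you search seeds at the host star and ship the winner. The function names and the toy acceptance test are hypothetical:

    import random

    def unpack(seed, steps=1000):
        """Stochastic 'growth' process made perfectly repeatable by its seed."""
        rng = random.Random(seed)       # private generator: same seed, same run
        state = 0.0
        for _ in range(steps):
            state += rng.gauss(0, 1)    # stand-in for one stochastic growth step
        return state

    def find_good_seed(acceptable):
        """Done at the host star: scan seeds until one unpacks to spec."""
        for seed in range(10**6):
            if acceptable(unpack(seed)):
                return seed             # ship this integer instead of a blueprint

    seed = find_good_seed(lambda result: result > 60.0)
    assert unpack(seed) == unpack(seed)  # every probe that runs it gets the same result
    print(seed, unpack(seed))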
  10. All life on earth. The "nothing" is that the information to form the life on earth never existed until random chance and a ratcheting algorithm generated it. The physics of the planet are deterministic... Our von Neumann probes can use such a ratcheting algorithm. And you can make them deterministic, where the seed of that algorithm is preset and the answers are also preset. Or you can make them partly deterministic, where a useful result is far more likely than garbage. A simple example: the probe has an inbuilt simulation of physics. As it encounters technical problems, it uses an evolutionary algorithm to generate possible solutions and simulates them. The computer doing this might not be quite deterministic (big multiprocessing systems often are not), but the answer it creates passes the simulation test for functionality. A series of design variants are then generated and tested in the real world, since small simulation errors mean the best design in the sim may not work exactly as well in reality. The design variant with the highest overall score is used... I see no reason the probe couldn't use such a technique to design a more sophisticated version of itself, limited of course by the ultimate theoretical limits of how efficiently you can arrange matter to perform the probe's mission objectives. In all these cases, you are definitely getting something containing more information than you started with. You are definitely lowering entropy locally for your probe. Doesn't that little equation of yours say this is impossible?
  11. Are you denying the basic premise that your interpretation of the theory is wrong? Since reality disagrees with your interpretation, I see no other possibility. If your theory were a law of physics, we would not be having this discussion. It's just a theory, and it's incorrect. The math may be self-consistent, but it is not what the universe is using. We are talking about von Neumann machines - in what way does the requirement to send additional information over a laser link make the machine no longer count as a self-replicating machine that can expand until the galaxy's solid matter is consumed? What exact detail makes it different? It just means the host machine is part of the system. How does a machine that is sentient fit into your explanation? If the machine is capable of deriving goals for itself, evaluating possible designs, conducting experiments, and then building a better version of itself...I think you're saying that such a device is impossible?
  12. Your theory is flat out wrong, because we already have proof of existence of such devices, and can trivially sketch out more sophisticated ones. Let's find your errors. Oh. That didn't take long.

1. When the machine copies itself, it first burns a bunch of energy reducing the entropy of the input feedstock to a very low level. (plasma separation, element-specific filters, etc.)

2. The machine's components are not a solitary piece of equipment. They are a population of individual pieces. Life on earth clearly demonstrates that such a population can ratchet forward - life on earth has done exactly that, from simple machines to the current complexity. The probability of a given machine producing a more complex version of itself is statistically very low...but it is possible.

3. In my proposal, the machine isn't just the piece you sent to another star. You send a constant stream of data from a vastly more complex host machine at the starting star. That data stream contains the information needed to make the more complex versions of the base machine you sent.

4. In my proposal, another way to do this is procedural compression. Basically, a CRC/MD5-checked bit of code inside the base machine says things like "go to X, perform this mathematical operation on X, execute the new code present at X...". You tested this code back at the host star, and developed a piece of code that will unpack into something capable of meeting the design constraints. Essentially it's almost like setting up a bacterium to evolve itself into a much more complex creature by rewriting its own genome. The probability of something like this existing by chance is low, but you'd build it by working backwards. (a sketch of the integrity check follows below)

5. You can reduce the errors to near zero.

Thinking about it, I'm giving engineering reasons. The theoretical reason your idea is wrong is that energy and information are related quantities, and you appear to be able to trade energy for information complexity.
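A minimal sketch of the verify-before-unpack step in point 4. The staged-payload idea is from the post; the function names and the choice of SHA-256 (rather than CRC/MD5, which are weaker) are my illustrative assumptions:

    import hashlib

    def unpack_next_stage(payload, expected_sha256):
        """Integrity ratchet: refuse to unpack any stage that doesn't hash to
        the value baked in at the host star -- this is what pushes the
        effective mutation rate toward zero instead of letting errors compound."""
        if hashlib.sha256(payload).hexdigest() != expected_sha256:
            raise ValueError("stage corrupted in transit -- request retransmit")
        return payload  # caller then executes / instantiates the verified stage

    # Toy run: the 'stage' is just bytes here; in the proposal it would be the
    # next, more complex block of probe software arriving over the laser link.
    stage = b"build the bigger computer, then the laser receiver"
    digest = hashlib.sha256(stage).hexdigest()       # computed before launch
    print(unpack_next_stage(stage, digest))          # verifies and unpacks
    try:
        unpack_next_stage(stage + b"!", digest)      # a single flipped byte...
    except ValueError as err:
        print(err)                                   # ...is caught, not executed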
  13. Because empirical data trumps theory. Speaking as an engineer here, your premise is bad. You can't "math it out" - too many variables: slight differences in the projectiles, air resistance, slight asymmetries in the magnets, heating of your power MOSFETs, etc. It is much more reliable to actually detect the projectile using photocells and switch your power control switches based on that information. I do take back what I said about microcontrollers - for your sensor rig, you would want a photocell sensor associated with every magnet, and you can determine your real acceleration and real velocity by measuring the exact times the projectile crosses 3 of the sensors in a row. (sketch below) Anyways, the currents involved are enormous. I am uncertain what kind of power MOSFETs can even handle the load. And your power supply...yikes. What kind of money are you looking to spend?
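The three-sensor measurement is just finite differences. A minimal sketch, assuming evenly spaced photocells and timestamps from one free-running clock (the spacing and times below are made up):

    def velocity_and_accel(t1, t2, t3, spacing):
        """Real velocity and acceleration from three photocell trips a fixed
        `spacing` (m) apart -- no modeling of magnets, drag, or MOSFET
        heating required, which was the point."""
        v12 = spacing / (t2 - t1)    # mean velocity over the first gap
        v23 = spacing / (t3 - t2)    # mean velocity over the second gap
        # mean velocities sit at the gap midpoints, (t3 - t1)/2 apart in time
        a = (v23 - v12) / ((t3 - t1) / 2.0)
        return v23, a

    # Example: 5 cm sensor spacing, timestamps in seconds
    v, a = velocity_and_accel(0.0, 1.00e-3, 1.90e-3, 0.05)
    print(f"v ~= {v:.1f} m/s, a ~= {a:.0f} m/s^2")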
  14. In the mechanism I described, the first thing the probe munches on is an asteroid no larger than the one that was recently landed on. Once it's done enough munching, it would unpack itself back into a sentient system. Data integrity checks and redundant information would push the probability of a mutation below one occurrence in the lifespan of the universe - it would not evolve the way living creatures do. (see the arithmetic below)
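To see how redundancy gets you toward numbers like that, here is a minimal majority-vote sketch; the per-copy corruption probability of 1e-3 is an arbitrary illustration, and a real probe would layer error-correcting codes on top:

    from math import comb

    def majority_corrupt(p_copy, n):
        """Probability that a majority of n stored copies are corrupted,
        if each independent copy is corrupted with probability p_copy."""
        return sum(comb(n, k) * p_copy**k * (1 - p_copy)**(n - k)
                   for k in range(n // 2 + 1, n + 1))

    # Even mediocre storage ratchets down fast with redundancy + voting:
    for n in (3, 7, 15):
        print(n, majority_corrupt(1e-3, n))

Fifteen copies of a critical block already put corruption around 10^-20 per read even with sloppy storage; add periodic re-scrubbing and the once-per-universe claim stops sounding exotic.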
  15. E. coli has 28,000 mechanical parts. It is very complex, and it contains logic circuitry similar to a computer's, sensors, movement systems, and so forth. As substrate, it needs only trace minerals and sugar to self-replicate, plus an aqueous environment. That is it. This is a strong existence proof that a true von Neumann probe is feasible, eventually.

What would such a probe actually look like? Well, like E. coli, it would need an internal system that can string together simple "feedstock" molecules into larger molecules that can perform tasks, along with the internal systems to make new feedstock. Every robot part in the probe would ultimately need to be made of things buildable from these larger molecules.

Proposed design: the robot parts would be cubical metal subunits, on the order of the size of modern living cells. There would be between 10 and 100 specialized types of subunits that each perform a basic task. The obvious subunit is a cube that just has little attachment pieces on all sides that cause it to lock to adjacent subunits. Then there might be a kind like the first basic subunit, but where one face of the cube has a sliding rail; this type goes in a joint. Another kind might have a gear on the side and an internal motor, and be capable of driving pieces. And so on. The subunits would be assembled from a series of small pieces made from the feedstock, with many pieces shared in common between subunit types. They would be made using convergent assembly on a nanoscale manufacturing line.

The probe would need to eat - it would eat rocks and digest them by converting the rocks to plasma, then directing the plasma using magnetic separation devices. Or it might dissolve the rocks in acid and water and use a system much more similar to modern living cells. It would be able to eat itself - to cannibalize broken parts and then remanufacture them. This means it would need multiple parallel, redundant copies of every critical part, so it can eat a broken part while other parts take up the load. Unlike an E. coli, it might weigh hundreds of kilograms: a device that operates in a vacuum and can eat rocks raw without help would necessarily be a lot bigger than an E. coli that just floats around in fluid until it bumps into food.

Such a probe doesn't have to be dumb. Once it reaches a destination star, it would eat rocks and build itself a bigger computer to think with. It would load highly compressed software, possibly procedurally defined software that self-modifies as it unpacks itself back into a sentient system. It would then build a laser receiver. The beings who launched it would send a constant binary stream containing more advanced software libraries and possibly even encoded sentient beings, as map files of their internal neural networks. Humans could travel this way - the safe way to cross interstellar distances is to send a binary copy of your mind state across the light years. Then you just wait for the reply that will contain your mind-state when you got done exploring...
  16. So it turns out I was just 2 days ahead of the news curve on this. Soylent is already 20% algae, and there's discussion of making it 100%, with a plan to genetically engineer the algae to make each constituent part of Soylent. Apparently, per Rhinehart, a 100k-square-foot warehouse full of algae tubes could feed LA. Even if he's off by a factor of 10, that's a lot smaller than the farms that feed LA, and algae tubes would recycle their water and would cost very little per unit of food, because the entire process would require minimal human labor... If his dream worked out, we'd still have food crops, but only for pleasure. Only the best and most competitive areas would grow high-grade food...
  17. And it's not programming the microcontroller, it's designing the high-power switches that can cut the flow of current instantly upon command from that microcontroller. Actually, you don't even need or want the microcontroller - it adds latency. You want a photocell sensor and a diode located at the entrance to the magnet, and electronics designed to wait a brief, fixed delay and then cut the power.
  18. What do you have to do to get the shrimp to breed a continuous series of new generations? I wonder if you could just eat this mix - the shrimp would provide animal protein, etc.
  19. No doubt, no doubt. Nevertheless, this appears to be the form the solution should be in. I'm not saying it would be easy - I honestly think the price tag to get to a working, reliable system might be measured in billions - but a tank of fluid full of lightly genetically modified algae sounds a lot easier to engineer into a closed system than a tray of terrestrial plants growing in hydroponics or real soil.
  20. The basic equations say they would be: if you produce enough food, you produce enough oxygen. Supposedly it only takes 6 liters of fluid. What I like about it is that it's light. This has potential. You could really fit this into an interplanetary rocket on a long journey, one using a high-ISP drive system where the propellant tanks and the drive itself are most of the mass. With these kinds of engineering requirements, you'd really be living in a tin can, and this lightweight life support system fits that constraint - carbon fiber, thin metal plates, and spaced meteorite armor (Whipple shielding). If the habitat were spinning, it would probably be 2 or 4 dumbbell-shaped modules spinning around a single access tunnel laced with carbon fibers (since the main structural stress would be tension along the access tunnel). The reason to have 4 modules instead of 2 is so you can spin up and down by just counter-rotating the 2 sets of modules. Since the net angular momentum of the spacecraft stays zero (2 sets of modules of the same mass spinning in opposite directions), you do not have to consume any propellant to do this - just electricity to run the motors that spin the modules. (numbers sketched below) All the computers would be ultrathin tablet screens, with the CPU hardware a tiny stick about the size of a thumb drive. Every fixture and furnishing would be the lightweight solution.
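A minimal sketch of the spin numbers; the 50 m arm length is an arbitrary example, not a proposed dimension:

    import math

    def spin_rate_for_gravity(radius_m, g=9.81):
        """Angular velocity (rad/s) and rpm so that omega^2 * r = g at radius_m."""
        omega = math.sqrt(g / radius_m)
        return omega, omega * 60 / (2 * math.pi)

    # Two identical module pairs counter-rotating: net L = I*w + I*(-w) = 0,
    # so the motors just trade momentum between the pairs; no propellant spent.
    omega, rpm = spin_rate_for_gravity(50.0)
    print(f"{rpm:.1f} rpm at 50 m for 1 g")   # ~4.2 rpm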
  21. Per Atomic Rockets, spirulina is apparently fairly close to what you would need. It grows extremely fast in water; apparently it would take only about 6 liters of fluid per astronaut, in theory, to run the opposite side of the respiration cycle and keep the astronaut alive with food and oxygen. (a rough mass balance follows below)

Your system would be simple. The urine and feces go into a tank of extremely high pressure, hot water that apparently will very rapidly break the waste down to its base molecules (this is called supercritical water oxidation). The CO2 and minerals from that probably go through some ion traps, to control the levels of certain metals, and then into the algae tank. The algae tank is just a stainless steel tank full of water with bright grow-lamp LEDs in the walls. The whole thing can be spun to stir the algae, remove it, and keep an air-water interface. You run the room air through a metal oxide bed, then heat the metal oxide to release the CO2, which you also pump into the algae tank. You'd probably do this in cycles, and once the CO2 concentration is low enough, you'd spin the tank and suck out the oxygen produced. Filter any residual CO2, then send that oxygen to storage bottles and ultimately back into the room.

Ok, so that's the basic system. Apparently the drawback of spirulina is that it produces too much nucleic acid. You would first try to genetically modify it to produce less, by knocking out the relevant genes. If that doesn't work (the nucleic acid may be a critical part of spirulina's cell biology), you'd dig up a biological enzyme that breaks down nucleic acid and produce it in a separate tank using E. coli. (you would separate the enzyme with centrifuges and add it to each batch of algae goop at harvest time)

What about taste and nutrition balancing? This would be simple genetic engineering, stuff you could do in a hackerspace at the new DIY biology labs in LA. You insert genes, behind a promoter for some key stage of the algae's growth cycle, that code for some form of carbohydrate. You'd insert genes for the amino acids whose concentrations you want to increase. You'd insert genes for vitamin C (or make it via E. coli in a separate tank). Where it gets really cool is that you could make several strains of the algae. One strain is carbohydrate-heavy, another is lipid-heavy (maybe making the same lipids that are in fish oil!), another is protein-heavy, and another is very sweet or has a certain flavor protein. You might actually have 10 or more separate flavoring strains, similar to those Coke machines at restaurants that can make any Coke product by combining different base syrups and additives.

The whole system would be a machine about the size of 3 or so vending machines. It would have the 5-10 different strains in different tanks, each on a growth and harvest cycle. Come harvest time, each tank would be spun enough to pump fluid out. The fluids from the tanks in the "recipe" would go into a high-speed centrifuge that removes most of the water. The resulting goop would be pressed into molds, then probably heated to remove the remaining water and maybe bake it. You might add binding agents produced via some natural process to make the resulting algae bar stick together.

A crewmember would go up to the machine and press his or her thumb on the reader. (or the reader would read an RFID tag implanted in them) The machine would eject a custom algae bar for that particular crew member, using data such as their activity level, current weight and fat levels, gender, age, etc. It would also let you choose between various flavoring combos to mask the slimy taste of algae - maybe orange or cherry or nut flavor. Sound possible? I think this is now possible with current tech, albeit it would require significant funds to develop a reliable system. You'd want to build dozens of these and test them on Earth. Maybe we'll see them installed in a university dorm somewhere...
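A back-of-envelope check on the food/oxygen claim, assuming NASA's commonly quoted crew metabolic figures (~1.0 kg CO2 exhaled and ~0.84 kg O2 consumed per person-day) and crudely treating algae biomass as CH2O; all numbers are illustrative:

    CO2_PER_DAY = 1.00        # kg exhaled per crew member per day
    O2_PER_DAY = 0.84         # kg consumed per crew member per day
    M_CO2, M_O2, M_BIOMASS = 44.0, 32.0, 30.0   # g/mol; CH2O as biomass unit

    # Photosynthesis, crudely: CO2 + H2O -> CH2O + O2 (mole for mole)
    mol_co2 = CO2_PER_DAY * 1000 / M_CO2
    o2_made = mol_co2 * M_O2 / 1000             # kg O2 per person-day
    biomass = mol_co2 * M_BIOMASS / 1000        # kg dry algae per person-day

    print(f"O2 produced:  {o2_made:.2f} kg/day vs {O2_PER_DAY} kg/day needed")
    print(f"Dry biomass: {biomass:.2f} kg/day -- roughly a crew member's food")

The shortfall on the O2 line is presumably why the post also routes the CO2 from supercritical water oxidation of waste into the tank: the loop only closes if the carbon in food waste comes back around.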
  22. Thanks for the idea. Aim north of the permafrost line, and maybe the ice will help cool the meteorite in a jiffy.
  23. Less than 6 people per square mile. 17 million total. Less power to object to having their country used as a lithobraking zone...
  24. I had in mind that you instead set up a "straight shot" approach. That is, the object would approach the planet on a course that takes it in a straight shot through the upper atmosphere and right into the ground at the chosen impact location. You would pick a spot far away from other people who have the resources to complain (hence I said Kazakhstan), make certain that the probable error band doesn't cover any significant population centers, and, as a secondary consideration, try to find soil or rocks at the impact point that will soak up the energy in a way that makes it easier to mine. The problem with your approach is that it sends the thing passing over most of the planet, possibly shedding pieces that hit people on the ground.