Von Neumann probes


Souper

Let's assume that we build a von Neumann probe like that. It's probably huge, and carries a lot of equipment that no longer exists on Earth. We launch it towards a promising solar system...

...and hope that it finds an exact duplicate of the Earth. Otherwise it's in trouble, because it may not find all the raw materials it needs, or have the right equipment for processing them. Now it needs to improvise and build completely new machines for tasks nobody had even thought about before. It's just a single probe, so it may not be able to figure out everything before it runs out of some critical resource, or before making a critical mistake.

Maybe it's better to launch many probes to the same destination. That way, the mission isn't doomed if one individual isn't creative enough or makes a bad choice. Unfortunately, even that may not be enough. The probes may come up with a solution, but they may be unable to build the machines the solution requires quickly enough.

Perhaps the real solution is to seed a self-sustaining ecosystem at the destination, giving the probes enough time to adapt to the conditions and to figure out new ways to do things. But how do we do that, if we don't even know what the destination is like before the launch? The obvious solution is to stay in contact with the source, and launch resupply missions once the destination has been properly explored. At this point, the whole thing starts feeling more like colonization than von Neumann machines.

You would not land on a planet; you would be better off going for asteroids. The probe must be smart enough to identify suitable ones and mine them for the needed materials.

The probe doesn't need to be much smarter than this. Yes, it needs to be able to replace broken parts with newly made ones; if the redundancy breaks down, it will not be able to repair itself.

However, its first task will be to scale up production and add redundancy.

Note that a human colony will face the exact same problem unless it finds a parallel Earth where the colonists can live off the land, or at least grow plants; only then can they afford to lose advanced technology. If not, they will have to keep the life support system running. Yes, humans are smarter and better at repairing things, but the complexity problem kicks in just as much here.

No, that system of reactions doesn't have to be self-sustaining. The reactions are simply there already. It might use them until none are left (and then it probably dies).

Ok, let's drop the "self-sustaining" part. It doesn't change anything. We have a complex system of chemical reactions set up by some unknown process. The system is complex enough and large enough that it could bootstrap a proto-life ecosystem under suitable conditions.

Again: your setting is pointless, as in the end you are counting everything as an ecosystem, which drains your argument of any meaning. If we actually built such probes, you could still say that "it is just the universe sustaining a complex ecosystem".

Please reread my first message on page 7. I tried to explain there what I really meant.

Because the cell doesn't duplicate itself in an ocean. It duplicates itself in a complex self-sustaining system of chemical reactions. (Note that the term 'ecosystem' is often used for non-biological systems with characteristics similar to those of biological ecosystems.)

You can, however, have an extremely primitive ecosystem; just single-celled plants with photosynthesis should do in theory. Growth would probably be restricted by lack of CO2, although new CO2 is released by volcanoes. Advanced organisms need a more advanced ecosystem, but that is another issue.

Yes I am, but that doesn't make any difference to my argument. Start with a complex machine that is a) composed of simpler sub-machines, b) capable of building slightly more complex sub-machines than it itself is made from, and c) capable of assembling those more complex sub-machines into a new machine. Watch that complex machine build a more complex machine.

Do you also assume that the machine is able to build new copies of every sub-machine it consists of? In that case, your argument is tautological and doesn't tell us anything about reality.

Incorrect. Go and read up on computable vs. non-computable problems. It is entirely possible to have a process that you completely understand, and yet the only way to predict what the process will do is to run it and see what happens.

I'm quite familiar with computability. My favorite result there is Rice's Theorem. It essentially says that even if you're given the source code of a computer program and the opportunity to run it as many times as you want, it's logically impossible to decide any non-trivial property of what the program does. As with all statements involving non-computability, you must wrap the statement in a thick layer of quantifiers to say anything relevant to the real world.
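For reference, one standard way of stating it (with φ_e denoting the e-th partial computable function in some fixed numbering) is roughly:

```latex
% Rice's theorem: every non-trivial semantic property of programs is undecidable.
\text{Let } P \text{ be a set of partial computable functions such that }
\emptyset \neq \{\, e \mid \varphi_e \in P \,\} \neq \mathbb{N}.
\text{ Then the index set } \{\, e \mid \varphi_e \in P \,\} \text{ is undecidable.}
```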

My point was that in order to differentiate between understanding and storytelling, we must be able to make predictions or produce other tangible results. Some things are just logically impossible to understand, while others are practically impossible due to their complexity.

Everything just keeps getting more and more complex all the time.

You keep repeating that over and over, but that does not make it true. I have reasonable insight into what makes a machine and what goes into producing something. The claim that producing ever more complex parts and machines requires ever larger and more complex production machines is simply not true, and neither is the claim that these things become ever more separated as complexity increases. It has been the case for a while and in certain areas, but both modern production and developments in other areas show us it is not an absolute rule.

That is not even taking into account the human component in machines, which arguably is an incredibly complex and high-maintenance tool.

Do you also assume that the machine is able to build new copies of every sub-machine it consists of? In that case, your argument is tautological and doesn't tell us anything about reality.

Of course but I fail to see the tautology. Quite the opposite given that this is an essential, in fact a defining feature, of a von Neumann machine.

I'm quite familiar with computability. My favorite result there is Rice's Theorem. It essentially says that even if you're given the source code of a computer program and the opportunity to run it as many times as you want, it's logically impossible to decide any non-trivial property of what the program does. As with all statements involving non-computability, you must wrap the statement in a thick layer of quantifiers to say anything relevant to the real world.

My point was that in order to differentiate between understanding and storytelling, we must be able to make predictions or produce other tangible results. Some things are just logically impossible to understand, while others are practically impossible due to their complexity.

In that case, let me refer you to the three-body problem (since we're on the KSP forum). We can understand everything about the relevant orbital dynamics but cannot use them to make tangible predictions, since the system is inherently chaotic. So it's logically possible to understand the system, but we cannot use that understanding to predict its behaviour.
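Purely as an illustration, here is a small Python sketch of that kind of unpredictability; it uses the logistic map as a stand-in for a chaotic system (a full three-body integrator would be longer, but the point is the same). The rule is completely known, yet two trajectories starting a billionth apart end up unrelated.

```python
# Sensitive dependence on initial conditions: the logistic map at r = 4
# is fully deterministic, yet tiny input differences grow exponentially.

def logistic(x, r=4.0):
    """One step of the logistic map; chaotic for r = 4."""
    return r * x * (1.0 - x)

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.200000000, 50)
b = trajectory(0.200000001, 50)   # initial states differ by only 1e-9

for step in (5, 20, 40):
    print("step %2d: |a - b| = %.6f" % (step, abs(a[step] - b[step])))
# The early steps look identical; by step 40 the trajectories are unrelated.
```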

You keep repeating that over and over, but that does not make it true. I have reasonable insight into what makes a machine and what goes into producing something. The claim that producing ever more complex parts and machines requires ever larger and more complex production machines is simply not true, and neither is the claim that these things become ever more separated as complexity increases. It has been the case for a while and in certain areas, but both modern production and developments in other areas show us it is not an absolute rule.

Look at the entire system, not just the machine producing the final product. What intermediate products are used, and how are they produced? How are the machines used for making the intermediate/final products built? Trace all this recursively, until you reach primary production. It's always possible to make one part of the system less complex, but the price is increased complexity in other parts of the system.

Of course but I fail to see the tautology. Quite the opposite given that this is an essential, in fact a defining feature, of a von Neumann machine.

The debate is about whether von Neumann machines are possible. If you assume that they're possible, your argument looks like a tautology.

In that case, let me refer you to the three-body problem (since we're on the KSP forum). We can understand everything about the relevant orbital dynamics but cannot use them to make tangible predictions, since the system is inherently chaotic. So it's logically possible to understand the system, but we cannot use that understanding to predict its behaviour.

The last time I checked, weather forecasts were quite reliable.

Look at the entire system, not just the machine producing the final product. What intermediate products are used, and how are they produced? How are the machines used for making the intermediate/final products built? Trace all this recursively, until you reach primary production. It's always possible to make one part of the system less complex, but the price is increased complexity in other parts of the system.

Prove it. Simply repeating your idea over and over tells us nothing. Like I said, I happen to know a little about production and I just do not see it.

If I have a RepRap and it's just a matter of progressive iteration, could I not turn it into an Apple iPad producer with relative ease? If my RepRap does not have this ability, what additional features do I need to get there? At what point do I go from "can only make things out of plastic" to "can make everything from glass to metalwork to transistors to silicon chips"?

If no additional complexity is required for a construction system or manufacturing process to progress, then any RepRap/constructor can be programmed to iterate into a universal constructor.

I don't see that happening, so it seems there is a limiting factor to manufacturing processes.

Von Neumann machines/UCs might be possible, but that in no way means we can currently construct one.

Prove it. Simply repeating your idea over and over tells us nothing. Like I said, I happen to know a little about production and I just do not see it.

The world is now more complex than it was 10 years ago, when it was more complex than 20 years ago, and so on. Pretty much everything produced today involves bigger and more complex supply chains than anything did during World War II. Computers are used for almost everything, and the production of some of their components involves huge facilities with large one-of-a-kind machines.

If you want more specific answers, ask more specific questions.

There is a difference between there simply being those long chains and them being strictly necessary. They are simply efficient (from a market point of view), but they are not necessary.

The world is now more complex than it was 10 years ago, when it was more complex than 20 years ago, and so on. Pretty much everything produced today involves bigger and more complex supply chains than anything did during World War II. Computers are used for almost everything, and the production of some of their components involves huge facilities with large one-of-a-kind machines.

That is not proof. We need proof of your statements, not some casual and non-causal observation. Look at your own statements, and back them up with proven facts, research papers and other sources.

The debate is about whether von Neumann machines are possible. If you assume that they're possible, your argument looks like a tautology.

Ahh, I see. Thank you.

The last time I checked, weather forecasts were quite reliable.

Only up to a point, as I think you well know. And that has nothing at all to do with my argument which was a rebuttal of your assertion that "in order to differentiate between understanding and storytelling, we must be able to make predictions or produce other tangible results. Some things are just logically impossible to understand, while others are practically impossible due to their complexity."

Which is just flat out wrong. We can understand everything about a system and yet be unable to make useful predictions about it. In fact it is our deep understanding of that system that leads us to appreciate its unpredictability.

That is not proof. We need proof of your statements, not some casual and non-causal observation. Look at your own statements, and back them up with proven facts, research papers and other sources.

Please be more specific. What are the exact statements you're interested in, what is your understanding of the issues, and why do you think so?

May I suggest that Jouni states a formal definition of complexity, including how one might measure it? Then we can discuss a) whether it is adequate, and b) whether it increases or decreases.

We could use some standard definitions from algorithmic information theory. For example, let's choose a universal model of computation. The complexity of an object is the length of the shortest program that describes a class of objects, among which the object in question has maximal conditional Kolmogorov complexity. (Or something like that. It's been a while since I last touched algorithmic information theory.) This kind of complexity is obviously uncomputable (as with any reasonable definition of complexity), but the minimum description length approach provides a principled way of approximating it.
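As a crude, concrete illustration of that approximation idea (just a sketch; an off-the-shelf compressor only gives an upper bound on description length):

```python
# Minimum-description-length flavour of complexity, approximated with zlib:
# the compressed size is a computable upper bound on the shortest description.

import os
import zlib

def approx_complexity(data: bytes) -> int:
    return len(zlib.compress(data, 9))

structured = b"abcd" * 1000      # very regular: a short description suffices
noise = os.urandom(4000)         # incompressible: the description is ~ the data

print(approx_complexity(structured))  # a few dozen bytes
print(approx_complexity(noise))       # roughly 4000 bytes plus a little overhead
```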

Which is just flat out wrong. We can understand everything about a system and yet be unable to make useful predictions about it. In fact it is our deep understanding of that system that leads us to appreciate its unpredictability.

That kind of understanding is superficial, not deep. If there are no practical consequences, the understanding is no different from stories about the Flying Spaghetti Monster.

No. We create similar copies, but they're quite different. We're also not probes...
Give two humans one of these:

[image]

And enlarge the interior space a little

And have a pureed food tube, a water faucet and drink pouch...

And ta-da! You have a Von Neumann probe.

E. coli has 28,000 mechanical parts. It is very complex, and it contains logic circuitry similar to a computer's, sensors, movement systems, and so forth. As substrate it needs only trace minerals and sugar to self-replicate, as well as an aqueous environment. That is it. This is a strong existence proof that a true von Neumann probe is feasible, eventually.

What would such a probe actually look like? Well, like E. coli, it would need an internal system that can string together simple "feedstock" molecules into larger molecules that can perform tasks. It would also need the internal systems to make new feedstock. Every robot part in the probe would ultimately need to be made of things that can be made from these larger molecules built from the feedstock molecules.

Proposed design : the robot parts would be cubical metal subunits, on the order of the size of modern living cells. There would be between 10 and 100 specialized types of subunits that each perform a basic task. The obvious subunit is a cube that just has little attachment pieces on all sides that cause it to lock to adjacent subunits. Then there might be a kind that is like the first basic subunit, but one face of the cube has a sliding rail where things can slide up and down. This type goes in a joint. Then, another kind might have a gear on the side and an internal motor, and be capable of driving pieces. And so on.

The subunits would have to be made of a series of small pieces manufactured from the feedstock. Many of the pieces would be shared in common between the subunits. They would be made using convergent assembly on a nanoscale manufacturing line.

The probe would need to eat - it would eat rocks and digest them by converting the rocks to plasma and then directing the plasma using magnetic separation devices.

Or it might dissolve the rocks in acid and water and use a system much more similar to modern living cells.

It would be able to eat itself - to cannibalize broken parts and then remanufacture them. This means it would need multiple parallel redundant copies of every critical part in the probe, so it can eat a broken part while other parts take up the load.

Unlike an E. coli cell, it might weigh hundreds of kilograms. A device that operates in a vacuum and can eat rocks raw without needing help would necessarily be a lot bigger than an E. coli cell that just floats around in fluid until it bumps into food.

Such a probe doesn't have to be dumb. Once it reaches a destination star, it would eat rocks and build itself a bigger computer to think with. It would load highly compressed software, possibly using procedurally defined software that self modifies when it unpacks itself back into a sentient system.

It would then build a laser receiver device. The beings who launched it would send out a constant binary stream that would contain more advanced software libraries and possibly even encoded sentient beings, as map files of their internal neural networks. Humans could travel this way - the safe way to travel interstellar distances, by sending a binary copy of your mind state across the light years. Then you just wait for the reply that will contain your mind-state when you're done exploring...

Unlike an E. coli cell, it might weigh hundreds of kilograms. A device that operates in a vacuum and can eat rocks raw without needing help would necessarily be a lot bigger than an E. coli cell that just floats around in fluid until it bumps into food.

Such a probe doesn't have to be dumb. Once it reaches a destination star, it would eat rocks and build itself a bigger computer to think with. It would load highly compressed software, possibly using procedurally defined software that self modifies when it unpacks itself back into a sentient system.

It would then build a laser receiver device. The beings who launched it would send out a constant binary stream that would contain more advanced software libraries and possibly even encoded sentient beings, as map files of their internal neural networks. Humans could travel this way - the safe way to travel interstellar distances, by sending a binary copy of your mind state across the light years. Then you just wait for the reply that will contain your mind-state when you're done exploring...

I'd like to focus on this item here. As SomeGuy12 mentions, the existence of bacteria proves that Von Neumann machines are possible. But here, we now move from the discussion of "is this possible?" to "is this responsible?". Or - referencing the article I linked to earlier - do we want replicators or explorers? Which would we rather come into contact with?

A Von Neumann Probe - I'm just going to refer to them as VNP, as it's easier to type - will certainly need resources if it is to replicate, but replication may not be the best action for it once it reaches a target star. After all, the star's planets may be inhabited, and even if we take the precaution of hard-wiring "don't eat the natives" into the original VNP before launch, the natives may see things differently once the VNP starts chomping rocks and spitting out more copies of itself without so much as a "by your leave?". We can certainly imagine the response our world's leaders might give if, say, tomorrow astronomers report they've detected an alien probe busily replicating in our asteroid belt.

So if exploration and contact are the point of sending out VNPs, we need them to be smart enough to not only recognize if a system is inhabited, but also to make contact and to ask permission to replicate. If permission is given, or if there's nobody to get permission from, but there are planets where civilization could develop, the VNP must also be smart enough to limit its numbers. It would not do to spare a civilization but gobble up all of the off-planet resources that civilization may one day need.

This will certainly increase the complexity of any VNP sent out, but fortunately we can scale up. As Cooper's article states, we will likely work with VNPs first as miners in our system, and can gradually expand their capabilities to interstellar explorer-grade as we go along. But if there's an alien VNP patiently waiting in our system, our miner VNPs will need to be smart enough to recognize them and hold off, rather than attempt to eat our first alien visitor. Recognizing an artifact as a "do not eat" priority should be a lot easier than recognizing a civilization, and can serve as the ethical foundations our future explorer probes will need if they are to replicate responsibly.

In the mechanism I described, the first thing the probe munches on is an asteroid no larger than the one that was recently landed on. Once it has done enough munching, it would unpack itself back into a sentient system. Data integrity checks and redundant information would make a mutation less likely than one occurrence in the lifespan of the universe - it would not evolve the way living creatures do.

We could use some standard definitions from algorithmic information theory. For example, let's choose a universal model of computation. The complexity of an object is the length of the shortest program that describes a class of objects, among which the object in question has maximal conditional Kolmogorov complexity. (Or something like that. It's been a while since I last touched algorithmic information theory.) This kind of complexity is obviously uncomputable (as with any reasonable definition of complexity), but the minimum description length approach provides a principled way of approximating it.

Maybe I'll expand upon this a bit.

The Kolmogorov complexity H(x) of object x is its algorithmic entropy. It's defined as the length of the shortest program producing the object (or a full description of it). Like other entropy measures, the Kolmogorov complexity is quite useless as a measure of useful information. After all, random noise has maximal entropy.

My definition above essentially splits the Kolmogorov complexity into two parts: a description I(x) of a class of objects that should be functionally equivalent, and random noise N(x) that defines the individual object x within the class. Under some assumptions, we have H(x) ≈ I(x) + N(x).

Let's assume that x is a machine, y is the raw materials it uses, z is the object it produces, and r is the set of random conditions during the production. Because x, y, and r completely describe z, we have H(z) <≈ H(x) + H(y) + H(r).

Now let's assume that machine x is reliable. This means that most representatives of the same class of machines should be able to produce a representative of the same class of objects from most representatives of the same class of raw materials under most random conditions. Essentially, random noise N(·) and random events r should be mostly irrelevant in the production. Now we have I(z) <≈ I(x) + I(y).

The next thing to note is that machine x already contains a description of the raw material it needs; I(y) is included in I(x). Otherwise the machine can't know when it should start the production. Now we have I(z) <≈ I(x); a reliable autonomous machine can't produce objects more complex than itself.

The final step to I(z) < I(x) is more fuzzy. Production without information loss feels like an information-theoretic version of perpetual motion. In principle, there's nothing against it, but it's just extremely unlikely. The reasons against von Neumann-style self-replication are therefore statistical, not logical.
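Collecting the chain of bounds from the paragraphs above in one place (writing the informal <≈ as \lesssim):

```latex
\begin{aligned}
H(z) &\lesssim H(x) + H(y) + H(r) && \text{$x$, $y$, $r$ completely describe $z$}\\
I(z) &\lesssim I(x) + I(y)        && \text{reliability: $N(\cdot)$ and $r$ are mostly irrelevant}\\
I(z) &\lesssim I(x)               && \text{the description of $y$, $I(y)$, is already included in $I(x)$}
\end{aligned}
```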

I'd like to focus on this item here. As SomeGuy12 mentions, the existence of bacteria proves that Von Neumann machines are possible. But here, we now move from the discussion of "is this possible?" to "is this responsible?". Or - referencing the article I linked to earlier - do we want replicators or explorers? Which would we rather come into contact with?

A Von Neumann Probe - I'm just going to refer to them as VNP, as it's easier to type - will certainly need resources if it is to replicate, but replication may not be the best action for it once it reaches a target star. After all, the star's planets may be inhabited, and even if we take the precaution of hard-wiring "don't eat the natives" into the original VNP before launch, the natives may see things differently once the VNP starts chomping rocks and spitting out more copies of itself without so much as a "by your leave?". We can certainly imagine the response our world's leaders might give if, say, tomorrow astronomers report they've detected an alien probe busily replicating in our asteroid belt.

So if exploration and contact are the point of sending out VNPs, we need them to be smart enough to not only recognize if a system is inhabited, but also to make contact and to ask permission to replicate. If permission is given, or if there's nobody to get permission from, but there are planets where civilization could develop, the VNP must also be smart enough to limit its numbers. It would not do to spare a civilization but gobble up all of the off-planet resources that civilization may one day need.

This will certainly increase the complexity of any VNP sent out, but fortunately we can scale up. As Cooper's article states, we will likely work with VNPs first as miners in our system, and can gradually expand their capabilities to interstellar explorer-grade as we go along. But if there's an alien VNP patiently waiting in our system, our miner VNPs will need to be smart enough to recognize them and hold off, rather than attempt to eat our first alien visitor. Recognizing an artifact as a "do not eat" priority should be a lot easier than recognizing a civilization, and can serve as the ethical foundations our future explorer probes will need if they are to replicate responsibly.

A VNP with an error would stop working, as in being unable to repair itself or make other parts. It's very unlikely that it would, say, start producing more and more mining robots, or develop similar cancer-like replication problems. There are ways to make this even safer: if you compress and encrypt the relevant files, any bit failure will result in scrambled data; you then use error correction and multiple copies to avoid corruption. Now design the system so it regularly has to read the files from disc.
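A minimal sketch of the "integrity check plus multiple copies" idea, assuming SHA-256 digests and simple copy selection (a real probe would add proper error-correcting codes on top of this):

```python
# Keep several copies of a critical file, each with a digest; on read,
# return a copy whose digest still matches and repair the damaged ones.

import hashlib

def store(blob: bytes, copies: int = 3):
    digest = hashlib.sha256(blob).digest()
    return [(bytearray(blob), digest) for _ in range(copies)]

def read(replicas) -> bytes:
    good = None
    for blob, digest in replicas:
        if hashlib.sha256(bytes(blob)).digest() == digest:
            good = bytes(blob)
            break
    if good is None:
        raise IOError("all copies corrupted")
    for blob, _ in replicas:          # overwrite damaged copies with the good one
        blob[:] = good
    return good

replicas = store(b"fabrication and navigation software image")
replicas[0][0][5] ^= 0xFF             # corrupt one byte in one copy (a 'mutation')
print(read(replicas))                 # the original data is still recovered
```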

The VNP would not have much to do with planets other than landing probes on them for exploration. If it saw any sign of an advanced civilization (radio primarily, but also light), it would report back and take on some sort of ambassador role. A pre-industrial civilization would be hard to detect, but it would also not notice anything except perhaps a lander or two.

I imagine the VNP would have two purposes. The primary one would be exploration: the probes would move outward exploring solar systems, report back along the chain, and make copies of themselves for further exploration. A probe would not send a VNP to a system that already has one; it might send a backup probe to a system with a ship already underway, but nothing more.

After the expansion phase, it would maintain the VNP communication network and report back. It might also get new orders or plans from the operators.

A second stage might be to prepare a planet for colonization.

The final step to I(z) < I(x) is more fuzzy. Production without information loss feels like an information-theoretic version of perpetual motion. In principle, there's nothing against it, but it's just extremely unlikely. The reasons against von Neumann-style self-replication are therefore statistical, not logical.

Your theory is flat out wrong. Because we already have proof of existence of such devices, and can trivially draw out a sketch for more sophisticated devices. Let's find your errors.

Oh. That didn't take long.

1. When the machine copies itself, it first burns a bunch of energy reducing the entropy of the input feedstock to a very low level. (plasma separation, element specific filters, etc).

2. The machine's components are not a solitary piece of equipment. They are a population of individual pieces. Life on Earth clearly demonstrates that such a population can ratchet forward; life on Earth has done exactly that, from simple machines to the current complexity. The probability of a given machine producing a more complex version of itself is statistically highly unlikely... but it is possible.

3. In my proposal, the machine isn't just the piece you sent to another star. You send a constant stream of data from a vastly more complex host machine at the starting star. That data stream contains the information needed to make the more complex versions of the base machine you sent.

4. In my proposal, another way to do this is procedural compression. Basically, a CRC/MD5-checked bit of code inside the base machine says things like "go to X, perform this mathematical operation on X, execute the new code present at X...". You tested this code back at the host star, and developed a piece of code that will unpack into something capable of meeting the design constraints. Essentially it's almost like setting up a bacterium to evolve itself into a much more complex creature by rewriting its own genome. The probability of something like this existing by chance is low, but you'd build it by working backwards (a toy sketch follows after this list).

5. You can reduce the errors to near zero.
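And a toy sketch of the procedural-compression idea from point 4: a short generator expression stands in for a much larger unpacked artifact, and a digest computed before launch verifies the unpacking (the MD5 check and the particular generator are just illustrative):

```python
# Procedural compression in miniature: a tiny description that unpacks
# into a large artifact, verified against a digest computed before launch.

import hashlib

GENERATOR = "bytes((i * i) % 256 for i in range(1_000_000))"   # ~46-character description
EXPECTED = hashlib.md5(eval(GENERATOR)).hexdigest()            # recorded at the home star

def unpack() -> bytes:
    data = eval(GENERATOR)                                     # rebuild the artifact
    if hashlib.md5(data).hexdigest() != EXPECTED:
        raise ValueError("unpacking produced corrupted data")
    return data

print(len(unpack()), "bytes rebuilt from a", len(GENERATOR), "character description")
```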

Thinking about it, I'm giving engineering reasons. The theoretical reason your idea is wrong is that energy and information are related quantities, and you appear to be able to trade energy for information complexity.

Your theory is flat out wrong. Because we already have proof of existence of such devices, and can trivially draw out a sketch for more sophisticated devices. Let's find your errors.

It's not my theory, it's just mathematics. If you're not familiar with algorithmic information theory, I suggest you read a book or two about it. Algorithmic information theory is generally offered as a graduate-level computer science/mathematics class, so the web pages of your favorite university should have book suggestions.

1. When the machine copies itself, it first burns a bunch of energy reducing the entropy of the input feedstock to a very low level. (plasma separation, element specific filters, etc).

I fail to see how this is relevant to the discussion.

2. The machine's components are not a solitary piece of equipment. They are a population of individual pieces. Life on Earth clearly demonstrates that such a population can ratchet forward; life on Earth has done exactly that, from simple machines to the current complexity. The probability of a given machine producing a more complex version of itself is statistically highly unlikely... but it is possible.

This makes no difference. H(x) is a complete description of the entire population, while I(x) is a complete description of all of its relevant functionality. Life generates useful information from entropy, but because the process is random, it can't know in advance what it's going to get. Because we're talking about machines that should be able to produce something pre-defined, that kind of information is useless.

3. In my proposal, the machine isn't just the piece you sent to another star. You send a constant stream of data from a vastly more complex host machine at the starting star. That data stream contains the information needed to make the more complex versions of the base machine you sent.

In this case, we're not talking about von Neumann machines.

4. In my proposal, another way to do this is procedural compression.

Kolmogorov complexity already incorporates all compression methods anyone is ever going to invent. After all, it's the shortest algorithmic description of an object.

5. You can reduce the errors to near zero.

Are you saying that perpetual motion is possible?

Thinking about it, I'm giving engineering reasons. The theoretical reason your idea is wrong is that energy and information are related quantities, and you appear to be able to trade energy for information complexity.

What are you trying to say? I didn't mention energy at all, because it's irrelevant to algorithmic complexity.
