
Von Neumann probes



It's not my theory; it's just mathematics. If you're not familiar with algorithmic information theory, I suggest you read a book or two about it. Algorithmic information theory is generally offered as a graduate-level computer science/mathematics class, so the web pages of your favorite university should have book suggestions.

Are you denying the basic premise that your interpretation of the theory is wrong? Since reality disagrees with your interpretation, I see no other possibility. If your theory were a law of physics, we would not be having this discussion. It's just a theory, and it's incorrect. The math may be self-consistent, but it is not what the universe is using.

We are talking about Von Neumann machines - in what way is the requirement to send additional information over a laser link a change that makes the machine no longer count as a self-replicating machine that can expand until the galaxy's solid matter is consumed? What exact detail makes it different? It just means the host machine is part of the system.

How does a machine that is sentient fit into your explanation? If the machine is capable of deriving goals for itself, evaluating possible designs, conducting experiments, and then building a better version of itself... I think you're saying that such a device is impossible?

Edited by SomeGuy12

Are you denying the basic premise that your interpretation of the theory is wrong? Since reality disagrees with your interpretation, I see no other possibility. If your theory were a law of physics, we would not be having this discussion. It's just a theory, and it's incorrect. The math may be self-consistent, but it is not what the universe is using.

In which way does reality disagree with my interpretation? Provide a detailed example where reality creates useful information from nothing in a deterministic way.

Besides, do you even know computability theory and algorithmic information? Or what 'theory' means in a mathematical context?

We are talking about Von Neumann machines - in what way is the requirement to send additional information over a laser link a change that makes the machine no longer count as a self-replicating machine that can expand until the galaxy's solid matter is consumed? What exact detail makes it different? It just means the host machine is part of the system.

We're talking about von Neumann machines, which are autonomous self-replicating machines. If the machine has an external source of information, the replication must also replicate that source.

How does a machine that is sentient fit into your explanation? If the machine is capable of deriving goals for itself, evaluating possible designs, conducting experiments, and then building a better version of itself... I think you're saying that such a device is impossible?

Sentience is completely irrelevant to this discussion, unless you're assuming some kind of soul, which is not bounded by logic and the laws of physics. If sentience arises from a physical object, a sentient machine is just like any other machine from an information-theoretic point of view.


Are you denying the basic premise that your interpretation of the theory is wrong? Since reality disagrees with your interpretation, I see no other possibility. If your theory were a law of physics, we would not be having this discussion. It's just a theory, and it's incorrect. The math may be self-consistent, but it is not what the universe is using.

We are talking about Von Neumann machines - in what way is the requirement to send additional information over a laser link a change that makes the machine no longer count as a self-replicating machine that can expand until the galaxy's solid matter is consumed? What exact detail makes it different? It just means the host machine is part of the system.

How does a machine that is sentient fit into your explanation? If the machine is capable of deriving goals for itself, evaluating possible designs, conducting experiments, and then building a better version of itself... I think you're saying that such a device is impossible?

Part of the premise is correct: a ship that can make a copy of itself will need to be more complex than a factory building the ship, as it will also have to be able to build the factory.

Still, there is an upper limit: our technological civilization can make all the parts it needs to operate. More advanced technology will add extra complexity, but there is still an upper limit.

The VNP does not need to be the size of our industry, as industry produces far more parts than the probe needs. Just as important, industry is focused on mass production to meet demand and keep costs down. That increases size and complexity a lot: making a simple plastic or metal piece for mass production takes a long production line, while if you only need a one-off, a 3D printer or CNC lathe will do the work. Still, you will need loads of stuff - electronics, wires, tubing, drill bits, engines, sensors - and each will need a long production chain from raw materials.

Few have done any work on this, as we could not build a practical VNP now anyway.


In which way does reality disagree with my interpretation? Provide a detailed example where reality creates useful information from nothing in a deterministic way.

All life on Earth. The "nothing" is that the information to form the life on Earth never existed until random chance and a ratcheting algorithm generated it. The physics of the planet are deterministic...

Our Von Neumann probes can use such a ratcheting algorithm. And you can make them deterministic, where the seed of that algorithm is preset, and the answers are also preset. Or you can make them partly deterministic, where a useful result is far more likely than garbage.

A simple example: the probe has an inbuilt simulation of physics. As it encounters technical problems, it uses an evolutionary algorithm to search for possible solutions in simulation. The computer doing this might not be quite deterministic (big multiprocessing systems often are not), but the answer it creates passes the simulation test for functionality. A series of design variants is generated and tested in the real world, since small simulation errors mean that the best design in the sim may not work exactly as well in reality. The design variant with the highest overall score is used...
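
To make that concrete, here is a minimal sketch of the kind of loop being described. It is illustrative only: the scoring function is a stand-in for the probe's hypothetical physics simulation, and a "design" is just a short list of numeric parameters.

    import random

    def simulate_score(design):
        # Stand-in for the physics simulation: reward designs close to an
        # arbitrary target; a real probe would score e.g. simulated
        # rock-eating performance instead.
        target = [0.3, 0.7, 0.1, 0.9]
        return -sum((d - t) ** 2 for d, t in zip(design, target))

    def mutate(design, rate=0.1):
        # Randomly perturb each design parameter.
        return [d + random.gauss(0, rate) for d in design]

    def evolve(generations=200, pop_size=20):
        population = [[random.random() for _ in range(4)] for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=simulate_score, reverse=True)
            parents = ranked[: pop_size // 4]           # keep the best quarter
            population = parents + [mutate(random.choice(parents))
                                    for _ in range(pop_size - len(parents))]
        return max(population, key=simulate_score)

    best_design = evolve()   # candidate to prototype and test in the real world

The real-world design variants mentioned above would then be built from the top few survivors of such a loop rather than from the single simulated winner.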

I see no reason the probe couldn't use such a technique to design a more sophisticated version of itself, limited of course by the ultimate theoretical limits of how efficiently you can arrange matter to perform the probe's mission objectives.

In all these cases, you are definitely getting something containing more information than you started with. You are definitely lowering entropy locally for your probe. Doesn't that little equation of yours say this is impossible?


All life on Earth. The "nothing" is that the information to form the life on Earth never existed until random chance and a ratcheting algorithm generated it. The physics of the planet are deterministic...

Note that I was using "deterministic" in the computer science sense. A deterministic machine knows in advance what it's trying to build, while a nondeterministic machine (such as a living cell) introduces some elements of randomness to the product. A deterministic machine can't create new information, while a nondeterministic machine can. On the other hand, because the new information arises from randomness, it's almost always detrimental to the purpose of the machine, and often makes the product non-viable.

In all these cases, you are definitely getting something containing more information than you started with. You are definitely lowering entropy locally for your probe. Doesn't that little equation of yours say this is impossible?

Which definition of entropy are you using? In an algorithmic context, lowering the entropy generally means that you're losing information, not gaining it.


Note that I was using "deterministic" in the computer science sense. A deterministic machine knows in advance what it's trying to build, while a nondeterministic machine (such as a living cell) introduces some elements of randomness to the product. A deterministic machine can't create new information, while a nondeterministic machine can. On the other hand, because the new information arises from randomness, it's almost always detrimental to the purpose of the machine, and often makes the product non-viable.

Which definition of entropy are you using? In an algorithmic context, lowering the entropy generally means that you're losing information, not gaining it.

I was as well. You don't know in advance what you're trying to build; however, the non-deterministic algorithm was tested at the host star before you left, with a large number of different starting seeds, until you found one that resulted in something that met your design constraints. (It isn't an exact match to your blueprint; it's basically a form of lossy compression.)

Umm, a non-deterministic algorithm, seeded with a starting number, that arrives at the same desired result every time... what have we done here?
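
As a toy illustration (purely hypothetical, not tied to any real probe software): fix the seed of the "random" search and the whole procedure becomes a repeatable computation that lands on the same answer every run.

    import random

    def seeded_search(seed, steps=1000):
        # Hill-climb toward an arbitrary target value; every "random" step is
        # determined by the seed, so the result is identical on every run.
        rng = random.Random(seed)
        x, target = 0.0, 3.14159
        for _ in range(steps):
            candidate = x + rng.gauss(0, 0.1)
            if abs(candidate - target) < abs(x - target):
                x = candidate
        return x

    assert seeded_search(42) == seeded_search(42)   # same seed, same "discovery"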

The entropy definition I'm using is the physics one - the probe's a physical object. As the probe gets larger, with more systems, more tiny parts, and more information stored in its memory banks, entropy decreases, because the order of the many atoms crammed into the probe matters, and that order has been forced into a low-entropy configuration.


You don't know in advance what you're trying to build; however, the non-deterministic algorithm was tested at the host star before you left, with a large number of different starting seeds, until you found one that resulted in something that met your design constraints. (It isn't an exact match to your blueprint; it's basically a form of lossy compression.)

Umm, a non-deterministic algorithm, seeded with a starting number, that arrives at the same desired result every time... what have we done here?

A completely deterministic algorithm that can't create new information. Pseudo-random number generators are just iterated hash functions. The only randomness in their output comes from the seed. If we fix the seed in advance, we get an ordinary deterministic algorithm. Even if we run the algorithm with a truly random seed, the amount of new information it can create is limited by the size of the seed.
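
A toy sketch of that claim (illustrative only, not any particular real generator): build a "random" stream by iterating a hash over a seed, and the output is a pure function of that seed.

    import hashlib

    def hash_prng(seed: bytes, n_blocks: int) -> bytes:
        # Iterate SHA-256 over the seed; the output looks random, but it
        # contains no information beyond the seed and this short program.
        state, out = seed, b""
        for _ in range(n_blocks):
            state = hashlib.sha256(state).digest()
            out += state
        return out

    assert hash_prng(b"fixed seed", 4) == hash_prng(b"fixed seed", 4)

Change one bit of the seed and the stream looks completely different, but its information content is still bounded by the seed plus the generator.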

Entropy and information content both refer essentially to the shortest description of the object. If we take a non-deterministic algorithm with a hard-coded seed, the algorithm is obviously a description of its output. As the shortest description of the output can't be longer than a concrete description we already know, the entropy/information content of the output can't be higher than that of the algorithm.

The entropy definition I'm using is the physics one - the probe's a physical object. As the probe gets larger, with more systems, more tiny parts, and more information stored in its memory banks, entropy decreases, because the order of the many atoms crammed into the probe matters, and that order has been forced into a low-entropy configuration.

We're talking about information and complexity, so the information-theoretic definition of entropy is the correct one. A few examples (with a rough compression-based sketch after the list):

  • If we have a binary sequence consisting of n 1-bits, its information content and entropy are both around log n bits.
  • If we have a random binary sequence of length n, its information content is around log n bits, while its entropy is around n bits.
  • If we have a computer program of size n (bits), its entropy is probably around n/10 bits, while its information content is lower.
  • If we take the same computer program and encode it in the low-order bits of a byte sequence, and then encrypt the sequence, we have a random-looking bit sequence of length 8n. The entropy and the information content of the sequence are still roughly the same as in the previous case.
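
A crude way to see the distinction drawn in the first three bullets, using a general-purpose compressor as a stand-in for the (uncomputable) shortest description - zlib won't reproduce the exact n/10 figure, but the ordering comes out clearly:

    import os
    import zlib

    n_bytes = 12_500                                       # roughly 100,000 bits
    cases = [
        ("all 1-bits", b"\xff" * n_bytes),                 # compresses to almost nothing
        ("random bits", os.urandom(n_bytes)),              # barely compresses at all
        ("program text", open(__file__, "rb").read()),     # in between (run as a saved script)
    ]
    for name, data in cases:
        ratio = len(zlib.compress(data, 9)) / len(data)
        print(f"{name:12s} compresses to {ratio:.1%} of its original size")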


Ok, so we can build self-modifying algorithms right this second that start simple and become far more complex than their original source code. Practically all of the neural net-AI algorithms give you this result. There is some simple code that defines the ANN, the rules used to evaluate a given ANN configuration's score, and the generational breeding.

Now, all the examples are fed a dataset, though, and you would argue that the information is coming from the dataset. The complexity of the input data set is greater than that of the resulting neural network that solves it - an example of a Mario AI I saw had only a few hundred neurons, while the source code to Mario itself, plus the structure of the computer chip that runs Mario, is probably much more complex. A more complex example, like an AI that recognizes dog breeds, is obviously using a far more complex input data set.
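
A toy sketch of that point (nothing to do with the Mario example specifically): the only place the decision rule below gets into the learner is through the labelled data it is fed.

    import random

    random.seed(0)
    # The rule x1 + 2*x2 > 1 is used only to label the examples; the learner
    # below never sees it, yet its weights end up approximating it.
    data = [([x1, x2], 1 if x1 + 2 * x2 > 1 else 0)
            for x1, x2 in ((random.random(), random.random()) for _ in range(200))]

    w, b, lr = [0.0, 0.0], 0.0, 0.1
    for _ in range(50):                           # training passes (perceptron rule)
        for inputs, label in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, inputs)) + b > 0 else 0
            err = label - pred                    # -1, 0, or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, inputs)]
            b += lr * err

    print("learned weights:", w, "bias:", b)      # a boundary learned from the data, not hard-coded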

Assuming you can build a probe that eats rocks and designs itself new equipment to eat rocks better, maybe the information to do so is coming from the laws of physics of the real world? That is, the probe has a test chamber, and it builds physical rock-eating robots. It tests how well the robots eat rocks, and promotes the ones that have higher scores. That "test chamber" contains complex pieces of rock, is governed by whatever mechanism calculates the answers to the laws of physics, and the laws themselves create complex and difficult to predict results that are all data input to this hypothetical machine of yours.

I think your arguments are correct for an error-free, closed digital system that has no information inputs of any kind. However, a von Neumann probe is not such a system, which is where you went wrong - it's adding information because it's using sensors that are injecting additional information, such as a sensor that measures the rock-eating performance of a probe subsystem. You may argue that you want to pretend it's a spherical cow and doesn't require input matter and energy, but that would be simplifying it inappropriately.

Sorry if I seem so set on my position when I don't know the details of the theory you're using. Most people don't. Does this theory have any practical applications? As far as I am aware, the absolute state of the art in useful computer science - stuff that is only marginally useful because it is so new - is various forms of artificial neural network, where recent results have gotten much better because of the use of GPUs and more sophisticated rules for the ANN that use tricks stolen from nature.

Edited by SomeGuy12

Please be more specific. What are the exact statements you're interested in, what is your understanding of the issues, and why do you think so?

No, I do not want or need to be more specific; I want you to put your money where your mouth is. You have repeatedly stated your views in this thread, yet never substantiated them - most specifically your idea that ever more complex products need ever more complex production facilities. Please state your definitions and then back them up with facts and sources.

It is a very interesting subject, but going round in circles is not too terribly useful, so I ask you to stop distracting and changing the subject.

Edited by Camacha

Ok, so we can build self-modifying algorithms right this second that start simple and become far more complex than their original source code. Practically all of the neural net-AI algorithms give you this result. There is some simple code that defines the ANN, the rules used to evaluate a given ANN configuration's score, and the generational breeding.

Self-modification doesn't help, but observing the environment and experimenting with it can extract information from it. With them, we come back to some of the issues I mentioned earlier:

  • A single probe might not be smart enough or creative enough to figure out the solutions. To alleviate the problem, we need multiple autonomous entities with enough equipment to do the experiments they choose to do independently. Essentially, we need to launch multiple probes instead of just one.
  • Experimentation in an unknown environment is inherently dangerous. A significant fraction of the probes can be expected to damage or destroy themselves in the process. The obvious solution is to launch even more probes to the same destination.

Now the whole thing is starting to look more like colonization than von Neumann probes.

Sorry if I seem so set on my position when I don't know the details of the theory you're using. Most people don't. Does this theory have any practical applications? As far as I am aware, the absolute state of the art in useful computer science - stuff that is only marginally useful because it is so new - is various forms of artificial neural network, where recent results have gotten much better because of the use of GPUs and more sophisticated rules for the ANN that use tricks stolen from nature.

Probability and information are two sides of the same coin. There are many fields and subfields of research studying their fundamental properties, and algorithmic information theory is one of them. Anyone who's serious about (for example) data compression, error correction, cryptography, or machine learning should be familiar with it.

I haven't really followed machine learning and the neighboring fields for a decade or so. Back then, people were mostly excited about Bayesian inference and support vector machines.


Self-modification doesn't help, but observing the environment and experimenting with it can extract information from it. With them, we come back to some of the issues I mentioned earlier:

  • A single probe might not be smart enough or creative enough to figure out the solutions. To alleviate the problem, we need multiple autonomous entities with enough equipment to do the experiments they choose to do independently. Essentially, we need to launch multiple probes instead of just one.
  • Experimentation in an unknown environment is inherently dangerous. A significant fraction of the probes can be expected to damage or destroy themselves in the process. The obvious solution is to launch even more probes to the same destination.

Self-modification helps, since it allows probes to adapt to problems that previously could not be tackled. Sending more probes is not necessary, since reproduction does exactly that. They might even start forming communities of probes, with each having a specialisation, but that is for the probes to find out.

It would be human hubris to think we can come up with something that will fit the bill in all cases. Also, could you respond to the request in my previous posts?


Now the whole thing is starting to look more like colonization than von Neumann probes.

Aren't we arguing about the size of the apple now? We could:

1. Package several autonomous systems into a single probe. By sharing subsystems (you need a minimum amount of mass to absorb the gamma rays produced by an antimatter-fired starship engine, so a bigger engine is more efficient) you save on total mass. Once it arrives at the destination, the autonomous probes split up... kind of like cancer does when it colonizes the body... so that a single adverse event or failure won't stop the overall effort.

2. Once we consume an entire star's worth of retrievable solid matter (presumably any star in our local group will have at least an Earth-mass or more of solid rock captured around it), we have an awful lot of resources for launching the next set of "probes" to a star not yet colonized. Once you are converting Earth-masses or more of rock into machinery, you have an awful lot of it.

This, by the way, means that a realistic probe might in fact have incredibly sophisticated software systems, equivalent to cramming in an artificial neural network emulating the mind states of 100 people or more. If you had the equivalent of the minds of 100 people, you would be able to do very intelligent experimentation and engineering. You do realize that this would probably fit into 100 kilograms or less of computing machinery, assuming you had 3-dimensional, nanostructured hardware. (The brain weighs about 1.3 kilograms, and obviously you don't need many of the brain's systems for this; also, you could make equivalent circuits with a lot less mass because they don't have to be alive and self-repairing, nor use proteins...)

And the reason it agrees with your theory above is that when this probe encounters an obstacle it lacks the programming to overcome, it can extract information from the environment and use that, combined with its base programming (knowledge of the laws of physics, prior techniques for similar problems, and so on), to craft a new solution. It's not a closed system, and information is both entering and leaving.

For that matter, isn't a human brain, like the one you are using to read this message, a device that destroys information on a colossal scale? The hugely complex state your mind is in right now is only transient, and only a tiny fraction of the information will be stored.

Edited by SomeGuy12

A VNP with an error would stop working, as in being unable to repair itself or make other parts. It's very unlikely that it would, say, start producing more and more mining robots, or develop similar cancer-like replication problems. There are ways to make this even safer: if you compress and encrypt the relevant files, any bit failure will result in scrambled data; you then use error correction and multiple copies to avoid corruption. Now design the system so it regularly has to read the files from disc.
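
A hedged sketch of that "compress, checksum, and keep multiple copies" idea - hypothetical helper names, not a real VNP design:

    import hashlib
    import zlib

    def store_blueprint(blueprint: bytes, n_copies: int = 3):
        # Compress each copy and record its hash, so a corrupted copy shows up
        # as scrambled data / a failed check instead of being acted upon.
        packed = zlib.compress(blueprint, 9)
        return [(packed, hashlib.sha256(packed).hexdigest()) for _ in range(n_copies)]

    def load_blueprint(copies):
        # Use the first copy whose contents still match its stored hash;
        # refuse to build anything if every copy is corrupted.
        for packed, digest in copies:
            if hashlib.sha256(packed).hexdigest() == digest:
                return zlib.decompress(packed)
        raise RuntimeError("all blueprint copies corrupted - halt replication")

The regular re-reads mentioned above would just call the load routine on a schedule, so corruption is noticed long before the data is needed.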

The VNP would not have much to do with planets other than landing probes on them for exploration. If it sees any sign of an advanced civilization - radio, primarily, but also light - it would report back and go into some sort of ambassador role. A pre-industrial civilization would be hard to detect, but would also not notice anything except perhaps a lander or two.

I imagine the VNP would have two purposes. The primary one would be exploration: the probes would move outward exploring solar systems, report back along the chain, and make copies of themselves for more exploration. A VNP would not be sent to a system that already has one; a backup probe might be sent to a system with a ship already underway, but nothing more.

After the expansion phase, it would maintain the VNP communication network and report back. It might also get new orders or plans from the operators.

A second stage might be to prepare a planet for colonization.

Safe replication is not my concern, per se. It is important to consider, and you raise a good point that runaway replication can be guarded against, but my argument (and Keith Cooper's) is that the probe must be responsible. I believe that checking for civilizations and establishing contact must be the first thing on a VNP's "to-do" list as soon as it enters a target system, rather than (if I read your response correctly) a side activity to replication / colonization prep.

An argument can be made that the resources of another star system, if the system has a life-bearing planet, belong to the present or future civilization on said planet. If the first thing our probe did was to make more copies of itself as soon as it arrived, the native civilization might see that as stealing. I cannot imagine they would approve, or be receptive to the probe making contact later on. (As an analogy, imagine I entered your home without knocking or asking permission, and immediately helped myself to the contents of your fridge. You'd be pretty ticked at me, I'd suspect. An alien civilization would react similarly to a probe mooching off a few asteroids... and may be doubly ticked at the civilization that sent it.)

As for pre-industrial civilizations, or even planets where sentient life has yet to evolve, again courtesy plays a role here. Suppose a VNP arrived in our solar system a million years ago. No human civilization was around, and by your criteria the VNP would then decide that it can do whatever it wants with the resources of our system, and dig in. Maybe its makers are still interested and decide to send some colonists along as well. By the time we reach the present age, when our civilization is slowly getting interested in expanding into space, we'd find that we've been crowded out or that all the good resources have been used up. We might be able to settle the inner planets, but everything else is occupied or consumed. The native peoples of North America can tell you what that's like, to have your future cut off by another civilization.

I believe we need to be more responsible than that, which means that if our future probe is to replicate without asking permission, it should limit its numbers - two or three probes to send to other stars at most - and cease replicating altogether once those two or three probes are on their way. So long as the probe can maintain itself, it can afford to be patient, waiting millions of years if need be before someone it can talk to evolves and starts exploring.

Either way, the probe would need to wait until it can make contact with a native civilization and engage in a cultural exchange. Then, and only then, can it ask for permission from the natives to replicate, and do so only if the natives agree.

Incidentally, as probes might continue on in a chain from system to system for millennia, and thus contact with the parent civilization is likely to dry up, the probe will likely need a true AI for its brain - not just to make the judgement call on whether to replicate or not, but also to interact with the native civilization on its own and to negotiate for permission to replicate so it can expand the network. A possible trade is to offer to carry an AI modeled off the natives, so that the natives' culture can be carried to the next star, as payment for the asteroids needed to build those two or three VNPs.


Andrew, the reason your idea will not work is that a probe setup that doesn't care about the rules would outbreed/outcompete one that does. Turning whole star systems into more probes, faster probes, or much bigger probes, armed with weapons, is a better strategy than nibbling a few asteroids and waiting. That's the lowest common denominator and what you would expect to see.

The fact that we exist at all strongly suggests that alien intelligences capable of building such things are very, very, very far away from us (as in, not even in our galaxy).


With respect, Someguy12, to suggest that any alien intelligences capable of building VNPs would choose to release what amounts to a technological plague on the galaxy speaks very poorly of our hypothetical aliens' ethics. Surely they are at least a little wiser than that.

But even if aliens have chosen the low road, caring for nothing but themselves, that does not mean that we should do likewise.


With respect, Someguy12, to suggest that any alien intelligences capable of building VNPs would choose to release what amounts to a technological plague on the galaxy speaks very poorly of our hypothetical aliens' ethics. Surely they are at least a little wiser than that.

But even if aliens have chosen the low road, caring for nothing but themselves, that does not mean that we should do likewise.

Or hopefully those aliens would take the Iain M. Banks approach and decide that turning chunks of the galaxy into swarms of identical machines would be a) too egotistical to be cool and b) deeply, deeply boring.


Incidentally, as probes might continue on in a chain from system to system for millennia, and thus contact with the parent civilization is likely to dry up, the probe will likely need a true AI for its brain - not just to make the judgement call on whether to replicate or not, but also to interact with the native civilization on its own and to negotiate for permission to replicate so it can expand the network. A possible trade is to offer to carry an AI modeled off the natives, so that the natives' culture can be carried to the next star, as payment for the asteroids needed to build those two or three VNPs.

Now that's a beautiful idea. Humanity finally makes it out to the asteroid belt, uncovers an alien probe - and in exchange for a couple of lumps of rock we get our own ambassador to the stars.


The problem is that it's a quickdraw contest - who can be the biggest jerk. Whoever creates the "techno-plague" first will have the biggest impact on future events in the galaxy. By choosing not to do this, you are choosing a suboptimal method for survival. (You'd attach copies of yourselves to the machinery, so the farther the plague spreads, the farther you personally spread.)

The same argument applies in that it's better to drain a significant fraction of a star's resources, if needed, to build a really, really big rocket that can hit a higher fraction of c. Then again, if you need to do that, there is probably an efficient and elegant method that gets you to 0.9 c and only requires eating a few asteroids to build the equipment. An antimatter rocket with a big enough mass ratio can do this in theory.
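
For reference, the standard relativistic rocket relation behind that last sentence, in the idealized case of a perfect photon exhaust ($v_e = c$) and a one-way burn with no deceleration (my assumptions, since the post doesn't specify the engine):

    \frac{\Delta v}{c} = \tanh\!\left(\frac{v_e}{c}\,\ln\frac{m_0}{m_1}\right)
    \qquad\Longrightarrow\qquad
    \left.\frac{m_0}{m_1}\right|_{\Delta v = 0.9c,\ v_e = c} = e^{\operatorname{artanh}(0.9)} \approx 4.4

Real antimatter engines lose much of their energy to gamma rays and neutrinos, so the effective exhaust velocity is well below $c$ and the required mass ratio is correspondingly larger; decelerating at the destination roughly squares it.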


Of course, that assumes that "being the biggest jerk", as you so aptly put it, is an optimal survival strategy. It's tempting, but I suspect that it is not feasible over the long run... or else locusts would have driven everything larger than them to extinction long ago.

Earthly biology shows that species that replicate without limit usually either have high attrition rates (ensuring only a few survive long enough to have offspring) or risk destroying themselves by burning through the available food faster than is sustainable. How this might play out on a galactic scale is a tough question to answer, as we know little or nothing about what might serve as "natural" checks on VNP population. But I would also suggest that those civilizations that chose not to be jerks might soon find it in their interest to start culling jerk-minded VNPs (and their parent civilizations); especially if they've been on the receiving end of a jerk-class VNP.

I would hope that most civilizations that survive long enough to design and build a VNP will also see the folly of being jerks to the galaxy. Or, to quote Commander Norton from Clarke's Rendezvous with Rama, "The human race has to live with its conscience. [Whatever the Hermians argue,] survival is not everything."


Self-modification helps, since it allows probes to adapt to problems that previously could not be tackled.

The context was information/complexity. Self-modification can't create any new information, so it's an irrelevant technical detail in that discussion.

Sending more probes is not necessary, since reproduction does exactly that. They might even start forming communities of probes, with each having a specialisation, but that is for the probes to find out.

Sending more probes is necessary, because exploration and experimentation is a risky business.

Look at any item around you. Before we could figure out how to do it right, people probably died because of our early failed attempts. I don't see any reason why developing new stuff would be less risky for probes.

Also, could you respond to the request in my previous posts?

I believe that the discussion about algorithmic information answers those requests.

Aren't we arguing about the size of the apple now?

I think it's an important distinction.

Von Neumann probes are essentially a brute-force approach to interstellar expansion. A probe arrives at a new star system, looks at the system arrogantly, imposes its will upon the lowly matter (regardless of whether it's just a dead planet or an advanced civilization), and creates a bunch of identical copies of itself to continue the expansion. Colonization, on the other hand, is more organic. You explore the system, determine what kind of production it can support, and figure out ways to produce something that can continue the expansion to the next star system.


The context was information/complexity. Self-modification can't create any new information, so it's an irrelevant technical detail in that discussion.

Self-modification would mostly be used to adapt to local or new circumstances, but you could even truly introduce new information like our DNA does through mutation.

Sending more probes is necessary, because exploration and experimentation is a risky business.

Look at any item around you. Before we could figure out how to do it right, people probably died because of our early failed attempts. I don't see any reason why developing new stuff would be less risky for probes.

Sure, but you only need enough to ensure that one survives to reproduce. You might even make the first generation or unit extremely conservative, and only go into full exploration mode when enough copies have been made to guarantee a certain level of survival. Maybe the probes could even produce simplified sacrificial units.

I believe that the discussion about algorithmic information answers those requests.

We were talking about physical complexity and production, not software complexity :) If you would be so kind as to answer the questions in relation to physical devices, that would be splendid.


Self-modification would mostly be used to adapt to local or new circumstances, but you could even truly introduce new information like our DNA does through mutation.

The new information comes from random events, not from self-modification itself. Furthermore, the random events typically have a neutral or negative effect on the system, so they're not a very good source of new information.

Sure, but you only need enough to ensure that one survives to reproduce. You might even make the first generation or unit extremely conservative, and only go into full exploration mode when enough copies have been made to guarantee a certain level of survival. Maybe the probes could even produce simplified sacrificial units.

"Extremely conservative" also means "extremely bad at making new discoveries". The first probes have to take more risks to be able to do anything useful, while the later ones can benefit from the discoveries of their predecessors and be more conservative.

We were talking about physical complexity and production, not software complexity :) If you would be so kind as to answer the questions in relation to physical devices, that would be splendid.

The fun thing about the core areas of theoretical computer science is that the results are applicable to all kinds of systems, not just computers. The explicit purpose of Turing machines was to develop a formalism that could simulate any kind of machine, no matter whether it's abstract or physical, and then prove that there are things no machine can do. Computers were just an unexpected application of the results.

We can choose any reasonable standard for describing physical objects, and define the complexity of an object to be the length of the shortest description of all of its relevant characteristics. It doesn't matter which one we choose, because all of them are equivalent, up to an additive constant. After we have chosen the standard, the results from algorithmic information theory are applicable to physical objects described using the standard.
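
For reference, the "up to an additive constant" claim is the invariance theorem of algorithmic information theory: for any two universal description methods $U$ and $V$ there is a constant $c_{U,V}$, depending only on $U$ and $V$ and not on the object being described, such that

    K_U(x) \le K_V(x) + c_{U,V} \quad \text{for all objects } x.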


The new information comes from random events, not from self-modification itself. Furthermore, the random events typically have a neutral or negative effect on the system, so they're not a very good source of new information.

That makes no sense. Why would random events have a neutral or negative effect? You also seem to have ignored the benefits of random modification; see my comparison with DNA.

"Extremely conservative" also means "extremely bad at making new discoveries". The first probes have to take more risks to be able to do anything useful, while the later ones can benefit from the discoveries of their predecessors and be more conservative.

Again, this makes no sense. There is no reason why the initial probe could not be conservative until it has multiplied. It does not need to discover; it needs to multiply. Multiplying initially is a relatively simple task, with little risk.

The fun thing about the core areas of theoretical computer science is that the results are applicable to all kinds of systems, not just computers. The explicit purpose of Turing machines was to develop a formalism that could simulate any kind of machine, no matter whether it's abstract or physical, and then prove that there are things no machine can do. Computers were just an unexpected application of the results.

We can choose any reasonable standard for describing physical objects, and define the complexity of an object to be the length of the shortest description of all of its relevant characteristics. It doesn't matter which one we choose, because all of them are equivalent, up to an additive constant.

Interesting statements, but lacking any substantiation that your computer science approach is in any shape, way, or form applicable to our earlier discussion about physical products and real-world probes. You are just trying to shift the problem somewhere else, but not coming any closer to finally proving anything.

I am going to ask you one more time to stop bobbing and weaving and creating smoke screens and start backing things up and quoting sources. The alternative would be for everyone to ignore your statements completely. This thread deserves more than words without meaning.


That makes no sense. Why would random events have a neutral or negative effect? You also seem to have ignored the benefits of random modification; see my comparison with DNA.

DNA is a perfect example of this. While mutations are the only source of new genetic information, their short-to-medium-term effects are mostly harmful and rarely beneficial. Cells have plenty of mechanisms for preventing mutations, and for repairing the damage if a mutation happens. There is also a lot of redundancy in the genome, which helps to minimize the effects of mutations.

In a large population, which can afford to lose plenty of individuals, a low mutation rate can help to increase the genetic variation in the population, increasing its fitness in the long term (after hundreds or thousands of generations). In the short-to-medium term, the primary adaptation mechanism is recombining the genetic variation that already exists in the population. Recombination is much less likely to have adverse effects than mutation, because individuals have already lived with all of the genes involved.

Again, this makes no sense. There is no reason why the initial probe could not be conservative until it has multiplied. It does not need to discover; it needs to multiply. Multiplying initially is a relatively simple task, with little risk.

Replication is not as simple as copying a piece of data. It involves adapting the resource extraction and production activities of the probe to the conditions in the new environment. Many of these activities may involve location-specific risks that are not always apparent in advance. If the probe can't afford to take risks, it may not have access to all the resources it needs, or it may not be able to process them in the forms in which they occur in the current environment.

Interesting statements, but lacking any substantiation that your computer science approach is in any shape, way, or form applicable to our earlier discussion about physical products and real-world probes. You are just trying to shift the problem somewhere else, but not coming any closer to finally proving anything.

Computer science is the science of systems, processes, mechanisms, complexity, information, and similar things. It's a methodological science with a role in 21st-century science similar to the one statistics had in 20th-century science. Its English name is kind of unfortunate, because computer science is no more about computers than astronomy is about telescopes. In many other languages, people use more appropriate names, which translate as e.g. computing, informatics, datalogy, information processing science, and science of computation.

Computer science is simply the most relevant field of study to this discussion.


[...]

Random events are not negative or neutral. A probe encounters an event, learns something, improves itself. The outcome is almost always positive.

Replication is not as simple as copying a piece of data. It involves adapting the resource extraction and production activities of the probe to the conditions in the new environment. Many of these activities may involve location-specific risks that are not always apparent in advance. If the probe can't afford to take risks, it may not have access to all the resources it needs, or it may not be able to process them in the forms in which they occur in the current environment.

You are confusing low risk with no risk. The premise was that you need many probes due to the risks, while I said that you probably only need a few because the intent is to replicate anyway. Exploring a full system will certainly have very risky parts, but there is no need to dive in straight away.

Computer science is the science of systems, processes, mechanisms, complexity, information, and similar things. It's a methodological science with a role in 21st-century science similar to the one statistics had in 20th-century science. Its English name is kind of unfortunate, because computer science is no more about computers than astronomy is about telescopes. In many other languages, people use more appropriate names, which translate as e.g. computing, informatics, datalogy, information processing science, and science of computation.

Computer science is simply the most relevant field of study to this discussion.

The most relevant field to physical production is actual physical production, which I do happen to know. You stated that a more complex product would require an ever more complex production facility. I have given examples - digital manufacturing through CNC machines and 3D printing, for instance - of why this is not necessarily the case. Flexibility and complexity are not the same. So can we finally conclude that increasingly complex probes do not necessarily require an even more complex production environment? Great! :)

