
Von Neumann Machines


KAL 9000

Recommended Posts

They're for interstellar vandals and egotists. Because nothing says 'civilized species here' like taking a chunk of space and converting it into myriad copies of the same thing. I hope that by the time we have the technical prowess to build such things, that we have the societal prowess not to.


I just finished a book about some von Neumann beings, possibly the creation of some faraway and long-gone species. They came to our solar system and turned the Earth into a huge Dyson motor, speeding up its spin in order to dismantle the whole planet for resources. Quite a wild read. It's Terry Pratchett and Stephen Baxter's The Long Utopia, the fourth novel in the Long Earth series. I enjoyed them a lot, especially the third one, The Long Mars.

Check it out if you are into sci-fi reading.


6 hours ago, KSK said:

They're for interstellar vandals and egotists. Because nothing says 'civilized species here' like taking a chunk of space and converting it into myriad copies of the same thing. I hope that by the time we have the technical prowess to build such things, that we have the societal prowess not to.

Absolutely. Releasing such a probe is probably the single greatest crime a space-faring civilisation could commit.


22 hours ago, Bill Phil said:

Of course KAL 9000 would mention von Neumann probes... he wants to make copies of himself! He is HAL's brother's cousin's father's roommate, after all.

Anyhow, they would be useful for exploration of large areas.

Actually, I'm HAL's Kerbal alter ego who doesn't turn evil.


33 minutes ago, Dkmdlb said:

All living things are Von Neumann machines.

Actually, they're self-replicating systems, not von Neumann machines, which are a more specific subset.

Also, humans don't "self"-replicate: offspring bear a resemblance to their parents, but are not exact copies. And it only takes one von Neumann machine to replicate, while humans need two people of opposite sexes.


5 hours ago, NFUN said:

probe.gif

Slyandro, I summon thee!

 

And grey goo won't happen. If it could, bacteria would probably have done it already. 

There's a catastrophic flaw in your argument.  To TLDR it: the algorithm of evolution (what created bacteria) tends to get stuck on local maxima.  It's easy to visualize why: imagine what a large-scale, sweeping upgrade to a living organism would require.  It would usually require a large number of codons to code for the new proteins and functions to support the new systems, right?  Well, what's the chance that random mutations produce such a complex change all at once?  The probability is infinitesimal.  It doesn't work that way.  Instead, evolution has to arrive at novel systems one tiny incremental step at a time.  But if those tiny incremental steps make the organisms carrying them less efficient at reproducing than competitors without the changes, they often get out-competed and the potential change dies off.
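A toy illustration of the local-maximum trap - single-bit hill climbing on an invented fitness landscape where the big payoff needs five coordinated mutations:

```python
import random

def fitness(genome):
    """Invented landscape: a modest local peak, plus a much higher peak
    that requires 5 specific bits to be set simultaneously."""
    if genome[:5] == [1, 1, 1, 1, 1]:
        return 100                      # the "sweeping upgrade"
    return 10 - sum(genome[:5])         # each lone step toward it *hurts*

def evolve(steps=10000):
    genome = [0] * 20
    for _ in range(steps):
        mutant = genome[:]
        mutant[random.randrange(20)] ^= 1    # one random point mutation
        if fitness(mutant) >= fitness(genome):
            genome = mutant                  # selection keeps non-worse variants
    return fitness(genome)

print(evolve())  # almost always 10: stuck on the local maximum forever
```

Each intermediate step toward the high peak lowers fitness, so selection rejects it, and five simultaneous flips essentially never happen by chance - which is the point.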

Another issue is that the code-space for current living organisms is cramped: codons are just three bases long, giving only 64 combinations, so there isn't room in the code-space to describe new amino acids that might be helpful to make organisms better.  This version lock-in is thought to be several billion years old.

Anyways, grey goo would be designed by intelligent designers a piece at a time.  Human or AI designers would decide they want some kind of tiny robot that can copy itself using stuff in our environment.  They would sketch out on whiteboards the overall goal.  They would then optimize separately the thousands of molecular subsystems a tiny robot like that would need, working on each little piece by itself in test chambers until it meets the design requirements.  Finally, they would assemble the prototype robots, each one containing tens of thousands of separate subsystems, and test their replication.  The designers might decide certain internal systems aren't up to spec and gut large sections of the design, changing all of it using procedural software algorithms.  

This is not something evolution can do.  These "grey goo bots" would be so tightly constructed that a slight design change would result in it not being able to self replicate - so evolution can't "get there" by chance.  The first bot might have to be constructed using very complex artificial equipment in a vacuum chamber - not something found on primordial earth.  And so on.

 


4 hours ago, SomeGuy123 said:

There's a catastrophic flaw in your argument.  To TLDR it: the algorithm of evolution (what created bacteria) tends to get stuck on local maxima.  It's easy to visualize why: imagine what a large-scale, sweeping upgrade to a living organism would require.  It would usually require a large number of codons to code for the new proteins and functions to support the new systems, right?  Well, what's the chance that random mutations produce such a complex change all at once?  The probability is infinitesimal.  It doesn't work that way.  Instead, evolution has to arrive at novel systems one tiny incremental step at a time.  But if those tiny incremental steps make the organisms carrying them less efficient at reproducing than competitors without the changes, they often get out-competed and the potential change dies off.

Another issue is that the code-space for current living organisms is cramped: codons are just three bases long, giving only 64 combinations, so there isn't room in the code-space to describe new amino acids that might be helpful to make organisms better.  This version lock-in is thought to be several billion years old.

Anyways, grey goo would be designed by intelligent designers a piece at a time.  Human or AI designers would decide they want some kind of tiny robot that can copy itself using stuff in our environment.  They would sketch out on whiteboards the overall goal.  They would then optimize separately the thousands of molecular subsystems a tiny robot like that would need, working on each little piece by itself in test chambers until it meets the design requirements.  Finally, they would assemble the prototype robots, each one containing tens of thousands of separate subsystems, and test their replication.  The designers might decide certain internal systems aren't up to spec and gut large sections of the design, changing all of it using procedural software algorithms.  

This is not something evolution can do.  These "grey goo bots" would be so tightly constructed that a slight design change would result in it not being able to self replicate - so evolution can't "get there" by chance.  The first bot might have to be constructed using very complex artificial equipment in a vacuum chamber - not something found on primordial earth.  And so on.

 

True, however, grey goo would have the same constraints as living creatures regarding energy and resources.
For input energy, only solar really works for nanoscale stuff, and you would be limited to that energy density.
You also need raw materials. Here you have more options than life does, but using harder materials increases energy use, produces heat, and is likely to require many types of bots, one for each task. In short, you'd need to create some sort of organism, like a tree; unorganized grey goo will not be able to do things like eat a planet, simply because it couldn't transfer energy and resources around. It would just be another sort of single-celled life, and it would be most dangerous if it attacked other types of life.

I think nanotech assembly will be most practical inside a machine providing energy and resources.
It would be far simpler to build a large-scale von Neumann machine: basically a factory that can build robots doing various tasks, including building another factory.
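A rough back-of-envelope sketch of that solar limit (every constant below is an illustrative assumption, not a measurement):

```python
# Back-of-envelope: how fast could a solar-powered replicator layer double?
# All constants are illustrative assumptions.

SOLAR_FLUX = 1000.0           # W/m^2 reaching the surface (assumed average)
CONVERSION_EFFICIENCY = 0.10  # fraction of sunlight turned into useful work (assumed)
AREAL_DENSITY = 1.0           # kg of replicator per m^2 of collector (assumed)
ENERGY_PER_KG = 50e6          # J needed to build 1 kg of new replicator (assumed)

def doubling_time_days():
    """Time for the layer to build its own mass again from its collector area."""
    power_per_kg = SOLAR_FLUX * CONVERSION_EFFICIENCY / AREAL_DENSITY  # W/kg
    seconds = ENERGY_PER_KG / power_per_kg
    return seconds / 86400.0

print(f"Doubling time: {doubling_time_days():.1f} days")
# With these numbers: 50e6 J / 100 W = 5e5 s, roughly 6 days per doubling.
# Harder materials (e.g. diamond) push ENERGY_PER_KG up and slow this down.
```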


There was a similar thread earlier this year. I still believe what I said back then: Von Neumann machines are probably impossible in the real world.

Biological organisms obviously show that limited self-replication is possible. If the environment provides the basic building blocks and enough easily usable energy in a highly refined form, even complex systems can replicate themselves successfully. On the other hand, we have no evidence whatsoever that a probe arriving at a dead solar system can build all the infrastructure necessary to replicate itself.

We can easily imagine the concept of a Von Neumann machine and then start discussing the concept. Unfortunately, such discussions tell us very little about the real world, because there is nothing to anchor the concept to reality. The conclusions we draw would be conclusions about our mental models of Von Neumann machines, and we have no way to tell how well those models correspond to the real world.


Jouni: I'm going to raise you one and say something you should already know.  Both your facts and your intuition should tell you this: not only are von Neumann machines possible, we already have them.

Can you or can you not sketch out a gigantic sphere?  Inside that sphere (it's really, really gigantic) is enough of Earth's biosphere to form a stable, closed system, with some artificial intervention from time to time.  There are humans living in the sphere.  There are also enough pieces of factory machinery in the sphere to manufacture every piece of factory machinery in the sphere.  Since every piece of machinery we have today was made using another set of machinery, it's at least possible to pile them all into a gigantic sphere.

The sphere floats through space and spacecraft go out and collect rocks and bring them back to the sphere.  Equipment inside melts them down to plasma and separates the constituent elements, feeding the industrial chain.

Given enough asteroids and enough time, this sphere can copy itself.

Sure, it's an abysmal Von Neumann machine.  And you may argue definitions with me - since it has humans in it, it's not really a "machine".  But it's an existence proof - the Earth itself is such a sphere.  You could probably construct one much smaller than the Earth, a mere 10 kilometers or so across, using known densities today.  And it might take a century to copy itself.

As I've argued in other threads, a human being is a robot made of meat tied to a network of self-modifying summation gates - about 86 billion neurons times roughly 1,000 synapses each, or around 10^14 gates.  Eventually, we could make silicon equivalents to those gates.
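A quick sanity check on that count, and the raw storage it implies (the bytes-per-synapse figure is an assumption for illustration):

```python
# Rough scale of the gate network described above.
NEURONS = 86e9              # neurons in a human brain
SYNAPSES_PER_NEURON = 1e3   # order-of-magnitude figure from the post
BYTES_PER_SYNAPSE = 4       # assumed: one 32-bit weight per synapse

synapses = NEURONS * SYNAPSES_PER_NEURON
print(f"Synapses: {synapses:.1e}")                                      # 8.6e+13
print(f"Weight storage: {synapses * BYTES_PER_SYNAPSE / 1e12:.0f} TB")  # ~344 TB
```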

Eventually, we could stack the silicon wafers on top of each other so instead of a chip it's a cube.  This would radically increase density - human equivalent gate networks might be a 1x1 meter cube or smaller.

We could shrink the industrial equipment needed to copy the industrial equipment, either by shrinking the individual assembly lines (https://www.youtube.com/watch?v=vEYN18d7gHg) to the nanoscale - something living cells already have, so calling it "impossible" would be misinformed - or in other ways.

So, if you can cram a human mind into a 1x1 meter cube, plus some humanoid robots for the mind to control, and the factories can be shrunk to the nanoscale, and you bring along more than one human mind - you don't want to be lonely - you could get down to a scale that fits into a plausible starship and is small enough to self replicate rapidly.  10-1000 metric tons is a ballpark for the size of one "replication subunit".

Anyways, what precisely is your objection?  Are you going to argue that since human minds are onboard, it's not a machine?  Are you going to argue that modern science isn't 100% certain you can actually construct close enough equivalents to brain cells in silicon circuits, therefore since the certainty is only about 90%, it's "impossible"?  What's your reasoning?  Please go into detail, I've gone into detail for my arguments.  I'd like to hear more than "it's still my opinion it's impossible".

9 hours ago, Jouni said:

There was a similar thread earlier this year. I still believe what I said back then: Von Neumann machines are probably impossible in the real world.

 


10 hours ago, magnemoe said:

True, however, grey goo would have the same constraints as living creatures regarding energy and resources.
For input energy, only solar really works for nanoscale stuff, and you would be limited to that energy density.
You also need raw materials. Here you have more options than life does, but using harder materials increases energy use, produces heat, and is likely to require many types of bots, one for each task. In short, you'd need to create some sort of organism, like a tree; unorganized grey goo will not be able to do things like eat a planet, simply because it couldn't transfer energy and resources around. It would just be another sort of single-celled life, and it would be most dangerous if it attacked other types of life.

I think nanotech assembly will be most practical inside a machine providing energy and resources.
It would be far simpler to build a large-scale von Neumann machine: basically a factory that can build robots doing various tasks, including building another factory.

I agree with you.  Also, actually purposefully creating a von Neumann machine that is both nanoscale and optimized to "eat" existing biological material would be a crime that would make the Holocaust look like a speeding ticket.

I think if the tiny robots were made of diamond plating and extremely tightly built, they could be more efficient than existing life.  You're right, they wouldn't "eat" the Earth.  It would look more like a growing mass of tendrils covering the planet, killing every living thing it touches, because living things lack the enzymes to "fight" against diamond-plated machines.  It would be limited by the rate of solar input and how much energy the bots can obtain by oxidizing biomass.  The diamond plating would be very energy-expensive to create, so the growth rate might be slow - hopefully taking years instead of days to cover the planet.

And yeah, no eating the planet.  It would be a surface layer only.  You'd be perfectly safe inside underground vaults.  You could possibly go outside into the post-apocalyptic landscape in a space suit, the nanospores taking a long time to eat through the suit...


8 minutes ago, KAL 9000 said:

Von Neumann machines are cosmic vandalism, but still, they're worth discussing.

I see it as an arms race.  Universe-wide, say there are 10 intelligent species in total.  And each of these species has 3 major factions who compete with each other and do things differently.

So there are 30 subgroups, and if any one of them ever creates a von Neumann machine, those machines will be around until the heat death, since they just copy and copy and copy.  The only way to fight one is to make your own von Neumann machines and reach certain stars "first", building defensive weaponry of some sort to stop the incoming "spores" from the von Neumann machine you are fighting.

Given how vast the universe is, '30' is probably many orders of magnitude too low.  And since we are separated by vast gulfs of spacetime from other alien races, we have no way of knowing whether they have already released von Neumann machines.

So it makes perfect logical sense for us to release our own the very moment it is feasible to do so, because of the von Neumann machines that alien races might already have created that we can't see yet.  If we don't, we'll miss out on "owning" millions or billions of stars.

Or, looking at it another way: suppose that in 2200, when we have the tech for the machines, there are 3 factions - the Americas Coalition, the Asian Coalition, and the European Coalition.  Every faction mildly dislikes and competes with the others.  Whoever releases von Neumann machines first (these machines aren't "dumb"; they contain the minds and ideals of the faction that launched them) will be around for millions of years, while everyone else is dead.

You get a rumor, possibly incorrect, that another coalition is researching the machines...

See, it's not really stoppable.

 


14 minutes ago, SomeGuy123 said:

Anyways, what precisely is your objection?  Are you going to argue that since human minds are onboard, it's not a machine?  Are you going to argue that modern science isn't 100% certain you can actually construct close enough equivalents to brain cells in silicon circuits, therefore since the certainty is only about 90%, it's "impossible"?  What's your reasoning?  Please go into detail, I've gone into detail for my arguments.  I'd like to hear more than "it's still my opinion it's impossible".

If I knew precisely what my objection is based on, I would write a bunch of scientific papers about it.

At a very high level, my objection is based on entropy and complexity. All the evidence so far suggests that in order to build a complex system, we must start with an even more complex system. Self-replication seems to work only if the replicating system is a part of a complex ecosystem, or if something has already built the components used to construct the copies. Given a source of energy, evolution and similar mechanisms can increase the complexity of the ecosystem over time. Unfortunately, evolution seems to be slow and random, and nobody yet understands it precisely enough.

Also, this discussion is mostly nonsense. We're talking about things so far beyond the experience of the human race that there is no way to anchor the discussion to reality. We're not even basing the discussion on the best existing knowledge. We're just using vague everyday language with ill-defined concepts. Basically, we're taking a Wittgensteinian language-game so far from the actions it's based on that the words lose their meanings. Within the language-game, our arguments may seem plausible, but because the words are not connected to reality in any meaningful sense, we're just talking about invisible pink unicorns and arguing how many angels can dance on the head of a pin.


23 minutes ago, Jouni said:

If I knew precisely what my objection is based on, I would write a bunch of scientific papers about it.

At a very high level, my objection is based on entropy and complexity. All the evidence so far suggests that in order to build a complex system, we must start with an even more complex system. Self-replication seems to work only if the replicating system is a part of a complex ecosystem, or if something has already built the components used to construct the copies. Given a source of energy, evolution and similar mechanisms can increase the complexity of the ecosystem over time. Unfortunately, evolution seems to be slow and random, and nobody yet understands it precisely enough.

Also, this discussion is mostly nonsense. We're talking about things so far beyond the experience of the human race that there is no way to anchor the discussion to reality. We're not even basing the discussion on the best existing knowledge. We're just using vague everyday language with ill-defined concepts. Basically, we're taking a Wittgensteinian language-game so far from the actions it's based on that the words lose their meanings. Within the language-game, our arguments may seem plausible, but because the words are not connected to reality in any meaningful sense, we're just talking about invisible pink unicorns and arguing how many angels can dance on the head of a pin.

I'm an engineer.  I feel extremely confident that if I had limitless monies and an entire army of engineers to help me, I could build a silicon (or other substrate) circuit equivalent to everything a synapse does.  There would be additional switching networks so this silicon circuit array would be able to add new connections just like the brain does.  

One of the key reasons I think this is possible is that I know the real brain is terribly noisy and imprecise.  This means that if the emulated equivalent has some mistakes in it, it's going to be "close enough" to still give similar results, and in brains, similar results still work.  (In the microchip world, "similar results" means a crash or a freeze, but brain architectures are resilient to this.)
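A minimal sketch of the kind of noisy summation gate I mean (all constants are illustrative, not physiological):

```python
import random

def noisy_gate(inputs, weights, noise=0.05, threshold=1.0):
    """A crude synapse-like summation gate: weighted sum, noise, threshold.

    The point is robustness: jitter the weights by a few percent and the
    output usually stays the same, which is why a "close enough" silicon
    copy can still behave like the original.
    """
    total = sum(w * (1 + random.gauss(0, noise)) * x
                for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# Same inputs, rerun 10 times: the noise rarely flips the decision.
print([noisy_gate([1, 1, 0], [0.7, 0.6, 0.9]) for _ in range(10)])
```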

So I know a box can be made that is just like a human.  The little human might "live" in a virtual world, controlling a software "avatar" that can walk around and touch things and see and smell and hear and everything else.  Obviously, for von Neumann purposes, the virtual world would be tightly coupled to the external real world.  And obviously, this is just my mental model - the first thing you'd do if you really had stuff like this would be to optimize it.  These virtual humans wouldn't have senses and reflexes they don't need any more, and they'd probably be much smarter, with all kinds of integrated helper subsystems in their minds.

That's the intelligent part.  The hardware part - well, as I said, nanoscale assembly lines.  Millions of 'em.  Redundant.  As entropy damages this machine, here's what happens: say entropy damages a nanoscale assembly line.  There were 100 copies of this line; now there are 99.  A replication order gets placed once enough are damaged.  A robot goes out, grabs the module containing the damaged line, and puts it into the intake hopper of a plasma furnace.  The damaged component gets converted to plasma, separated using electromagnets (basically a calutron) into its constituent elements, and the elements get made into feedstock in chemical synthesis reactors.

The feedstock feeds into the nanoassembly lines, which combine the feedstock molecules into larger components, which feed more lines, and so on, until you are back to solid robotic systems that look exactly like the ones that were in the damaged component.

You then replace the damaged module, and everything is back to normal.  You have to do this continuously - in a very real way, this machine is "alive", in that it needs a constant supply of energy to keep remanufacturing itself.  If the power supply fails for a long enough period, enough parts will fail that the machine will not be able to function even if power is restored later.  (It would be "dead".)
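A toy simulation of that redundancy-and-remanufacture loop (the failure rate and repair threshold are made-up numbers):

```python
import random

COPIES = 100           # redundant copies of one assembly line
REPAIR_THRESHOLD = 90  # place a replication order below this (assumed policy)

def run(hours, failure_rate=0.01, powered=True):
    """Each hour entropy may kill a copy; with power, the furnace rebuilds."""
    alive = COPIES
    for _ in range(hours):
        alive -= sum(random.random() < failure_rate for _ in range(alive))
        if powered and alive < REPAIR_THRESHOLD:
            alive = COPIES   # recycle damaged modules, print new ones
        if alive == 0:
            return 0         # "dead": nothing left to rebuild with
    return alive

print("powered:  ", run(10000))                  # hovers near 100
print("unpowered:", run(10000, powered=False))   # decays to 0
```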

So these machines have to hang out near stars, collecting solar energy from the fusion of matter, or they eventually die.  Obviously you could launch a machine with a vast store of energy - probably in the form of antimatter, like the fat reserves in a plant's seed - to live on while it journeys to another star.

That's the part of it you were missing.  These machines need a constant supply of energy or they fail.  Total entropy of the universe still increases, and the actions of these machines increase entropy, because they are "dirtying" the clean sunlight they receive into disordered infrared waste heat.  Also, consider data redundancy.  The information these machines need - the digital files for the "personalities" who operate them, the digital files describing the exact schematics for each onboard component, and so on - is kept with a hash associated with every piece of data, and RAIDed with about 10x redundancy, the copies spread across separate modules all over the machine.  Periodically, every piece of stored information (every "sector" in the data store subsystem) gets compared with its hash, and if the hash doesn't match, that particular copy gets deleted and restored from the redundant copies.

This reduces the chance of a random mutation surviving to "not gonna happen before the stars run out of gas".  Actual engineers designing these machines might use more or less redundancy; I'm just giving example numbers to illustrate the idea.
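A minimal sketch of that scrub-and-restore cycle (SHA-256 here stands in for whatever hash the designers pick; the ten copies are the example number from above):

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Ten redundant copies of one "sector", each stored with its hash.
sector = b"schematic: module 42"
copies = [{"data": sector, "hash": checksum(sector)} for _ in range(10)]

copies[3]["data"] = b"schemaTic: module 42"  # entropy flips a bit

def scrub(copies):
    """Delete any copy whose hash no longer matches; restore from a good one."""
    good = next(c["data"] for c in copies if checksum(c["data"]) == c["hash"])
    for c in copies:
        if checksum(c["data"]) != c["hash"]:
            c["data"] = good  # restore from redundancy

scrub(copies)
assert all(c["data"] == sector for c in copies)  # the mutation is erased
```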

Again, this constant copying process requires energy, or the machines eventually "die".  Don't get me wrong - these machines might be so sturdy that you could cut their power for millennia and then bring them back online - but eventually there is too much damage.

 


47 minutes ago, SomeGuy123 said:

I'm an engineer.  I feel extremely confident that if I had limitless monies and an entire army of engineers to help me, I could build a silicon (or other substrate) circuit equivalent to everything a synapse does.  There would be additional switching networks so this silicon circuit array would be able to add new connections just like the brain does.

I'm a computer scientist. I come from a field founded on hierarchies of impossibility results. No matter how clever a solution someone proposes to problem X, one of those impossibility results often says that the proposed solution won't work, because solving X is logically, information-theoretically, or statistically impossible, and/or computationally infeasible under (or without) complexity assumptions.

We can start with the second law of thermodynamics. From a computer science point of view, it's a statistical impossibility result about physically realizable systems. It talks about increasing physical entropy. We could probably make a similar statement about decreasing computationally bound algorithmic information, which would be a computational impossibility result about physically realizable systems. We could not prove it, however, just like we can't prove the second law of thermodynamics or the Church-Turing thesis. "Physically realizable" is an empirical property, not a formal one. (We could try to disprove it, however, just like we can try to disprove the second law of thermodynamics.)

If we had such a result about decreasing algorithmic information, we could derive all kinds of other impossibility results about physically realizable systems. Here, I believe, it would be possible to prove that no physically realizable system is complex enough to replicate itself without outside help.
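For flavor, the well-established kernel behind this intuition can be stated as a sketch (loosely, and proving nothing about physics): deterministic processing cannot create algorithmic information.

```latex
% Sketch: if a fixed machine M deterministically maps a description x
% to an output y, the invariance theorem of Kolmogorov complexity gives
% a constant c_M, depending only on M, such that the copy's information
% content is bounded by what the parent already carried plus the fixed
% description of the copying machinery:
\[
  K(y) \;\le\; K(x) + c_M
\]
```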

Now we're just missing a few lifetimes' worth of major breakthroughs in theoretical computer science...


1 hour ago, Jouni said:

I'm a computer scientist. I come from a field founded on hierarchies of impossibility results. No matter how clever a solution someone proposes to problem X, one of those impossibility results often says that the proposed solution won't work, because solving X is logically, information-theoretically, or statistically impossible, and/or computationally infeasible under (or without) complexity assumptions.

If we had such a result about decreasing algorithmic information, we could derive all kinds of other impossibility results about physically realizable systems. Here, I believe, it would be possible to prove that no physically realizable system is complex enough to replicate itself without outside help.

Now we're just missing a few lifetimes' worth of major breakthroughs in theoretical computer science...

But we already have such a system.  Like, 100% definitely have such a system.  I already gave it to you - that sphere full of apes and factory machinery and a whole bloody biosphere.  And before you say "we don't actually have it", we do.  We live on such a sphere, it's just a bit large. 

The correct answer when hardware in real life disagrees with your mathematical theories is to defer to real life.  Your theory is flawed.  Systems do exist that can copy themselves.

Oh, I just realized there's another flaw in your logic.  

If you think about a more practical self-replicating system, such as a 3D printer that can print its own components (not actually physically realizable for practical reasons, but a good abstract model - basically the classic idea of nanotechnology, where one print head places atoms one at a time), you realize that the information the system stores, as digital files containing the blueprints, does NOT contain the full details of the actual system.

That's your flaw.  Your blueprints say "atom A goes next to atom B at X, Y, Z", with run-length and other compression.  Atoms A and B have electron shells that are interacting dynamic systems - your blueprints don't actually say what the electrons are doing or will be doing.  The same goes for everything on down to subatomic particles.  And as the machine runs for a while and entropy operates on it, parts of it get into unknown states not specified in the blueprints either.  As I mentioned earlier, you fix these unknown states, once they functionally stop working correctly, by converting the whole thing to plasma and remaking it.

I think that's where you went wrong.  Your blueprints are more of a procedure: put "A next to B".  They do not contain the information used to develop the blueprints - not directly; that information was flushed when you went from your development rig to a deployed von Neumann machine.  (They might contain intelligent agents who know where to start in designing a new development rig.)  They don't contain much of the fine detail of the matter being used - the matter being used is part of the "outside help" you are missing.  An atom of a real chemical element is a system that obeys complex rules and, if handled the right way, will do what you want.  You can think of it abstractly as a pre-programmed state machine you are using as a building block.
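A tiny illustration of the "procedure, not full description" point - run-length encoding a one-dimensional strip of a voxel blueprint stores placement orders, not electron states (the materials are invented for the example):

```python
from itertools import groupby

# A 1-D strip of a voxel blueprint: which atom type to place at each site.
strip = ["C", "C", "C", "C", "Fe", "Fe", "C", "C", "C"]

def rle(voxels):
    """Compress to (atom, count) placement orders: 'put 4 C, then 2 Fe, ...'."""
    return [(atom, len(list(run))) for atom, run in groupby(voxels)]

print(rle(strip))  # [('C', 4), ('Fe', 2), ('C', 3)]
# The blueprint says *where each atom goes* - nothing about what its
# electrons are doing.  That physics comes "for free" with the atom.
```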


3 hours ago, KAL 9000 said:

Von Neumann machines are cosmic vandalism, but still, they're worth discussing.

Not necessarily. If you limit them to using only free-floating resources, and cap the number of machines created per system, then you can still have billions to trillions of probes, with thousands to explore a solar system, while not interfering as much.


For that matter, another thing you're missing is that DNA, at least, is a procedural algorithm.  It tells the cells obeying it to do certain things and listen to certain signals.  This creates "procedural design": for example, your nerves find the receptors in your skin that provide the information the nerve will carry by "homing in" on chemical signals given off by unbound receptors.

So the nerve cell slowly grows its way to the target.  The actual route the nerve ends up taking is not specified in DNA.  It's procedural.

Similarly, you could save a huge amount of memory when designing von Neumann machines by giving them procedural rules - for example, deciding where all the electrical cables go with some kind of route-minimization algorithm, instead of baking the exact cable routings into the blueprint.  And so on and so forth.

Such tricks radically reduce how much information a self replicating system needs to contain in order to copy itself.
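A sketch of that cable-routing idea - the blueprint stores only the endpoints plus this procedure, and the route itself is regenerated at build time (the grid and endpoints are invented):

```python
from collections import deque

def route(grid, start, goal):
    """BFS shortest path on a grid; walls are '#'.  The route is never
    stored in the blueprint - only (start, goal) and this procedure are."""
    rows, cols = len(grid), len(grid[0])
    prev, queue = {start: None}, deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
               and grid[nr][nc] != "#" and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))

grid = ["....",
        ".##.",
        "...."]
print(route(grid, (0, 0), (2, 3)))  # cable path regenerated, not stored
```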


9 hours ago, SomeGuy123 said:

But we already have such a system.  Like, 100% definitely have such a system.  I already gave it to you - that sphere full of apes and factory machinery and a whole bloody biosphere.  And before you say "we don't actually have it", we do.  We live on such a sphere, it's just a bit large.

Please demonstrate in sufficient detail how the Earth can build another copy of itself. Not just something you believe would be equivalent to the relevant parts of the ecosystem and the civilization, but a functionally equivalent copy of the entire planet. Otherwise your claim is just wishful thinking.

You will probably win a few dozen Nobel prizes in the process.

9 hours ago, SomeGuy123 said:

I think that's where you went wrong.  Your blueprints are more of a procedure: put "A next to B".  They do not contain the information used to develop the blueprints - not directly; that information was flushed when you went from your development rig to a deployed von Neumann machine.  (They might contain intelligent agents who know where to start in designing a new development rig.)  They don't contain much of the fine detail of the matter being used - the matter being used is part of the "outside help" you are missing.  An atom of a real chemical element is a system that obeys complex rules and, if handled the right way, will do what you want.  You can think of it abstractly as a pre-programmed state machine you are using as a building block.

This is where you went wrong. My "blueprints" can be anything that can be logically described. No matter what clever tricks you figure out (or a billion other engineers figure out in a billion years), they're just trivial special cases already covered by the impossibility result. Unless physics is so weird that it can't be described, simulated, or understood, the universe is equivalent to a Turing machine. We already know a lot of things Turing machines can't do.

The matter used in construction is not "outside help". In order for the machine to replicate itself reliably, it must already know which materials it needs and how they're going to be used. Therefore the machine must already contain all relevant information about the materials. Any remaining information content is just random noise that can only make the replication less reliable.

