
Could you copy the brain to a computer?


gmpd2000


Consciousness is as physical as the programs running on a computer.

As I said, most of the analogies with computers simply don't work. It's not nearly as simple as you imagine. A computer program is information, beyond the physical medium where it's stored, but information only exists as a cognitive phenomenon. What is a cognitive phenomenon, and how does it relate to the physical realm? You're back to the same problem.

Except the brain operates mainly on chemical reactions, and computers purely on electricity.

No. Computers also need an operator. :)

This is the same 'homunculus argument' problem that appears when you try to explain vision by analogy with a camera. It might help you understand parts of it for practical purposes, but it doesn't work at the conceptual level.

There is nothing that is non-physical, but some things are more than the sum of their parts.

That statement is contradictory. If there is nothing that is non-physical, then everything is necessarily only the sum of its physical parts. If the physical parts together add something beyond their sum, that something is necessarily non-physical.

Edited by lodestar

That statement is by no means contradictory, unless you intentionally interpret it as such. Adenine, for example, is just carbon, hydrogen and nitrogen, but it is so much more than a pile of those atoms.

And to apply that to the brain: what is the brain? A collection of neurons. But that's by no means all it is. The way the neurons are arranged gives rise to phenomena like memory, consciousness and sapience, which are, however, still nothing more than physical. Likewise, a clock is not just a jumble of gears and springs.

A computer needs programming, not an operator. Once it's programmed, it can perform its functions for a very long time, even indefinitely, as long as it has power. The brain is different in the sense that it is self-programming.

Edited by SargeRho

No, that much more is not non-physical. The arrangement of the parts of a system often gives them properties or capabilities the parts themselves do not have. Nitrogen and hydrogen can't give the cell any orders; ACGT, however, can.

Gears and springs can't measure the passage of time; gears and springs combined into a clock can.

Can you put a bunch of nuclear fission on a scale? Perhaps with a portion of natural selection and some extra Mozilla Firefox sauce?

So what if we have no theory to properly explain the mind yet? We didn't have one to explain the tides a few hundred years ago either. Just over a century ago we couldn't explain planetary motion properly either: according to Newton's laws there should have been another planet in an orbit close to Mercury's, but relativity resolved the anomaly. Relativity also happens to explain gold's color, which is a result of relativistic effects on its electrons.

I probably worded what I said poorly, but I don't know a better way to say it. The arrangement of things is of course also physical.

Edited by SargeRho

Note that I wasn't saying that we wouldn't be able to achieve this in the future, I was saying we don't know enough now to know whether we would or not. Yes, I was saying that simulation is an open question; you're just assuming that I wasn't. There are some interesting arguments that question whether simulating the brain is possible at all with our technology (e.g. Roger Penrose in "The Emperor's New Mind"). If you're asserting that we currently know enough about the brain to know how to build a mechanical one, then I'm sorry, but you're wrong. We've only just started to crack it open with new gizmos like fMRI; we've got a lot to learn yet.

As for your second "nonsense": if you know how to shrink circuitry to the degree required to create a nanoelectronic device of complexity equivalent to a human brain, then I assume you'll be picking your Nobel Prize up later in the week? Congrats, btw :wink: The brain is a highly sophisticated nanosystem; we don't have the proficiency in either nanotech or synthetic biology (or indeed know whether either of those is the right approach) to know what parts of the sci-fi vision will be practical.

Your claim was, "We don't know." That's nonsense. We know what it takes to store all the relevant information, such storage is available, and we have very good estimates of what it would take to simulate the human brain, as well as what the outcomes of such a simulation would be like. I've addressed that in my earlier post. So the parts I've quoted remain nonsense.

As for your additional nonsense on nanosystems and complexity, the bottleneck right now is the speed of memory access and total processing power, both of which can easily be addressed with an optical system. We don't have these built, and it might be a few decades before we do, but this isn't a conceptual problem, just an engineering one. As for the size of the system, I've never claimed that it'd be something that could fit in a human cranium. We are discussing the capabilities of clusters which consist of entire rooms of computer parts.

But yes, if we want to build something of the same size capable of simulating a human brain, that will require some conceptually different approaches to computation which we don't have a start on yet. Nothing impossible there either, but I'd agree with you on lots of unknowns there.


No, that much more is not non-physical. The arrangement of the parts of a system often gives them properties or capabilities the parts themselves do not have. Nitrogen and hydrogen can't give the cell any orders; ACGT, however, can.

Dude, seriously, you're trying to make up your own ontology out of the blue without knowing how to do it, while there are at least two millennia of discussion on exactly that. Why don't you just study that first? It's much easier. Why bang your head on things people have already been banging their heads against for two thousand years?

If you're saying the arrangement of the parts of a system gives them properties the parts themselves do not have, then you're saying the arrangement itself has being; but if that's the case, then it's necessarily non-physical, since the arrangement can exist as a mental concept before the parts are actually arranged, otherwise we wouldn't be able to arrange it. We can only build a clock because we can conceptualize the clock as a being in itself, before we even have its parts available. We can build many different clocks, with many different mechanisms, and still call each of them a clock, despite their parts not being at all alike. If, on the other hand, you're saying the arrangement doesn't have being but merely actualizes a latent property in the parts being arranged, then those properties don't have physical existence until actualized by that arrangement, which counters the premise that everything is physical anyway.

What you're calling "the arrangement of the parts of a system" is simply called form. You want to argue whether forms are universal or not, fine, but arguing that forms are actually purely physical is just a contradiction. You wouldn't need them if they were physical.

So what if we have no theory to properly explain the mind yet? We didn't have one to explain the tides a few hundred years ago either.

Sure, but that's not the point here. The point is that you're presenting a confusion of materialist, idealist, nominalist, physicalist and computationalist theories of mind as if they solved the binding problems, while actually none of them does. You want to solve the problem, cheers for you, but I doubt you will before understanding what others have already tried and why it failed.

Edited by lodestar

Atomic bonds and electrons repelling each other are entirely responsible for (maintaining) the forms of, well, everything, so form is physical. I am not aware of any interactions that don't occur through the four fundamental forces. These forces, particularly the strong, weak and electromagnetic (and, at astronomical scales, also gravity), are what maintain, and in the end also create, forms. I don't see what is non-physical about this.


Neurobiologists might have worked out pretty well how the known chemical interactions in the nervous system happen, but obviously that's eons away from actually solving the many binding problems. There are many more fundamental philosophical issues to be solved before empirical knowledge of brain functions can actually answer anything.

It seems like various posters here are speaking mutually unintelligible languages. Lodestar makes a typically Chalmersesque assertion that our materialistic knowledge about the biology of the brain doesn't do anything to solve "the hard problem" (the existence of subjective experience), which is essentially a pure philosophical problem, and that biology is thus worthless as far as engineering a synthetic consciousness goes. Nevertheless, there are obvious empirical links between the state of our subjective experience and the material world. E.g. if someone destroys my brain, my subjective experiences will cease.

I think it is entirely possible that with more years of tinkering we may build conscious machines even despite our primitive understanding of the neurobiology of consciousness and having still left these lingering philosophical questions unresolved.

That said, with regards to the OP: if it means doing spatial simulations with high levels of physical fidelity in real time, then it doesn't seem likely to happen any time soon; the required computer technology is many orders of magnitude beyond current capabilities. On the other hand, if you use comparatively simple integrate-and-fire models, or Hodgkin-Huxley models, then the simulation is much more tractable. There is some agreement among neuroscientists that action potentials are the fundamental form of communicated information within the network, so this level of abstraction may be totally acceptable.
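The integrate-and-fire abstraction mentioned above is simple enough to sketch in a few lines. Below is a minimal leaky integrate-and-fire neuron in Python; every constant (time constant, threshold, membrane resistance, input current) is an illustrative placeholder, not a value from any published model:

```python
# Minimal leaky integrate-and-fire neuron of the kind that makes
# brain-scale simulation tractable. All constants here are illustrative
# placeholders, not values from any published model.
def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Return spike times (in seconds) for a list of input currents (A)."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Euler step of: tau * dV/dt = -(V - V_rest) + R_m * I
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:          # threshold crossed: spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

spikes = simulate_lif([2e-9] * 1000)   # one second of a steady 2 nA input
print(f"{len(spikes)} spikes in 1 s of simulated time")
```

The point is that each neuron's update is only a handful of arithmetic operations per time step, which is exactly why simulating billions of them starts to look feasible.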

The C2 model by Modha et al. is an attempt to simulate a neural network on the scale of a cat's brain: a network with over a billion nodes and 6 trillion connections. The model uses a simplified representation of neurons, with each synapse's state described by only 16 bytes of information. That requires at least 96 TB of memory, and additionally, since the state of each node is updated once per millisecond and the behavior of a node is governed by a set of differential equations, we have 10^9 nodes / 10^-3 s = one trillion node updates per second. Fortunately, supercomputers such as the IBM Blue Gene/P used for the model are capable of satisfying these computational demands. Even more recent supercomputers are capable of quadrillions of floating-point operations per second and may have thousands of terabytes of memory. This seems to imply that, at least using a simplified neural model, complete simulation of a single human brain is almost within reach technologically, even today.
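The figures quoted above are easy to sanity-check. A quick sketch, taking the post's numbers (10^9 nodes, 6×10^12 connections, 16 bytes per synapse, 1 ms update interval) at face value:

```python
# Sanity check of the C2-scale figures quoted above.
neurons = 1e9           # nodes in the cat-scale network
synapses = 6e12         # connections
bytes_per_synapse = 16  # simplified synapse state
dt = 1e-3               # one update per node per millisecond

memory_tb = synapses * bytes_per_synapse / 1e12  # synapse state, in TB
updates_per_sec = neurons / dt                   # node updates per second

print(f"synapse memory: {memory_tb:.0f} TB")         # 96 TB
print(f"node updates: {updates_per_sec:.0e} per s")  # 1e+12, i.e. one trillion
```

Both numbers come out exactly as the post states, so the "almost within reach" conclusion rests on the simplified 16-byte synapse, not on arithmetic errors.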


A CPU can simulate a neural net.

There is a difference between any object and a CPU. However, we use computers to simulate all sorts of non-computer things every day.

You can theoretically break down any object into a model that can be simulated by an algorithm: weather models, astronomical models, nuclear reaction models, cars, planes, bridges... Brain models are no different. It's just a matter of understanding how the physical object interacts with its environment and having the computing power to simulate that interaction.

you would be better off just having an electronic neural net. chips exist (or are in development) for that, though i don't think they have appeared in many consumer devices yet (waiting for that ai accelerator card/skynet). a cpu might be able to simulate a neural net, but only at a rate that makes plants look intelligent. you can brute force it using a big enough supercomputer, using many orders of magnitude more energy (in terms of electrical power) than the brain, but you will never be able to match grey matter 1 for 1. the cost (in terms of energy) of backing up and maintaining a human mind on a simulated neural net would likely be astronomical. think of it as having to eat $10000 in food every hour just to survive. only if you were really rich and could afford it, or if society thought you had very high value, or you owned a fusion reactor, would we put you to silicon.

an analog neural net, on the other hand, will be much more efficient. a quick google search finds this, explaining in electronics terms how neurons work:

http://www.mindcreators.com/NeuronModel.htm

and what you get is a not very complex circuit compared to the monstrosity that is your typical floating point unit. electrical signals travel at roughly the speed of light, so your electrons have more distance to travel when you simulate the neuron with math: they have to run laps around the cpu and sometimes out on the bus to fetch data from ram, just to figure out what the state of a neuron is. if you use an analog system instead, the electrons just have to travel through the parts that make up the neuron, and you can get its state almost instantly. so i would prefer a custom analog architecture.

it's not hard to put analog stuff on a chip. your basic opamp is a good example, and one of the first chips most electronics hobbyists like myself learn to use (at least in the days before arduino). the neuron equivalent circuit is essentially 3 variable current sources (in the diagram, a battery and a variable resistor) and a capacitor, all in parallel. we know how to put capacitors on chips; you are using billions of them right now in your ram. the current sources are the same tech found in voltage regulators (most of which can be configured as current regulators as well). being variable, they would be controlled from a bus line on an analog interconnect (normally they would be controlled by neurotransmitter levels), as would the input and output. you are better off designing a whole new chip architecture to combine all this functionality, pretty much an analog equivalent of an fpga: an array of neurons and switchable interconnects, so you can mimic the self-rewiring of the net on the fly.
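For illustration, here is a digital sketch of roughly the equivalent circuit described above: a capacitor in parallel with current sources, here reduced to one injected current plus a leak, integrated step by step. The component values are made-up placeholders, not taken from the linked page:

```python
# Digital sketch of the "neuron equivalent circuit" described above: a
# capacitor in parallel with current sources, here reduced to one injected
# current plus a leak conductance. Component values are illustrative
# placeholders, not taken from the linked page.
def membrane_step(v, injected, c=1e-9, g_leak=5e-8, e_leak=-0.065, dt=1e-5):
    """One Euler step of: C * dV/dt = I_injected - g_leak * (V - E_leak)."""
    i_total = injected - g_leak * (v - e_leak)
    return v + dt * i_total / c

v = -0.065                       # start at the resting potential (volts)
for _ in range(10000):           # 100 ms with a steady 1 nA injection
    v = membrane_step(v, 1e-9)
print(f"membrane settles near {v * 1000:.1f} mV")
```

The analog chip would compute exactly this continuously and for free, which is the poster's efficiency argument: the digital version spends thousands of arithmetic steps reproducing what one capacitor does by existing.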

simulations give us insights into how to design such a chip. but using a general purpose processor for this job is horribly inefficient.

i have severe doubts that we would be able to transfer a human mind into the chip (assuming a model of sufficient capability). to do that you need to know the state of every neuron, the neurotransmitter levels throughout the brain, how the neurons are connected, and so on. you would need to take a snapshot of all this in an instant, because if you scan the brain over time you end up with a composite of the states of various parts of the brain at different times, and who knows what that would do to distort your thought process. even then this would merely be a rough copy. it won't make you immortal; it might make a poor copy of you immortal, but you still have to die.

Edited by Nuke

Atomic bonds and electrons repelling each other are entirely responsible for (maintaining) the forms of, well, everything, so form is physical. I am not aware of any interactions that don't occur through the four fundamental forces. These forces, particularly the strong, weak and electromagnetic (and, at astronomical scales, also gravity), are what maintain, and in the end also create, forms. I don't see what is non-physical about this.

Form is not used here in the colloquial sense, but as a technical term. You're confusing form with shape. Form is a metaphysical concept, not a physical one like shape. Yes, you can say in the physical sense that the shape of all matter is maintained by physical forces, but as I said above, if you are the one conferring shape on some matter, the shape it will have when you're finished already exists as an abstract concept that we call form. Before you arrange the gears and springs into a clock, you obviously already know that such an arrangement will yield a new object with the form 'clock', and you know that new object has properties the isolated gears and springs don't have.

The problem is that you seem to believe in some form of ontological materialism, as most people with some scientific education do, but you're letting some concepts from hylomorphic dualism leak in, and that just doesn't work. You want to say nothing non-physical exists, fine, but you can't eat your cake and have it too. You can't say nothing non-physical exists and at the same time say that things aren't just the sum of their physical parts. As I said, you're trying to create a new ontology out of the blue, and it fails at the first step.


It seems like various posters here are speaking mutually unintelligible languages.

As the old saying goes, "if everyone is thinking alike, then no one is thinking."

Lodestar makes a typically Chalmersesque assertion that our materialistic knowledge about the biology of the brain doesn't do anything to solve "the hard problem" (the existence of subjective experience), which is essentially a pure philosophical problem, and thus biology is worthless as far as engineering a synthetic consciousness goes.

Typically Chalmersesque? I liked that. I never studied Chalmers, but it's good to know I'm on that track. :D

Nevertheless there are obvious empirical links between the state of our subjective experience, and the material world. E.g. if someone destroys my brain, my subjective experiences will cease.

Huh? How is that an obvious empirical link? How do you know that empirically? When was your brain destroyed?

I think it is entirely possible that with more years of tinkering we may build conscious machines even despite our primitive understanding of the neurobiology of consciousness and having still left these lingering philosophical questions unresolved.

Well... I'll wait for the answer above before answering that. Maybe you indeed have empirical data for that, who knows. :)

Edited by lodestar

I know some scientists were able to fully simulate a rat's brain on a computer, albeit a very powerful one.

With enough processing power, you could.

It would take a long time to map out the whole brain, so the subject would likely need to be cryogenically frozen, since the process could take decades.


I know some scientists were able to fully simulate a rat's brain on a computer, albeit a very powerful one.

That's the Blue Brain project mentioned earlier in this topic, but it simulated a rat's neocortical column, not the full brain. And the intent of the simulation was to reproduce the observable phenomena related to the brain's function, so that you could study phenomena that can't be observed simultaneously in a real brain, not to reproduce brain function with the aim of creating artificial intelligence.

As a matter of fact, AI research today goes in the opposite direction, not attempting an exact reproduction of brain function at all.

Edited by lodestar

@Lodestar We'll build the artificial brain, and if it acts as human as we do, then by induction it is as likely to be human as we are; whereas if it does not, then it is not human because no human brain thus acts.

Your metaphysics is unscientific: by assuming a dualistic consciousness it contradicts scientific monism, and by doubting induction it contradicts empiricism.

-Duxwing


Your claim was, "We don't know." That's nonsense. We know what it takes to store all the relevant information, such storage is available, and we have very good estimates of what it would take to simulate the human brain, as well as what the outcomes of such a simulation would be like. I've addressed that in my earlier post. So the parts I've quoted remain nonsense.

You're talking about digitising consciousness. Now, I'd be the first to jump with joy if that came true. Personally I think a beam of data is the only practical way a human can cross interstellar distances. However, we really don't know if it'll be achievable. The more we learn about the brain the more we find that there is actually very little commonality between computing paradigms and how it works, so I wouldn't be so confident that we have a data structure that could be used. AFAIK it's definitely still an unanswered question. If you know of anyone anywhere that's done work that suggests otherwise I'd be very interested.

Have you read the Roger Penrose book I recommended earlier? It's interesting; it discusses whether any Turing machine could ever simulate a mind, and Penrose argues that none can. I hope he's wrong, btw, but at this stage it's difficult to disprove.

As for your additional nonsense on nanosystems and complexity, the bottle neck right now is speed of memory access and total processing power, both of which can easily be addressed with an optical system. We don't have these built, and it might be a few decades before we do, but this isn't a conceptual problem. Just an engineering one.

Can I make one request? Can we stop with the "nonsense" here? This is not a playground. We're both grown-ups, and I'd like to converse like them, please. I don't mind if you disagree, but let's keep the language civil. It really undermines your credibility, IMO.

Back to your point, the topic at hand is (to paraphrase the OP): "could it be done?". We're not solely concerned with conceptual blockers here; the discussion includes implementation. As it happens, the required technologies (or, more accurately, those we guess might be required) are so far in their infancy that "we don't know" is in fact the right answer to the "can it be done" question. Now, if you take a more blue-sky approach and ignore implementation details, then your answer might be more like your "theoretically yes". Those two aren't mutually exclusive; there's lots of stuff that's theoretically feasible, but we don't know if it's practical. This happens to be one of them, IMO.

As for the size of the system, I've never claimed that it'd be something that could fit in a human cranium. We are discussing the capabilities of clusters which consist of entire rooms of computer parts. But yes, if we want to build something of the same size capable of simulating a human brain, that will require some conceptually different approaches to computation which we don't have a start on yet. Nothing impossible there either, but I'd agree with you on lots of unknowns there.

The requirement for small size isn't because of some aesthetic desire to make a system the size and shape of a cranium. There are physical reasons why increasing miniaturisation is synonymous with increasing processing power. It may well be that an artificial brain ends up being a lot smaller than your head, who knows? What we do know is that our current tech (microelectronics) isn't anywhere near up to the job, and it'll be some time before the evolutionary replacement (nanoelectronics) has been explored to a degree that allows us to make good predictions about exactly what it's capable of. Fingers crossed, though.

Edited by Seret

@Lodestar We'll build the artificial brain, and if it acts as human as we do, then by induction it is as likely to be human as we are

You don't need an artificial brain for that... you may act human, but I don't know if you have a consciousness like mine. I can only assume that you do. That induction is perfectly valid, but it doesn't solve the binding problem, it only circumvents it for practical purposes. Maybe tomorrow we solve the binding problem, find a way to actually validate a third-party consciousness, and discover that many humans don't have one and are just zombies. Who knows?

Your metaphysics are unscientific

How can a metaphysics be unscientific? You can only think that if you believe the metaphysical premises of science are reality itself. A metaphysics can't contradict science, it can only contradict whatever metaphysical premises you took for a particular scientific interpretation of the reality, but there's nothing against giving a scientific interpretation with another set of metaphysical premises. We do that all the time.

they by assuming a dualistic consciousness contradict scientific monism

Dualism contradicts monism? Really? That's a surprise. :)

by doubting induction contradicts empiricism.

What are you talking about? You have to appeal to that induction precisely because you can't experience any consciousness but your own! Even if I doubted that induction, which I didn't, how could it contradict empiricism if there's no empirical component to it? That's precisely the point. In essence, you're getting yourself into the same contradiction 'architeuthis' above did.


I read an interesting point regarding this today. For some background, I have always thought that we could eventually transfer the brain to a computer, but this changed my mind.

Neural tissue stores information with a relatively sparse code (as opposed to a dense or holographic code), meaning that at any time relatively few neurons are active, and any given neuron is sensitive to a very specific stimulus. This is consistent with the evolutionary constraint of maximising information transmission per neural spike. The sparseness intensifies as you go "up" the hierarchy of the cortex towards more abstract representations and sensory integration areas.

This has a very serious consequence which I had thus far overlooked. When you consider a single neuron in high-level cortex, it is extremely difficult to find the stimulus that activates it. You might have to spend hours looking at pictures of random things to determine what that neuron stands for. In order to copy a brain to a computer, it would be necessary at least to learn which stimuli every neuron is sensitive to. The time needed to do this is simply far greater than a human lifetime.
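A back-of-envelope check of that "far greater than a human lifetime" claim. Both numbers below are illustrative assumptions, not figures from the post: roughly 86 billion neurons in a human brain, and, say, one hour of stimulus testing to characterize each one:

```python
# Rough check of the "far greater than a human lifetime" claim. Both
# numbers are illustrative assumptions: ~86 billion neurons, and one hour
# of stimulus testing to characterize each one.
neurons = 8.6e10
hours_per_neuron = 1.0
total_years = neurons * hours_per_neuron / (24 * 365)
print(f"~{total_years:.1e} years of stimulus testing")  # on the order of 1e+07
```

Even with massive parallelism (testing millions of neurons at once), the serial presentation of stimuli to a single subject would still dominate, so the conclusion survives the rough arithmetic.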

Edited by nhnifong

This has a very serious consequence which I had thus far overlooked. When you consider a single neuron in high-level cortex, it is extremely difficult to find the stimulus that activates it. You might have to spend hours looking at pictures of random things to determine what that neuron stands for. In order to copy a brain to a computer, it would be necessary at least to learn which stimuli every neuron is sensitive to. The time needed to do this is simply far greater than a human lifetime.

You are thinking of transcribing personality from one architecture to another. But that's not a requirement of the question.

If you build an artificial neural net which has the same connections as the brain in question, and load it with the activation functions which correspond to the synapses in that brain, you don't need to know which neuron is responsible for what. You just run the simulation, and let things happen naturally.

Of course, to do this you need to both map the entire brain and measure, very precisely, the concentrations of various chemicals at all of the synapses. We don't have the tech to do this yet, but the basic approach of freeze-and-slice is definitely workable here. So it's a matter of time before we'll be able to get all of this information from an actual brain. If you want to do it in vivo, that's a different question entirely.
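The "just run the simulation" approach described above can be caricatured at toy scale: load a connection matrix, set an initial firing pattern, and step the network without ever asking what any individual neuron encodes. Everything below is an illustrative sketch with randomly generated stand-in data, not a model of real tissue:

```python
import random

# Toy sketch of the "just run the simulation" approach described above:
# given a (here randomly generated, stand-in) connection matrix and an
# initial firing pattern, step the network without ever asking what any
# individual neuron encodes. Every number here is illustrative.
random.seed(0)
N = 200
# Sparse random weights: ~5% of possible connections, strengths in [-1, 1].
weights = [[random.uniform(-1, 1) if random.random() < 0.05 else 0.0
            for _ in range(N)] for _ in range(N)]
state = [random.random() < 0.1 for _ in range(N)]  # initial firing pattern

def step(state, weights, threshold=0.5):
    """A neuron fires on the next tick if its summed input exceeds threshold."""
    return [sum(w for w, s in zip(row, state) if s) > threshold
            for row in weights]

for _ in range(50):
    state = step(state, weights)
print(sum(state), "of", N, "neurons active after 50 ticks")
```

With real measured connectivity in place of the random matrix, the simulator's job is just this update loop at enormous scale; interpreting what the activity means is the separate, harder "decoding" problem the post mentions next.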

Of course, once you have a simulation running, working out the actual way the information is stored sounds much more plausible. But "decoding" the brain would require way more processing power than just simulating it. So again, this is not something we're going to do for a long, long time.


You don't need an artificial brain for that... you may act human, but I don't know if you have a consciousness like mine. I can only assume that you do. That induction is perfectly valid, but it doesn't solve the binding problem, it only circumvents it for practical purposes. Maybe tomorrow we solve the binding problem, find a way to actually validate a third-party consciousness, and discover that many humans don't have one and are just zombies. Who knows?

I therefore consider you an annoying hallucination. :P

How can a metaphysics be unscientific? You can only think that if you believe the metaphysical premises of science are reality itself. A metaphysics can't contradict science, it can only contradict whatever metaphysical premises you took for a particular scientific interpretation of the reality, but there's nothing against giving a scientific interpretation with another set of metaphysical premises. We do that all the time.

Your metaphysics denies empiricism, a scientific cornerstone.

Dualism contradicts monism? Really? That's a surprise. :)

Monism is another scientific cornerstone.

What are you talking about? You have to appeal to that induction precisely because you can't experience any other consciousness but your own! Even if I doubt that induction, which I didn't, how it can be contradicting empiricism if there's no empirical component to it, and that's precisely the point of it? In essence, you're getting yourself into the same contradiction 'architeuthis' above did.

Consciousness can and should be inducted from non-experiential and therefore impersonal evidence. If one assumes that one is conscious, then given sufficient technology one can study one's brain (or, given sufficient stupidity, literally hit oneself over the head) until one gathers enough data to induct that one's consciousness emerges from one's brain; and if one assumes that universal laws govern reality, then by repeating either experiment on other humans and finding sufficiently similar mechanisms, one can induct that others are like oneself, and therefore beings whose consciousness emerges from material phenomena.

-Duxwing


I therefore consider you an annoying hallucination. :P

Nope. I didn't say the whole being would be a hallucination, just that you can't know whether anyone else has a consciousness like yours.

Your metaphysics denies empiricism, a scientific cornerstone.

It doesn't deny empiricism, it merely states that all empirical knowledge is necessarily inductive, since it follows from an induction.

Monism is another scientific cornerstone.

A cornerstone conveniently shattered by quantum phenomena linking subject and object. We already reject monism in many fields.

Consciousness can and should be inducted from non-experiential and therefore impersonal evidence.

Right. Inducted. An induction might be enough for most practical purposes, but it isn't enough to solve the binding problem, and that's the point here.


Just finished Steins;Gate, and such a mechanic is pretty important to the story. However, I'm kind of sceptical of it being possible. Maybe if you built an exact, cell-for-cell electronic copy of the human brain, some kind of intelligence could be created, but copying human thoughts to a computer sounds pretty impossible, as we don't really understand what thoughts are yet.


I see you are all ignoring my earlier post. I've said that the brain is not just a bunch of neurons communicating by ionic pulses. It's a wet, squishy pile of gooey tissue which also secretes hormones, which are then dispersed through its vessels to the whole body, including the other parts of the brain. Those hormones are chemical signalling, and they are included in lots of feedback loops. Other organs interact with the brain, chemically and electrically. How are you going to simulate something that's driven by chaos?

You're looking at this problem from a purely sterile, engineering position, but you forget the physiology of the organ, which is more important.

Just finished Steins;Gate, and such a mechanic is pretty important to the story. However, I'm kind of sceptical of it being possible. Maybe if you built an exact, cell-for-cell electronic copy of the human brain, some kind of intelligence could be created, but copying human thoughts to a computer sounds pretty impossible, as we don't really understand what thoughts are yet.

We do understand what they are, but we don't know the whole picture. It's way too complex, but it's mechanistic at its core.

Edited by lajoswinkler
