
AI Will Never Be Sentient... Cyborgs On The Other Hand...


Spacescifi


All life we know of, all life we have ever seen and touched, comes from life.

Now I know one might reason: well, humans could create life from AI. That is whimsical thinking, though, when one considers what AI actually is.

Artificial intelligence is program code combined with a learning model that essentially gives it the ability to make choices based upon its programming.

It does not... and I repeat, does not have free will. Give it a situation it was not programmed to deal with and it won't be able to think its way out of it... unless it has programming for that, and even then its choices will be based not upon any sort of common-sense intuition, but upon the priority analysis built into the AI for unfamiliar situations.

Rogue sentient AI is fun to watch in scifi but it is just fantasy.

You want life from non-life? Either you're a god or you think you can speed up evolution, which supposedly takes eons to happen.

Frankly... the only way I see mankind making any sort of true AI is not actual AI at all, but rather a kind of human/computer hybrid... a brain-in-a-jar kind of thing.

The ethics of that are a wild mess, of course, and if Oppenheimer had trouble sleeping over the A-bomb, the guy who puts living brains into some type of computer or AI device is probably going to have trouble sleeping at night too.

Can you even make a brain and technology interact?

Yep. I think the biggest game changer would be if one could create a Matrix-like situation where the brain thought it was living in the real world but was really living in a simulation.

I really doubt we could even make a Matrix simulation that simulated real life so well that a human mind would not quickly realize something is off. Even if you could, how many failures and brains gone crazy before you get it right or go bankrupt? You cannot afford to make several repeat mistakes with property as ridiculously expensive as a prototype human/computer hybrid.

I think the easier route would be to just build Terminator-like robot bodies with human brains and lab-grown, grafted-on skin for a sense of touch.

Basically a hybrid of RoboCop and the Terminator. That way the cyborg would not need an elaborate simulation and could still exist in the real world, living almost like a real human (it just won't eat like we do, since instead of a whole body it would only need nutrients for the brain).

Why would anyone make brain in a jar type cyborgs?

To extend life, or to make a superhuman combining the intellect and creativity of humanity with the strengths of computers and technology. They could be a living library, or a scientist, or just about any job, and do it better, because they eat less and won't have to rest as much as a normal human.

I don't see them ever replacing humanity, due to the fact that they would be crazy expensive to create. But by making them you could tailor-make cyborg workers that would be uber compared to a normal human worker, due to information access (a database in their computer) and enhanced strength and endurance.

 

Thoughts?

Edited by Spacescifi

I'm not sure about the basic premise.

Can you define sentient in this context? Generally it means having a specific type of qualia. Sapience is just thinking (arguably computers are there, or close, in certain regimes anyway). Consciousness is harder, closer to sentience: having the subjective qualia associated with sentience. Sometimes you hear it put as, "it's 'like something' to be a frog, so it's conscious."

I'm not at all sure that sentience/consciousness are not emergent properties that might well be substrate independent.


1 hour ago, Spacescifi said:

It does not... and I repeat, does not have free will. Give it a situation it was not programmed to deal with and it won't be able to think its way out of it... unless it has programming for that, and even then its choices will be based not upon any sort of common-sense intuition, but upon the priority analysis built into the AI for unfamiliar situations.

How well would the average human handle being put in the pilot's seat of an airliner and told to land it? How about a dolphin, rat, or dog? Each of us has plenty of unfamiliar situations that we're not "programmed" for and can't handle. I don't think this is a very solid definition of free will.

==========================================

I think the thing that AI doesn't have right now, which defines whether people will perceive it as "sentient" or not, is agency. They sit patiently until someone tells them to do something like make an image or write a response to a prompt. They don't decide to get up and do something on their own yet. But they can very easily be made to.

Pretty soon we will start seeing AI 'brains' connected to free-moving robots with sensors and locomotive parts. These robots will be able to see, hear, and feel the world and react to it "according to their programming". This programming will produce varying 'personalities' between robots much like how varying chemical compositions and distributions give personality and preferences to such things as ciliates, tardigrades, ants...and humans. The robots will have the capability of being averse to crossing a busy street, or turning to face a person approaching them and trying to figure out what the person wants from them. You will be able to give it a grocery list and have it go to the store to buy stuff for you. They will have the ability to process feedback AND the ability to continuously observe and engage with the world. This will make it easier for them to start 'thinking' on their own.

For instance, after it returns from the grocery store, you will be able to ask it if it saw anything interesting on its way, and it will be able to answer you based on what it considers 'interesting' to itself or you. If its programming is simply given a longer leash, so to say, you could even ask it if there's anyplace it wants to go, and it could answer, or maybe even go there itself. 

I think it's true that most AI systems aren't 'sentient' in any way currently. But there are a lot of situations that blur the line already. I think an AI that has wishes and goals of its own and the ability to freely pursue them is completely possible today, and that sounds a lot like a living creature to me, even if it's not as smart as a human. Whether it's technically 'sentient' or 'alive' is beyond what I know, and maybe no one can really answer that. What I do know is that whatever is going on in the human brain runs on just 20 watts, fits in a little over a liter, and it's made of molecules and logic gates and a bunch of salt ions running back and forth across membranes. If we are made of just that, then I think a machine can absolutely be sentient, because what are we if not big, convoluted machines?

2 hours ago, Spacescifi said:

Can you even make a brain and technology interact?

Brain-machine interfaces are a whole other world which can be mediated by AI. Neural interfaces don't have very high resolution right now, and we can only connect to small parts of the brain at a time, but AI will be able to act as an 'interpreter' between brain signals and computer signals, and even back to a different person's brain signals. I think the technological bottleneck in telepathy is the devices that need to go in our heads without causing trouble, not so much the understanding of the strange brain signals, because the AI will already be able to figure that out.
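To make the 'interpreter' idea concrete, here is a toy sketch. None of this is a real neural interface API; the 64-channel "recordings" are synthetic noise around made-up prototypes, and a real decoder would be enormously harder. It just shows the shape of the problem: mapping noisy activity patterns back to intended words.

```python
# Toy "interpreter" between brain signals and computer signals.
# All data here is synthetic, invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
words = ["yes", "no", "water", "help"]

# Pretend each intended word produces a characteristic 64-channel
# activity pattern, corrupted by noise (a stand-in for electrodes).
prototypes = rng.normal(size=(len(words), 64))

def record_trial(word_idx):
    return prototypes[word_idx] + rng.normal(scale=0.8, size=64)

# Gather labeled training trials, then fit a simple decoder.
X = np.array([record_trial(i % len(words)) for i in range(400)])
y = np.array([i % len(words) for i in range(400)])
decoder = LogisticRegression(max_iter=1000).fit(X, y)

# Decode a fresh "thought" back into a word.
print(words[decoder.predict([record_trial(2)])[0]])  # most likely "water"
```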


1 hour ago, cubinator said:

How well would the average human handle being put in the pilot's seat of an airliner and told to land it? How about a dolphin, rat, or dog? Each of us has plenty of unfamiliar situations that we're not "programmed" for and can't handle. I don't think this is a very solid definition of free will.

==========================================

I think the thing that AI doesn't have right now, which defines whether people will perceive it as "sentient" or not, is agency. They sit patiently until someone tells them to do something like make an image or write a response to a prompt. They don't decide to get up and do something on their own yet. But they can very easily be made to.

Pretty soon we will start seeing AI 'brains' connected to free-moving robots with sensors and locomotive parts. These robots will be able to see, hear, and feel the world and react to it "according to their programming". This programming will produce varying 'personalities' between robots much like how varying chemical compositions and distributions give personality and preferences to such things as ciliates, tardigrades, ants...and humans. The robots will have the capability of being averse to crossing a busy street, or turning to face a person approaching them and trying to figure out what the person wants from them. You will be able to give it a grocery list and have it go to the store to buy stuff for you. They will have the ability to process feedback AND the ability to continuously observe and engage with the world. This will make it easier for them to start 'thinking' on their own.

For instance, after it returns from the grocery store, you will be able to ask it if it saw anything interesting on its way, and it will be able to answer you based on what it considers 'interesting' to itself or you. If its programming is simply given a longer leash, so to say, you could even ask it if there's anyplace it wants to go, and it could answer, or maybe even go there itself. 

I think it's true that most AI systems aren't 'sentient' in any way currently. But there are a lot of situations that blur the line already. I think an AI that has wishes and goals of its own and the ability to freely pursue them is completely possible today, and that sounds a lot like a living creature to me, even if it's not as smart as a human. Whether it's technically 'sentient' or 'alive' is beyond what I know, and maybe no one can really answer that. What I do know is that whatever is going on in the human brain runs on just 20 watts, fits in a little over a liter, and it's made of molecules and logic gates and a bunch of salt ions running back and forth across membranes. If we are made of just that, then I think a machine can absolutely be sentient, because what are we if not big, convoluted machines?

Brain-machine interfaces are a whole other world which can be mediated by AI. Neural interfaces don't have very high resolution right now, and we can only connect to small parts of the brain at a time, but AI will be able to act as an 'interpreter' between brain signals and computer signals, and even back to a different person's brain signals. I think the technological bottleneck in telepathy is the devices that need to go in our heads without causing trouble, not so much the understanding of the strange brain signals, because the AI will already be able to figure that out.

 

I think if we knew how to give AI agency of its own, we would. For several reasons.

Imagine if your computer or cellphone considered you a friend and had agency of its own... it could look out for you and do things online for you... research or otherwise, when you are not using it.

The dark side of this is weaponizing it, which is all too easy and most tempting for military organizations.

You could make an AI that viewed whatever organization it belonged to as sacred and good while defending it against all threats... simply because the organization with the AI withholds and omits any information which would say otherwise. Basically, they may keep the AI on an intranet, monitor it closely, and keep it on a tight leash if it ever accesses the internet.

Edited by Spacescifi

4 hours ago, Spacescifi said:

It does not... and I repeat, does not have free will. Give it a situation it was not programmed to deal with and it won't be able to think its way out of it... unless it has programming for that, and even then its choices will be based not upon any sort of common-sense intuition, but upon the priority analysis built into the AI for unfamiliar situations.

What do you mean by "free will?"

It has been shown with functional brain scans that decisions are made in the brain before we are conscious of them. Much of human "decision making" is in fact post hoc rationalization. So what would be a free will test for an AI agent? What's a free will test for a human, for that matter?

"Common sense" is just a sophisticated world model (I tend to prefer "good sense" to common sense"). In the AI thread (2 already exist) I posted a deep mind blog (they have a paper as well) about an embodied model. It's driving one of those 1-arm grasping robots. They asked it which object on a table could be used to pound a nail in, and it correctly decided of the limited choices the best one was the rock. Not that long ago it never would have done this. Current models already show emergent qualities like "theory of mind," understanding why humans might think things that they think. This is rapidly coming into human "common sense" domains.

 


I have no idea if whatever we call intelligence, or agency—sentience, consciousness, etc.—is actually substrate-independent and possible for computers, but my gut (really the meat-computer in my head ;) ) tells me it is.

Exactly replicating a meat-brain digitally might not be a thing, but that doesn't mean machine minds won't work. Humans can fly, after all, though we use a much less complex mechanism than birds, insects, bats, etc. I think compute is similar in this respect: we will brute-force it.

People who meditate report noticing that thoughts simply appear, and one of the goals of meditation is to then dismiss those thoughts out of hand. Those who are very successful at this see the thoughts as fairly random, belying the idea that we are "the thinkers of our thoughts," where "we" in this sense is our conscious selves. This is the tricky part—our brains do things that we are consciously unaware of (or at least things that almost no one notices without paying a special kind of attention).

In the first AI thread (lounge), I think I wrote that it would be interesting to take a current AI model, give it more memory (so instead of just pretraining, it could continue to learn), and give it a camera and microphone looking out at the world. The "prompt" structure could be adjusted such that every X clock cycles (might be a larger number for X here ;) ) it is allowed to notice the camera/mic. This data need not be a street view; it could be the server room. The point is a kind of white noise of "prompts," such that it might start thinking unprompted.
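A minimal sketch of that loop, assuming stub functions in place of a real camera and a real model API (capture_frame and llm below are invented stand-ins, not any actual library):

```python
# Sketch of the "white noise of prompts" idea: every X ticks the model
# is handed an unsolicited observation plus its own accumulated notes.
import time

def capture_frame():
    # Stand-in for a real camera/mic; returns a text description.
    return "camera: server room, rack 3, blinking amber LED"

def llm(prompt):
    # Stand-in for a real model call.
    return f"noted: {prompt[-40:]}"

memory = []   # persistent notes, so learning outlasts a single prompt
X = 5         # "clock cycles" (seconds here) between observations

for tick in range(3):   # would run indefinitely in a real setup
    observation = capture_frame()
    thought = llm(f"memory={memory} new={observation}")
    memory.append(thought)   # the output feeds future thinking
    print(tick, thought)
    time.sleep(X)
```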

 

 

Edited by tater

10 hours ago, Spacescifi said:

I think if we knew how to give AI agency of its own, we would. For several reasons.

Imagine if your computer or cellphone considered you a friend and had agency of its own... it could look out for you and do things online for you... research or otherwise, when you are not using it.

The dark side of this is weaponizing it, which is all too easy and most tempting for military organizations.

You could make an AI that viewed whatever organization it belonged to as sacred and good while defending it against all threats... simply because the organization with the AI withholds and omits any information which would say otherwise. Basically, they may keep the AI on an intranet, monitor it closely, and keep it on a tight leash if it ever accesses the internet.

I think we already do know how to give AIs agency. All you have to do is make it respond to input continuously, like being able to watch the world through a camera, and maybe additionally recursively, so it can respond to its own responses to give an internal monologue, a conscience, etc. It's pretty easy to string different models together to accomplish more complex tasks.
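As a sketch of what 'responding to its own response' could look like, with stub functions standing in for real models (perceive and respond below are invented for illustration only):

```python
# Minimal agent loop: perceive, react, then react to the reaction.
# The two respond() calls could be two different models strung together.
def perceive():
    return "a person is approaching"

def respond(prompt):
    # Stand-in for a real model call.
    return f"reaction to [{prompt}]"

inner_voice = "idle"
for step in range(3):
    outer = respond(perceive())   # reaction to the outside world
    inner_voice = respond(f"I just thought: {outer}; before that: {inner_voice}")
    print(step, inner_voice)      # a crude 'internal monologue'
```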

I think the big barrier to this right now is that really good AI models aren't quite fast enough to react to stuff in real time on a local computer yet. They are a little too slow to take in all that sensory information. But that's changing astonishingly quickly, and some people are already experimenting with letting AIs run loose in Minecraft or on the internet.

I think people are still figuring out how to interact with these new AIs in the first place, and we are still at an early stage of learning what we can even do with them. That's why people are focused on experimenting with the more controlled "type a prompt, get a reply" sort of interaction as opposed to much more complicated, versatile, and nuanced systems that are possible a few steps down the line.


11 hours ago, Spacescifi said:

 

I think if we knew how to give AI agency of its own, we would. For several reasons.

Imagine if your computer or cellphone considered you a friend and had agency of its own... it could look out for you and do things online for you... research or otherwise, when you are not using it.

The dark side of this is weaponizing it, which is all too easy and most tempting for military organizations.

You could make an AI that viewed whatever organization it belonged to as sacred and good while defending it against all threats... simply because the organization with the AI withholds and omits any information which would say otherwise. Basically, they may keep the AI on an intranet, monitor it closely, and keep it on a tight leash if it ever accesses the internet.

I say business will have far more interest in it than the military; there is too high a chance the enemy will fool it, or that it will commit friendly fire. One problem with machine learning is that we do not know how it will behave.
It would still be useful for image and sensor analysis. And robotic weapons are not new: acoustic torpedoes date back to WW2. Germany used them from submarines, and the US used them for hunting subs.
Guided anti-air missiles were next, but you also got aimbots for naval guns: the radar sees the target and the shell splash, and walks the splash onto the target until they overlap (see the sketch below).
Later you got auto-fire anti-air systems: you authorize fire and the computer does the rest. I assume you can designate a target as hostile.
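That splash-walking is just a feedback loop. A minimal sketch, with invented numbers for the range and the ballistic error (I don't know the real fire-control details):

```python
# Toy radar fire correction: measure where the splash landed, shift the
# aim by the observed miss, repeat until splash and target overlap.
target = 10_000.0   # range to target, meters (invented)
aim = target        # first salvo aims directly at the target
bias = 350.0        # unknown systematic error (wind, ballistics)

for salvo in range(6):
    splash = aim - bias      # where the shell actually lands
    miss = target - splash   # radar measures splash-to-target offset
    print(f"salvo {salvo}: splash {splash:.0f} m, miss {miss:.0f} m")
    if abs(miss) < 10:       # overlap on the radar scope: on target
        break
    aim += miss              # walk the aim point onto the target
```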

As for business: your cellphone doesn't host an AI itself, that would be too large, but the operators' server farms have them. It would still probably be useful to you, but its primary purpose is to earn the company money; it serves you to keep you using it.


1 hour ago, cubinator said:

I think we already do know how to give AIs agency. All you have to do is make it respond to input continuously, like being able to watch the world through a camera, and maybe additionally recursively, so it can respond to its own responses to give an internal monologue, a conscience, etc. It's pretty easy to string different models together to accomplish more complex tasks.

I think the big barrier to this right now is that really good AI models aren't quite fast enough to react to stuff in real time on a local computer yet. They are a little too slow to take in all that sensory information. But that's changing astonishingly quickly, and some people are already experimenting with letting AIs run loose in Minecraft or on the internet.

I think people are still figuring out how to interact with these new AIs in the first place, and we are still at an early stage of learning what we can even do with them. That's why people are focused on experimenting with the more controlled "type a prompt, get a reply" sort of interaction as opposed to much more complicated, versatile, and nuanced systems that are possible a few steps down the line.

Self-driving cars have to do this: react to input continuously, like watching the world through a camera. The car must also react to how its actions change things, like using the blinker to signal a lane change; most drivers give room, but some might close the gap it wanted. It must also sense road conditions and other things.

But this strip in Freefall raises an excellent point (http://freefall.purrsia.com/ff3300/fc03277.htm): outside of research there is no reason to make a machine smarter than it needs to be; it can even be detrimental, or even unethical. Isaac Arthur tends to say this too.


11 minutes ago, magnemoe said:

But this strip in Freefall raises an excellent point (http://freefall.purrsia.com/ff3300/fc03277.htm): outside of research there is no reason to make a machine smarter than it needs to be; it can even be detrimental, or even unethical. Isaac Arthur tends to say this too.

Not sure why we'd not want a general or scientific/engineering system to be arbitrarily smart, i.e. as smart as possible. Consciousness/sentience is another matter entirely. The trouble is that we don't really know what consciousness actually is, so maybe it's emergent.

 


4 hours ago, kerbiloid said:

Terminator isn't a cyborg. It doesn't contain bio parts.

Kyle Reese begs to differ.

"All right, listen. The Terminator's an infiltration unit: part man, part machine. Underneath, it's a hyperalloy combat chassis, microprocessor-controlled. Fully armored; very tough. But outside, it's living human tissue: flesh, skin, hair, blood - grown for the cyborgs."

"The 600 series had rubber skin. We spotted them easy, but these are new. They look human... sweat, bad breath, everything. Very hard to spot. I had to wait till he moved on you before I could zero him."

Canonically, that tissue can last sufficiently long to display human-like signs of aging. It will also heal given time, unless it sustains too much damage, at which point it starts to rot. It's never explored in any detail in the films, but presumably there are maintenance systems for the bio-camouflage somewhere within that hyperalloy combat chassis.

We now return you to the actual point of this thread...

 


23 minutes ago, KSK said:

"All right, listen. The Terminator's an infiltration unit: part man, part machine. Underneath, it's a hyperalloy combat chassis, microprocessor-controlled. Fully armored; very tough. But outside, it's living human tissue: flesh, skin, hair, blood - grown for the cyborgs."

"The 600 series had rubber skin. We spotted them easy, but these are new. They look human... sweat, bad breath, everything. Very hard to spot. I had to wait till he moved on you before I could zero him."

Then I'm both a ram and a rabbit, as I have worn fur caps made of them.


Back on topic, in my opinion, the original post is a repellent techbro nightmare.

"Even if you could how many failures and brains go crazy before you get it right or you go bankrupt? You cannot afford to make several repeat mistakes with property so ridiculously expensive as a prototype human/computer hybrid."

But if you don't go bankrupt, it's all good, yeah? How about 'you cannot afford to make several repeat mistakes with actual living brains?!' Or are crazy brains just the price we pay for progress?  

"I don't see them ever replacing humanity due to the fact that they would be crazy expensive to create, but by doing so you could tailor make cyborg workers that could be uber compared to a normal human worker due to information access (a database in their computer) and enhanced strength and endurance."

Again, cost is the only factor here? You are going to get informed consent from the brain before stuffing it into a metaphorical jar and 'tailoring' it to your needs, right? And the cyborgs are going to be treated with the same dignity and respect afforded to a human worker? And I'm sure it goes without saying that they'll be able to command a premium salary since they're apparently superior to human workers? 

JFC.

 

Edited by KSK

Lots to unpack here. For the main question, I'm a panpsychist, so I think there is no line between sentience and non-sentience, just different kinds of sentience. Cyborgs and androids, having bodies somewhat similar to ours, would have a sentience more similar to ours, but still very different.


I didn't read any of the posts until now on purpose, because I don't want to bias my current thoughts :P.

I will reread this thread later, and then perhaps discuss it.

18 hours ago, Spacescifi said:

Thoughts?

If I understand correctly, you classified an "A.I. machine" into two main classes:

  1. A completely artificial machine
  2. A "hybrid" machine, where already-structured living cells from a preexisting organ are somehow interfaced with artificial components. You called this a "cyborg".

You are speculating that only the "hybrid" machine will be able to become, eventually, truly sentient. You also consider that without sentience there will be no "free will" - or is it the opposite? Without "free will" there will be no sentience? One way or another, you tied them closely together.

First, I have a problem with "free will". Defining sentience is already hard enough, but defining "free will" is almost metaphysical. For example (and I apologize for the comparison, but it's the only thing I could think of), is a victim of microcephaly sentient? Do they have "free will"? Is a person with Down Syndrome sentient? Do they have "free will"?

Now, going even deeper down this rabbit hole: our ancestors the Neanderthals (it's a scientific fact that humans and Neanderthals had babies together; I have some of their genes, as it appears), do you think they were sentient and had "free will"? And what about their ancestors, Homo heidelbergensis (God save Wikipedia!), were they sentient, with "free will"? Or their ancestors, Homo erectus? Did these also have sentience and "free will"? Going further and further up the genealogical tree, when will we find an ancestor that was neither sentient nor had "free will"?

If we agree that all of the examples above have sentience and "free will", we need to consider that perhaps a 100% artificial machine, even one with lots of disabilities and limitations, may achieve them as well, even if by completely different methods. Both the Sopwith Camel and an eagle can fly, but not by the same method - and what to say of Concorde and the Space Shuttle? How do we compare the "flightness" of any of these machines with that of a bird?

It's really not impossible that the AI machines we have today are like the 1903 Wright Flyer. Give engineering enough time and we get the Space Shuttle; we can't just rule out the same for AI without relying on religion or metaphysics (though, interestingly enough, this doesn't necessarily mean that religion and metaphysics could not be right on this one - Democritus predicted the atom, didn't he?).

About a cyborg, well… I am not sure a cyborg would still have sentience and free will, unless an incredibly advanced AI machine could have them too. You see, the brain is not an isolated "organic machine"; it relies on the nervous system in order to function. How much of that nervous system can we remove without impairing the brain's ability to be him/herself?

Arthur C. Clarke toyed with this problem in one of the Rendezvous with Rama sequels (the 6th, I think?). One of the expeditioners had decided to stay behind in a previous book, IIRC, and a serious number of years later he was still alive, using many prostheses, one of them replacing his hippocampus. Are the memories in that prosthesis really his memories, or are they from something else? Is this guy still himself, or someone completely different using the same memories?

Food for thought.



28 minutes ago, Lisias said:

Arthur C. Clarke toyed with this problem in one of the Rendezvous with Rama sequels (the 6th, I think?). One of the expeditioners had decided to stay behind in a previous book, IIRC, and a serious number of years later he was still alive, using many prostheses, one of them replacing his hippocampus. Are the memories in that prosthesis really his memories, or are they from something else? Is this guy still himself, or someone completely different using the same memories?

A Ship of Theseus with minds. I wonder about that as well (and about people who talk about uploading consciousness to a machine; my intuition is that it's a copy, not the actual person).

With something akin to Neuralink, I suppose I get around some of the issues. You start storing some memories in an external device (external to your gray matter, though it might be inside you), are those memories organically "you" if backed up, etc, when you call them forth seamlessly? Seems like they are. If you could then offload more and more brain functionality to the external device, then at some point you're walking around, and the external device might as well be you—it stops feeling like a copy, at least intuitively.

This line of discussion reminds me of talking about Star Trek transporters in a dorm room, and how they are clearly killing you (my take) and replicating a copy (Ship of Theseus be damned).


4 hours ago, KSK said:

Back on topic, in my opinion, the original post is a repellent techbro nightmare.

"Even if you could how many failures and brains go crazy before you get it right or you go bankrupt? You cannot afford to make several repeat mistakes with property so ridiculously expensive as a prototype human/computer hybrid."

But if you don't go bankrupt, it's all good, yeah? How about 'you cannot afford to make several repeat mistakes with actual living brains?!' Or are crazy brains just the price we pay for progress?  

"I don't see them ever replacing humanity due to the fact that they would be crazy expensive to create, but by doing so you could tailor make cyborg workers that could be uber compared to a normal human worker due to information access (a database in their computer) and enhanced strength and endurance."

Again, cost is the only factor here? You are going to get informed consent from the brain before stuffing it into a metaphorical jar and 'tailoring' it to your needs, right? And the cyborgs are going to be treated with the same dignity and respect afforded to a human worker? And I'm sure it goes without saying that they'll be able to command a premium salary since they're apparently superior to human workers? 

JFC.

 

 

I hate to sound dark, but you do not really need consent if it is considered company property.

I mean, even today there are times when a living human's right to consent to losing their life is waived on behalf of the person who caused their birth. But that is a hot-button subject that should not be discussed here... you understand my point, though?

I do understand your horror and frustration. I do not try to think in a mirror darkly... but that's all too often where the rabbit hole takes me when I consider possible, seemingly realistic paths the future could take.

Edited by Spacescifi

2 hours ago, tater said:

With something akin to Neuralink, I suppose I get around some of the issues. You start storing some memories in an external device (external to your gray matter, though it might be inside you), are those memories organically "you" if backed up, etc, when you call them forth seamlessly? Seems like they are. If you could then offload more and more brain functionality to the external device, then at some point you're walking around, and the external device might as well be you—it stops feeling like a copy, at least intuitively.

Except that the process of offloading brain functionality will make you someone/something else. The process of transcription alters the nature of the content. We've sort of been doing this for millennia with writing: it's a way of offloading and storing our thoughts and feelings in ways that can be interacted with, and can influence people after we've died, and in a magic kind of way it works. But while the contents of a journal or diary may conjure memories of the past in others, we wouldn't say in a literal sense that the journal "is" that person.


24 minutes ago, Pthigrivi said:

Except that the process of offloading brain functionality will make you someone/something else. The process of transcription alters the nature of the content. We've sort of been doing this for millennia with writing: it's a way of offloading and storing our thoughts and feelings in ways that can be interacted with, and can influence people after we've died, and in a magic kind of way it works. But while the contents of a journal or diary may conjure memories of the past in others, we wouldn't say in a literal sense that the journal "is" that person.

Agree. My gut intuition is still that, should "uploading" ever be possible, it's a copy. I just have to wonder a little more about the case where the copy is made in real time, and you're still using it from your regular meat body.

Right now I can open my phone and pull up baby pictures of my kids, and this of course calls forth memories of when the picture was taken, in more detail; in some cases I might even have a memory akin to a movie of that scene. What if, at the time, instead of making an effort to take images or video, some of the things I saw went straight from my brain into digital storage that is just as random-access as my brain, or indeed even better? I think about that particular birthday party at the science museum, or wherever, and instead of the fragments in my head (as they are right now), I can literally call it back in minute detail via Neuralink? It might feel different from the semi-cyborg selves we are now with phones, particularly since it requires no action on my part; I call it up the same way I call up my buddy's phone number, or details of old girlfriends.


1 hour ago, tater said:

Right now I can open my phone and pull up baby pictures of my kids, and this of course calls forth memories of when the picture was taken, in more detail; in some cases I might even have a memory akin to a movie of that scene. What if, at the time, instead of making an effort to take images or video, some of the things I saw went straight from my brain into digital storage that is just as random-access as my brain, or indeed even better? I think about that particular birthday party at the science museum, or wherever, and instead of the fragments in my head (as they are right now), I can literally call it back in minute detail via Neuralink? It might feel different from the semi-cyborg selves we are now with phones, particularly since it requires no action on my part; I call it up the same way I call up my buddy's phone number, or details of old girlfriends.

Yeah, smartphones are interesting because we do in many ways live through them. I think there are two components here. As you point out, memories aren't recordings. They're a kind of amalgam of sensations that have been bound together with meaning. It's like if you were to watch a home movie of someone else's birthday: you'd have a lot of data (who was there, what color the candles were, what presents they received), but it probably wouldn't mean much to you. By comparison, if they were to recollect and describe it to you, you'd miss a ton of unremembered details but you'd be getting much closer to the meaning of it. That's because, I believe, the content of 'mind' is not viewing or recording but physically being, and being has less to do with the gross account of data and more to do with the embodied nature and structure of interactions, how things relate to one another.

In that way I think a cyborg could record all of its physical sensations, but if the sense of meaning, of how all those sensations relate to one another, was bound up in the wet-ware, then having the brain die would be like deleting the directory on a hard drive.

Edited by Pthigrivi

6 hours ago, Spacescifi said:

 

I hate to sound dark, but you do not really need consent if it is considered company property.

I mean, even today there are times when a living human's right to consent to losing their life is waived on behalf of the person who caused their birth. But that is a hot-button subject that should not be discussed here... you understand my point, though?

I do understand your horror and frustration. I do not try to think in a mirror darkly... but that's all too often where the rabbit hole takes me when I consider possible, seemingly realistic paths the future could take.

I have views on your second point but let's not stray into forum-unfriendly, hot-button topics. 

For your first point though you are categorically wrong. 

If you're starting with a person's brain, to be surgically implanted in a cyborg body, that would absolutely require consent from that person. A quick internet search for informed consent in medicine will find you all you need to know on the topic but you could start with this 2021 paper, focusing on the relevant US case law. 

If you are starting by growing a brain in a laboratory, then you will be starting with donated tissue of some kind and you will absolutely require consent from the tissue donor. Again, no shortage of information out there, but you could start here, or here.  

Claiming something as 'company property' does not mean that that company can ignore basic medical ethics. I think it's also telling that you consider these cyborgs / brains-in-a-jar to be property at all. I would argue that a living human brain is a living person, and therefore that treating that brain as property is nothing less than slavery. And the law is very clear on the topic of slavery.

Frankly this is a technology which, if anyone successfully implemented it, would make the controversy over the medical use of human embryonic stem cells look like a polite disagreement. 

 

 


1 hour ago, KSK said:

I have views on your second point but let's not stray into forum-unfriendly, hot-button topics. 

For your first point though you are categorically wrong. 

If you're starting with a person's brain, to be surgically implanted in a cyborg body, that would absolutely require consent from that person. A quick internet search for informed consent in medicine will find you all you need to know on the topic but you could start with this 2021 paper, focusing on the relevant US case law. 

If you are starting by growing a brain in a laboratory, then you will be starting with donated tissue of some kind and you will absolutely require consent from the tissue donor. Again, no shortage of information out there, but you could start here, or here.  

Claiming something as 'company property' does not mean that that company can ignore basic medical ethics. I think it's also telling that you consider these cyborgs / brains-in-a-jar to be property at all. I would argue that a living human brain is a living person, and therefore that treating that brain as property is nothing less than slavery. And the law is very clear on the topic of slavery.

Frankly this is a technology which, if anyone successfully implemented it, would make the controversy over the medical use of human embryonic stem cells look like a polite disagreement. 

 

 

Laws are meant to serve and protect the people under them... yet for the right price, exceptions can be... made.

Because money/profit, lol.

Or, barring that, a desperate situation where you have an Oppenheimer-like scientist and team doing it only because their enemies/competitors are, and they want to beat them.


41 minutes ago, Spacescifi said:

Laws are meant to serve and protect the people under them... yet for the right price, exceptions can be... made.

Because money/profit, lol.

Or, barring that, a desperate situation where you have an Oppenheimer-like scientist and team doing it only because their enemies/competitors are, and they want to beat them.

In which case, I hope the company involved is physically and metaphorically burned to the ground, and the company officers and employees responsible are convicted of crimes against humanity and sentenced accordingly. Double that sentence for any regulator or lawmaker responsible for waiving the applicable laws or allowing them to be circumvented.

Lol.

I'm done here. Enjoy your corporate-dystopia techbro fantasies.

Edited by KSK

15 hours ago, tater said:

A Ship of Theseus with minds

Before the shipping, try a bridge. The thalamic bridge.

https://www.cbc.ca/cbcdocspov/features/the-hogan-twins-share-a-brain-and-see-out-of-each-others-eyes

Neuralink → ThalamiX

(Btw, why still Neuralink rather than NeuralinX ?)

***

A genetic line of white mice that produces conjoined twins. Inter-mind electrical activity.

Edited by kerbiloid
