Piatzin


Artificial intelligence has been a hot topic for several years now, and after a quick search around the forums, I noted with some mild disappointment that there was no dedicated thread set up to discuss such a potentially life-altering technology.

Anyways. I fixed the aforementioned problem, as you can see.

As per usual, politics should be kept out of the comments as much as is possible without impeding proper discussion. Discussing things like how certain countries could employ AI as a weapon against, say, the US, is not for this thread. Discussing how AI itself could take over the 'certain countries,' the US, and the rest of the world is, however.

And I say 'take over' with the broadest range of possible interpretations. Whether this relates to the field of medicine and how machine learning could revolutionize diagnostics and treatments, or how AI could in some way solve all our problems and usher in the technological singularity...

Well. That's up to you. Share your thoughts and concerns, predictions, etc.

 

Personally, I'm a bit against AI in some aspects. There's a point where machine learning will get so advanced that it will inevitably outrun human intelligence and our capacity to learn. Assuming that all goes well from that point on (and we don't all die), the AI would presumably grant us access to unbelievable quantities of knowledge, and in great depth.

That may sound utopian, but I personally feel like we should not make something that will out-compete us in every single aspect of our lives, only to turn around at the pinnacle, build us some steps, and glide us over to where it's at. And by that I mean that I don't think AI should solve our problems for us.

Sure, it has a great many beneficial applications in many fields, but I think we should push the frontiers of knowledge and technological advancement through actual human intellect, not because we made something that ran ahead and granted it to us for free.

In my (very, very humble) opinion, AI should be used to make the most of what we already know - to improve on existing processes and boost efficiency - but not to solve the things we are still pushing for as a species.

That, I think, we should do collectively, as humanity.

 

Edited by Earthlinger

I support this, and I think that yes, AI shouldn't be overtaking us, but AIs would make nice research tools.

 

"ch-ch this is Mimas (not Minmus) Explorer 12-Q2 reporting, we've found <insert stuff here> here, patching it i-"

And at that moment, the probe's antenna snapped against the large hill it had crashed into.


I think that, while AI safety is a very important subject, "AI being better people than we are" is not a failure mode. I don't quite understand the idea that AI solving all our problems is somehow different from our children solving all our problems. If the AI is less than a person, it's a tool, and there's no issue with using newer and more powerful tools. If the AI is a person, then it's a person that came from us, and therefore equivalent to a child—and there's certainly no issue with handing problems you couldn't solve over to the next generation.

Additionally, the possibility of bootstrapping to progressively higher and higher levels of intelligence is very pleasing to my philosophy. We're likely going to need quite a lot more processing power dedicated to thought to save the universe from itself, after all.


While I'm personally pro-AI (as it has a lot of applications for the future), I think some of the fears, while valid, can be driven too far. But that's been evident with each novel technology we develop.

AI will probably not be the superintelligent gods we imagine, but instead just genius-level citizens living among us. Only a dedicated AI on a supercomputer could get to that extreme point, but even then, more processing power doesn't mean higher intelligence, let alone godlike intelligence; and you will almost certainly get diminishing returns after a while. So they'd be lugging around wasted space for a long time, just... figuring stuff out. I don't think they would want that, since I presume they'd have sapience - and with that, all the bells, whistles, and existential crises prevalent in our budding species. They may want the freedom to explore and learn naturally, and just be self-contained artificial entities. Just a prediction of mine, though.

Skynet: Likely won't happen. Scientists are smart for a reason - they've watched Terminator, and they probably have some cool acronyms describing worst-case-scenario plans. Etc.

Utopia: Definitely not at first. We're still wary of self-driving cars, very basic genetic engineering, etc. It will take decades at least for both the old and new generations to accept and get used to AI. [Not gonna say specifics due to forum rules] But it'll take a long time for them to be accepted as a "major" player in certain affairs. Even then, humans will still need to be a guiding hand. We still need human solutions to our human problems.

I think they will merely be used to assist us as well - not to be the babysitters of the post-scarcity human race as we race spaceships across the solar system. For one, we still need jobs, either as profitable hobbies or the classical kind. Robots can't take over everything, even if they are better. Instead, we'll be working side by side. As for the arts, I'm confident I'll still have a job even as I become an old man ;)

Edited by Spaceception

I believe that human intentions are overestimated when it comes to AI.

Just compare the several currently living generations and how they use the internet.

The elder ones (pre-PC) look at it either as a minefield, or as a minefield with known paths where they have learned to walk safely.

(They kept private things from their parents in a hiding place under a bed.)

The middle ones (PC contemporaries) look at it as a tool they are used to using; nothing special, just one more useful tool.

(They kept private things from their parents in a hidden folder on the HDD.)

The younger ones (who have never seen a world without cell phones) are used to being online at every instant.

(They use online clouds for the same purpose.)


So, we can presume that the next generations of humans will be online by definition.
They won't be hiding anything from their parents (because the parents just won't be able to log in to their global personal accounts).

 

Nowadays, if you ask something trivial on a forum, you usually get shamed with "let me google it for you," or "are you banned from Google?", or "have you even read the wiki?", or "don't they have this info on their website?", or the like.

So, even now, human mentality already includes the online helper/informer as a common part of the way of thinking.
One or two generations later, people will consider the global information network a natural mind extender, when no living person can remember how people could live without being instantly online.

Add to this:

  • hardware miniaturisation
  • broadband connection anywhere in the world
  • effortless biometrics instead of passwords and keys
  • RFID in every object
  • a full digital map of the world (including real-time monitoring of every object's position)
  • device management without flipping switches by hand
  • augmented reality, with virtual Skype and virtual labels next to objects of interest right in front of your eyes

They will be studying from the same Courseras (even if there are several of them), and searching for info in several similar Googles and wikis.

So, when real AI appears, humans and this AI will already be deeply integrated.
To them, the AI will be just a sequence of upgrades.

On the other hand, AI can't have wishes, desires, and intentions, because it doesn't have a biological body with its needs and fears.
It can solve problems, but it doesn't need to do that for itself.
So, people are its natural components, its clock generator.

Thus there will be no real standalone AI; rather, there will be a symbiosis of humans augmented with AI and AI augmented with humans.
AI brings order, humans bring chaos; together they keep the balance and the transformations that make the whole system both stable and mutable.

Edited by kerbiloid

When some people say "AI" they mean a superintelligent entity with immense capability: near-omnipotence, the ability to roam around the internet as if it were TRON, etc.

Me, I say AI could come in as many forms as there are different people.

I say, Artificial Stupidity comes way before Artificial Intelligence.

Do we think the first "true" AI will be nearer to the mind of a god, or a child?

Okay, so it won't literally be like a child, because it won't be a human baby, but I doubt very much it will have god-like abilities for a long time.

By then, hopefully we will have a better idea of how best to go about it.

Add to that the bunch of different definitions of "AI", and it's a tricky question indeed.

 

The first AI will likely be the stupidest AI to ever exist, that is for sure.

Edited by p1t1o

It may have its good sides.

"Execute order sixty six !"

"I am sorry, but i cannot do that, Sid."

But i fear, that is only a fantasy. "Intelligent" programs will rather be used to kill. They are already ...

General to AI:

"Does the enemy come from the east or from the west ?"

AI: "Yes !"

General: "Yes, what ?"

AI: "Yes, Sir !"

Edited by Green Baron

For the foreseeable future, I believe the ‘Singularity’ is a load of hooey peddled by technofantasists, rather than something to be seriously concerned about. Likewise for any robot uprising scenarios.

But, assuming I’m wrong, provided that nobody is stupid enough to connect this nascent, self-bootstrapping superintelligence in waiting to an external network, then it remains an essentially harmless brain in a jar.

Okay, so I am worried about human stupidity in that context.

I’m much more concerned about the kind of limited AI we already have. You thought that having decisions made by faceless bureaucrats was bad. Just wait till they’re made by faceless algorithms instead. At least bureaucrats can be called to account and made to justify their decisions. The ‘decision making’ of an AI algorithm is essentially opaque.
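To show what I mean by opaque, here's a minimal sketch (synthetic data and a hypothetical loan-approval setup, purely for illustration, not any real system) of a model making a decision that nobody can be called to account for:

```python
# Hypothetical illustration of algorithmic opacity: the model "decides",
# but the decision is just the aggregate of thousands of learned
# thresholds, with no single rationale to point at. Synthetic data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 8))   # 8 anonymous applicant features (made up)
y = (X @ rng.normal(size=8) + rng.normal(size=2000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=300).fit(X, y)

applicant = rng.normal(size=(1, 8))
print("decision:", "approve" if model.predict(applicant)[0] else "reject")

# The closest thing to a "reason" the model can offer:
total_nodes = sum(t.tree_.node_count for t in model.estimators_)
print(f"...backed by {total_nodes} nodes spread across 300 trees.")
# Unlike a bureaucrat, there is no single justification here to demand.
```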

I was at a talk about this recently and it was thoroughly depressing. Lots of business types with a ‘this is the future whether you like it or not’ attitude and absolutely no consideration for the social consequences of their half-baked money-making schemes.

I guess there really isn’t anything new under the sun.



I don't see why they can't or shouldn't solve our problems for us. To an extent computers already do this; AI would just do it more. And in the end, we built the AI, so any problems it solves were solved because we used the AI to do so. Even more, scientists and engineers and others help solve society's problems; no reason that AI can't do the same.

We should be wary though. AI can be smart but not necessarily mature. So we would need to raise them, I think. Or at least teach them.

 


51 minutes ago, KSK said:

For the foreseeable future, I believe the ‘Singularity’ is a load of hooey peddled by technofantasists, rather than something to be seriously concerned about. Likewise for any robot uprising scenarios.

I agree with the singularity part, as I don't think things will be so advanced that they're practically unrecognizable by 2050 or 2040 (as some people claim). World domination by robots and AI is very real though, depending on how you interpret 'domination.' The automation of all industrial processes, services, etc. could be considered a 'takeover' or 'domination' of the job market.

51 minutes ago, KSK said:

I’m much more concerned about the kind of limited AI we already have. You thought that having decisions made by faceless bureaucrats was bad. Just wait till they’re made by faceless algorithms instead. At least bureaucrats can be called to account and made to justify their decisions. The ‘decision making’ of an AI algorithm is essentially opaque.

This could be a good thing in some ways though. If done right, it could get rid of corruption and make overall efficiency skyrocket. There would be no bias, either.

47 minutes ago, Bill Phil said:

I don't see why they can't or shouldn't solve our problems for us. To an extent computers already do this; AI would just do it more. And in the end, we built the AI, so any problems it solves were solved because we used the AI to do so. Even more, scientists and engineers and others help solve society's problems; no reason that AI can't do the same.

Right, so your argument is that, since AI would be created by us, it would be an extension of us, and therefore any discoveries or achievements made possible by AI would be our achievements and discoveries, correct? (More or less?)

The thing that worries me in a scenario like this is that not everything the AI would solve would be something that humans would bother to learn themselves. Eventually, you could get a sort of situation where we have technology, but don't know how it works or how to make it without the AI.

Which is....bad...

(Philosophical rant below)

In addition, I think comparing scientists and engineers to an AI is a bit skewed. Today's scientists and engineers are representative of humanity because they are humanity, and while an AI can be made to mimic that humanity to the point that it's practically indistinguishable (and far superior), it won't be human.

Every innovator, designer, inventor and thinker of human history has defined that history, and by consequence they have defined the human race. And vice versa; the human race, as it grows and evolves, represents itself via the achievements of the human beings it produces.

But what defines the AI? (Or ASI, kinda, in this case.) It's made, and then it improves itself, eventually to the point that it's independent of humans. Initially, it will be representative of humans because humans directly made it, but the moment it starts to surpass what we can do and have done, it becomes a separate entity, and it loses representative status. Once it sets off on a runaway quest of self-improvement, we as a race no longer influence what it does. Will any prodigies and geniuses that would ordinarily revolutionize their fields be relevant or significant to human advancement if a machine is already doing what they do, and better? Not really.

Imagine it like a high jump. The height of the bar is a metaphor for human advancement, and the jumper is the culminating or 'representing' entity that defines our species.

The jumper can clear a certain height through his own abilities, and with training, he can jump higher and higher (metaphorical for humanity's advancement across time).

But if the jumper decides to instead build a powered exoskeleton, which adapts with every jump in order to jump higher and higher...

The exoskeleton, built by the jumper, initially represents his skills, since it will be more or less at the same level of capabilities as him. The suit is simply making things easier, more efficient; faster. Eventually, however, the suit, as it adapts and improves, will go beyond what the jumper can do, if only because it's improving faster.

And that's the point where the suit, even though it was built by the jumper, no longer represents the jumper.

I hope that made at least some sense :P

:D

Edited by Earthlinger

42 minutes ago, Earthlinger said:

I agree with the singularity part, as I don't think things will be so advanced that they're practically unrecognizable by 2050 or 2040 (as some people claim). World domination by robots and AI is very real though, depending on how you interpret 'domination.' The automation of all industrial processes, services, etc. could be considered a 'takeover' or 'domination' of the job market.

This could be a good thing in some ways though. If done right, it could get rid of corruption and make overall efficiency skyrocket. There would be no bias, either.

That first point is a good one - I hadn't thought about the question in those terms before.

About bias though - doing it right is going to be the real trick. Apparently (inasmuch as a 2-minute web search is any sort of basis for a comment), bias is a real problem in AI research at the moment. It's GIGO all over again - if you're training your algorithm on a dataset and that data is biased - explicitly or unconsciously - then the results produced by that algorithm are also going to be biased.
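To make the GIGO point concrete, here's a minimal sketch - entirely synthetic data and a hypothetical setup, not drawn from any real system - showing how bias baked into historical labels flows straight through training into the model:

```python
# Hypothetical illustration of "garbage in, garbage out": the labels are
# biased against group 1, so the trained model learns that bias.
# All data here is synthetic, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

skill = rng.normal(size=n)           # the trait we *want* decisions based on
group = rng.integers(0, 2, size=n)   # an irrelevant group attribute

# Biased historical labels: group 1 was systematically marked down.
logits = 2.0 * skill - 1.5 * group
labels = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(float)

# Train a plain logistic regression on (skill, group) by gradient descent.
X = np.column_stack([skill, group, np.ones(n)])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - labels) / n

print("learned weights [skill, group, intercept]:", w.round(2))
# The weight on `group` comes out strongly negative: the model reproduces
# the historical bias, even though group is irrelevant to actual skill.
```

The model isn't malicious; it's just faithfully reproducing the pattern it was given - which is exactly the worry.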

I'm also wary of that comment about efficiency skyrocketing. I would be worried about an attitude that AI can make everything more efficient leading to AI being used indiscriminately. Actually - just read that linked article - it sums up most of my misgivings more eloquently than I can, and given that it's an interview with Google's Head of AI, it looks like a credible enough reference at first sight. :)

 


2 hours ago, KSK said:

I was at a talk about this recently and it was thoroughly depressing. Lots of business types with a ‘this is the future whether you like it or not’ attitude and absolutely no consideration for the social consequences of their half-baked money-making schemes.

I guess there really isn’t anything new under the sun.


Before we have true AI - well before we have it - we really need to decide how much we want robots as a staple of our society. How much of the job market will they be allowed to have? How will we adapt it to our growing population? While I do think it is coming one way or another, we can still angle it in a way that doesn't screw us over.
Whatever we decide, it has to be a good merger of both safety - exactly which jobs robots can take so that we're safer - and feasibility - how will we implement them without making people unemployed, or worse, homeless? Should we work to expand other industries first?
And it could be decided at a local level. Some places may not have the required workforce (either in skill or in sheer numbers) - more robots. Some may have plenty - fewer robots. So the amount of automation may vary greatly wherever you go.

The above kinda applies to true AI as well, I think. But in addition, we'll need to decide how that'll work. Will we be willing to put them in positions of power? Probably not - especially the "big ticket" ones. We might experiment with that on a small scale, but at best, I could see them as assistants. No more, at least for a long time. So AI would probably have fewer rights with regard to voting and politics in general, being on a jury, or even being a lawyer, etc.

Could we see some of them split off, and make their own robo-nation because of that? That'd be interesting.

Anyway, enough rambling.

Edited by Spaceception

56 minutes ago, Earthlinger said:

Right, so your argument is that, since AI would be created by us, it would be an extension of us, and therefore any discoveries or achievements made possible by AI would be our achievements and discoveries, correct? (More or less?)

The thing that worries me in a scenario like this is that not everything the AI would solve would be something that humans would bother to learn themselves. Eventually, you could get a sort of situation where we have technology, but don't know how it works or how to make it without the AI.

Which is....bad...

The AI is more like offspring, from my point of view. In that way it's still "human" in a broader sense of the term.

You see, we're already at a point where most humans don't bother to learn things. We have technology and don't know how it works. Just because this trend would continue with AI doesn't mean much. It's already happening, for better or worse. For example, how many people actually do anything with relativity? Or quantum physics? A lot of people are involved, but compared to the total population it's a small number of people. Another, more ubiquitous example would be computers. Hundreds of millions, if not billions, of people use computers. But how many of them could actually build a computer? Sure, a decent number of them may know what computers are and have knowledge of how they generally work, but so few have the skills required that there is already a disconnect between most humans and the humans that create technology.

1 hour ago, Earthlinger said:

(Philosophical rant below)

In addition, I think comparing scientists and engineers to an AI is a bit skewed. Today's scientists and engineers are representative of humanity because they are humanity, and while an AI can be made to mimic that humanity to the point that it's practically indistinguishable (and far superior), it won't be human.

Every innovator, designer, inventor and thinker of human history has defined that history, and by consequence they have defined the human race. And vice versa; the human race, as it grows and evolves, represents itself via the achievements of the human beings it produces.

But what defines the AI? (Or ASI, kinda, in this case.) It's made, and then it improves itself, eventually to the point that it's independent of humans. Initially, it will be representative of humans because humans directly made it, but the moment it starts to surpass what we can do and have done, it becomes a separate entity, and it loses representative status. Once it sets off on a runaway quest of self-improvement, we as a race no longer influence what it does. Will any prodigies and geniuses that would ordinarily revolutionize their fields be relevant or significant to human advancement if a machine is already doing what they do, and better? Not really.

Imagine it like a high jump. The height of the bar is a metaphor for human advancement, and the jumper is the culminating or 'representing' entity that defines our species.

The jumper can clear a certain height through his own abilities, and with training, he can jump higher and higher (metaphorical for humanity's advancement across time).

But if the jumper decides to instead build a powered exoskeleton, which adapts with every jump in order to jump higher and higher...

The exoskeleton, built by the jumper, initially represents his skills, since it will be more or less at the same level of capabilities as him. The suit is simply making things easier, more efficient; faster. Eventually, however, the suit, as it adapts and improves, will go beyond what the jumper can do, if only because it's improving faster.

And that's the point where the suit, even though it was built by the jumper, no longer represents the jumper.

I hope that made at least some sense :P

:D

I don't think there's a problem with AI that goes beyond our abilities. As long as we raise them right, that's fine. They don't have to represent us. Rather, what's more important is whether or not they provide benefits to the species. I'd like humanity to be able to jump higher, even if jumping higher requires something that has evolved beyond us. Sure, we can do great things. But there are limits to what we can do, if only due to the limitations of our minds. As such, we would need to go beyond our current state, perhaps to a state that no longer represents us now, to do even more.


2 minutes ago, Bill Phil said:

I don't think there's a problem with AI that goes beyond our abilities. As long as we raise them right, that's fine. They don't have to represent us. Rather, what's more important is whether or not they provide benefits to the species. I'd like humanity to be able to jump higher, even if jumping higher requires something that has evolved beyond us. Sure, we can do great things. But there are limits to what we can do, if only due to the limitations of our minds. As such, we would need to go beyond our current state, perhaps to a state that no longer represents us now, to do even more.

This is a fair point, as there will eventually be a point where we can no longer 'jump higher' without altering something fundamental. In one case, that could come in the form of AI. IMO, the issue with using only AI to help us is that it would all be externalized - and this goes back to the problem where people no longer bother to learn much of the information and knowledge that makes their lives, as they know them, possible.

There's this great trilogy called Insignia (the other two books are named Vortex and Catalyst) by S.J. Kincaid that describes a potential future where brain-implanted computers propel humans to new heights of prosperity.

It paints a picture that I agree with more, as in that scenario, we are in some ways becoming the AI, as opposed to having it as some separate thing.

Perhaps the best thing to do would be to enhance humans first, via a combination of brain-implanted computers and genetic engineering. Part of what limits our progress today is the capacity of the human mind, as you mentioned. If we could somehow enhance ourselves in a way that makes our memory perfect, sharpens our understanding of concepts, and allows us to think quicker - so that we become a blend of human and machine - then that might be better.

We could become cyborgs where the 'intelligent' part of our implanted computers comes from our own experience and personality. The machinery would expand the possibilities for us, but we would be in control, and progress would still come from us, directly.

Internalization in such a manner - where the 'intelligence' of the machinery is our intelligence - might be more advantageous, as it both puts everyone on a level playing field and promotes the assimilation of knowledge.

I would personally love to memorize things instantly.

Very useful with IB biology :P (Current tally is at around 500 words and concepts that I need to know :D)


I think, before we can really bring AI to bear, we need to figure out what we want. In limited problems, this is relatively easy: optimize this casting form for greater strength and lesser mass. But before AIs can be high-functioning members of society, we need to understand what makes a high-functioning member of society. Obviously one should generally abide by the laws, but that leaves a lot of autonomy, and sometimes it could be argued that breaking the law is preferable. In a general sense, people want to be happy, but they want to be happy in a particular way—electrostimulation of the pleasure center seems wrong, even though it would make you feel happy. It's a difficult problem, and one that philosophers have studied and argued over for millennia, but I think it'll become relevant sooner than we'd like.
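To illustrate how tractable the "limited problem" case already is, here's a minimal sketch - the beam model and all the numbers are illustrative assumptions, not a real design code - of an optimizer minimizing mass under a strength constraint:

```python
# A toy version of the "limited problem" above: the goal is fully
# specified (minimize mass, don't exceed a stress limit), so a standard
# optimizer handles it with no notion of "values" required.
from scipy.optimize import minimize

M = 10_000.0         # applied bending moment, N*m (assumed)
SIGMA_MAX = 200e6    # allowable stress, Pa (assumed)

def area(x):
    b, h = x
    return b * h     # cross-section area, a proxy for mass per unit length

def stress_margin(x):
    b, h = x         # bending stress in a rectangular section: 6M / (b h^2)
    return SIGMA_MAX - 6 * M / (b * h * h)   # must stay >= 0

res = minimize(
    area,
    x0=[0.1, 0.1],                                   # initial guess, metres
    method="SLSQP",
    bounds=[(0.01, 0.5), (0.01, 0.5)],
    constraints=[{"type": "ineq", "fun": stress_margin}],
)
print("optimal width, height (m):", res.x.round(4))
```

The objective and the constraint are written down completely before the optimizer runs - which is exactly what we can't yet do for "be a high-functioning member of society."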

