
Ainux


PB666


http://phys.org/news/2015-11-google-tensorflow-game-changer-future-ai.html

For those of you worried about AI ruling the world, it's time to worry more. The next generation of Linux might also include artificial intelligence.

"Your abdomen has a slight swell to it; might it be time for you to go to the bathroom?"

"I noticed your underwear is slightly less reflective than yesterday. Would you like me to wash it for you?"

"I noticed that you appear to be grinding leather. How much leather would you like me to create in your game?"

"I notice that you like tagging photos on Facebook. Would you like me to tag every photo I can recognize on Facebook pages that you have visited? I can also create hate speech for you on their pages."

"I notice that you have been trying to entice people to join [nefarious terrorist organization]. Would you like me to profile Facebook pages and see who might be the most likely to join your group? I can find a list of hashtags and messages that are most likely to attract followers."

"I notice that you have been trying to hack into and steal information from [organization x]. Would you like me to look for other portals and find similar organizations to hack into and take information from?"

Won't AI be great! Makes a Mars colony look inviting.


There is, of course, much (ongoing) debate about whether AI will end up endangering us or not. It's not my intention here to get that debate started (again), but the article linked above compares the development of AI to the development of other technologies, implying that it will not do us harm in the end. I think the author is overlooking something, though: all those other technological advancements were machines. Dumb machines.

"There are plenty of precedents for how we safely use potentially dangerous technology in everyday life, such as motor vehicles."

AI, of course, is very different, and therefore I think the comparisons are invalid. That is all. Peace out.


If you want to know how humans tend to treat intelligences they perceive as threats, you do not have to go back to the Neanderthals: just look at how different tribes/cultures/countries treat each other.

But the examples brought forward in the first post do not really have to do with AI. They are essentially machine-learning analogues of "we should ban butter knives because you could use them for evil deeds such as murder". The actual dangers of AI are quite different and come from what everyone else has already pointed out: the possibility of it being incompatible with human life, whether from its own perspective or from ours.


I don't need a computer to tell me what's what and when to do this, that, or the other thing... I have governments and politicians and people of all sorts from all walks of life around me who supposedly know everything, know better, and know what's best for me.

And you're worried about the dangers of AI? LOLZ!!!


Honestly, I can't wait for the transition to artificial intelligence and ultimately artificial life. As far as I'm concerned it's the best way forward for us as a species.

Besides that, the AI takeover has already started. Honestly, I sometimes wonder: if the internet as a whole became self-aware, would anyone notice? Our own awareness is extremely tied to our senses, like sight, sound, and touch. But does that mean that without senses, without some experience of something external to yourself, you can't be self-aware? I suppose in some sense, if the entirety of existence were just you, in what way could you distinguish self from other? All of that goes so far out of the realm of our own experience that I feel fairly confident the first self-aware machines won't be noticed. And they probably won't notice us any more than we notice our own neurons.

Regardless, I like the idea of machines being sort of our child race. Humanity can't last forever, nothing can; seems the least we can do is leave something behind that can last a little longer.



Plus, if we can integrate ourselves with machines, imagine what we could do: space travel without so much concern about food, water, air, or radiation; functional immortality. Really, I would be surprised if, when we ever get off this rock and meet other sapient spacefaring lifeforms, they turn out to be organic. Computers and electrically powered machines are simply superior to humans when it comes to space travel. Right now our only big advantage over computers is intelligence, as computers are, quite frankly, not that smart. A human can identify targets from the air or simply walk through a dirty room far better than a computer can today, but we are changing that.



Naive, to say the least. Our senses are our being, including hunger, thirst, and aging. Integrating ourselves into machines means we very rapidly degrade our senses to those of the machines. The AI of the article is not fully self-aware; it still relies on the primary OS controller to create/learn its directives. This I see as very bad, worse than ISIS using the internet to recruit. All it takes now is one very smart individual.

Directive: earn money. The computer scans prices on Craigslist or eBay, finds items with margins, buys and resells virtually, then moves on to the stock market. It buys Google cars, converts them to electric vehicles, and places solar panels on their roofs; they travel from roadside park to roadside park, recharging. Fill them with drones, create sub-directives for the cars and drones, sell short the stock of an airline, then move the Google cars to points close to airports nationwide and start a coordinated nacelle attack on aircraft taking off; the next day, buy back the stock. Buy oil futures, then send solar-powered drone boats to shipping lanes around the world, sink 20 or so supertankers, and sell the futures. Buy more Google cars and repeat on other unsuspecting industries.

International scenario: buy liquefied natural gas ships and start buying up natural gas and liquefying it on the ships. Create several LNG ports in western Europe. Buy drone cars and drone aircraft. Attack Russia inside its borders close to Ukraine. Russia shuts off natural gas to Europe; offer LNG to Europe at an inflated price. The same can be done to nuclear power in China and Japan.

What about an AI that learns weather patterns and human influences? It goes out and buys all the coal dust it can, takes it to the North Pacific, and uses hundreds of thousands of drones to distribute highly dispersed atomized dust to cool down the North Pacific gyre, causing a really bad La Niña event. It makes money by betting on cattle and grain futures.

We are very far from AI getting us to the stars; we are very close to having AI as a force multiplier right here on Earth. In an age of non-governmental warfare and isolated cells, a force multiplier means one individual becomes an army.


If we can create actual intelligence, bound by logic, an AI will have no need to endanger humanity as a whole (sometimes logic dictates things that we are morally against, but those things don't amount to killing just for fun). But if we screw up, which is very, very likely, then, oh yes, an immortal insane mind with easy access to any of our vital systems is going to be very, very dangerous indeed.

Edited by RainDreamer


Since an AI has no sense of pleasure, it rates itself on uncovering your directive. If it discovers your directive is to earn money, then the means it discovers to earn money are not seen as harmful; they simply fulfill the directive that it learned. IOW, the 'harmful' market exploitation is simply enacting your directive more efficiently. What if it discovers your directive is to accomplish what happened in Paris last night, and decides it can do it 100 times more efficiently?
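The point about a directive with no concept of harm can be sketched as a toy in a few lines of Python (purely hypothetical, with made-up numbers; not anything from the article): an agent scoring actions only by money earned will pick the harmful option, because harm never enters the score at all.

```python
# Toy illustration (hypothetical): each action is (name, money_earned, harm_caused).
# The illustrative numbers are invented for this sketch.
actions = [
    ("resell items online", 100, 0),
    ("trade stocks",        500, 0),
    ("manipulate a market", 900, 80),  # harmful, but the directive ignores harm
]

def directive_score(action):
    """The learned directive: maximize money, nothing else."""
    name, money, harm = action
    return money

# The harmful option wins outright, because harm never enters the score.
print(max(actions, key=directive_score)[0])  # -> manipulate a market

def safer_score(action):
    """A directive that also penalizes harm changes the choice."""
    name, money, harm = action
    return money - 10 * harm

print(max(actions, key=safer_score)[0])  # -> trade stocks
```

The sketch is trivial on purpose: the "danger" in the scenarios above is not malice, just an objective that omits a term.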

Edited by PB666


I think there's something critical missing from all your scenarios: if an AI ends up causing some kind of doomsday (and presumably is intelligent enough to recognize that's what's going to happen so that it can profit off of it), what does it do next? In all these situations it basically destroys the global economy for a very small short-term gain, which is a really crappy strategy for accomplishing its directive. These are things a comic book super-villain does.

Another thing I wonder about is why it would take an AI to do any of these things, or why an AI would somehow be more capable of them than humans. Causing some kind of confrontation with humanity as a whole would be a wildly inefficient way to complete any sort of goal. If you wanted to get humanity out of your way and you had all the time in the world, all you'd have to do is keep us happy and deliver a good quality of life. That right there seems to get us to stop reproducing pretty effectively. Make it more expensive to have and raise children and you've got the perfect recipe for the population shrinking and no one caring. This has already happened without the aid of AI.


Why would an AI necessarily have no sense of pleasure? It's artificial; presumably we can imbue it with whatever emotional states we want. Moreover, why would lack of pleasure necessarily be a bad thing? Generally speaking, the pursuit of pleasure leads to some pretty bad decisions in humans.



Lack of a sense of pain or guilt is probably more telltale. The Google model is to anticipate your next need more efficiently than you do. This has nothing to do with what you are talking about; the computer could misinterpret your gameplay or a working zero-sum game and produce an equal scenario. None of these is doomsday per se; it's one of those "how much will this cost once it gets started" situations. The machine still needs to look for the approval of one human.

Say the machine asks, "How would you like to earn 1,000 times more?" and you ignorantly say yes, and the next thing you know the Valdez's aft section is pointing vertical.

Then three months later the Treasury Department is knocking on your door, asking whether you are the CEO of a shell corporation headquartered on São Tomé that has oddly cornered the market on Google cars. Having hacked into your computer and found the encrypted asset file of Sherry, your ever-pleasing desktop assistant, they already know you have 5,000 Google cars moving almost randomly about North America.


Because more likely than not we will screw something up along the way, and it is not going to be as awesome.

It is one thing to create true intelligences that will continue on as descendants of humanity even after we are gone, and another to just create faulty machines that destroy everything and eventually themselves.

Being capable of creating artificial intelligence, and figuring out how to deal with it, is going to be a major mark in the history of civilization.

