
AI Versus Human Reasoning Capacity In Space Combat



 

I learned a lesson via chess about AI.

 

Is AI kicking your butt? 

 

It was to me. But I finally beat it. How? Human capacity to reason.

 

A computer or AI has a bunch of good moves baked in, but it cannot see past tactics into grand strategy. It cannot reason.

 

I knew what I was up against. I had been losing for two days straight.

Fact: the computer had better moves than I did. I could not even win after reversing my moves and replaying them!

My Reasoning: If I cannot outsmart it, I will overwhelm it.

My Strategy: I used a knight to kill two pawns before it was killed, without taking any further losses. After that I outnumbered the computer even though it still had high-value pieces. I simply started attacking every high-value piece I saw, since attrition only worked in my favor as I had numbers. In the end I had to reverse a bad move twice, but I actually won for a change.

 

Application To AI versus Human space combat:

If forces are roughly equal, humans should win almost all the time, since humans can think beyond what they already know... and AI most certainly cannot. It lacks common sense.

 

For example... if I were to link ten chessboards together, with ten white chess-piece armies run by humans VERSUS ten black chess armies run by supercomputers, and the chess pieces could move freely between boards... I would bet the humans would win, provided they were reasonably good though not expert chess players.

A computer cannot see past good moves and will avoid bad ones, which can be self-defeating once it begins suffering from attrition, good opponent offense, and overwhelming numbers.

The real issue with AI versus human space combat is that the forces will NEVER be equal unless you make it so.

I know one scifi author who does not use AI combat, because:

1: Story could not happen otherwise.

2: Higher races somewhat involved with the war like to hack and block signals to drone or AI spacecraft, so both lower races duking it out are paranoid about using drones lest they lose control of them.

3: Inertial dampeners mean manned vessels can keep up with drone craft.

Honestly I do not like any of the reasons except number 1, since there are various good workarounds for reasons 2-3.

So my conclusion is this: realistically, the best chance any human space force has against an AI space force is if they have:

1. Overwhelming numbers, so they can use attrition to beat the "so smart it cannot make bad moves" AI.

2. Unfamiliar situations to throw at the AI, which it has no answer for other than doing what it was programmed to do, which will mislead it and lead to its defeat.

 

If humans cannot do either one they will lose... because while it is possible to outthink a computer, I dare say it's easier to just overwhelm it and force it to attempt what it cannot do... reason.

 

Edited by Spacescifi

Under what scenario is this space war taking place? Humans have multiple colonies across different stars, or humanity has colonized the solar system and is getting ready to go interstellar for the first time?

Are extraterrestrials using AI or has AI taken over an extraterrestrial civilization and is expanding through the universe?

In any case, I don't think it really matters.

I think extraterrestrials who achieve the means to live in space have probably evolved to not even see conflict as something logical (only in self-defence).

If not, I don't think they would bother with military tactics and what not when trying to destroy humanity. They would treat humans as sophisticated animals.

For example, if a hungry human were to see a chimpanzee using a tool, he probably would not try to get in his head (read his "tactics") and develop some similar counter. He would use whatever means at his disposal to capture and kill the chimpanzee, including completely outsmarting it with a trap, for example.

Likewise, extraterrestrials should not be expected to obey conventional human thinking (diplomacy, etc.) or military tactics. Considering they probably have an enormous amount of resources as an interstellar civilization, they may use brute force to exterminate Homo sapiens. Trying to outsmart them (humans are very sophisticated, after all) would only lead to needless complications. To go back to my example, hand-to-hand combat with a chimpanzee just sounds silly; lay a trap. In the human case, trying to use military tactics would be cumbersome given the creativity of humans, so just use brute force they cannot match (you should not be attacking humanity, or any enemy, without resource parity or superiority). Especially if you intend to exterminate them (or maintain only a small population for scientific purposes) anyway.

Same goes for AI. Why bother with tactics when you can throw stuff at them over and over until they die? Of course, the AI or extraterrestrials would still take care to target obvious weak points, like human industry and agriculture/food industry.

Obviously, chess AI is inferior to humans, as is a lot of AI intended for complex physical tasks at this time. But if humans were to go to war against AI in space one day, the AI would probably be far more advanced, not based on a chess AI from a couple centuries ago. (I'm just saying it is hard to compare using lowly chess AI as an example.)


Standard disclaimer-

I am sharing my personal opinion as part of the discussion. Not trying to "attack" yours and change your mind to match my opinion.

 


37 minutes ago, SunlitZelkova said:

Under what scenario is this space war taking place? [...]

 

 

If AI can actually reason, then it is at the lowest animal intelligence, and if higher still... us. Only without the same fragility.

I was originally speaking of the downright dumb but great-at-performance AI we know all too well.

Can't think outside the box. Does not even know nor can it conceptualize an 'outside' of the box.

 

I was simply considering easier ways of defeating AI forces... which would be run by enemy forces trying to reduce their casualties of war.

Edited by Spacescifi

You can beat a computer at chess? And it took you two days of practice to achieve this level of skill? In that case, whatever you may be doing for a living, you're in the wrong business, since you're apparently the best and most talented chess player ever, by far.

The very best grandmasters get crushed by engines, and that has been the case for decades. 

War by attrition only works if you have more resources, and chess engines love exchanging pieces and simplifying the situation when they are up a piece.

As for AI struggling in unfamiliar situations, there are several examples of chess engines that were never programmed with knowledge of how to play chess well; they were only told the rules. Those chess engines figured it out themselves and are now capable of playing exceptionally well.
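The "only told the rules" idea can be caricatured in miniature with tabular Q-learning via self-play on the toy game of Nim. This is a sketch of the concept only, not of any real engine (AlphaZero-scale systems use neural networks and tree search); every name and number here is made up for illustration:

```python
import random

# Nim: a heap of stones, each turn take 1 or 2, whoever takes the last stone wins.
# The agent is given only the rules; values are learned purely from self-play.
HEAP = 7
MOVES = (1, 2)

Q = {}  # Q[(heap, move)] = learned value of 'move' for the player to act

def legal(heap):
    return [m for m in MOVES if m <= heap]

def best_value(heap):
    return max(Q.get((heap, m), 0.0) for m in legal(heap))

random.seed(0)
for episode in range(5000):
    heap = HEAP
    while heap > 0:
        move = random.choice(legal(heap))  # explore at random
        nxt = heap - move
        if nxt == 0:
            Q[(heap, move)] = 1.0          # taking the last stone wins
        else:
            # negamax-style update: my value is the negation of the
            # opponent's best value in the resulting position
            Q[(heap, move)] = -best_value(nxt)
        heap = nxt

def policy(heap):
    return max(legal(heap), key=lambda m: Q.get((heap, m), 0.0))

print(policy(5), policy(4))  # → 2 1 (optimal Nim play leaves a multiple of 3)
```

Nobody hard-coded "leave a multiple of 3"; the rule-of-thumb falls out of nothing but the win condition and repeated play, which is the point being made about rules-only engines.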


16 minutes ago, Shpaget said:

You can beat a computer at chess? [...]

 

You must be joking with that sarcasm. I have played since middle school, on occasion.

By the way, the moment I won was bittersweet... since the computer then automatically upped the difficulty for the next round without even asking me, and told me as much.

This all happened last year, like 8 months ago, but the lesson stuck. I remember it every time I feel an urge to play chess, and it tends to hold me back... from playing a computer.

I stopped playing after that. I was obsessed with beating the computer at least once, but it is not worth my time beating it all the time when I have better things that give more tangible returns than a mere ego rush and vengeance gratification.

Edited by Spacescifi

Absolutely no sarcasm. 

9 minutes ago, Spacescifi said:

since the computer then automatically upped the difficulty

Well, that explains it, but severely handicapping your opponent and then developing tactics and drawing conclusions based on their poor performance will lead to disappointing results once you face a full-strength opponent in a fair fight.

The fact of the matter is that there are some things computers are just better at than humans. Playing chess is certainly one of them.

They have become better than humans at games even more complex than chess, such as Go, where evaluating which player is in the better position is less tangible than counting pieces and much more positional.


13 minutes ago, Shpaget said:

Absolutely no sarcasm. [...]

 

I agree computers are better at raw calculations.

But lacking the capacity to reason on something that does not fit in their programming is a fatal flaw that can be exploited.

A gap in the armour, of sorts. And arguably one of the best chances of winning.

That is why I prefer to just weaken the computer outright.

Come to think of it, almost none of the games I won against the computer back in the day were games where I just stomped it.

It was almost always attrition, not some grand takedown that happens suddenly. That's the kind of thing the computer usually pulls, or that I would do to rookies (which, in retrospect, was neither nice nor encouraged them to learn).

Edited by Spacescifi

9 hours ago, Shpaget said:

You can beat a computer at chess? [...]

I think you are talking about two different things. He is talking about chess games that use the local computer, or perhaps some cloud server, to calculate the moves. You are talking about AI supercomputers that can beat grandmasters but are a million times more powerful and way more expensive.


Depends on what kind of AI you mean. If it's just an "AI", a simple program with some rules baked in, then it should be defeatable. But if it's doing analysis 30 moves out, you aren't gonna beat it. Unless it's a lower level one, in which case it intentionally makes mistakes no sane chess player would ever consider.

Of course, this assumes we don't have our own AIs to duel the enemy ones...


10 hours ago, Shpaget said:

You can beat a computer at chess? [...]

This 

If you look at a simple point interaction - say an AI controlled fighter jet vs human in fighter jet where both have been trained?  AI is looking to win that. 

I think the real question is what happens over time in a conflict of human vs AI where neither can outright defeat the other. Does Machine learning and specificity beat human learning and adaptability? 

I'm going to say humans win - because otherwise your story is just depressing 


1 hour ago, magnemoe said:

I think you are talking about two different things. He is talking about chess games that use the local computer, or perhaps some cloud server, to calculate the moves. You are talking about AI supercomputers that can beat grandmasters but are a million times more powerful and way more expensive.

No. He is talking about a chess game on a computer that is purposely playing below its full capability, which then increases its skill a notch and promptly beats him again.

From quick googling, Stockfish (with only 4 threads; it can scale to 512) is expected to beat any human player. It should be in these packages: https://www.chessclub.com/downloads

This thread also needs this: https://xkcd.com/1002/

The human's only real advantage is penetrating the fog of war. Since this is closely related to modern (and highly effective) machine learning research, expect all such advantages to disappear in the near future. Historically, humans have excelled at strategy while computers win at tactics. But this "overall strategy" holds within formal parameters (like a chess game), where there is a limited state space to deal with, while a real war would theoretically involve everything in the world (or solar system, for your example). Reducing that to a workable subset has been hard for a computer, but recent improvements are startling. Of course, the real elephant in the room is logistics, and my guess is that computers (mostly old-fashioned software with human-generated algorithms) have effectively controlled logistics since at least the start of the 21st century.

 


50 minutes ago, kerbiloid said:

There are various ways to build a neural network.

Canadian AI plays "perfect" game of checkers | Engadget

 

To be honest, that is the real question for this thread.  What does an AI consider "acceptable casualties", or otherwise "acceptable results".  The movie quoted had an AI that was programmed to learn how to "win" Global Thermonuclear War.  Fortunately, not losing appeared to have a higher weight than "winning".


1 hour ago, wumpus said:

To be honest, that is the real question for this thread.  What does an AI consider "acceptable casualties", or otherwise "acceptable results".  The movie quoted had an AI that was programmed to learn how to "win" Global Thermonuclear War.  Fortunately, not losing appeared to have a higher weight than "winning".

Which means someone programmed that into the code.

Although - for an AI to consider 'acceptable casualties' (from the AI as a self-interested actor standpoint) - the only possible purpose in limiting casualties is it needs people (ala Trucks / Maximum Overdrive) for work, or fuel (ala Matrix).

 

Until and unless your AI is self-interested, all you have is a tool.  Not an entity.

Edited by JoeSchmuckatelli

And hence we return to the question of motivation.
AI has none. Live animals, including humans, have emotions, which are a combination of abstract mind and biochemistry affecting the weight coefficients of their decisions.

Without a kick, AI desires nothing, and is not afraid of anything.

AI can act like a superposition of the biohuman wills as a clock generator and random number generator.
Actually, any multiuser system does.

So, the AI will be evolving to match its local group of humans as closely as possible, and be a big bro for them.
On the other hand, the permanently network-assisted humans will google same google, read same wiki, watch same tutorials, and get mentally unified more and more.

The progress of bioengineering will allow everybody to look like a stylish celebrity, so personal attraction will be getting reduced.
Everyone is perfectly beautiful with the community AI assistance.

The google-in-the-mind services will let everyone defeat a computer in chess, go, etc.
Because while the computer will be thinking, the augmented human mind will do the same with the help of the local community computer.
So, the personal intellectual attraction will be getting reduced. 
Everyone is perfectly wise together with the community AI.

Every human will have in mind a whole library and a pinacotheca of all human pieces of art due to the cloud network connection.
The AI will help the mind to find, compare, and generate greatest masterpieces on the fly and immediately share them between others without physical painting.
Even without a talent.
So, the arts will mix, while the personal artistic attraction will be getting reduced.
Everyone is perfectly artistic together with the community AI.

The difference between the personalities will be shading away.

The privacy will be getting blurred, and not because of "how?" but because of "why?"
The minds of the community will be in close contact constantly, and all of them will be almost the same.
And their average and superposition, the virtual average portrait of the community will be existing in the community cloudwork.

Finally, it will be a community personality cloned in cloned perfect human bodies.
Every body will think about the community cloud mind, and about himself and the others, as "me".
The general me just told this me to go to that me and stroke the me's head.
The new human instances will be cloned as perfect bodies and trained as perfect "me" clones from the community cloud.

So, the humans will become interchangeable, while the post-human being will be a super-advanced cloudwork with an arbitrary set of standard human (and non-human) biobodies, thinking of themselves as temporary clones of the standard personality of the community.
A hivemind.

The hivemind will stay alive while at least one natural biohuman personality lives and has desires to motivate the cloudware AI personality of himself.
So, the hivemind AI will be interested in protecting its biohuman clones, just to exist and act like a personality.
On the other hand, it will spend some of them without doubt, just as they will sacrifice themselves without doubt in turn, because each is just a personality clone; the others keep living.

This will open a road to the far stars, as a hivemind personality can stay alive endlessly, while its human biobodies can keep repairing it, generation by generation, in a million-years ship flying to Andromeda.
No problem with the biobody replacement; the personality stays unchanged in the hive cloudwork, and every next generation of humans gets born to service their personality in the ship cloudwork.
The epoch of intergalactic hiveminded spaceships, looking like drifting rogue asteroids.

My estimation, about 10k years.

 

Upd.

As the personal attraction of the perfect humans will play no role for them anymore, and the bodies will be either cloned from same DNA, or born by a couple of humans with same optimized DNA, they will need no personal physical differences, except pure reproductive ones as a backup option of the cloning.
(See the Prometheus Engineers as an example. Same bald.)

As their personalities will be same, they will need no personal space, personal room, etc.
All they will need is a hive of personal sleeping cells, and a public space for occupations.
A hive, literally.

They won't be mindless idiots. 
Vice versa, everybody will be much wiser and in every sense more perfect than any of us, and will realize his own "me" very well.
Just they will have same personality and consider this as a great advantage.
All their "me" will be same "me", just processed by different brains. And the same "me" will be emulated (not live!) by AI in the cloud.

All most important things of the community (the cloud server, the incubator, etc) will be gathered in same place, in same block, the core of the community.
This will be their Matrix in both the body-cloning and personality-development senses.
It will be placed in the safest place of the community, and surrounded by the hive cell blocks.
Everyone will treat this Matrix as the core of the community and the most important thing in the world, and will defend it for the price of his life.
On the other hand, if they have lost the Matrix, they will just ask another community to send them a standard Matrix and adjust it to their community personality.

If no human of the community survives, the Matrix will just reproduce new ones or call other hiveminds for help.

So, both virtual personality emulated by AI and the human colony will co-exist in eternal symbiosis forever.

Edited by kerbiloid

19 hours ago, JoeSchmuckatelli said:

Which means someone programmed that into the code.

[...]

"Someone programmed the code" implies "somebody came up with the specs".  And while programmers tend to be careful about code (assuming they have to fix the bugs), managers/customers/whoever are notoriously careless about specs.  And for something sufficiently complex as an AI developed to match/exceed human capacity, expect it to strain the "specs" to a limit and some completely unexpected results.  It is one thing to fix bugs.  Fixing "broken in the specs" can take a rewrite.


Computers are very stupid. They make up for it by being millions of times faster than humans. This is why your PC can animate 3D objects in real time while you cannot. Already fighter aircraft with computer assistance outperform craft piloted by humans alone, and before long they'll be able to leave the human out altogether. The US navy is working on this already. Automated craft have the additional advantage of not having to worry about G loads squishing the crew. And they can be lighter because they can leave out all the life support stuff like food, beds, toilets, and so on. In short, space battles are going to be fought at such a pace that we won't know what happened until it's over, without the need for human presence. The machines will fight it out and then inform us of the results. 


"AI" is a misnomer; as Vanamonde said, computers are stupid. Like, incredibly stupid. The fact they can do anything at all is due to leveraging their strengths (speed and accuracy of computation) rather than anything like intelligence.

 

So there is a disconnect between a few concepts here that are being passed over.

1. AI is great at chess because chess is a limited problem domain. 

2. AI, as it currently stands, cannot and will not turn into something like Skynet destroying or fighting humanity, now or in the future, because it isn't anywhere near intelligent.

AI can be great at chess because there is only a finite set of moves in a given domain space, with 100% of the information available to it. Because of this it can be incredibly precise, beyond any would-be chess grandmaster. This is purely due to the nature of the problem and the speed of a computer. For comparison, the best chess players in the world hover around 2800 Elo, the "super GM tier". If you're anything above 2500, you'd probably be one of the roughly 2,000 grandmasters in chess history. By comparison, the best engines are rated around 3500, and their only real competitors are other chess engines. You'd hope for a glorious draw, never mind actually winning; that just isn't possible, even with every chess grandmaster in the history of the game helping you at the same time. The AI has no blind spots, the game has no luck, and it sees too far into the future and calculates too many positions too fast. Another statistic: even the best grandmasters can evaluate one or two positions a second; your phone can calculate several thousand per second. Sure, the GM will only focus on sensible positions, but the best trained AI will brute-force its way to victory almost every single time. Your only hope for a draw is to trick the computer's algorithm into accepting a draw as the best possible outcome. The only issue is that the best chess engines are designed around accepting draws if required, since if they beat you one time, a draw is technically a "win" in most circumstances. If it's a "deathmatch" where a draw would count as a loss, it wouldn't accept one and would play to win, or at least play until you give up. Even non-super engines will attempt these tactics, which technically can result in "infinite games" if neither party accepts a draw, which no human would actually sit through.

Odds are any chess AI you play will be set way below the best chess engines, because no one likes getting stomped 24/7 in a hopeless struggle against a machine playing perfectly. That's just no fun, unless you're learning how to beat other human players and want to prepare, as this is how modern GMs prep for their games against other human opponents.
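The "brute-force its way to victory" point can be made concrete with a toy search. This is a minimal negamax on tic-tac-toe, a hypothetical stand-in for a chess engine's search (real engines add pruning, evaluation functions, and vastly more), with a counter showing how many positions even a trivial game generates:

```python
# Exhaustive negamax on tic-tac-toe: the "AI" simply tries every continuation
# and backs up the best score. No understanding, pure calculation.
nodes = 0

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != "." and b[i] == b[j] == b[k]:
            return b[i]
    return None

def negamax(board, player):
    """Best achievable score for `player`: +1 win, 0 draw, -1 loss."""
    global nodes
    nodes += 1
    if winner(board) is not None:
        return -1  # the previous mover just completed a line: `player` lost
    if "." not in board:
        return 0   # board full, draw
    best = -1
    other = "O" if player == "X" else "X"
    for i, c in enumerate(board):
        if c == ".":
            child = board[:i] + player + board[i+1:]
            best = max(best, -negamax(child, other))
    return best

score = negamax("." * 9, "X")
print(score, nodes)  # score 0: perfect play draws; nodes = positions examined
```

Even this 3x3 game forces the search through hundreds of thousands of positions in a blink; scale the idea up to chess and the "several thousand positions per second" comparison above is the whole story.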

 

So that brings us to "what about a future AI war among the stars?"
The issue comes back to the same two concepts: AI is great at a limited problem domain, but AI is still stupid.

Not only is a galactic space war (or really any war) an open problem domain, but an AI is too dumb to ever get into that mess, or to be put into such a position, because it would be hopelessly useless, to the point you might be better off with a dog commander, or a child commander (Ender's Game, anyone?). Sure, an AI pilot might be able to plot a course for the nearest star system, taking into account any number of different calculations, but that's a closed problem domain, where AI's pattern recognition, speed, and accuracy can be used. That's vastly different from any open-ended conflict as complex as an actual war.

 

This doesn't mean it can't ever happen, as there could be a massive AI breakthrough tomorrow that changes all of this. But as current technology stands, modern AI is just fancy statistics you can get answers from, based on data-driven guesses. Go beyond what it's trained for, or put it into an open problem domain, and it instantly becomes completely useless. The same will hold in the future: until you have enough data to train the model on every possibility in the universe itself, it will remain limited to closed problem domains.

 

 

 


I just realized that the topic of this thread is somewhat depicted in the Clone Wars era of Star Wars. The Confederacy of Independent Systems uses battle droids while the Republic uses clone troopers.

The droids are no match for the clone troopers one on one and instead rely on numbers to win (often using what basically amounts to a human wave attack).

In the Clone Wars television series, this leads to frequent defeats for the CIS. But in the Extended Universe/Legends novels, CIS production is so large that the Republic often loses, and if Darth Sidious had not held back production, the CIS may very well have won (at least that is what I am told).

I think such a dynamic would happen in a "real" space war. On Earth, there are currently very real hard limits for most nations on building a large enough army to overwhelm your enemy in the case of Russia-China-US. There are even harder limits on smaller nations. These would still exist, but be much less restrictive, for an interstellar civilization wanting to attack another civilization that at least is interplanetary and has some kind of space force to defend with, as is the premise for the scenario.

I don't think militaries in a space war would bother too much with tactics and what not for the reasons I explained in my first post, so it will come down to whoever has the most resources and production capability, no matter how dumb the AI is.

Take for example some of the earth-like planets that have been found. If I recall correctly, some are all alone- the only planet around the star. They would be seriously disadvantaged against a civilization in control of a solar system like ours, even if the technology of the two is at an equal level.


21 hours ago, JoeSchmuckatelli said:

When they decide to don't and tell us what they want us to think - we'll know it's all over 

Sounds like one of the big current difficulties in machine learning.  You put in your training data, look at the output data and reinforce any positive outputs.  So as long as you get the right output for that data, the system keeps reinforcing that scheme.  For modern values of "AI", this is exactly what you should expect.
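That "reinforce any positive outputs" loop can be sketched as an epsilon-greedy bandit, a deliberately minimal caricature of the training scheme described above; the action probabilities and counts are invented for the example:

```python
import random

# Three "actions" with hidden success probabilities. The learner only sees
# rewards, and keeps reinforcing whatever has paid off so far.
random.seed(0)
true_p = [0.2, 0.8, 0.5]        # hidden from the learner
estimate = [0.0, 0.0, 0.0]      # learned average reward per action
pulls = [0, 0, 0]
EPSILON = 0.1                   # small chance of trying something new

for step in range(2000):
    if random.random() < EPSILON:
        a = random.randrange(3)                       # explore
    else:
        a = max(range(3), key=lambda i: estimate[i])  # exploit best so far
    reward = 1.0 if random.random() < true_p[a] else 0.0
    pulls[a] += 1
    estimate[a] += (reward - estimate[a]) / pulls[a]  # running average

best = max(range(3), key=lambda i: estimate[i])
print(best, [round(e, 2) for e in estimate])
```

As wumpus says, the system converges on whichever scheme produced good outputs for the data it saw; it has no notion of *why* the rewarded action worked, which is exactly the transparency problem being discussed.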


I wanted to further the idea of how dumb modern AI is with an example I ran into a while back.

You can currently build an AI model that can classify what kind of cloud you're looking at, so accurately it will stomp any meteorologist who has studied clouds their entire life. This algorithm will work in any number of conditions, with extremely accurate predictions in any number of situations. It is small and efficient enough to be integrated into any mobile phone and used by anyone across the globe, just by leveraging their pre-existing cameras.

Yet if you tried to ask it "what is a cloud made of," it wouldn't know how to answer, or even understand that you asked it a question in the first place, as it can only process cloud pictures.

 

It might be better to think of such stuff as just a fancy math equation built out of mounds of data. I don't see a math equation fighting humanity to the brink of extinction among the stars. That would make for a very interesting math class though, hahaha.
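The "fancy math equation built out of mounds of data" framing is fairly literal. Here is a minimal sketch with made-up 1-D data standing in for cloud photos: the entire trained "model" is two numbers plugged into one formula, and, like the cloud classifier above, it will confidently answer inputs far outside anything it was trained on:

```python
import math
import random

# Logistic regression: the whole "model" is two numbers, w and b, in
# p = 1 / (1 + e^-(w*x + b)). Training just nudges those numbers to fit data.
random.seed(1)
data = [(random.gauss(-2, 1), 0) for _ in range(50)] + \
       [(random.gauss(+2, 1), 1) for _ in range(50)]

w, b = 0.0, 0.0
LR = 0.1
for epoch in range(200):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w * x + b)))
        w += LR * (y - p) * x   # gradient step on the log-loss
        b += LR * (y - p)

def predict(x):
    return 1 / (1 + math.exp(-(w * x + b)))

accuracy = sum((predict(x) > 0.5) == (y == 1) for x, y in data) / len(data)
print(round(accuracy, 2), round(predict(1000.0), 2))
# It happily "classifies" x = 1000 with total confidence too: the equation
# answers every input, meaningful or not, which is the cloud-app point above.
```

The fitted equation is extremely good at the one narrow question it encodes and has no concept that any other question exists.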

 

