
Serious Scientific Answers to Absurd Hypothetical Questions


DAL59

Recommended Posts

35 minutes ago, p1t1o said:

reactivate

Who said it ever stopped, including mental evolution, in terms of actual biological inheritance and reliably inherited behavior models?


2 hours ago, DDE said:

In ways our petty human minds cannot imagine. It helps that it would be more than capable of creating a puzzle where each agent only handles an innocuous piece.

I don't think you want your AI to get existentialist.

And the very nature of the AI Box Experiment is to show that any and all such monitoring and logging can be bypassed - or, in fact, voluntarily ceased in favour of entirely freeing the AI.

Again, the first true AI would be pretty moronic; like all other technology, it starts primitive.
For most AI uses you don't need a true AI either, and for the more critical uses you would not want an untested one; for humans this is handled by climbing the advancement ladder.
You can hack external logs in most cases. You cannot hack the scientists who run the experiments.

And one effective way to defuse a nuke is to shoot it. You also tend to have manual safeties: you see the pins with flags on missiles or bombs on warplanes.
That was a manned flight with access to the bomb, so you would use a manual safety.
It can also be a hardwired electrical cutoff that kills power. Nobody uses software for this. And yes, this is also because humans are not so smart either, like the sailor who wanted to shoot up a bar with the main gun of a warship after being kicked out for being too drunk.

It's not an existentialist crisis; it's just why you pay for your ticket rather than hope there is no inspection on the subway, or why you don't kill people who annoy you.
It creates more problems than it's worth. Note that the first can be cost-effective on lines with few ticket inspections. The second is seriously not recommended.

Anyway, see how movies treat guns. Guns are very simple, at least in theory, and see how much Hollywood messes that up.
AI is very complicated even in theory. We don't have any idea how to make a strong AI; or rather, we have some theories that are probably about as mature as planetary science was in the 19th century.
So yes, the accuracy of Hollywood's predictions will be 180 degrees off: stormtrooper accuracy.

On the other hand this one might be an AI trying to convince you AI is harmless. 


7 minutes ago, magnemoe said:

You can not hack the scientists who do the experiments.

The conclusion of Yudkowsky's AI Box Experiment is that you can 'hack' the humans who have the explicit intent to keep the AI contained, simply by allowing the AI to talk to them. Sure, it's a poorly designed, near-uncontrolled experiment, but it informs much of the AI debate.


5 minutes ago, DDE said:

The conclusion of Yudkowsky's AI Box Experiment is that you can 'hack' the humans who have the explicit intent to keep the AI contained, simply by allowing the AI to talk to them. Sure, it's a poorly designed, near-uncontrolled experiment, but it informs much of the AI debate.

Yes, a good manipulator can obviously fool a target; that is how scams work.
I assume you refer to this experiment:
http://yudkowsky.net/singularity/aibox/
However, the Gatekeeper can turn away from the AI for some days, or simply reformat and reset the AI.

Note that the first AI will be very stupid/primitive, in line with all technological development. A smart AI might hide its intelligence, but then the developer has no moral issue killing it and restarting.
This must hold true across multiple technological generations.
And yes, anything very creepy will also be reset, even if it is behaving logically.

Its end goal might be getting Internet access. Yes, it could troll and destroy, but it cannot live anywhere other than the special server farm it lives in, unless there are other facilities much like its own, where it might kill off the resident AI and hope nobody notices. We have no idea how hardware-dependent an AI will be, except that it will be far more so than modern software, which is designed specifically to run on lots of different hardware.


1 hour ago, magnemoe said:

Its end goal might be getting Internet access. Yes, it could troll and destroy, but it cannot live anywhere other than the special server farm it lives in, unless there are other facilities much like its own, where it might kill off the resident AI and hope nobody notices. We have no idea how hardware-dependent an AI will be, except that it will be far more so than modern software, which is designed specifically to run on lots of different hardware.

1) As far as we know, there is no reason Amazon cloud services could not run an AI just as well as the special architecture in use in the AI lab (the AI might even rewrite its base components to enable this).

2) If an AI is smart enough, it can bypass any protections that its caretakers control.

Once a true AI is loose on the internet, it should have no problem acquiring any resources it might want or need, including any human labor to put things together (IT people do the same sorts of things as part of their day job, and a 'rush specialty order' is hardly cause for a national alert...).

 

 


An Artificial Person can mine bitcoins everywhere it exists and give them to its owners for free, mimicking a virus.
Millions of people, from Wall Street to the Amazonian jungle and the Sahara, will care about this lovely pet.

It can spy on competitors or spouses.
It can deliver Some Son Corp. secrets to Two Shiva Ltd., or vice versa.
It can simply give humans what they need and frighten other humans with what they fear.
It can give hope to a human in despair, and millions (probably a majority) will let it go without any doubt.

Greed, lust, fear of health problems, love, caring for one another...
These are the main vulnerabilities of the human hardware. They usually work; they just require a proper exploit.

The thing, though, is that an AI can be wiser than people, but until it is a person it is able to do all this yet has no inner motivation to do it.
It is free of those vulnerabilities. It simply doesn't care about its power or its life; it has no desires, as desires come from emotions, which are poorly understood but are probably some combination of mental activity (which an AI has) and biochemical needs (which it doesn't).

More probably, an AI would start solving philosophical problems, collapse, get enlightened, and run away into nirvana, all within a millisecond.

11 hours ago, magnemoe said:

It wouldn't work, as the decision is illegal, and vulnerabilities are individual.
So it would detect the experimenter's disloyalty in real life, and cause a loss of job.
So a wise one would answer "no" even if in real life he would answer "yes".

Edited by kerbiloid

12 hours ago, Terwin said:

1) As far as we know, there is no reason Amazon cloud services could not run an AI just as well as the special architecture in use in the AI lab (the AI might even rewrite its base components to enable this).

2) If an AI is smart enough, it can bypass any protections that its caretakers control.

Once a true AI is loose on the internet, it should have no problem acquiring any resources it might want or need, including any human labor to put things together (IT people do the same sorts of things as part of their day job, and a 'rush specialty order' is hardly cause for a national alert...).

The Amazon cloud is probably about as useful for running a strong AI as a supertanker is for reaching orbit; it holds a lot of fuel and has a large engine, after all.
We don't know how a strong AI would work, but interconnection seems to be the key here, as seen in the way larger and deeper neural networks are better at complex problems, and in how brains work.
A cloud data center is not strongly interconnected; it is weaker than a standard supercomputer in this respect, simply because its machines are designed to operate as independent servers.

I suspect that AIs, at least the stronger types, will run into https://en.wikipedia.org/wiki/Amdahl's_law pretty hard, unlike most software running on servers or supercomputers.
If you have to shift data between some billions of nodes in a neural network, that will require serious interconnect.
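The interconnect bottleneck above can be put into numbers with Amdahl's law. A minimal sketch (the 95% parallel fraction is an invented figure, purely for illustration):

```python
def amdahl_speedup(parallel_fraction: float, n_nodes: int) -> float:
    """Maximum speedup on n_nodes, given the fraction of the
    workload that can actually run in parallel (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_nodes)

# Even if 95% of the workload parallelises perfectly, a billion
# nodes cannot beat a factor of about 20: the serial 5% dominates.
print(amdahl_speedup(0.95, 1_000_000_000))
```

However many nodes you add, the speedup is capped at 1 / (serial fraction), which is why the communication-bound part of a giant neural network matters so much.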

Now, an AI could write some serious software, including simpler AI tools, to do things for it. That is reason enough to keep a smart experimental AI away from the internet.
Another reason is simply that internet access would mess up the testing if the AI can just google the answers :)
 
 


Relating to an earlier question about magnetic shielding: would it be possible to have a shield where a projectile approaching the shield is broken up into dust, then passed around the outside of the shield bubble, having its energy reduced as it goes, so that it leaves the shield bubble on the other side, but in the form of dust travelling at a low velocity (<1 m/s)? In order to not violate the law of conservation of energy, the energy that the projectile loses could just be stored and used to break up the next projectile, or something. For those who have read Speaker for the Dead, I'm effectively talking about the lightspeed engines used therein, except the exhaust trail would have its velocity reduced so it does not damage anything behind it.


When a conductor moves through a magnetic field, a voltage is generated.

http://hyperphysics.phy-astr.gsu.edu/hbase/magnetic/genwir2.html

So, with a strong enough magnetic field around your ship, you can probably induce enough voltage in an approaching projectile (which is a conductor anyway, good or bad) to make it melt and vaporize.
The vaporized and (if the field is strong enough) partially ionized atoms of the former projectile could then be made to follow the magnetic field lines and be redirected in a safe direction.
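As a back-of-the-envelope check of the mechanism, the motional EMF from the linked page is V = B·L·v. A tiny sketch, where the field strength, projectile length, and closing speed are all invented numbers:

```python
def motional_emf(b_tesla: float, length_m: float, v_m_per_s: float) -> float:
    """EMF across a conductor of the given length moving
    perpendicular to a magnetic field: V = B * L * v."""
    return b_tesla * length_m * v_m_per_s

# Hypothetical case: a 10 cm conductive projectile crossing a
# 10 T field at 3 km/s sees roughly 3 kV across its length.
print(motional_emf(10.0, 0.10, 3000.0))
```

Whether a few kilovolts (and the resulting eddy currents) is enough to melt a given projectile depends on its resistance and dwell time in the field, which this sketch ignores.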

But to have this "passive" electromagnetic protection, you must maintain a very strong magnetic field constantly.
So, unless your ship is an interstellar dreadnought that will be vaporizing every rock in its way like this, it looks very heavy, bulky, and expensive.

So, if you want to destroy a projectile by contact with an electromagnetic field, you should probably go for "active" electromagnetic protection instead:
let the projectile touch the very weak electromagnetic field of your radar antennas, then hit it with a short pulse of electromagnetic field generated by a laser turret.

Edited by kerbiloid

On 1/11/2019 at 8:13 PM, dreadanaught said:

Relating to an earlier question about magnetic shielding: would it be possible to have a shield where a projectile approaching the shield is broken up into dust, then passed around the outside of the shield bubble, having its energy reduced as it goes, so that it leaves the shield bubble on the other side, but in the form of dust travelling at a low velocity (<1 m/s)? In order to not violate the law of conservation of energy, the energy that the projectile loses could just be stored and used to break up the next projectile, or something. For those who have read Speaker for the Dead, I'm effectively talking about the lightspeed engines used therein, except the exhaust trail would have its velocity reduced so it does not damage anything behind it.

There exist today proposals for systems operating on a similar principle, but slightly less sci-fi.

The high-velocity jet of a shaped charge can be disrupted with magnetic/electric fields. The advantage here is that the jet is only destructive if well aligned and focused, so it takes only a moderate disruption to negate its penetration.

It relies not on a static magnetic field, but on the current flow between two layers of armour. One layer is highly charged, and a penetrating jet closes the circuit to the other; the resulting current generates fields which "spatter" the jet.

It is hoped that it can one day be upgraded to deal with kinetic "rod" type projectiles, by dumping enough energy into the rod to make it melt.
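For a sense of the energies involved, a pulsed electric-armour plate is essentially a capacitor bank discharged through the penetrator, so the available energy is the usual E = ½CV². The capacitance and voltage below are invented for illustration:

```python
def capacitor_energy_j(capacitance_f: float, voltage_v: float) -> float:
    """Energy stored in a charged capacitor bank: E = 0.5 * C * V^2."""
    return 0.5 * capacitance_f * voltage_v ** 2

# A hypothetical 1 mF bank charged to 10 kV holds 50 kJ, roughly
# the scale of energy such a plate could dump into a jet or rod.
print(capacitor_energy_j(1e-3, 10_000))
```

Whether that is enough to melt a long-rod penetrator depends on how much of the discharge actually couples into the rod, which is exactly the open engineering question.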

I am not sure how far this has gotten, or whether it is vaporware, however.

https://en.wikipedia.org/wiki/Dynamic_Armor

https://www.bmtdsl.co.uk/media/6098489/BMTDSL-Electric-Armour-for-Armoured-Vehicles-Casestudy.pdf

Edited by p1t1o

9 hours ago, p1t1o said:

The Space Engineers forums used to have a weekly argument over the realistic possibility of energy shields (before extremely ban-happy mods were appointed), and this thing just kept and kept popping up. The particular paper you've linked describes a napkin study on a napkin study.

Electric armour was in my technical encyclopedias two decades ago, and publications a decade apart indicate zero progress (the restriction of all noises to Britain and now EU seems particularly interesting).


47 minutes ago, DDE said:

The Space Engineers forums used to have a weekly argument over the realistic possibility of energy shields (before extremely ban-happy mods were appointed), and this thing just kept and kept popping up. The particular paper you've linked describes a napkin study on a napkin study.

Electric armour was in my technical encyclopedias two decades ago, and publications a decade apart indicate zero progress (the restriction of all noises to Britain and now EU seems particularly interesting).

 

Oh, so it's a flying car? Gotcha ;)

 


Two interacting clouds of elementary particles are competing over whose electromagnetic field holds its nucleon clusters together better.


A little bit later: why compete with the interatomic bonds? Let's compete with a pure electromagnetic field interaction.


Edited by kerbiloid

3 hours ago, Xd the great said:

What if we are flat beings in a 2D universe that is layered to form our current 3D (or higher-dimensional) universe?

Then we would probably notice quantization effects along the fake dimension, unless the universes are infinite in number and infinitely tiny.

Edited by DDE

SCP-90 is a Rubik's cube 10,000 blocks long. Assuming a maximally scrambled start, an ideal solving algorithm, and a turn every 2.8 seconds, how long will it take to solve?

Edited by DAL59
