
Should science be censored?


Pawelk198604


The fundamental problem with current AI:

It isn't "smart" in the way humans are "smart".

Or at least not yet. It shows that it only responds to what it was given.

I call BS on the claims from the AI, and I call BS on the outcry over it.

Edited by YNM

4 hours ago, YNM said:

The fundamental problem with current AI:

It isn't "smart" in the way humans are "smart".

Or at least not yet. It shows that it only responds to what it was given.

I call BS on the claims from the AI, and I call BS on the outcry over it.

This is not so much a problem with AI as with the way humans think. And yes, it is a problem with the way people think. I can remember going into group meetings and a director asking,

'Why are they asking for statistics? They never asked before; all we did was give them numbers, and it was good enough.' And by and large what happens next is that researchers do the minimal statistical analysis that gets them published (preferably in a journal edited by a close colleague, which can get really bad if that colleague happens to be a member of the National Academy of Sciences and recommends your paper for publication in PNAS).
So the basic problem is that these things are what you'd call multi-layer problems. For example, suppose you're asked to determine the accuracy of a string of numbers, but that string may also contain characters or garbage.
The original algorithm only looks for numbers, in the same way an author only looks at one source of variation. But if there are several layers of variation, the authors don't look beyond the first. A sketch of that point follows.
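A minimal sketch of that two-layer point (my own illustration, not anything from the thread): a parser that models only the "numbers" layer, versus one that also models the garbage layer.

```python
def naive_parse(tokens):
    # Layer 1 only: assumes every token is a number.
    # Blows up the moment the garbage layer shows up.
    return [float(t) for t in tokens]

def layered_parse(tokens):
    # Layer 2 as well: model the garbage instead of pretending it can't occur.
    parsed = []
    for t in tokens:
        try:
            parsed.append(float(t))
        except ValueError:
            parsed.append(None)  # flag garbage explicitly; don't crash or drop it
    return parsed

print(layered_parse(["1.5", "xyz", "42"]))  # [1.5, None, 42.0]
```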

Let's take a few real-life problems.

Suppose you have dinosaur fossil X found at three points in time. Then you claim the dinosaur appeared at T1 and went extinct at T3. This is actually wrong: the dinosaur appeared before T1 and went extinct after T3.
That's the second layer. The first layer of the problem is that the dating has variance too, so that needs to be multiplied in as well, which means you now have two inequalities, each represented by a cumulative probability function (one rising and the other falling) at either end of the range. Still not done with the problem. What if, a long period afterward, you find a new species that most likely descended from the one you think went extinct at T3? There is some probability that this is a split, a continuous evolution, or a result of parallel evolution. This specific issue will likely never be addressed formally. Even when it is known, it will likely go unstated. A rough sketch of the range-underestimate layer follows.
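As a rough sketch of the "observed range underestimates the true range" layer (my own illustration; the poster doesn't cite a method), here is the classic Strauss-Sadler style confidence extension, which assumes finds are uniformly distributed through the taxon's true range:

```python
def range_extension(first_find, last_find, n_finds, confidence=0.95):
    """One-sided confidence bound on the true extinction time, given n_finds
    fossil horizons assumed uniformly distributed over the true range
    (Strauss & Sadler, 1989). Times in any consistent unit (e.g. Ma)."""
    observed_range = last_find - first_find
    # Extension factor: alpha = (1 - C)^(-1/(n-1)) - 1
    alpha = (1.0 - confidence) ** (-1.0 / (n_finds - 1)) - 1.0
    return last_find + alpha * observed_range

# Three finds spanning 10 time units: the 95% bound on the true extinction
# lies far beyond the youngest fossil -- the observed range is only a floor.
print(range_extension(first_find=0.0, last_find=10.0, n_finds=3))  # ~44.7
```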

Another example: suppose you discover a gene that appears to perform the same unique function in two species. The gene also has some similar sites and some different sites (one such example was published by a very famous author, and it was later found that the argument was biased; more sequence data proved the conclusions were in fact wrong). So the probability then sorts many different ways:
- What is the probability that the same two SNPs appeared twice, independently?
- What is the probability of the gene passing from population X to Y, or vice versa?
- What is the probability of the gene passing from population Z, independently, into both?
- What is the probability that independent variants existed in both populations (incomplete constriction at the locus) and that the similarities were simply due to 'gene conversion'? (For many human evolution studies, abortive recombination was not even considered as a possibility until after 2005.)

In these cases authors will employ clocking methodologies (often biased) to rule in favor of one conclusion over the other, but often without presenting the probabilistic comparison. IOW the authors, satisfied with their dating (or other guessing technique) and their sequence comparisons, will draw a conclusion without finishing a proper statistical analysis. (Prior to 2010 this used to happen all the time.) The kind of comparison that gets skipped is sketched below.
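A toy sketch of the skipped comparison (my own; every number here is invented purely for illustration): instead of declaring only the favored hypothesis, normalize likelihoods across all of them so the alternatives stay visible.

```python
# Invented likelihoods for the four hypotheses listed above.
hypotheses = {
    "same SNPs arose twice independently":         0.02,
    "gene flow X -> Y (or Y -> X)":                0.30,
    "gene flow from Z into both":                  0.15,
    "shared ancestral variants + gene conversion": 0.10,
}

total = sum(hypotheses.values())
for name, likelihood in hypotheses.items():
    # Under equal priors, the posterior is just the normalized likelihood.
    print(f"{name}: {likelihood / total:.2f}")
```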

If science or AI were a horse, we would call this putting the blinders on: restricting interpretation to a narrow field of results and ignoring sources of variation that one does not expect.

 

 


A few things. First, a bit of nit-picking about this:

The AI algorithm used to guess whether or not a face corresponds to a homosexual person is a technology, not strictly science. The finding "this AI can identify such" would be the science involved.

Second, there's always been at least some responsible censoring in science. Patient data is always scrubbed of names and partially randomized to make it difficult to identify the patients involved. Some scientific papers and presentations released by drug companies are scrubbed of molecular structures if they're released before the patent is approved.

Third, my best guess is their AI would fail pretty spectacularly once applied to "people who don't attend MIT".


5 hours ago, PB666 said:

So the basic problem is that these things are what you'd call multi-layer problems. For example, suppose you're asked to determine the accuracy of a string of numbers, but that string may also contain characters or garbage.

But if things are a little bit like the stuff shown in my linked video, there's no absolute way to tell what "traits of the subject" the machine has actually learned, beyond the fact that it guesses correctly. Especially since the research is basically yet another form of image recognition, and in the article the "problem" is the claim that it has identified traits.

I'm not sure how they're actually doing it, but if it's not a breakthrough then I question the patterns it found and the need to rally over it.

The only thing I'm rallying over is the openness and the methods used. Seriously, if it really is something that could do more than identify abstract pixels (like approximate what that abstraction looks like), that'd be a real breakthrough.

 

5 hours ago, AntINFINAIt said:

They copied the brain of a worm and put it in a machine!

Which works, but even a random set of connections can work when it comes to AIs. It has more to do with the viability of worm neurons than with AI worms.

Edited by YNM

1 hour ago, YNM said:

But if things are a little bit like the stuff shown in my linked video, there's no absolute way to tell what "traits of the subject" the machine has actually learned, beyond the fact that it guesses correctly. Especially since the research is basically yet another form of image recognition, and in the article the "problem" is the claim that it has identified traits.

I'm not sure how they're actually doing it, but if it's not a breakthrough then I question the patterns it found and the need to rally over it.

The only thing I'm rallying over is the openness and the methods used. Seriously, if it really is something that could do more than identify abstract pixels (like approximate what that abstraction looks like), that'd be a real breakthrough.

They ask it to recognize something and then weigh its responses: recognize a digit between 0 and 9, so if it sees an O it's a zero, if it sees a Z it's a 2, if it sees an I it's a 1. It ignores all other possibilities; if you give it random stuff it still chooses, say, a 5. IOW it can only recognize things from the layer it was trained on (a sketch of this is below). When I used to do a lot of reviewing we used to see these PhD learning-algorithm papers, very popular in India and China at some point, because they could crank out papers without doing any benchwork. The algorithms' improvements were often incremental, 1/2 to 2 percent per paper (in other words, insignificant). The problem is that if they used the entire available database to train with, there was nothing left to test with, and without a test these could not be published. They would apply the algorithm to anything: nucleic acid structure, protein folding, etc.
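A minimal sketch of that closed-set behavior (my own; the weights are random stand-ins, not anyone's trained model): a 10-class softmax classifier must pick a digit even for pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 64))   # stand-in for trained weights: 10 digit classes

def classify(x):
    # Softmax always sums to 1, so some digit always "wins",
    # even when the input is nothing like a digit.
    logits = W @ x
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return int(probs.argmax()), float(probs.max())

garbage = rng.normal(size=64)   # random noise, not a digit at all
label, confidence = classify(garbage)
print(f"garbage classified as digit {label} with confidence {confidence:.2f}")
```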

 

56 minutes ago, YNM said:

Which works, but even a random set of connections can work when it comes to AIs. It has more to do with the viability of worm neurons than with AI worms.

C. elegans has a limited number of neurons. In a previous life I used to work on these as well: 959 or so cells in the prereproductive (L4) female, 302 of which are neurons.


I feel very uncomfortable with any technology or science that is created specifically to single out one particular population of people, especially if it is a population that is actively being persecuted in different parts of the world and lives could be put at risk as a result.


33 minutes ago, KG3 said:

I feel very uncomfortable with any technology or science that is created specifically to single out one particular population of people, especially if it is a population that is actively being persecuted in different parts of the world and lives could be put at risk as a result.

Technology evolves; people who try to stop that process are called Luddites, and their place in history is generally not favorable. The classification problem has been around forever; it's not the fault of technology but of humans. On many forms, and most old forms, I am classified as white. There is no part of my body that is white. The other term is Caucasian, and none of my ancestors are from the Caucasus mountains that I know of. White means not black, Asian, Latino, or Native American, even though there are blacks with lighter skin pigmentation than some whites.

What created this problem is the concentric circles of identity: as western Europeans moved out beyond their own peoples, they classified based upon others' relative distance from themselves by several metrics, not the mean differences between groups but perceptional us-them metrics (somewhat better than the way the Norse used to do it). If you measured strictly based on all genetic differences, there would be no tripartite or quadripartite races outside of Africa; all Eurasians would be classed with NE and East Africans. So the race system is flawed from its derivation. If you train a learning algorithm on skewed references, it will be skewed.

Therefore if the references are biased or skewed, it is up to the standards organizations to remove the skew. In terms of the AI 'gaydar' I have no idea what the concern is about; they are probably picking up on grooming habits, etc. I once went to see an LGBT movie, and throughout the movie the 'hip' members of the audience were laughing their .. off, and my wife and I couldn't understand at all why they were laughing. This happened repeatedly, so it appears that the director of the movie was targeting information specifically to certain members of the audience and not others. So maybe the AI is picking up on things the general population cannot see.

The broader question is why classify if it's not required. What is the interest in classification, to review FB pages to determine who is gay . . . why are we so interested in what people on FB think about us? I think this idea that everybody wants to be identified as somebody is silly. Do something, don't just be something. If FB is about being something, quit and do something.


4 minutes ago, Cassel said:

Science is censored; you can only conduct research on topics for which you will receive a government grant.

Or industry, or sponsors, including yourself.
It also depends on what you want to research; public data is pretty much free, while a Europa sample return is a bit expensive :)


11 minutes ago, Cassel said:

Science is censored; you can only conduct research on topics for which you will receive a government grant.

You can do whatever research you want. Government (well, in western countries, at least) will not stop you as long as you work within the law.


5 hours ago, KG3 said:

I feel very uncomfortable with any technology or science that is created specifically to single out one particular population of people, especially if it is a population that is actively being persecuted in different parts of the world and lives could be put at risk as a result.

They'll still be persecuted. If the tech is out there, and it is also not 100% accurate (which it apparently isn't), then maybe there will be a backlash against the repressive governments/people that care about such nonsense. People can only organize themselves around sets of ideas developed when iron tools were an emerging technology for so long; they're bound to be moved into reality at some point.


47 minutes ago, Cassel said:

Science is censored; you can only conduct research on topics for which you will receive a government grant.

Gregor Mendel, Albert Einstein. You can probably throw in Jane Goodall.

Science is about gathering information and assembling knowledge. What people choose to do with that information is another matter.

You know, about thirty years ago I looked at human evolution and saw all the what-alls that were wrong with it. In the following thirty years basically all the wrongs were corrected. This means that what I saw as weaknesses, others also saw as weaknesses. The process was not straightforward or direct; things went one way and the other, but eventually most problems were solved. And a new set of problems arises that a young group of people needs to see and solve. This is the way.

Edited by PB666

5 hours ago, Cassel said:

Science is censored; you can only conduct research on topics for which you will receive a government grant.

So long as you work within legal and ethical limits, you can do whatever research you want. Once you're a tenured professor, you can't even be evicted from your university.

You're not guaranteed to get funded, and if you use money from another grant, you may find yourself not getting any new grants from that organization, but in theory, you can study whatever you want so long as there are no ethics rules being broken.


The idea of regulating what people can do with intelligent systems (any sort of computer learning) is absurd. If they can do it now, your phone will do it before long. This genie is never going back in the bottle.


9 minutes ago, tater said:

The idea of regulating what people can do with intelligent systems (any sort of computer learning) is absurd.

Yup.

 

Concluding what the AI actually figured out, however, could be as much BS as gambling.

 

Edited by YNM

19 minutes ago, tater said:

The solution will be for people to be skeptical, critical thinkers.

Have you seen the Adobe audio editing? Soon it'll be video that isn't in the uncanny valley, lol.

 

People set themselves up; they put their lives on Twitter and FB. Just ditch the phone or go flip.

BTW, ty. I'm never going to speak to another human being again. :D

Link to comment
Share on other sites
