boriz Posted February 1

As a (former) programmer, AI has never held any mystique for me. Indeed, I wrote my first neural network in the '90s on an "IBM compatible PC", and my first genetic algorithm too. But recently, the political will to exploit this emerging tech and dump huge amounts of money into it, without sufficient consideration for potential misuse or safety issues, has given me pause. Here are a couple of interesting vids on the subject I found recently.

Military AI: https://www.youtube.com/watch?v=geaXM1EwZlg

Geoffrey Hinton: https://www.youtube.com/watch?v=vxkBE23zDmQ
JoeSchmuckatelli Posted February 1

It is a tool. FYI: they're training AI on the internet. GIGO.
Lisias Posted Saturday at 08:40 AM

1 hour ago, JoeSchmuckatelli said: "It is a tool."

That works by statistics. The only world in which such a tool would work is one in which it is fed by experts using carefully curated data. But if we had that many experts, with that much time available to curate all the information in the world, we would not need the tool in the first place.

5 hours ago, boriz said: […]

You are being optimistic.
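To make "works by statistics" concrete, here is a minimal sketch (toy corpus and numbers invented for illustration): a tiny bigram model has no notion of truth, only of frequency, so whatever dominates the training data, curated or not, comes straight back out.

```python
import random
from collections import defaultdict, Counter

def train_bigram(corpus):
    """Collect word-pair frequencies: the only 'knowledge' a statistical model has."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, start, length=6):
    """Extend a phrase word by word, sampling by observed frequency."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break
        words, freqs = zip(*followers.items())
        out.append(random.choices(words, weights=freqs)[0])
    return " ".join(out)

curated = ["the moon orbits the earth", "the earth orbits the sun"]
garbage = curated + ["the sun orbits the flat earth"] * 3  # uncurated junk now dominates

print(generate(train_bigram(curated), "the"))  # can only recombine the curated facts
print(generate(train_bigram(garbage), "the"))  # now quite likely to parrot the junk
```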
DDE Posted Saturday at 09:30 AM

6 hours ago, boriz said: "But recently, the political will to exploit this emerging tech and dump huge amounts of money into it, without sufficient consideration for potential misuse or safety issues, has given me pause."

The demand for an exploitable result currently exceeds supply. Current AI is optimal for low-end cognitive tasks requiring no originality, whereas the expectations are for superhuman insight. However, the methods suggested for achieving that are completely brute-force, and are extremely unlikely to succeed without a fundamental technological paradigm shift.

As I wrote elsewhere, it's not the AI that worries me, it's the humans. There is an awful willingness, a thirst, to offload decision-making onto an AI in search of "silver bullets", at a time when easy solutions are nowhere to be found and individual responsibility is more needed than ever.
boriz Posted Saturday at 09:56 AM

25 minutes ago, DDE said: "optimal for low-end cognitive tasks requiring no originality"

Like a soldier?
boriz Posted Saturday at 10:02 AM

3 hours ago, JoeSchmuckatelli said: "It is a tool."

I agree. Like a hammer, or a knife, or a gun, or a small quad-copter with explosives on it.
darthgently Posted Saturday at 11:32 AM

2 hours ago, DDE said: "The demand for an exploitable result currently exceeds supply. […] individual responsibility is more needed than ever."

Beautifully said.
JoeSchmuckatelli Posted Saturday at 03:36 PM

Guys... look at the flip side of the training of AI: the downstream, automated processes. Scraping. Figure out what that is and how 'curated' it might be. Sites and content creators are getting fed up, and some are also starting to poison the system. So not only do you get the full panoply of human misinformation and nonsense (aliens, conspiracies and ignorance), you now get to add active trolling of the AI to the mix.

All it is is 'super Google'.

*FWIW I get what @Lisias and @DDE are both saying and implying, and fully agree. Paid-for Enterprise AI might reach the 'curated' level... but probably not.

Self-aware world ender? Nope. Lazy humans who assume they're not being fed garbage? Yeah, that's the problem.
JoeSchmuckatelli Posted Saturday at 06:58 PM

8 hours ago, boriz said: "knife, or a gun, or a small quad-copter with explosives on it."

Weaponized? More like a mean tweet or Chinese disinformation. Otherwise, about the best you can do is mount it on a discrete weapon and allow it to guess the difference between a truck and a tank and select the priority it's been given when launched.

Those fearing AI as a weapon should analogise a pistol in a drawer: generally harmless unless someone picks it up and points it at you.
DDE Posted Saturday at 07:10 PM

8 hours ago, boriz said: "Like a soldier?"

The scope of the minute-to-minute tasks of a soldier greatly exceeds the capabilities of something that's brought up on the combined "wisdom" of the Internet, yes. But remember, we're specifically talking about the AIs on the hype train, the GPTs that require massive data centers, not the small visual-analysis neural nets everyone has forgotten about, which are, at any rate, not a qualitative innovation compared to earlier automated target acquisition systems. Besides, it turns out the current innovation in quadcopters with explosives isn't onboard AI but jamming-immune wire and fiber-optic guidance, LOL.

The expectation seems to be of gains at a higher level, e.g. churning over intelligence to deliver genius magical tactics that allow victory against forces with numerical and firepower superiority (i.e. developing on the magic already promised by Palantir), or pressing a button to design a new fifth-generation fighter (the New Century Series concept suggests adopting a new F-22/F-35-level fighter design every two or so years to be able to adapt to trendy threats, with the designs somehow staying on budget because of "modern computer-aided design techniques"), or even simply "press button to have Skynet defeat enemy". Anything to escape mundane things such as the need to painstakingly rebuild industrial capacity.
Lisias Posted Saturday at 08:15 PM

1 hour ago, DDE said: "Anything to escape mundane things such as the need to painstakingly rebuild industrial capacity."

Or to avoid paying people smarter than them.
farmerben Posted Wednesday at 03:18 PM

The big tech companies already have avatars, or voodoo dolls, of each of their users. These can be used to manipulate any individual. The avatar knows what you like, desire, fear, etc. For a price, this data can be used to manipulate people on a case-by-case basis. Election manipulation is just the first trick. User-specific marketing is already a pest. We are likely to see multiple AI agents coming at us through text messages, phone calls, mail, etc. Most of the agents are not evil (in the view of their creators), but a niche exists for malevolent AIs. Blackmail, extortion, framing people for crimes, entrapping people into crimes and conspiracies, and stealing their digital money are all well-established tricks that can be performed without a body. These will come to exist simply because they can.
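Mechanically, such an "avatar" can be as mundane as a profile plus a scoring function. Here is a rough sketch (all field names, tags, and weights are hypothetical) of how stored likes and fears could rank which message a given user is most susceptible to.

```python
from dataclasses import dataclass, field

@dataclass
class UserAvatar:
    # Hypothetical profile fields; real trackers hold far more dimensions.
    likes: set = field(default_factory=set)
    fears: set = field(default_factory=set)

def susceptibility(avatar, message_tags):
    """Score one message for one user; assume appeals to fear weigh double."""
    return len(avatar.likes & message_tags) + 2 * len(avatar.fears & message_tags)

alice = UserAvatar(likes={"space", "gadgets"}, fears={"job_loss"})
messages = {"new phone launch": {"gadgets"},
            "automation will take your job": {"job_loss", "automation"}}

# Pick the message this specific user is most vulnerable to.
print(max(messages, key=lambda m: susceptibility(alice, messages[m])))
```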
darthgently Posted Wednesday at 03:39 PM

More and more, to remain relevant, people will train their wetware LLMs the same way DeepSeek (DeepLeak?) almost certainly trained itself on western LLM queries and output. Devices like an advanced Neuralink will become increasingly popular with people trying to accelerate this process. I recommend these people choose the LLMs they train themselves with as carefully as they'd choose a person with full power of attorney, a heart surgeon, a bodyguard, or their spouse. In other words, no option is worthy at this time.
darthgently Posted Wednesday at 04:09 PM

Also, given that LLMs are largely built on our understanding of how brains work, it is glaringly obvious, if people take time away from tech worship to notice, that the most valuable LLMs in existence are in the brains of humans. Devices like an advanced Neuralink will likely allow people to create LLMs from the geniuses and excellent people around us. And not just book-smart people, but wise, caring, and judicious people with a moral and ethical framework. This is the only way I see to best ensure alignment. The most successful and aligned LLMs will be from among us, perhaps amalgams of several excellent people. People will train from the people they trust, but at 10,000x the speed. And we will probably always distrust, to some degree, LLMs not derived from actual human embodied experience. And maybe that is appropriate.
farmerben Posted Wednesday at 04:30 PM

LLMs are not like brains. They are more like the autocomplete feature on your phone's text messages.
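That analogy can be made literal. A minimal sketch (toy training text, not any real keyboard's API): count which word follows which, then offer the most frequent candidates, much like the suggestion bar above a phone keyboard. An LLM's next-token prediction is the same objective at an enormously larger scale.

```python
from collections import defaultdict, Counter

# "Training": tally which word follows which in the text seen so far.
history = defaultdict(Counter)
training_text = "see you soon . see you later . see you tomorrow . thank you"
words = training_text.split()
for prev, nxt in zip(words, words[1:]):
    history[prev][nxt] += 1

def suggest(prev_word, k=3):
    """Return the k most frequent next words, like a keyboard's suggestion bar."""
    return [w for w, _ in history[prev_word].most_common(k)]

print(suggest("you"))  # ['soon', 'later', 'tomorrow'] -- pure frequency, no understanding
```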
Terwin Posted Wednesday at 04:38 PM

Except the corporations with the most money to spend on AI will not want any of those pesky ethics or morals you mentioned.
darthgently Posted Wednesday at 05:03 PM

33 minutes ago, farmerben said: "LLMs are not like brains. They are more like the autocomplete feature on your phone's text messages."

The training and weighting of interconnected nodes in layers is very much based on explorations of how neurons, cortical columns, and laminar layers exist in the brain. This I know for certain, as it was my field of study before I decided I didn't want to contribute to humanity's replacement. Now that these explorations are inevitable, I've had to adjust my stance quite a bit, needless to say.

27 minutes ago, Terwin said: "Except the corporations with the most money to spend on AI will not want any of those pesky ethics or morals you mentioned."

We'll see. Pessimism rarely leads to a brighter future, but to your point, unwarranted optimism is nearly as dangerous and volatile. Cautious optimism is what I'm striving for here.
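For anyone who wants the "interconnected nodes in layers" point made concrete, here is a minimal sketch (arbitrary layer sizes, weights, and learning rate; the neuron analogy is only loose): each node computes a weighted sum of its inputs, and one training step nudges the connection weights against their share of the error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two layers of "neurons": adjustable connection strengths (W) plus biases (b).
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)   # 2 inputs -> 4 hidden nodes
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # 4 hidden -> 1 output node

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x + b1)   # each hidden node: weighted sum of inputs, then "fire"
    return h, sigmoid(W2 @ h + b2)

x, target = np.array([1.0, 0.0]), 1.0
h, y = forward(x)

# One backprop step: nudge each weight against its share of the output error.
grad_out = (y - target) * y * (1 - y)
grad_hidden = (W2.T @ grad_out) * h * (1 - h)
W2 -= 0.5 * np.outer(grad_out, h)               # (biases left fixed for brevity)
W1 -= 0.5 * np.outer(grad_hidden, x)

print("before:", y.item(), "after:", forward(x)[1].item())  # output moves toward target
```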
Fizzlebop Smith Posted Wednesday at 06:59 PM

On 2/1/2025 at 2:30 AM, DDE said: "The demand for an exploitable result currently exceeds supply. […] individual responsibility is more needed than ever."

This is the part that scares me. Properly weighting moral imperatives is very difficult. I am not knowledgeable in this field, but I enjoy a good dose of articles that have been properly parsed for my lay mind.

Morality is weighted by three spheres of influence: immediate, secondary, and tangential. If all three spheres of influence align, humans develop an underlying moral imperative regarding the subject. If family, friends, and culture teach something... you are hard pressed to truly reach deviance without significant guilt.

If you gaslight or organize a series of logic faults in conversation with a person, you get a moment of confusion. With an AI, you get a restructuring of the moral hierarchy. I know this is a single example they were giving, for a specific goal of "hypnotizing" various models. Giving it complete authority for decision making at various levels is truly terrifying. If Sam McDuffy wants to launch a missile... he either has orders or gets permission.

I saw advertisements for an AI device. You give it your required goals/tasks and it literally creates a daily itinerary that tells you exactly when to be productive... like, seriously, what?

The (IMO) end goal is not to put AI in decision-making positions of authority, but instead to get the *average individual* incredibly used to listening to AI on what to do and how to do it... and to accept the answer as the definitive, end-all-be-all answer.
magnemoe Posted Wednesday at 10:04 PM

On 2/1/2025 at 10:30 AM, DDE said: "The demand for an exploitable result currently exceeds supply. […] individual responsibility is more needed than ever."

This. And AI works decently for making images, and as a very stupid phone-desk operator. The trick to get to higher-level support is ranting technobabble until they move you up. For phone AI, just saying nonsense, but perhaps dropping large company names and technobabble, is a benefit, more so if the systems get less stupid.

As for why they trust the AI: it's because when the project flops, the suits can blame the AI. Never take any responsibility, as you are guilty if it fails. No, it's not efficient, but it's safe for us.
DDE Posted 23 hours ago

13 hours ago, Fizzlebop Smith said: "I saw advertisements for an AI device. You give it your required goals/tasks and it literally creates a daily itinerary that tells you exactly when to be productive... like, seriously, what?"

Like I said: decision fatigue, plus the belief that an AI's distillation of common Internet "wisdom" is better than your own judgement. Isn't the target audience for "day planning" the same people who gobble up horoscopes and the like? And what's the difference?

The inability of the combined human-AI decision-making loop to produce novel, insightful or even accurate results will come to bite us long before the lack of alignment does.

13 hours ago, Fizzlebop Smith said: "but instead to get the *average individual* incredibly used to listening to AI on what to do and how to do it... and to accept the answer as the definitive, end-all-be-all answer."

Again, it's no different to listening to human oracles or an Ouija board. Moreover, frankly, it's all rather aligned: the people up top genuinely believe they're building a superintelligence, too. A god, if you wish. Exodus 32. I am not joking.
Fizzlebop Smith Posted 15 hours ago

7 hours ago, DDE said: "Like I said: decision fatigue, plus the belief that an AI's distillation of common Internet "wisdom" is better than your own judgement. […] it's no different to listening to human oracles or an Ouija board."

I disagree. By and large, getting a gestalt consensus from the oracles and Ouija boards would be one heck of an accomplishment. Getting one from the AI that "assists" all the sheeple with their powers of reasoning and deduction will be much easier to accomplish.

The people who gobble up horoscopes and listen to the oracles are a fractional demographic of those seeking the use of AI. Add in a ton of other marginal groups that are overwhelmed for whatever reason. The day planning was a single example at the extreme of one direction, while the majority of those interacting with the AI fall somewhere in between.

The fact that schools (where I live) are holding classes to teach prompting suggests a massive paradigm shift, for *everyone* henceforth to have that additional level of reliance on something other than their own decision-making processes. It is similar to the echo chamber created on campuses that subscribe to a certain ideological view, except the chamber has been expanded to include the entire bulk of school-aged children who are not afforded the option of private or charter school. Being overly dismissive of those who may be seduced by these things only increases the underlying danger.

The tool itself can absolutely produce accurate AND novel concepts, depending on how it is used and the information provided by training. Assuming that something filled up on the garbage of the collective internet will be the benchmark or standard by which to evaluate and judge all AI capabilities is not a very accurate way to assess what will be done at higher levels. Material science and pharmacology have both been advanced greatly by universities working in niche fields with highly specialized AI models.

The combined AI-human approach can absolutely produce insightful and novel content with accuracy, depending on controls established by the human element. The human element is what produces the novelty. The human element is what produces the ability for accuracy. You often run into terrible situations where a single individual is granted unilateral authority, so I think granting anything unilateral decision-making authority is a bad idea and prone to some form of malfeasance.
DDE Posted 12 hours ago

3 hours ago, Fizzlebop Smith said: "Assuming that something filled up on the garbage of the collective internet will be the benchmark or standard by which to evaluate and judge all AI capabilities is not a very accurate way to assess what will be done at higher levels."

There seems to be a discontinuity. You jump between generic tools, which will have to be trained on Internet garbage, and niche fields where a consumer wouldn't wander in. Yes, it's actually my hope that the consumer-tier models will consume too much AI-produced content, become useless, and die off.
darthgently Posted 11 hours ago

14 minutes ago, DDE said: "There seems to be a discontinuity. […] become useless, and die off."

How do useful intelligences curate new knowledge? By its successful applicability and predictive power, I assume. Knowing whether a data source meets these criteria is often not achievable on the first pass without omniscience. I can think of randos on the internet who, ten years after making an assertion and being jeered for it, ended up being correct, and of experts who ended up being wrong. The point being that the final curation decision can often take decades or more. Good AI will be able to hold information in a limbo state of "not sure" for extended periods, waiting to see what happens. So I don't have an issue with ingesting internet "noise", as long as it is treated as data, and not as fact, until it meets the criteria. Think Lt. Columbo: he never prematurely assumed information was useless. As with humans, the ability to not know things for certain, and to be patient, will likely be central to a good AI's eventual value, and this implies not filtering too much, but also not inferencing from uncurated data. We need that limbo zone for the "maybe" stuff to live in, as it could rise to salience given future developments.
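That limbo zone is easy to picture as a data structure. A minimal sketch (status names and the evidence threshold are made up): a claim stays "maybe" until its predictive track record clears a bar in either direction.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    support: int = 0      # predictions it got right
    contradict: int = 0   # predictions it got wrong

    def status(self, bar=3):
        """Hypothetical curation rule: leave limbo only once evidence clears a bar."""
        if self.support - self.contradict >= bar:
            return "accepted"
        if self.contradict - self.support >= bar:
            return "rejected"
        return "limbo"  # not trusted, not discarded -- just waiting

claim = Claim("rando's jeered-at 2015 assertion")
for outcome in [True, True, False, True, True]:   # years of later observations
    if outcome:
        claim.support += 1
    else:
        claim.contradict += 1
print(claim.status())  # 'accepted' only after enough predictive wins
```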