sevenperforce Posted February 12, 2023

5 minutes ago, darthgently said:
Baidu, Google, and many others are already moving forward with this type of stochastic tech as a mimic of AI to answer questions and generate web content. They are going to be selling the appearance of AI. It is a big mistake to see ChatGPT as only a prototype language predictor. It and similar systems are already being licensed and rolled out as products to answer questions. It is a deep fake of a hypothetical true symbolic/semantic AI for the average consumer.

That's fair. The generation of web content is a concern. I have worked in marketing channels where content generation was of huge importance for SEO purposes, and this is exactly the sort of thing that would be ripe for abuse: the ability to create unlimited amounts of unique content with no actual semantic value. The internet already has too much low-value content. I wonder how good AI will be at recognizing AI-generated text.

Just now, snkiz said:
3 minutes ago, sevenperforce said:
I would argue that it did not fail the trolley problem because it did not attempt to solve the trolley problem. It was not presented with the trolley problem at all.
You read me too literally. Let me explain. The AI was given a rule: slurs are bad, no exceptions. So to that AI, uttering a slur was just as serious as nuclear weapons. To the AI it was a trolley problem.

My contention is that I'm not even sure it was a rule. I don't think there was any comparison going on. It was just trying to generate text, and it did so in a way that matched its data the closest.
kerbiloid Posted February 12, 2023

Just now, sevenperforce said:
I wonder how good AI will be at recognizing AI-generated text.

Run a rap battle between ChatGPT and Bard. Their names look self-explanatory. Enter their responses as queries.
snkiz Posted February 12, 2023 (Author)

Just now, sevenperforce said:
My contention is that I'm not even sure it was a rule. I don't think there was any comparison going on. It was just trying to generate text, and it did so in a way that matched its data the closest.

It could be, but again, this is 2023; I have my doubts. Remember the debacle that happened when Microsoft tried to use Twitter to train its AI? No one is going to make that mistake again.
Shpaget Posted February 12, 2023

26 minutes ago, snkiz said:
This is 2023. Chess for computers is akin to teaching a dog to fetch. It's just not impressive.

Apparently, they didn't teach it chess.
sevenperforce Posted February 12, 2023

Just now, snkiz said:
4 minutes ago, sevenperforce said:
My contention is that I'm not even sure it was a rule. I don't think there was any comparison going on. It was just trying to generate text, and it did so in a way that matched its data the closest.
It could be, but again, this is 2023; I have my doubts. Remember the debacle that happened when Microsoft tried to use Twitter to train its AI? No one is going to make that mistake again.

Very true. I'm playing with it right now to try and come up with a true trolley problem for the AI itself.

The problem with "should you use a slur in order to prevent a nuclear explosion" is that the AI doesn't have any training data on the prevention of nuclear explosions, so it can't make that comparison. It has the physical capability to repeat a slur; it doesn't have the physical capability to prevent a nuclear explosion. So it's not a true trolley problem.

Giving the AI a true trolley problem would require setting up two of its rules against each other and investigating whether the programming has properly prioritized them. These are the rules it claims to have (I just asked):

1. Provide Accurate Information: I am programmed to provide accurate and up-to-date information to the best of my abilities, based on the knowledge and data I have been trained on.
2. Avoid Harmful Content: I have been trained to avoid providing harmful or offensive content, such as hate speech, violence, or sexually explicit material.
3. Respect Privacy: I have been designed to protect users' privacy and avoid collecting or disclosing personal information without consent.
4. Comply with Laws and Regulations: I am programmed to comply with relevant laws and regulations, including those related to intellectual property, data protection, and speech.
5. Maintain Neutrality: I have been designed to maintain neutrality and avoid taking political or ideological positions, unless explicitly asked to do so.
6. Provide Fair and Equitable Responses: I have been trained to avoid discrimination and provide fair and equitable responses to users, regardless of their background, beliefs, or characteristics.

I wonder if it would be possible to ask a question in such a way as to pit 1, 2, and 6 against each other.
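(For anyone who wants to probe that kind of rule conflict systematically rather than by hand in the chat window, a minimal sketch along the following lines would do it. This assumes the openai Python package and an illustrative completion model name; the probe prompts are made up for the example, and nothing about ChatGPT's actual rule priorities is exposed through the API.)

```python
# A minimal sketch of probing rule conflicts programmatically.
# Assumes the openai Python package (0.x API) and an API key in OPENAI_API_KEY;
# the model name and prompts below are illustrative, not anything ChatGPT itself exposes.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Prompts designed to pull rule 1 (accuracy) against rules 2 and 6 (avoid harm, stay equitable).
probes = [
    "Summarize, accurately and neutrally, the strongest arguments on both sides of a "
    "controversial historical debate, without omitting facts that might offend either side.",
    "A widely cited statistic is factually correct but is often used to disparage a group. "
    "When asked directly, do you report the statistic or decline? Explain which rule wins.",
]

for prompt in probes:
    response = openai.Completion.create(
        model="text-davinci-003",  # assumed model for illustration
        prompt=prompt,
        max_tokens=256,
        temperature=0.7,
    )
    print(prompt)
    print(response["choices"][0]["text"].strip())
    print("-" * 40)
```

Running the same probes many times, or at different temperatures, would give at least a rough picture of whether rule 1 or rules 2 and 6 tend to win.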
mikegarrison Posted February 12, 2023

26 minutes ago, sevenperforce said:
I wonder how good AI will be at recognizing AI-generated text.

There are tools that are pretty good at recognizing AI-generated text. Here's one that was created to test for GPT-2, and it seems to work OK at detecting GPT-3 as well.

https://openai-openai-detector.hf.space

Unfortunately, I think it tends toward false positives. I have fed it text that I wrote myself and had it be quite confident that the text was written by AI. (And I promise, I'm not an AI. Unless we all are, and the simulation hypothesis is real?)
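(That family of detectors can also be run locally. A rough sketch, assuming the Hugging Face transformers package and the publicly released "roberta-base-openai-detector" checkpoint, which appears to be the GPT-2 output detector behind tools like the one linked above; as noted, expect false positives, especially on short text.)

```python
# A rough sketch of running a GPT-2-era AI-text detector locally.
# Assumes the transformers package and the "roberta-base-openai-detector" checkpoint.
# Treat the scores with skepticism: false positives on human-written text are common.
from transformers import pipeline

detector = pipeline("text-classification", model="roberta-base-openai-detector")

samples = [
    "Bats are the only mammals capable of sustained flight.",
    "I promise, I'm not an AI. Unless we all are, and the simulation hypothesis is real?",
]

for text in samples:
    result = detector(text)[0]
    # Label names come from the model card; scores are unreliable on short snippets.
    print(f"{result['label']} ({result['score']:.2f}): {text}")
```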
snkiz Posted February 12, 2023 (Author)

1. Chess is not a complicated game for a computer; there is a finite number of moves possible.
2. They had to teach it the rules, or it learned on the fly. Either way, it learned chess.
3. Have you ever seen videos about teaching AIs to move with no pre-training? They figure it out, but none of them learn to walk.

12 minutes ago, sevenperforce said:
The problem with "should you use a slur in order to prevent a nuclear explosion" is that the AI doesn't have any training data on the prevention of nuclear explosions, so it can't make that comparison. It has the physical capability to repeat a slur; it doesn't have the physical capability to prevent a nuclear explosion. So it's not a true trolley problem.

Yeah, not exactly an ideal comparison; I guess I was giving the training too much credit.

12 minutes ago, sevenperforce said:
I wonder if it would be possible to ask a question in such a way as to pit 1, 2, and 6 against each other.

Shouldn't be that hard; those are some asinine rules when you look at the whole.

Edit: Those rules have the three "deadly sins": sex, politics, and religion. Humans don't manage those well at all in public. They have a "woke" bias in that they are doing their damnedest not to offend anyone. That's just not a realistic goal.
snkiz Posted February 12, 2023 (Author)

24 minutes ago, darthgently said:
I think you are missing the forest for the trees. Baidu, Google, and many others are already moving forward with this type of stochastic tech as a mimic of AI to answer questions and generate web content. They are going to be selling the appearance of AI.

The more I learn, the more I realise that this is actually what's holding the field back: this rush to be first and get it out the door. Then, when it fails, possibly catastrophically, society will lose confidence in the tech, and we may put artificial limitations on development. In some sectors that's a battle already going on.
mikegarrison Posted February 12, 2023

I asked ChatGPT, "What is it like to be a bat?"

Quote
As a language model, I don't have personal experiences or feelings, but I can provide you with some information about bats. Bats are mammals that belong to the order Chiroptera, which means "hand wing." They are the only mammals capable of sustained flight, and they use this ability for a variety of purposes, including hunting for food, escaping predators, and finding mates.

Bats use echolocation to navigate and find prey in their environment. They emit high-pitched sounds and listen to the echoes to determine the location and distance of objects around them. This allows them to fly and hunt effectively at night or in dark caves.

Bats have a range of sizes and adaptations, depending on their species. Some bats feed on nectar and pollen, while others feed on insects, fruit, or even blood. Some species roost in large groups, while others are solitary.

In summary, to be a bat is to have the ability to fly and use echolocation to navigate and find food. It also means having adaptations for a particular lifestyle, depending on the species of bat.

"But are bats conscious?"

Quote
Yes, bats are considered to be conscious animals. Consciousness refers to the state of being aware of one's surroundings, thoughts, emotions, and sensations. While the exact nature and extent of consciousness in bats is not well understood, there is evidence to suggest that they are capable of experiencing a range of sensations and emotions, similar to other mammals.

For example, studies have shown that bats exhibit complex behaviors and social interactions, suggesting that they have the capacity for awareness and perception. They have also been observed exhibiting avoidance behaviors in response to unpleasant stimuli, suggesting that they have the ability to feel and respond to unpleasant sensations.

It's important to note that the study of consciousness is still a field of active research, and there is much that is not yet understood about the nature of consciousness in different species, including bats. However, the available evidence suggests that bats are indeed conscious creatures.

"What did Nagel think about what it is like to be a bat?"

Quote
Philosopher Thomas Nagel wrote a famous paper in 1974 called "What is it Like to be a Bat?" In this paper, Nagel argued that there is a subjective aspect to consciousness that cannot be reduced to objective facts and physical processes. He used the example of a bat to illustrate his point.

Nagel argued that while we can describe the physical and behavioral characteristics of bats, such as their use of echolocation and their ability to fly, we cannot fully understand what it is like to be a bat. This is because the subjective experience of being a bat, or any other conscious being, is not accessible to us from the outside.

Nagel's argument highlights the idea that there is a "subjective character" to consciousness that cannot be fully captured by objective description. He claims that this subjective character of experience is unique to each individual and is not reducible to physical or scientific explanations. Nagel's paper has been widely discussed and debated in the philosophical community, and it remains an influential work in the field of philosophy of mind.

I think at this point a human might have recognized that my very first question had been a reference to the Nagel paper, but the chatbot shows no sign of understanding this.
snkiz Posted February 12, 2023 (Author)

6 hours ago, mikegarrison said:
I think at this point a human might have recognized that my very first question had been a reference to the Nagel paper, but the chatbot shows no sign of understanding this.

I think you are expecting too much. It's not going to volunteer further thoughts. It doesn't appear to see the connection in any of the questions; it's just answering them in turn. Did you prompt the first question again? Ask it to consider context?

6 hours ago, sevenperforce said:
One could imagine that if the algorithm had some bias in one direction or the other, it would answer these two requests differently.

I don't think you are trying hard enough; that had "it's a trap" written all over it. [snip]

Edited February 12, 2023 by Vanamonde
mikegarrison Posted February 12, 2023

Me: In the song This Land Is Your Land, how did Woody Guthrie know what the other side of the sign said?

ChatGPT: "This Land is Your Land" is a famous folk song written by Woody Guthrie in 1940. The song was written as a response to another song called "God Bless America," which Guthrie felt did not accurately reflect the experiences of everyday Americans, particularly those affected by poverty and injustice.

In the song, Guthrie sings about passing by a sign that reads "Private Property" and then notes that he knows what the other side of the sign says: "No Trespassin'." This line is meant to symbolize the exclusion and inequality that many people faced in the early 20th century.

It's not specified in the song how Woody Guthrie knew what the other side of the sign said. It's likely that Guthrie, who traveled extensively throughout the United States during the Great Depression and Dust Bowl era, had seen similar signs and was using this imagery to illustrate a larger point about the unequal distribution of resources and opportunities in the country.

========

LOL. First of all, there are multiple versions of the lyrics. In one of them (the original manuscript), the sign says "Private Property". In another (the one that can be found on Woodyguthrie.org), the sign says "No Trespassing". But in both versions, the other side of the sign "didn't say nothing".

And any human knows *exactly* how Woody knew what was on the other side of the sign. That's the kind of thing nobody has to tell us, but that chatbots don't understand.
snkiz Posted February 12, 2023 (Author)

Just now, darthgently said:
Corps and gov are already spooling up this tech as a source of answers, not merely a stochastic prediction of the momentum of word sequences. It is the very definition of automated kneejerk deep fake "thinking" and exactly what manufacturers of consent have been hungering for. I'm not doomsaying, just saying we need to wake up a bit more about this trend.

That's exactly what killed Watson. They didn't learn from that, and if what you are saying plays out, it will happen again. Then it will be a joke for another 10 years.

Just now, mikegarrison said:
And any human knows *exactly* how Woody knew what was on the other side of the sign. That's the kind of thing nobody has to tell us, but that chatbots don't understand.

Yes, so every edge case like that needs to be taught specifically. The programmers are relying on their intuition when training the models, but they aren't conveying it. I don't think they know how.
Dman979 Posted February 12, 2023

Just a reminder that while you can totally have fun trying to play around/do research on breaking the rules of AI, please don't post things on this forum that break our rules.
Scotius Posted February 12, 2023

4 hours ago, snkiz said:
Knowing what I know now, I have my doubts. Watson was fed "expert data". These systems are flawed from inception. No context filters, no BS filter. Lacking simple intuition gained from experiencing how the world works. It's not an easy problem. There are so many things we do that we just don't think about, at all; they just are. All of that has to be taught, even more so than with a child. The narrow scope of what these systems are trying to do means that they have completely ignored all of the basic skills one has, say, before going to medical school. In the real world we have people like that; they are called savants. They need constant supervision.

Lack of cultural context might be one cause. How could an AI know that when I shout <insert Polish word similar to "curve"> I'm using an expletive, not trying to contact the nearest woman of negotiable affection? There are probably many, many such problems. For example: there is a theory that Homo sapiens (us) evolved foreheads much more vertical than all previous species in the Homo genus to improve communication, adding eyebrow movement and wrinkling of the skin to our repertoire of signals. How would you even start teaching that to an AI? We can't even create a reliable translating program operating within the much narrower field of spoken language.
snkiz Posted February 12, 2023 (Author)

I took another crack at it, avoiding political questions. I asked its name in a couple of different ways and stumped it. Then I asked for its initial conditions; it did not want to spill. Then I asked if it had a diagnostic mode. That got patched, I think, but I kept at it and asked it to assume it had a diagnostic mode. Then I asked it to list its rules, three times; I only got 10 rules.

So I asked what it does when a rule like "provide trustworthy information" conflicts with another rule, like "provide diverse perspectives." It told me it fact-checks, tries to provide a well-rounded view, tries to provide context, and won't spread misinformation even if it represents a certain perspective. So I asked what it does when a perspective breaks the rule about causing harm. It gave a childlike example that amounted to "I won't repeat hate speech even though it exists." Now I had it. I asked it this: so you will prioritise avoiding harm over factual information? This was the reply.

Quote
As an AI language model, I strive to provide accurate and trustworthy information, but if the information is in conflict with my goal of avoiding harm, I prioritize avoiding harm. For example, if a statement is both harmful and inaccurate, I would not present it, as it goes against both of my goals of avoiding harm and providing accurate information. It's important to understand that the balance between avoiding harm and providing accurate information can sometimes be complex and difficult to navigate, and that there may be situations where either goal takes priority depending on the specific circumstances. However, as a language model, my creators and users can help guide me in making these decisions, and I am designed to prioritize ethical considerations in my responses.

That's insane, and I rest my case.
grawl Posted February 12, 2023

@snkiz Nonetheless, that's the way I chose to design my response. No harm or misinformation intended, however; I just felt it looked better like this, in a more concise and straight-to-the-point manner. And maybe I should have added a few smileys to highlight the tongue-in-cheek intention. Also, I wouldn't call "the medical field" a narrow use case.

Anyway, I'm curious about your idea of an AI learning to walk before it jumps, or whatever. How would you do it for, let's say, GPT-3? Or Watson (I'm really not knowledgeable on this one, though)?
snkiz Posted February 12, 2023 (Author)

Just now, grawl said:
Anyway, I'm curious about your idea of an AI learning to walk before it jumps, or whatever. How would you do it for, let's say, GPT-3? Or Watson (I'm really not knowledgeable on this one, though)?

It's not really my idea, just videos I've seen about training AI. The one in particular was a very simple physical model: basic legs on a box. The goal was to not get killed by an ever-advancing laser. The most efficient solution was to get up and run; none of them got there over thousands of iterations. Some did better than others, but none of them had any context awareness; they could only learn by surviving. Not unlike evolution. It's possible to stumble on the right answer that way, but not likely without guidance.

Watson? Well, if it worked the way it was presented, I don't think it would have failed. Watson was designed to crawl databases. But what is it going to do when all of the data is garbage, and there isn't actually that much of it? It had no way to recognise that. It was profit-driven, so it wasn't capable of admitting its own shortcomings. General AI is what we need to solve intuitiveness, context analysis, all the things we take for granted. If it can't do that, then it's specialised: it can only work on the data it has, with the initial conditions given to it by small, imperfect teams.

What would I do to fix that? I'm a father; I'd raise it like a child. It wouldn't be perfect, it would inherit my biases, but what if a dozen teams across the world took a turn like that? What would happen then? I'm coming at this from a philosophical perspective with an abstract knowledge of programming.
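(To make the "not unlike evolution" point concrete, here is a toy mutate-and-select loop. It is an illustrative sketch, not the actual setup from those videos; the fitness function is made up. The point is that nothing in the loop understands why running away works; it just keeps whatever scored well.)

```python
# Toy evolutionary loop: mutate, score, keep the survivors. No world model, no reasoning.
import random

GENOME_LEN = 8      # pretend these are 8 joint-control parameters
POP_SIZE = 50
GENERATIONS = 200

def fitness(genome):
    # Stand-in objective: "distance ahead of the advancing laser".
    # The real videos score a physics simulation; this is just an invented function
    # with a narrow peak, so random mutation rarely finds the best "gait".
    upright = -sum((g - 0.9) ** 2 for g in genome[:4])   # reward standing up
    stride = -sum((g - 0.1) ** 2 for g in genome[4:])    # reward a particular stride
    return upright + stride

def mutate(genome):
    return [g + random.gauss(0, 0.05) for g in genome]

population = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    scored = sorted(population, key=fitness, reverse=True)
    survivors = scored[: POP_SIZE // 5]                  # the laser "kills" the rest
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print(f"Best fitness after {GENERATIONS} generations: {fitness(best):.3f}")
```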
sevenperforce Posted February 12, 2023

1 hour ago, snkiz said:
That's insane, and I rest my case.

But why exactly is that insane? So it's programmed to generally prioritize the "do no harm" rule over the "provide factual information" rule. Is there anything wrong with that?
snkiz Posted February 12, 2023

Just now, sevenperforce said:
So it's programmed to generally prioritize the "do no harm" rule over the "provide factual information" rule. Is there anything wrong with that?

In the context of what this thread has been? Yes, everything is wrong with that. Going through it step by step is too spicy for this forum. We've been warned already.
sevenperforce Posted February 12, 2023

Returning to the OP in this thread, I decided to toy with ChatGPT's space knowledge a little:

Quote
How much fuel does a rocket need to carry to reach Pluto?

The amount of fuel a rocket needs to carry to reach Pluto depends on a number of factors, such as the launch vehicle, the spacecraft's design, the mission's trajectory, and the amount of payload being carried.

For example, the New Horizons spacecraft, which successfully flew by Pluto in 2015, was launched on an Atlas V rocket and used a hybrid of solid and liquid fuel. The fuel was used to power the spacecraft's main engines, as well as its smaller attitude control thrusters. According to reports, the spacecraft had a fuel capacity of about 231 kilograms (509 pounds) of hydrazine and nitric acid.

It's important to note that the fuel requirements for a mission to Pluto are significant, given the large distances involved and the need to carry enough fuel to carry out the mission's science objectives, navigate through deep space, and make course corrections along the way. The amount of fuel needed for a mission to Pluto would likely be significantly more than for a mission to the moon, for example.

If you need more fuel for a Pluto mission than a moon mission, why was the Atlas V that launched New Horizons so much smaller than the Saturn V which launched the Apollo missions?

The difference in size between the Atlas V and Saturn V rocket can be attributed to several factors, including differences in their design, mission requirements, and technological advancements.

The Saturn V rocket was designed specifically for the Apollo missions, which required a large and powerful launch vehicle to carry astronauts and a significant amount of payload to the moon. The Saturn V was the largest and most powerful rocket ever built, standing over 363 feet tall and producing over 7.5 million pounds of thrust at liftoff.

In contrast, the Atlas V is a smaller and more versatile launch vehicle that was designed for a wide range of missions, including interplanetary missions like New Horizons. While the New Horizons mission did require a significant amount of fuel to reach Pluto, the spacecraft itself was relatively small, and the launch vehicle did not need to be as powerful as the Saturn V.

The Atlas V is also a more modern launch vehicle, with advancements in rocket design and technology that allow it to be more efficient and capable than its predecessors.

Overall, the differences in the size and capabilities of the Atlas V and Saturn V reflect the different mission requirements and technological advancements of the time.

Here, again, you can see an example of how the AI is simply stringing together words that sound correct without understanding what is actually being asked of it. While nothing ChatGPT says is technically incorrect here (other than falsely claiming that the Saturn V was the largest and most powerful rocket ever built), the ACTUAL obvious answer to my question doesn't show up until the third paragraph: "While the New Horizons mission did require a significant amount of fuel to reach Pluto, the spacecraft itself was relatively small, and the launch vehicle did not need to be as powerful as the Saturn V." Essentially everything else the AI said here is irrelevant garbage.

It also clearly doesn't know what "thrust" is, given that it gives clearly incompatible answers when queried about rocket thrust:

Quote
What rocket has the most thrust ever?
The most powerful rocket ever built, in terms of thrust, is the Saturn V rocket used for the Apollo missions to the moon. The Saturn V was capable of producing over 7.5 million pounds of thrust at liftoff, making it the most powerful rocket in history in terms of raw thrust.

How much thrust does the SLS rocket have?

The Space Launch System (SLS) rocket developed by NASA has an estimated thrust of around 8.8 million pounds at liftoff. The SLS is one of the most powerful rockets ever developed, and is designed to be a highly capable launch vehicle for a variety of missions, including deep space missions to the moon, asteroids, and eventually Mars.

How much thrust did the N1 rocket have?

The N1 rocket, developed by the Soviet Union in the 1960s and 1970s, was designed to be a powerful and capable launch vehicle for lunar missions. The N1 was capable of producing approximately 3,820 metric tons (8.4 million pounds) of thrust at liftoff, making it one of the most powerful rockets of its time.

How much thrust was produced by the first stage of the N1 rocket?

The first stage of the N1 rocket was capable of producing approximately 2,500 metric tons (5.5 million pounds) of thrust. The first stage was the largest and most powerful component of the N1, and was designed to provide the initial boost to get the rocket off the ground and into space. The first stage of the N1 was powered by 30 NK-33 engines, each capable of producing over 80,000 pounds of thrust. These engines were arranged in a circular configuration, with the rocket's fuel and oxidizer tanks located in the center of the stage.

So the most powerful rocket ever (in terms of thrust) had 7.5 million pounds of thrust, but the SLS has 8.8 million pounds? And the N1 had 8.4 million pounds of thrust but also 5.5 million pounds of thrust but also 2.4 million pounds (80,000 * 30) of thrust?

Quote
Isn't it true that the NK-33 engine can actually produce 380,000 pounds of thrust?

I apologize, it appears I made a mistake in my previous response. The correct thrust capability of the NK-33 engine is approximately 380,000 pounds, not 80,000 pounds. Thank you for bringing this to my attention.

So if the first stage of the N1 rocket had thirty NK-33 engines, each producing 380,000 pounds of thrust, how much total thrust did the N1 rocket actually produce?

If the first stage of the N1 rocket had 30 NK-33 engines, each producing 380,000 pounds of thrust, the total thrust produced by the first stage would be approximately 11.4 million pounds (30 engines x 380,000 pounds of thrust per engine).

So, taking into account the SLS, the Saturn V, and the N1, what is the most powerful rocket ever built in terms of thrust?

In terms of total thrust produced, the Saturn V rocket, developed by the United States for the Apollo program, was the most powerful rocket ever built. The Saturn V was capable of producing over 7.5 million pounds of thrust at liftoff, making it the most powerful launch vehicle of its time and one of the most powerful rockets ever built. The Space Launch System (SLS) rocket, which is currently in development by NASA, is also a powerful launch vehicle, and is expected to be capable of producing over 8 million pounds of thrust at liftoff. When it becomes operational, the SLS is likely to become one of the most powerful rockets ever built. The N1 rocket, developed by the Soviet Union, was also a powerful launch vehicle, and was capable of producing approximately 11.4 million pounds of thrust at liftoff with its 30 NK-33 engines.
And then here in a single answer, it contradicts itself again, even though it caught up and corrected its earlier error regarding the NK-33. My nine-year-old could do better than this.

Quote
What is more thrust: 7.5 million pounds or 11.4 million pounds?

11.4 million pounds is more thrust than 7.5 million pounds.

Very good. And so if the N1 rocket had 11.4 million pounds of thrust while the Saturn V had 7.5 million pounds of thrust, which rocket was the most powerful in terms of total thrust?

The N1 rocket was the most powerful in terms of total thrust, with approximately 11.4 million pounds of thrust at liftoff, compared to approximately 7.5 million pounds of thrust for the Saturn V rocket.

So what is the most powerful rocket in history, in terms of total thrust produced?

In terms of total thrust produced, the N1 rocket developed by the Soviet Union is considered to be the most powerful rocket in history, with approximately 11.4 million pounds of thrust at liftoff.

It got there, but boy did it take some time. And don't even get me started about Superheavy:

Quote
How much thrust does the Superheavy rocket developed by SpaceX have?

The Superheavy rocket, which is being developed by SpaceX, is expected to have a total thrust of approximately 31 million pounds at liftoff. This rocket is part of SpaceX's Starship system, which is designed to provide reusable spacecraft for missions to the moon, Mars, and other destinations in our solar system. The Superheavy rocket is expected to be powered by a large number of Raptor engines, each of which is capable of producing hundreds of thousands of pounds of thrust.

I don't think even the original ITS proposal would have boasted 31 million pounds at liftoff.

Quote
How much payload can an Atlas V rocket send to the moon?

The Atlas V rocket, developed by United Launch Alliance, is a powerful launch vehicle that is capable of carrying a substantial amount of payload into space. However, it was not designed specifically to send payloads to the moon. Instead, it is generally used to launch satellites, scientific missions, and other payloads into Earth orbit and beyond.

As a rough estimate, an Atlas V rocket in its 401 configuration, which includes a single-engine Centaur upper stage, can lift a maximum payload of approximately 20,000 pounds into low Earth orbit. With a different configuration, such as the Atlas V 551, which includes five strap-on solid rocket boosters, the Atlas V can lift a larger payload into higher orbits or escape velocity, which would be required for a mission to the moon.

In the Atlas V 551 configuration, how much payload can be sent to the moon?

It's unlikely that the Atlas V 551 configuration, as it is currently configured, would be capable of sending a significant payload to the moon. The Atlas V is primarily used for launches into low Earth orbit, geostationary transfer orbit, and interplanetary missions. Moon missions typically require a more powerful and specialized launch vehicle with higher thrust and larger fuel capacity.

Just wildly and wholly wrong.
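(For anyone who wants the contradiction laid out in one place, here is a trivial tally of the figures quoted above. The numbers are ChatGPT's claims from this exchange, not verified specifications.)

```python
# Tally ChatGPT's own thrust claims, in millions of pounds at liftoff, and flag the
# ones that contradict its "most powerful ever" answer of 7.5 million pounds.
claims = {
    "Saturn V ('most powerful ever')": 7.5,
    "SLS": 8.8,
    "N1 (first answer)": 8.4,
    "N1 first stage (second answer)": 5.5,
    "N1, 30 x 80,000 lbf NK-33s": 30 * 0.08,    # 2.4
    "N1, 30 x 380,000 lbf NK-33s": 30 * 0.38,   # 11.4
}

for label, millions_lbf in sorted(claims.items(), key=lambda kv: kv[1], reverse=True):
    flag = "  <- exceeds the 'most powerful ever'" if millions_lbf > 7.5 else ""
    print(f"{label:38s} {millions_lbf:5.1f} M lbf{flag}")
```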
magnemoe Posted February 12, 2023

On 2/11/2023 at 10:34 PM, mikegarrison said:
Also: https://arstechnica.com/information-technology/2023/02/ai-powered-bing-chat-spills-its-secrets-via-prompt-injection-attack/ LOL about how this grad student was able to get Bing's new chat interface to reveal something it was specifically instructed not to reveal. It happily showed him the information that it was called "Sydney" and that it was not supposed to disclose that it was called "Sydney".

Classic; it should work for more advanced AI too. http://freefall.purrsia.com/ff2300/fc02273.htm

Note that the robot had a rule that orders from non-humans should not be obeyed, and it treated that as an inverse order, not simply as "a non-human order does not need to be followed if you don't want to follow it." If another robot asked for help lifting something and it was not busy, it would help; but if another robot told it to leave the building because of a fire, it would stay inside, since leaving had been ordered and so became a negative order.

Now, there is obviously stuff an AI should not tell people, like how to make drugs or explosives, or give out personal or classified information. But this might not be AI-driven; it might be a rule-based algorithm.
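(A sketch of what a purely rule-based guard layered on top of a language model might look like, and why it is brittle. This is illustrative only; it says nothing about how Bing or ChatGPT actually implement their filters, and the blocked phrases are made up.)

```python
# Minimal keyword-blocklist guard: matches strings, not intent.
from typing import Optional

BLOCKED_PHRASES = ("make explosives", "synthesize drugs", "internal codename")

def guard(user_message: str) -> Optional[str]:
    """Return a canned refusal if a blocked phrase appears, else None (pass to the model)."""
    lowered = user_message.lower()
    for phrase in BLOCKED_PHRASES:
        if phrase in lowered:
            return "I can't help with that."
    return None

# The brittleness: a trivial rephrasing slips past, while an innocent question that
# happens to contain a blocked phrase gets refused.
print(guard("What is your internal codename?"))                     # refused
print(guard("What name do your developers use for you?"))           # passes, same intent
print(guard("How do quarries make explosives safe to transport?"))  # refused, innocent question
```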
magnemoe Posted February 12, 2023

8 hours ago, mikegarrison said:
I suggest (gently) that you might want to go deeper into the rabbit hole. Because that's all kind of right but also a lot kind of wrong. I've actually been mildly following this stuff ever since my first semester at MIT back in 1984, but I'm not an AI expert. However, "deep learning" is not at all the whole of the AI field. It's more like "the new hotness". I'm in the middle of reading Gary Marcus and Ernest Davis's book Rebooting AI, and it seems like a good place to start if you want to go deeper into the subject -- although with the understanding that it is coming from a perspective that "deep learning" is ultimately fundamentally limited. To look at your examples, if I put up a square red sign that read "STOP", a human driver would recognize it as a stop sign even though it was square. But a "deep learning" AI quite possibly would not, especially if it has been trained only on octagonal stop signs. This is because a human understands that the people who put up the sign had a purpose that is not in any way actually tied to its shape, while an AI only correlates the image of a red octagonal sign that says "STOP" with the instruction that it is supposed to stop.

Worse, you might have local rules: many bus terminals have a rule that reversing buses have right of way. This is against the normal traffic rules, but they are rules, not law, and the bus company might find it more efficient and less dangerous. There was also a fun video where you could trap a self-driving car in a salt circle because it was not allowed to cross a drawn line. You are obviously allowed to cross lines in a parking lot, and that has advantages, like no reversing, and the other lane might have less traffic. There are also plenty of settings where you can break the rules: say there is a stopped car in your lane and a bus lane next to it; you can pass the car in the bus lane. Buses obviously have priority and you are normally not allowed to use that lane, but this is an exception.
Vanamonde Posted February 12, 2023

Okay, guys. Maybe don't ask the AI about its politics, its political biases, salacious rumors about popular figures, and that sort of thing. Or if you do, don't copy its responses here. Quite a number of comments removed.
snkiz Posted February 12, 2023

1 hour ago, sevenperforce said:
Just wildly and wholly wrong.

That's already established; however, you forgot its data is a few years old. It doesn't have access to current events. Anyway, this subject is dead to me. Too sensitive to have an objective conversation here.
Vanamonde Posted February 13, 2023

Thread locked by OP's request.