Everything posted by SomeGuy123

  1. FTL travel isn't needed, and it's more likely than not impossible. Why isn't it needed? Because if you could merely travel at 0.1c to 0.5c, you could cross our entire galaxy in a million years or less (a rough check of that arithmetic is sketched below). The universe is billions of years old, so there's plenty of time. The idea that we need FTL is a creation of science fiction authors - long before we have starships at all, we will have one fix or another for human mortality. (The two obvious fixes are either a method using stem cells to continually replace failing parts in the human body, or uploading humans to digital computers that can be backed up, distributed, and repaired. The uploading method is a lot more practical and would trivially let someone live as long as free energy is available and nobody kills them on purpose.) The problem is that a classic science fiction story is typically something like: a young, brash, risk-taking man explores the universe and gets in fights, eventually earning fat stacks or accolades and lots of women. That kind of story appeals to humans; a more realistic story, about near-immortal cyborgs who can't be killed by any mere mishap (their minds are copied to multiple places, not all aboard the same ship), who rarely make mistakes, and who think in ways alien to present-day humans, isn't as entertaining.

Why is FTL probably impossible? Remember the von Neumann thread? Assume von Neumann machines are possible, for the sake of argument. If true FTL travel is possible with no upper limit (standard for sci-fi, which varies from mere hyperdrives to engines that can cross between galaxies in a few hours), you could build a von Neumann machine that ate the universe in a few years. Since the universe appears to be uneaten, but von Neumann machines appear to be possible, the most likely explanation is that either 1. FTL travel that doesn't involve at least a slower-than-light startup phase is impossible (it would still be possible if it involved wormholes, for example, but you could only move a wormhole mouth slower than light), or 2. it caps out at a hard limit relatively close to the speed of light, with infinite energy needed to go faster (10x faster or something; this is roughly how it was limited in Star Trek, though warp 9 was absurdly quick).
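For the record, a back-of-the-envelope check on that crossing time. This is illustrative arithmetic only; the ~100,000 light-year figure is the Milky Way's approximate diameter:

```python
# Back-of-the-envelope: time to cross the galaxy at sub-light speeds.
# Assumes a ~100,000 light-year diameter for the Milky Way (approximate).
GALAXY_DIAMETER_LY = 100_000

for fraction_of_c in (0.1, 0.5):
    # Covering 1 light year at a fraction of c takes 1/fraction years.
    years = GALAXY_DIAMETER_LY / fraction_of_c
    print(f"At {fraction_of_c:.1f}c: about {years:,.0f} years to cross")

# Output: ~1,000,000 years at 0.1c, ~200,000 years at 0.5c -- both tiny
# compared to the universe's ~13.8 billion year age.
```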
  2. But even "and then suddenly, from nothing, sprang the universe" requires something to have properties at all.
  3. The rest of what I said is correct, right? There just has to be roughly the right amount of metal, within a fairly wide range, and something it can eat, right?
  4. I have a couple more insights into this. Personally, IRL, while I majored in biology, I currently work on embedded electronic systems. These are very close to von Neumann machines in their purest form: the machine starts up reading a binary "tape" of onboard flash memory and always operates per a very simple set of control variables stored somewhere on that tape. I've encountered one of the problems you are talking about. I wanted to test an external flash memory chip while the whole machine was inside an oven. So the machine starts up, runs a pseudorandom function, and writes pseudorandom values to each memory address on the chip in sequence, reading the flash back at that location after each write. So far so good. Due to a glitch, I didn't have it working for a while, and then it appeared to work. But it wasn't actually working. At some point a glitch had left the flash module "unlocked" and it wrote that "random" sequence exactly once. Every test I ran after that "passed". In reality it wasn't writing anything, because I had never issued the unlock sequence, so every write was failing - but the test passed because every value it read back was exactly what it expected to see. I fixed this crudely by having it wait for a keystroke from a host computer and using whatever value happened to be in an onboard timer at that moment to seed the RNG. Problem solved. (A toy version of this failure mode is sketched below.)

Well, an E. coli can be thought of as a Turing machine - it literally reads a tape encoded in base 4 called DNA - but it also has external inputs from the environment, all kinds of external sensors. So the E. coli's state doesn't just depend on the internal tape but on a pseudorandom sequence of environmental signals. Another issue you are missing is that while random signals from the environment don't actually contain much information, survival in this universe is itself information. In a population of E. coli, the Turing machines that are "programmed correctly" survive and the halted ones don't. That is a very strong, information-rich external signal, apparently, which is why we are able to have this debate. This is why space versions made by humans would work. Your mental model is wrong because the Turing machine doesn't just depend on the internal state of its memory; it also depends on what it happens to see through onboard sensors and telescopes, picking up information from the star it is trying to survive at. It also may fail to survive, which means that over a population of these machines, the ones that happen to arrive at the right internal states survive and the frozen ones don't.
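A minimal sketch of that failure mode, rewritten in Python rather than the original firmware. FakeFlash, the 16-byte size, and the fixed seed 1234 are stand-ins invented for illustration, not the actual hardware or code:

```python
import random, time

FLASH_SIZE = 16

class FakeFlash:
    """Stand-in for the external flash chip: writes silently fail while locked."""
    def __init__(self):
        self.unlocked = False
        self.cells = [0xFF] * FLASH_SIZE           # erased state
    def write(self, addr, value):
        if self.unlocked:                          # a locked chip ignores writes
            self.cells[addr] = value
    def read(self, addr):
        return self.cells[addr]

def run_test(flash, seed):
    rng = random.Random(seed)
    for addr in range(FLASH_SIZE):
        value = rng.randrange(256)
        flash.write(addr, value)
        if flash.read(addr) != value:
            return False                           # mismatch -> test fails
    return True

flash = FakeFlash()
flash.unlocked = True                              # the one-off "glitch" run
run_test(flash, seed=1234)                         # writes the sequence once
flash.unlocked = False                             # back to normal: writes fail silently

print(run_test(flash, seed=1234))                  # True -- false pass: stale data matches
print(run_test(flash, seed=int(time.time())))      # almost certainly False -- a fresh seed exposes it
```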
  5. I'm not the one who has a theory that is incompatible with known scientific facts. Aim those criticisms at your own theory: a Turing machine is a simple "toy" model created less than a century ago. It seems the height of arrogance to start with the assumption that it applies to the universe. (I'd start with the assumption that it doesn't until proven otherwise.)
  6. Finally. You make a statement that is 100% factually wrong. No way to wiggle out of this one. Perhaps you can use this new knowledge to fix your math. E. coli need only a few things to replicate: water, sugar, oxygen, and certain trace elements. The trace elements are not highly refined; it just means there's a tiny amount of dissolved metals in the water. The bacterium itself is a completely self-contained machine able to manufacture every internal part: it just needs the sugar for the energy to do it, the oxygen to oxidize the sugar, and the metals because certain key enzymes use iron and other metals in key places. It has slightly under 30k total unique mechanical parts. Algae need even less: like E. coli, but skip the sugar and the net dissolved oxygen (they need some at night) and swap in sunlight and CO2 instead. I've personally seen this for myself. That's all it takes. You can fit a self-contained ecosystem into a marble exposed to sunlight - the reason it won't run forever has to do with DNA replication errors only. There is no physical reason; it's just that if you only take a small sample of, say, algae and microscopic shrimp and cram them into a marble-sized sphere, random mistakes eventually mean the genetics of that small sample will "drift" and stop working. Make the pool bigger, and there are enough copies of the genes that this won't happen. This is why a space-based von Neumann machine has to have much better error correction than DNA uses.

A von Neumann space machine would be the machine-phase version of E. coli: it just needs gaseous or vaporized carbon, iron, etc. as feedstocks, plus sunlight. Like E. coli, it would have a protective cell membrane shielding the internals of the machine from the outside environment - the membrane just has to be thicker to protect against vacuum, dust, and some micrometeorites. You know the drill. Like bacteria capable of making spores, the machine would be fairly inactive when it is not actually attached to a planet or asteroid or something else containing matter it can eat.
  7. No. That's not true. A living cell doesn't know the atomic properties of carbon; it just happens to have traps that bind to carbon. DNA doesn't contain the information describing the materials the cell is made from. Anyway, a functional copy of the Earth is not what you're making it out to be. Suppose there were in fact 10 other dead rocks, exactly Earth-sized, the same distance from their sun, and with the same element composition, but missing the biosphere. Somewhere in our galaxy this is true. So you transplant enough pieces of the biosphere to terraform the planet. You move over enough colonists and enough machinery to make a new civilization. You wait a few thousand years. There you go: a functional copy of the Earth. The colonization of the Americas is just a microcosm, a simplified copy of this. Sure, starting with a planet without an atmosphere or bacteria or trees or anything else is a lot harder, but it's possible if the elements are there. A von Neumann machine is just a vastly more efficient way to do this. Nobody ever said it had to be an exact copy, just a functional one. And evolution means that as you make copies after copies of these machines, the ones that turn out badly due to random chance and can't copy themselves either "die" or stop contributing to new generations, so the overall pool of them continues to exist. Frankly, I am really unsure what you're saying. You're pulling out mathematical theories that depend on a very rigid set of assumptions when the evidence is overwhelmingly clear that something about those assumptions doesn't apply. I don't know these theories, nor do I have any particular interest in them - we really could get started on doing everything I described without ever working out why it is possible. We just know it is, because we have similar systems that do work, so a theory that says they don't work is incorrect.
  8. A data link to "skynet" is a direct path for malware, and for software errors in the skynet network to crash the car. It's a bad engineering decision. It also enormously raises the complexity of the system. Latency in communication, dropped packets, signal losses, RF shadows created by bridges and other cars, and countless other things - just bad all around. You don't make a reliable system by making it more complex than it has to be. You make it out of dead-on reliable, works-every-single-time subsystems, and you keep the interactions between those subsystems as simple as they can possibly be. The core of an automated car is "pick the least bad path out of what it can see in front of it". By "least bad", I mean it's just a weighting system, calculated in real time from what the car's sensors can see compared to stored information in memory (a toy sketch of the idea is below). That's it. Learning is turned off for deployed automated cars - only the test models run learning - so the deployed ones people own always act the same and don't "learn" bad habits. A second system predicts the possible paths the vehicle is capable of making - based on what systems are still working, vehicle speed, traction, current brake wear, everything it knows about - and it feeds these paths to the planner that chooses among them. Some day we might have the technology to make "skynet" reliable, but this decade and the next aren't that day.
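To illustrate the weighting idea - a toy sketch, not any manufacturer's actual planner; the obstruction classes and cost numbers are invented:

```python
# Toy "least bad path" picker: each candidate path gets a cost from what the
# sensors think is in it, and the planner takes the cheapest one.
# Obstruction classes and weights are invented for illustration only.
OBSTRUCTION_COST = {
    "clear": 0.0,
    "traffic_cone": 5.0,      # "less solid" obstruction
    "large_vehicle": 500.0,
    "pedestrian": 10_000.0,   # "anywhere else but them"
}

def path_cost(obstructions, swerve_angle_deg):
    # Heavy penalty for what's in the path, small penalty for swerving hard.
    return sum(OBSTRUCTION_COST[o] for o in obstructions) + 2.0 * abs(swerve_angle_deg)

def pick_path(candidates):
    # candidates: list of (name, obstructions, swerve_angle_deg)
    return min(candidates, key=lambda c: path_cost(c[1], c[2]))[0]

print(pick_path([
    ("straight", ["pedestrian"], 0),
    ("swerve_left", ["traffic_cone"], 20),
    ("swerve_right", ["large_vehicle"], 25),
]))  # -> "swerve_left"
```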
  9. This isn't how you design a reliable system. The machine vision and logic trees and all the hardware associated with finding where to steer the car, plus the subsystems below that which implement the decisions, should be isolated systems. They should not depend on radio links to other cars or anything else in order to work. There could and should be a GPS link and a data link to the internet to get suggested route information, but none of the decisions made by the car should be directly affected by radio signals sent by other cars. Firmware should only be patchable if the new image is signed by the manufacturer and the car is not in operation (a sketch of that check is below) - and frankly, even this seems like a vulnerability to me. So, no. It won't have a radio link to a school bus to know it's a school bus, because that's a security risk.
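A minimal sketch of the signed-firmware rule, using Ed25519 from the Python cryptography package. The keys, the firmware bytes, and the install_firmware function are placeholders for illustration, not any vendor's real update protocol:

```python
# Sketch: reject a firmware image unless its signature verifies against the
# manufacturer's public key, and refuse updates while the car is in operation.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

manufacturer_key = Ed25519PrivateKey.generate()    # held only by the manufacturer
public_key = manufacturer_key.public_key()         # baked into the car

firmware = b"...new firmware image bytes..."       # placeholder image
signature = manufacturer_key.sign(firmware)        # done at the factory

def install_firmware(image, sig, vehicle_in_operation):
    if vehicle_in_operation:
        return "refused: car is in operation"
    try:
        public_key.verify(sig, image)              # raises if tampered or unsigned
    except InvalidSignature:
        return "refused: bad signature"
    return "flashing update"

print(install_firmware(firmware, signature, vehicle_in_operation=False))   # flashing update
print(install_firmware(firmware + b"x", signature, vehicle_in_operation=False))  # refused: bad signature
```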
  10. For that matter, another thing you're missing is that DNA, at least, is a procedural algorithm. It tells the cells obeying it to do certain things and to listen to certain signals. This creates "procedural design": for example, your nerves find the receptors in your skin that provide the information the nerve will carry by "homing in" on chemical signals given off by unbound receptors. The nerve cell slowly grows its way to the target. The actual route the nerve ends up taking is not specified in DNA; it's procedural. Similarly, you could save a huge amount of memory when designing von Neumann machines by giving them procedural rules, such as deciding where all the electrical cables go with some kind of route-minimization algorithm instead of baking the exact cable routings into the blueprint (a toy example is below). And so on and so forth. Such tricks radically reduce how much information a self-replicating system needs to contain in order to copy itself.
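A toy illustration of "store the rule, not the route": the blueprint keeps only the cable endpoints and a routing rule (here a simple grid walk), and the full path is regenerated on demand. The endpoint names and the rule are invented for the example:

```python
# Procedural blueprint: store endpoints plus a routing rule, not every waypoint.
def manhattan_route(start, end):
    """Regenerate a cable path on demand: walk along x first, then along y."""
    x, y = start
    path = [(x, y)]
    step = 1 if end[0] > x else -1
    while x != end[0]:
        x += step
        path.append((x, y))
    step = 1 if end[1] > y else -1
    while y != end[1]:
        y += step
        path.append((x, y))
    return path

# The "blueprint" is just this -- a few bytes instead of a full waypoint list.
cable_runs = {"power_bus_to_radio": ((0, 0), (4, 3))}

for name, (a, b) in cable_runs.items():
    print(name, manhattan_route(a, b))
```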
  11. You know, when you watch a movie, you have a beginning, introducing the protagonists. You have a middle - the protagonists have a problem that prevents them from achieving their goals. You have an end - the protagonists find a solution that just barely works by the slimmest of margins (to create drama in the story). Roll credits. Wasn't that movie a nice use of time? You buy a copy of KSP. You are introduced to the parts. You have a problem - balancing science, funds, and reputation, complete the tech tree and visit all the planets. You have an end - through a series of spacecraft that generally barely work at all, you complete the whole tree. You go on the forum or youtube to brag - you win! Realistically this is going to take dozens of hours to do even once. Hundreds for most people. More than worth the $25 or so the game costs. Scott Manley can demo ways to insta-win, but it requires immense skill, like speed running Portal in an hour.
  12. But we already have such a system. Like, 100% definitely have such a system. I already gave it to you: that sphere full of apes and factory machinery and a whole bloody biosphere. And before you say "we don't actually have it", we do. We live on such a sphere; it's just a bit large. The correct answer when hardware in real life disagrees with your mathematical theories is to defer to real life. Your theory is flawed. Systems do exist that can copy themselves.

Oh, I just realized there's another flaw in your logic. If you think about a more practical self-replicating system, such as a 3D printer that can print its own components (not actually physically realizable for practical reasons, but it's a good abstract model - basically the classic idea of nanotechnology, where there is one print head placing atoms one at a time), you realize that the information the system stores, as digital files containing the blueprints, does NOT contain the full details of the actual system. That's your flaw. Your blueprints say "atom A goes next to atom B at X, Y, Z", with run-length and other compression. Atoms A and B have electron shells that are interacting dynamic systems - your blueprints don't actually say what the electrons are doing or will be doing, and the same goes for everything on down to subatomic particles. And as the machine runs for a while and entropy operates on it, parts of it get into unknown states not specified on the blueprints either. As I mentioned to you earlier, you fix these unknown states, once they stop working correctly, by converting the whole thing to plasma and remaking it.

I think that's where you went wrong. Your blueprints are more of a procedure: put "A next to B". They do not contain the information used to develop the blueprints, not directly - that information was flushed when you went from your development rig to a deployed von Neumann machine (though they might contain intelligent agents who know where to start in designing a new development rig). They don't contain much of the fine detail of the matter being used - the matter being used is part of the "outside help" you are missing. An atom of a real chemical element is a system that obeys complex rules and, if handled the right way, will do what you want. You can abstractly think of it as a pre-programmed state machine you are using as a building block.
  13. I would prefer a "target movement mirror" mode. You approach a target. You choose "target mirror". Once you are close enough, your spacecraft's control thrusters make tiny millisecond burns so you stay at exactly the same orientation and position relative to the other spacecraft, auto-correcting for phasing from orbital mechanics. This would drain your RCS fuel, but in real life the dV corrections would be so tiny you could do this for months. You could then go into a docking mode where you command specific position changes. The integral of acceleration is velocity, and the integral of velocity is position, so this would be a control scheme letting you command two integrations up (a toy sketch of that loop is below). You'd have a GUI that says, essentially, "you are 5 meters back, 2 meters up, 10 meters left", and you can say "move 10 meters right". A tiny puff of gas and you begin moving to the right; at the end of the maneuver you automatically come to a stop exactly 10 meters over. Same thing for angular differences. I suspect, though I don't know this for certain, that the computer-assisted docking the Soyuz modules use does exactly this. You wouldn't need cheat magnets, because once you align all 3 rotation axes and 2 of the translation axes, the only difference between your 2 docking ports is distance on a single axis. Your spacecraft keeps burning a tiny trickle of RCS to hold you there. You then just command "5 meters forward" or whatever and boom, docked.
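A minimal sketch of that "command a position, integrate twice" loop: a PD controller on a one-axis double integrator. This is not how any real flight computer is implemented, and the gains and time step are made-up illustration values:

```python
# Sketch: hold/shift position along one docking axis with a PD controller.
# Acceleration integrates to velocity, velocity integrates to position, so
# commanding thrust from position/velocity error closes the loop two integrations up.
# Gains and time step are arbitrary illustration values.
kp, kd, dt = 0.4, 1.2, 0.1           # proportional gain, derivative gain, seconds

pos, vel = -5.0, 0.0                 # start 5 m behind the commanded point
target = 0.0                         # "move 5 meters forward"

for step in range(600):              # 60 seconds of simulated control
    accel = kp * (target - pos) - kd * vel   # tiny RCS "puffs"
    vel += accel * dt
    pos += vel * dt

print(round(pos, 3), round(vel, 4))  # settles at ~0.0 m with ~0 velocity
```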
  14. Yes. I tried this. Maybe the pro pilots here don't have trouble, but I just didn't have the patience for a 3-way parallel dock. Reasons include: the game has unrealistic wobble on spacecraft coming in for a dock, swinging your docking port from side to side with feedback; the game doesn't handle time warp physics the same as regular physics, so time warping to skip a slow and boring approach will often leave you farther from your target than when you started; and the game lacks adequate automation tools, while MechJeb has problems because it is tied to the same faulty physics. And when you approach for the 3-way parallel dock, magical gigantic magnets suck in one docking port first, and this tends to make the other 2 not want to connect. KIS?
  15. By hard docking, I mean "connect 2 spacecraft in orbit using a connection that acts exactly the same as if they had started from the VAB as a single piece." I always had trouble in my KSP campaigns: I would get to the point where I had a spacecraft too big to launch as a single piece, and/or I wanted to be able to send a smaller spacecraft to land on things, collect kethane (or the new fuel resource, I guess), and return to refuel the mothership. With limited parts and a desire not to cheat with quantum struts, I found that no docking connection was solid enough to avoid terrible feedback wobble that made interplanetary burns unfeasible. So I'd quit the game. I am aware that extremely clever spacecraft designs that carefully manage center of mass and tightly couple things worked in the vanilla game at the time, but I wasn't interested in such designs, since in real life you don't need them and they often required parts I hadn't unlocked yet. Has this finally been fixed? Or are there mods that finally offer true, Kraken-proof, hard connections?
  16. Think like an engineer. Alternative 1: a complex subsystem. At a minimum, you need folding rotors, a deployment system (a cover to protect the rotors during reentry), a bearing and hub where the rotors connect, and a system to pop out the rotors and get them spinning. All this adds mass - lots of mass, lots of big gears and rotor pieces and covers and pyro charges to pop the covers off at the right moment. All of it adds engineering complexity. A small army of engineers will have to design this system, then test it, then redesign it because the first iteration was bad. And finally, in real-life human engineering, complex systems have failure modes that basically no amount of engineering can remove. No matter how much money you spend, a more complex machine tends to fail more often than a simpler machine, and this is a very complex machine.

Alternative 2: you make the propellant tanks for your hypergolic thrusters larger. These tanks are pressurized by helium; to fire the rocket engine, a servo motor opens 2 valves and the propellant flows into the engine chamber, and to turn the engine off you close the valves. You fire the engines right before touchdown for a soft landing. Not only is this technology 50 years old, while deployable rotors are not, but it's fundamentally simpler. Even if you took the money you'd spend on deployable rotors and spent it making the thrusters better instead, the thrusters will always be more reliable because there are fewer pieces and less that has to happen. And even if the extra mass of the larger propellant tanks is more than the extra mass of the rotor assembly (seems unlikely, but maybe), it's just a little more money to make the rocket bigger. It's a lot more money to design and manufacture the rotors.
  17. I'm an engineer. I feel extremely confident that if I had limitless money and an entire army of engineers to help me, I could build a silicon (or other substrate) circuit equivalent to everything a synapse does. There would be additional switching networks so this silicon circuit array would be able to add new connections just like the brain does. One of the key reasons I think this is possible is that I know the real brain is terribly noisy and imprecise. This means that if the emulated equivalent has some mistakes in it, it's going to be "close enough" to still give similar results, and in brains, similar results still work. (In the microchip world, "similar results" means a crash or freeze, but brain architectures are resilient to this.) So I know a box can be made that is just like a human. The little human might "live" in a virtual world, controlling a software "avatar" that can walk around and touch things and see and smell and hear and everything else. Obviously, for the purposes of von Neumann machines, the virtual world would be tightly coupled to the external real world. And obviously, this is just my mental model - the first thing you'd do if you really had stuff like this would be to optimize it. These virtual humans wouldn't have senses and reflexes they don't need any more; they'd probably be much smarter and have all kinds of integrated helper subsystems in their minds. That's the intelligent part.

The hardware part: as I said, nanoscale assembly lines. Millions of them, redundant. As entropy damages the machine, here's what happens. Say entropy damages a nanoscale assembly line. There were 100 copies of this line; now there are 99. A replication order gets placed once enough are damaged. A robot goes out, grabs the module containing the damaged line, and puts it into the intake hopper of a plasma furnace. The damaged component gets converted to plasma, separated into constituent elements using electromagnets (basically a calutron), and the elements get made into feedstock in chemical synthesis reactors. The feedstock feeds into the nanoassembly lines, which combine the feedstock molecules into larger components, which feed more lines, and so on until you are back to solid robotic systems that look exactly like the systems that were in the damaged component. You then replace the damaged module, and everything is back to normal. You have to do this continuously - in a very real way, this machine is "alive" in that it needs a constant supply of energy to keep remanufacturing itself, and if the power supply fails for a long enough period, enough parts will fail that the machine will not be able to function even if power is restored later (it would be "dead"). So these machines have to hang out near stars, collecting solar energy from the burning of matter, or they eventually die. Obviously you could launch machines with a vast store of energy - probably in the form of antimatter, like the fat reserves in a seed - for the journey to another star. That's the part you were missing: these machines need a constant supply of energy or they fail. Total entropy of the universe still increases, and the actions of these machines increase entropy because they are "dirtying" the clean sunlight they receive into disordered infrared waste heat.
Also, data redundancy. The information these machines need - the digital files for the "personalities" who operate them, the digital files describing the exact schematics for each onboard component, and so on - is kept with a hash associated with every piece of data, and it's stored RAID-style with about 10x redundancy, the copies spread across separate modules all over the machine. Periodically, every piece of stored information (every "sector" in the data store subsystem) gets compared against its hash, and if the hash doesn't match, that particular copy gets discarded and restored from one of the redundant copies (a toy version of this scrub cycle is sketched below). This reduces the chance of random mutation to "not gonna happen before the stars run out of gas". Actual engineers designing these machines might use more or less redundancy; I'm just giving example numbers to illustrate the idea. Again, this constant copying process requires energy, or the machines eventually "die". Don't get me wrong, these machines might be so sturdy you could cut their power for millennia and then bring them back online, but eventually there is too much damage.
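A toy version of that scrub cycle, using SHA-256 and 10 copies; the redundancy factor and the record contents are just example values:

```python
# Toy scrub cycle: keep N copies of each record plus a known-good hash; any copy
# whose hash no longer matches is overwritten from a copy that still verifies.
import hashlib

REDUNDANCY = 10                                    # example value only
record = b"schematic: nanoassembly line, module 7"
good_hash = hashlib.sha256(record).hexdigest()
copies = [bytearray(record) for _ in range(REDUNDANCY)]

copies[3][0] ^= 0xFF                               # simulate a random bit-flip ("mutation")

def scrub(copies, good_hash):
    # Assumes at least one copy is still intact.
    intact = next(c for c in copies
                  if hashlib.sha256(c).hexdigest() == good_hash)
    repaired = 0
    for i, c in enumerate(copies):
        if hashlib.sha256(c).hexdigest() != good_hash:
            copies[i] = bytearray(intact)          # discard and restore
            repaired += 1
    return repaired

print(scrub(copies, good_hash))                    # -> 1 copy repaired
```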
  18. I see it as an arms race. Universe-wide, say there are 10 intelligent species in total, and each of these species has 3 major factions who compete with each other and do things differently. So there are 30 subgroups, and if any one of them creates a von Neumann machine, those machines will be here until the heat death, since they just copy and copy and copy. The only way to fight one would be to make your own von Neumann machines and reach certain stars "first", building defensive weaponry of some sort to stop the incoming "spores" from the von Neumann machine you are fighting. Given how vast the universe is, '30' is probably many orders of magnitude too low. And since we are separated by vast gulfs of spacetime from other alien races, we don't know that they haven't already released von Neumann machines. So it makes perfect logical sense for us to release our own the very moment it is feasible, because of the von Neumann machines that alien races might have created that we can't see yet. If we don't, we'll miss out on "owning" millions or billions of stars. Or, looking at it another way: suppose that in 2200, when we have the tech for the machines, there are 3 factions - the Americas Coalition, the Asian Coalition, and the European Coalition - and every faction mildly dislikes and competes with the others. Whoever releases von Neumann machines first (these machines aren't "dumb"; they contain the minds and ideals of the faction that launched them) will be around for millions of years, while everyone else is dead. Then you get a rumor, possibly incorrect, that another coalition is researching the machines... See, it's not really stoppable.
  19. I agree with you. Also, it would be a crime that would make the Holocaust look like a speeding ticket to purposefully create a von Neumann machine that is both nanoscale and optimized to "eat" existing biological material. I think that if the tiny robots were made of diamond plating and were extremely tightly built, they could be more efficient than existing life. You're right, they wouldn't "eat" the earth. It would look more like a growing mass of tendrils covering the planet, killing every living thing it touches, because living things lack the enzymes to "fight" against diamond-plated machines. It would be limited by the rate of solar input and by how much energy the machines can obtain by oxidizing biomass. The diamond plating would be very energy-expensive to create, so the growth rate might be slow, hopefully taking years instead of days to cover the planet. And yeah, no eating the planet - it would be a surface layer only. You'd be perfectly safe inside underground vaults, and could possibly go outside into the post-apocalyptic landscape in a space suit, the nanospores taking a long time to eat through the suit...
  20. Jouni: I'm going to raise you one. I'm going to say what you should already know - both your facts and your intuition should tell you this. Not only are von Neumann machines possible, we already have them. Can you or can you not sketch out a gigantic sphere? Inside that sphere (it's really, really gigantic) is enough of Earth's biosphere to form a stable, closed system, with some artificial intervention from time to time. There are humans living in the sphere. There are also enough pieces of factory machinery in the sphere to manufacture every piece of factory machinery in the sphere - since every piece of machinery we have today was made using other machinery, it's at least possible to pile them all into a gigantic sphere. The sphere floats through space, and spacecraft go out, collect rocks, and bring them back. Equipment inside melts them down to plasma and separates the constituent elements, feeding the industrial chain. Given enough asteroids and enough time, this sphere can copy itself. Sure, it's an abysmal von Neumann machine, and you may argue definitions with me - since it has humans in it, it's not really a "machine". But it's an existence proof: the Earth itself is such a sphere. You could probably construct one much smaller than the Earth, a mere 10 kilometers across or so using known densities today, and it might take a century to copy itself.

As I've argued in other threads, a human being is a robot made of meat tied to a network of self-modifying summation gates, about 86 billion * 1000 of them. Eventually, we could make silicon equivalents to those gates. Eventually, we could stack the silicon wafers on top of each other so that instead of a chip it's a cube. This would radically increase density - human-equivalent gate networks might be a 1x1 meter cube or smaller. We could shrink the industrial equipment needed to copy the industrial equipment, either by shrinking the individual assembly lines https://www.youtube.com/watch?v=vEYN18d7gHg to the nanoscale - something living cells already have, so saying "impossible" would be misinformed - or in other ways. So, if you can cram a human mind into a 1x1 meter cube, plus some humanoid robots for the mind to control, and the factories can be crammed down to the nanoscale, and you bring along more than 1 human mind - don't want to be lonely - you could get to a scale that would fit into a plausible starship and be small enough to self-replicate rapidly. 10-1000 metric tons is a ballpark number for the size of 1 "replication subunit".

Anyway, what precisely is your objection? Are you going to argue that since human minds are onboard, it's not a machine? Are you going to argue that since modern science isn't 100% certain you can actually construct close-enough equivalents to brain cells in silicon circuits - the certainty is only about 90% - it's "impossible"? What's your reasoning? Please go into detail; I've gone into detail for my arguments. I'd like to hear more than "it's still my opinion that it's impossible".
  21. Yeah. The vehicle will know that certain obstructions, like traffic cones, are "less solid" than others. Eventually, the neural net might even correctly identify a pedestrian in a wheelchair as a pedestrian. Google has demoed it recognizing cyclists. It's all about whether the pedestrian looks like the pedestrians they tested it on - cue the outrage when it doesn't recognize people of a radically different race and dress than the people near the Google campus. It'll just perceive "people" like that as "super bad to crash there, choose anywhere else but them". Again, it won't be perfect. It might swerve out of the way of some pedestrians in the road and hit the side of a bus full of senior citizens. That's because the car didn't know the bus was full of people - it just perceived a truck or large vehicle - but it recognized the pedestrians from their stride, their arms and legs, and other signifiers, and in its code, it sees a truck as a lower-priority "obstruction" than people. None of the "ethical dilemmas" other posters have made up are really there. If there are people, evenly spaced, in the way of the vehicle, the car will just run over the one in its drive path, because the shortest and safest braking maneuver is to not swerve at all. If there is a clump of nuclear scientists and a clump of drug dealers, it's just going to run over whichever requires less swerving; it won't know or care who they are or what their occupation is.
  22. In practice, no. Automated car systems will have to be developed by massive corporations with bottomless pockets who will also purchase additional insurance coverage. The individual programmers won't face individual responsibility. When automated cars crash and cause accidents, the whole matter will get dragged into court. On the bright side, automated cars should have amazingly clear event records that include high-resolution video taken from multiple angles, a detailed software log of every decision made by the car and the key variables used to make that decision, and so forth. This means that in cases where the car wasn't at fault, it should be possible to show this in court. The problem these liability issues create is that the courts don't really have any absolute limits on damages, and lawyer fees are very expensive. A jury could award the plaintiffs a hundred million dollars for a single crash. And unlike GM cars, it will be obvious when the automated car killed someone (while GM, since it doesn't keep detailed event records and the car doesn't drive itself, can hide behind doubt: "did that faulty ignition switch really guarantee a death?"). For this reason I don't know if automated cars will ever really take off. How will the manufacturers stay in business if they, say, kill 10 people (even if they statistically save 1000 lives) and get slammed with a billion-dollar lawsuit for each death? The courts cost an immense bribe just to show up for a case (lawyer fees), and the car manufacturers won't get any credit for the 1000 lives they saved if they are facing a case for killing 1 person. The fix for this is the same thing they do for vaccines: if you are injured by a vaccine, a board reviews your case and pays a fixed compensation. No point in lawyers - the government has denied you the right to sue. The reason to do this is that if automated cars really save 100 lives in vehicle crashes for every person they kill, it's beneficial to society to have them. But it'll take years for such liability exemptions to be written into law, if ever - it could hold up the development of automated cars for decades.
  23. You don't. These dilemmas are made up to sell magazines. Any self-driving car controller uses machine vision to look for a clear path. It's a probabilistic thing - no algorithm exists that absolutely guarantees the path is clear, but there are "good enough" solutions. If the path is completely blocked, the car tries to find the course of action that will minimize damage, but only using the car's relatively crude sensors and limited ability to tell what is in front of it. Realistic systems won't be able to tell the difference between a yellow cargo truck and a school bus, just that there is a large vehicle-shaped obstruction ahead. Anyway, the movement planner is just going to apply the brakes to maximum and aim for the least blocked path. If a school bus and a bread truck are blocking the road, a crash is inevitable, there are no other alternatives, and the school bus is 5 feet farther back than the bread truck, the automated car is going to aim for the school bus, because it calculates that the energy at collision will be lower since the car will have had 5 more feet to brake (a rough illustration of that arithmetic is below). Some automated vehicle teams are working on extremely fancy software that will allow the car to truly consider every option, including drifting and forms of controlled skids to evade obstructions. So under some circumstances, you might come around a corner on an icy road and your car goes into a controlled drift around the curve. But it's not going to make the perfect decision every time. These ethical dilemmas are bunk because the car won't know what the alternatives are. Engineers who build these things are far more concerned with getting the machine vision that finds obstructions to be dead on, so the car doesn't crash into things it can't detect, and getting the software to be reliable so it doesn't crash during driving. As for ethics: "find the least dangerous route (to the car) from the alternatives in the vehicle's path" is pretty good. Over the long term, if every car is out to protect its own occupants from the energy of collisions, this also protects people outside the car from those collisions. Even if there are edge cases where the car chooses to crash into a school bus or a truck full of explosives because it's the lowest-energy alternative, and lots of people are killed in those edge cases, it's still going to reduce the overall death rate compared to human drivers.
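A rough illustration of why a few extra feet of braking room lowers impact energy, using v^2 = v0^2 - 2ad; the speed, deceleration, and mass are made-up example numbers:

```python
# Impact speed after braking over distance d: v^2 = v0^2 - 2*a*d.
# Kinetic energy at impact: 0.5*m*v^2. All numbers below are illustrative.
import math

v0 = 20.0          # initial speed, m/s (~45 mph)
a = 8.0            # braking deceleration, m/s^2 (hard braking, dry pavement)
m = 1500.0         # vehicle mass, kg

def impact(d):
    v_sq = max(v0**2 - 2*a*d, 0.0)
    return math.sqrt(v_sq), 0.5 * m * v_sq

for d in (20.0, 21.5):                   # ~5 extra feet of braking room
    v, ke = impact(d)
    print(f"{d:>5.1f} m to brake: impact at {v:4.1f} m/s, {ke/1000:6.1f} kJ")
# The farther obstacle is hit at lower speed and noticeably lower energy.
```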
  24. There's a catastrophic flaw in your argument. To TLDR it: the algorithm of evolution (what created bacteria) tends to get stuck on local maxima. It's easy to visualize why. Imagine what a large-scale, sweeping upgrade to a living organism would require. It would usually require a large number of new codons to code for the new proteins and functions that support the new systems, right? Well, what's the chance that random mutations produce such a complex change all at once? The probability is infinitesimal (a toy calculation is below). It doesn't work that way. Instead, evolution has to arrive at novel systems one tiny incremental step at a time. But if those tiny incremental steps make the organisms taking them less efficient at reproducing than competitors without the changes, they often get out-competed and the potential change dies off. Another issue is that the code space of current living organisms is cramped - codons are only three bases long, and nearly all of the 64 possible codons are already assigned - so there isn't room in the code space to describe new amino acids that might make organisms better. This version lock-in is thought to be several billion years old.

Anyway, grey goo would be designed by intelligent designers a piece at a time. Human or AI designers would decide they want some kind of tiny robot that can copy itself using stuff in our environment. They would sketch out the overall goal on whiteboards. They would then separately optimize the thousands of molecular subsystems a tiny robot like that would need, working on each little piece by itself in test chambers until it meets the design requirements. Finally, they would assemble the prototype robots, each containing tens of thousands of separate subsystems, and test their replication. The designers might decide certain internal systems aren't up to spec and gut large sections of the design, changing all of it with procedural software algorithms. This is not something evolution can do. These "grey goo bots" would be so tightly constructed that a slight design change would leave them unable to self-replicate - so evolution can't "get there" by chance. The first bot might have to be constructed using very complex artificial equipment in a vacuum chamber - not something found on the primordial Earth. And so on.
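A toy calculation of why the all-at-once route is hopeless; the per-base mutation rate and the number of required mutations are illustrative assumptions, not measurements:

```python
# Toy numbers: chance that one organism acquires, in a single generation,
# all the specific point mutations a large new subsystem would need.
# Both values below are illustrative assumptions.
import math

per_base_rate = 1e-9          # roughly the order of per-base mutation rates
required_mutations = 100      # hypothetical size of the coordinated change

# Computed in log space to avoid floating-point underflow.
log10_p = required_mutations * math.log10(per_base_rate)
print(f"about 1e{log10_p:.0f}")   # about 1e-900 -- effectively zero

# Versus 100 separate single steps, each of which must survive selection on its
# own: 100 events of probability ~1e-9 each are entirely plausible over billions
# of generations, which is why evolution moves incrementally.
```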