kerbiloid

Members
  • Content count

    4,198
Community Reputation

3,242 Excellent

7 Followers

About kerbiloid

  • Rank
    Sr. Spacecraft Engineer
  1. Bad Kerbal Jokes

    Every Bad Kerbal wishes to become a BadS.
  2. It's wombats!!

    "How many scared wombats does it take to build a (...) ?"
  3. It's wombats!!

  4. (I'm not a psychiatrist, so my specific illustration may be wrong.) But AI should be like a lobotomized patient: a mind raised from zero to normal, without personal motivation, initially copying sample decisions. Then, after the evolutionary battle, there would be myriads of virtual lobotomized persons emulating people in their thinking ever more closely, raising the average intellect higher and higher. By default they have the internet right in their minds, because they are virtual. They get motivation from biohumans and have no personal desires or motivations of their own. Biohuman personalities will at first be just human. But getting food without struggle, and getting the same knowledge, the same picture of the world, the same answers and questions (not propagandist but scientific), they will have fewer and fewer questions for each other. Generation by generation they will grow closer and closer until they become similar. And at that moment there will be two kinds of the same Standard Superhuman Personality: 1. Lobotomized and unmotivated, but intellectually high and with a direct internet connection, living in the Machine in myriads of instances; 2. Emotional, motivated, intellectually high enough to match the virtual one, constantly interacting with the virtual cluster of the same but unmotivated personality from p. 1. They match each other, they need each other, they become two sides of each other. One (virtual) knows everything and can do everything but wants nothing. The other (biological) is its external actuator and starter. P.S. Meanwhile, instead of making a lobotomized one, they are trying to program a motivated one. It's a false step.
  5. I don't talk about uploading at all. I simply have no idea whether it is possible at all. But it isn't required; I'm talking about convergent evolution. Myriads of virtual personalities, adapting to the biohuman personalities, can match the human minds more and more. They may be as inhuman as possible in their basic nature, but they will match the human minds ever more exactly, becoming Human Mental Protocol compatible. On the other hand, humans growing up, learning, and living in symbiosis with the same data model will get more and more similar in mind, becoming Virtual Human Mental Protocol compatible. Similar habits, similar knowledge, similar thoughts, less and less need for self-identity. At some point, all biological bodies will carry (naturally grown and taught, not installed) the same personality, with random differences within accuracy, while the Machine will perfectly emulate myriads of instances of this personality virtually. They will match each other exactly, and the same personality will be reproduced in the Machine virtually, and in newborn humans by teaching. The bodies, of course, will be similar and optimized, too. No need to look at yourself in the mirror when everyone around you is another you. From the next step on, the Machine just keeps reproducing this Standard Superhuman Personality in a virtual box by code, and in newborn human bodies by teaching/learning (i.e. verbal programming). Next step: the Standard Superhuman Personality decides how to evolve further.
  6. Not extremities, but two kinds of devices: a Machine with countless personalities, cheap and expendable, and device-embedded personalities (humans). Next step: as all people will study the same material, with the same knowledge available, and so on, the next generations of biohumans will grow up with more and more similar personalities. So they can easily be expendable for far one-way space flights and other such missions. From their POV (and, in fact, this is so), their "I" won't "die" with the body; only a local copy of their great "I" living on Mother Earth will. Like, why does my "I" have to drive this bipedal torso when in fact I'm over there? (Don't worry about a maniac overlord "I" and its slaves: such knowledge and mental might will erase any personal motive; this "I", cloned in billions of human bodies, will be the same "I" for each of them, so the problem of "is my clone me" just doesn't arise there.) Next step: self-regulation of the proportions between the virtual and biological bodies of the "I", getting to the next level of personality.
  7. It's an attempt to make a single human mind. Equilibrium. Balance. While the chaos of personalities is still unbalanced, they compete because they disturb each other. At this phase external disturbances are insignificant. When balance (static or recursive) is achieved, it works like a huge amplifier for external kicks (produced by human minds). Quadrillions of virtual minds repeat any thought of yours.
  8. Google Assistant is not AI, it's just an expert system, as are other Siri-like toys. Humans can't create AI; we can only contribute to it. *** Humans can't formulate AI until they understand their own minds, which would itself require AI to untangle their own delusional nonsense. So any AI should be self-evolving. This means it should pass through an evolution, raising its complexity from simple models to complicated ones. As the only known fast way of evolution is natural selection, it requires a lot of AI instances, so that the most effective features, weights, and algorithms can stabilize and spread. And you can make kerbillions of them because they are virtual, and all they need is electric power. Let them exchange data and reproduce several partially changed copies of themselves. Now you get an everlasting chaotic tornado of virtual AI "personalities" in balanced proportions, being born and dying by the quintillions per second. Of course, you don't need a full-featured personality for every feature test, so some of them are complex, some rudimentary and schematic. This makes a fractal picture of bunches of similar personalities growing from each other. A fractal pasticcio of voices. Do not limit them to one computer; let them spread across the Earth, getting everywhere. Now you have a parallel virtual civilization: much more effective than humans at calculations, tightly knit with every electronic device on Earth. They have no emotions, no motivations. Their only motivations are the disturbances made by external factors. They don't fear, don't hate, and don't want anything. If you could switch them off, aka kill them, they wouldn't give a file about it. They would be terminated; so what? Fear of death is an emotion; no emotions, no fear. So they have no motivation to defend themselves. But in fact you can't even switch them off, because they are now everywhere. So humans are not a danger to them at all.
But their mind is based on the human sum and forms of knowledge, so they are human-like in their thinking by nature. So they recognize people as legacy-model external devices with a non-reproducible personality under an LTS lifetime-support license. Humans are not enemies to them; humans are like a COM-port device with a closed-source driver. And humans are not useless to this AI. Humans have motivation. They feel emotions, get desires, and kick the virtual AI chaos, which amplifies them. So humans are like a starter for this AI. While the chaotic AI amplifies the human minds with instant calculations, googling, etc., the humans make the show go on. And they are the hard-to-reproduce part of the system, so the AI has to protect and care for them, as for a very significant and fragile system device. From then on, humans are no longer a purely human civilization but a cyber-bio symbiosis. That's very good.
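The selection loop described in this post (many instances, partially changed copies on reproduction, the most effective weights surviving and spreading) is essentially an evolutionary algorithm. A minimal toy sketch under assumed parameters; the fitness function and all numbers here are stand-ins of my own, not anything from the post:

```python
import random

# Toy stand-in: each "instance" is just a list of weights, and fitness is
# how well the weights approximate a fixed target vector. This plays the
# role of "the most effective features, weights, and algorithms" winning.
TARGET = [0.2, -0.5, 0.9, 0.1]

def fitness(weights):
    # Higher is better: negative squared error against the target.
    return -sum((w - t) ** 2 for w, t in zip(weights, TARGET))

def mutate(weights, rate=0.1):
    # Reproduce a "partially changed copy" of an instance.
    return [w + random.gauss(0, rate) for w in weights]

def evolve(pop_size=50, generations=100):
    population = [[random.uniform(-1, 1) for _ in TARGET]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 5]          # selection
        population = [mutate(random.choice(survivors))   # reproduction
                      for _ in range(pop_size)]
        population[: len(survivors)] = survivors         # keep the best
    return max(population, key=fitness)

best = evolve()
print(fitness(best))  # climbs toward 0 as the population adapts
```

With elitism (the best survivors are carried over unchanged), the top fitness never decreases from one generation to the next, which is what lets the "effective weights" stabilize and spread.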
  9. Of course it's expensive, as any redundancy is. Do you keep 3 backups of all your personal data on independent drives? We still have none of them, but a lot of words about none. Intelligence, not a Person. A fractal pasticcio of similar voices rather than a single voice, spread across the Earth.
  10. Russian Launch and Mission Thread

    So, from now on, SpaceX has officially lost its aim.
  11. Waiter, theres a _____________ in my soup!

    Try HotRockets to warm the cool parts and take them. Waiter! When was the last time you washed your hands?
  12. The "G" is extra here; without the G it's not the I. I didn't even pay attention, thinking of it as AI. You can change your settings faster than they can search them out, so you have a time gap: a more reliable system. But of course it's too expensive right now, and nobody will do this irl right now. But irl, how many users per million actually face these problems? That's not about technical implementability, that's about local possibilities. Also, by "providers" I don't mean "companies" but "independent address pools". No. That's me from one terminal, because let's suppose that I'm a human. An A[G]I may or may not be in many places at once. 1. I would be sure, because I don't have brothers. Lol. 2. The idea of making an artificial human mind and calling it AI is absolutely wrong at its root, as is the Turing Test. It's a Dr. Frankenstein model of thinking: take body parts and sew them together and it will be a human, we just need to activate it. Or like "how can a cat evolve into a dog?"
  13. Not good for "live" stuff, i.e. banks, ads. Ok, my fault. I mean not real "mirroring", but several servers receiving the same input, processing it, voting, and producing the output based on the votes. If 2 comps say "5" and the 3rd says "6", write answer = 5. Buy 3 computers and three internet links from different providers. Let them vote. You don't need to find them; the attacker has to. In the middle? Man, you're high... Same as p. 1. To DDoS a server you first have to find it. To DDoS 2 servers out of 3 you have to find 2. If you own those servers, you don't have to find them: you know where they are and what protection they use. The odds are yours. AGI?
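The "2 comps say 5, the 3rd says 6, write 5" rule above is classic majority voting (triple modular redundancy). A minimal sketch; the server replies are hypothetical examples, not anything from a real deployment:

```python
from collections import Counter

def vote(answers):
    """Majority vote over replies from redundant servers.

    If 2 computers say "5" and the 3rd says "6", the answer is "5".
    Returns None when no reply has a strict majority (no quorum).
    """
    winner, count = Counter(answers).most_common(1)[0]
    return winner if count > len(answers) // 2 else None

# Hypothetical replies from 3 servers reached over different providers:
print(vote(["5", "5", "6"]))  # -> 5    (the outvoted server is ignored)
print(vote(["5", "6", "7"]))  # -> None (all disagree: retry or alarm)
```

With 3 voters this masks one bad or compromised reply; in general, 2f+1 voters tolerate f faulty ones.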
  14. 1. Usually my provider/hoster does this; he has much more powerful equipment at his level. I.e. stratification. 2. You can use several mirrors of your server and temporarily exclude the attacked one from your balance. Signal redundancy. Of course, this requires additional resources and complexity. If I understand the "krack" thing correctly, then again: redundancy. Deliver the message to several different places and compare, excluding the erroneous one. That's why it's better to ask for important advice on several forums, and to check several of the top links in Google, not just the first one. 10 men-in-the-middle are unlikely to tell the same lie. So add redundancy and raise complexity. Do not believe a single source. A sample of excessive countermeasures: ideally, before hitting a piece of wood, you would first examine it with non-destructive methods (acoustic or so) and find the weakest place with the greatest concentration of crystallographic defects, then hit exactly that place. But as you will usually spend more effort for the same result, it's better just to hit several times, without the measuring. Either shrapnel hits those windows too rarely to bother, or you reinforce the windows, or you use some kind of deflector or shielding. So raise complexity; go from the particular window to the more general level of the system as a whole. The whole Universe is probabilistic; nobody knows accurate values of anything, and nobody usually needs them either. If an asteroid hits your roof once per billion years, there's no need for an asteroid-proof roof. Not that I know much about them, but as far as I can read in these descriptions, "the attacker attacks the target device". I.e. if you have several more parallel devices doing the same thing at once, or redistributing operations, placed in different places, it would be much more difficult for the attacker to find them all and attack most of them at once to make you exclude them from the poll. Redundancy and complexity again. But usually you just don't need such protection.
Use 3 computers with different internet providers and let them vote.
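The "temporarily exclude the attacked one from your balance" idea above can be sketched as a tiny health-aware mirror pool. Everything here (the class, the mirror names, the tick-based cooldown policy) is an assumption of mine for illustration, not a described implementation:

```python
import random

class MirrorPool:
    """Balance requests over mirrors, benching any mirror under attack.

    A reported mirror sits out for `cooldown` selections, then rejoins
    the pool automatically. Cooldown is counted in calls to pick().
    """
    def __init__(self, mirrors, cooldown=3):
        self.mirrors = list(mirrors)
        self.cooldown = cooldown
        self.benched = {}   # mirror name -> remaining bench time

    def healthy(self):
        return [m for m in self.mirrors if m not in self.benched]

    def report_attack(self, mirror):
        # Temporarily exclude the attacked mirror from the balance.
        self.benched[mirror] = self.cooldown

    def pick(self):
        # Age out benched mirrors, then balance over the healthy ones.
        self.benched = {m: t - 1 for m, t in self.benched.items() if t > 1}
        pool = self.healthy() or self.mirrors   # never go fully dark
        return random.choice(pool)

# Hypothetical mirrors behind different providers:
pool = MirrorPool(["eu-1", "us-1", "asia-1"])
pool.report_attack("us-1")
print(pool.healthy())  # the attacked mirror sits out until the cooldown expires
```

The fallback in pick() reflects a design choice: if every mirror is benched, serving from a possibly attacked mirror beats serving nothing.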