MBobrik

Members · Content Count: 629 · Joined · Last visited

Everything posted by MBobrik

  1. Someone with a six- or seven-digit budget could probably build a more or less autonomous robot that generates a random pattern and then imprints it into the crop. But that would ultimately count as "caused by people" anyway. Extraterrestrials, as the question tries to lead us to conclude? No, unless they share the motivations of human pranksters and have no interest in actual communication. And then there is Occam's razor. Extraterrestrials whose psychology makes "paint geometric shapes on the first suitable big surface" their first choice of communication? Maybe, but by finding suitable crop fields and waiting to paint until no witness is around, they would demonstrate that they have a lot of knowledge about humans. Enough knowledge to realize that human psychology is different and that this is not the best way to communicate with us. And anyone building interstellar spaceships is bound to be smart enough to figure that out.
  2. Actually, it was only 12 tons, and its design was far from optimal ...
  3. Are you implying that I said they should not be? Surely, when investigating the causes of an accident, the number of people killed is not relevant. However, when evaluating the overall safety of a given means of transportation, the number of people killed using it is the most important number.
  4. Except this was not a Malaysia Airlines plane.
  5. Accidents vary in severity. Counting every accident as equal seems absurd to me. I don't think that 'accidents' involving four small business jets, like bumping into the terminal at, say, 1 mph, or an emergency landing because of a drunk passenger brawl, can be lumped together as '4' accidents and meaningfully compared to '3' accidents involving obliterated airplanes and three-digit body counts.
  6. I am not buying that. To be a little over the top, this would be like saying that four crashed paragliders, with the worst injury being a broken leg, are worse than MH370, MH17, and QZ8501 put together.
  7. A hydrogen tank and all the other machinery between the reactor and the crew/payload would provide more than adequate shielding for the crew. Ground facilities could be protected by an *external* shield. The runway would have to be made from materials resistant to secondary neutron activation, and no human or animal should be outside within several miles of the airport while this thing is taking off. But once in the air, and over the ocean, the inverse square law would offer enough safety. Surely it would leave a trail of ozone, nitrogen oxides, and traces of carbon-14 in its wake, but if the traffic is sufficiently low this would not be a serious environmental problem. If it were to crash: reactors running on highly enriched uranium are small and, unlike reactors in most other uses, run briefly. They therefore won't accumulate large amounts of fission products. The worst thing that could happen is a local spot of irradiated water that quickly dissipates into irrelevance. Concerning construction and operation, it would presumably operate like the SR-71, except that it could fly higher because its engines don't need oxygen. Then, at some point, external air would be replaced with hydrogen, and it would continue NERVA-style.
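The inverse-square safety argument in the post above can be sketched numerically. This is a minimal illustration, assuming an idealized unshielded point source with no atmospheric attenuation; the dose rate at 1 m is a made-up placeholder, not a real reactor figure.

```python
# Illustrative inverse-square falloff of dose rate with distance from
# an idealized point source (no shielding or air attenuation modeled).

def dose_rate(distance_m: float, rate_at_1m: float) -> float:
    """Dose rate at `distance_m`, given the rate at 1 m (same units)."""
    return rate_at_1m / distance_m ** 2

# Doubling the distance cuts the dose rate to a quarter.
print(dose_rate(2.0, 100.0))      # 25.0
# At several miles (~5 km), the rate drops by a factor of 25 million.
print(dose_rate(5_000.0, 100.0))  # 4e-06
```

In practice air also attenuates and scatters the radiation, so the geometric 1/r² falloff is a conservative lower bound on the reduction.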
  8. Small objects with a high surface-to-weight ratio aren't subject to very high temperatures. So, as a special challenge, one could make the cake exactly thin enough to get baked during reentry but not burned.
  9. Put four of them around it so that it lands like a maple seed.
  10. That could be a good measure of "hidden" progress in rocketry. Hidden because no one is actually doing this stuff, so we can't compare directly. But we could take a (realistic) contemporary cost estimate for an Apollo-style mission and compare it to the inflation-adjusted cost of the actual Apollo program, as a measure of how, or whether, our technology has advanced since the 1960s. The only problem I see with this is that today's project costs tend to be extremely underestimated, so we would have to adjust for that.
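The proposed metric reduces to simple arithmetic. Here is a sketch under stated assumptions: all the figures below (the modern estimate, the cumulative inflation factor, and the underestimate correction) are illustrative placeholders, not actual data.

```python
# Sketch of the proposed "hidden progress" metric: the ratio of a
# contemporary Apollo-style mission estimate to the inflation-adjusted
# cost of the original program. All numbers are placeholders.

def inflation_adjusted(cost: float, inflation_factor: float) -> float:
    """Convert a historical cost into today's dollars."""
    return cost * inflation_factor

def progress_ratio(todays_estimate: float,
                   historical_cost: float,
                   inflation_factor: float,
                   underestimate_correction: float = 1.0) -> float:
    """< 1 means doing it today would be cheaper, i.e. technology advanced.

    `underestimate_correction` (> 1) inflates today's estimate to
    account for the tendency of modern projects to lowball their costs.
    """
    corrected = todays_estimate * underestimate_correction
    return corrected / inflation_adjusted(historical_cost, inflation_factor)

# Placeholder figures in billions of dollars (then-year vs. today):
ratio = progress_ratio(todays_estimate=100.0,        # hypothetical modern estimate
                       historical_cost=25.0,         # Apollo in then-year dollars (assumed)
                       inflation_factor=8.0,         # cumulative inflation (assumed)
                       underestimate_correction=1.5) # cost-overrun fudge (assumed)
print(ratio)  # 0.75 -> cheaper today, under these made-up numbers
```

The interesting part, as the post notes, is picking a defensible `underestimate_correction`; the arithmetic itself is trivial.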
  11. This is maybe what you are advocating. Nibb is more like "there are no missions that people can do better, so send in only robots".
  12. No, you are being obtuse. If we restrict ourselves only to things that will surely pay off in the next two or three decades, as you are suggesting, there won't be any long-term future worth speaking of.
  13. If the proto-Romans had said "we are not ready to build a big city like that, perhaps we'll start in another 20,000 years", there would be no Rome.
  14. Even walking can be learned only by putting effort into trying. Look who's talking: so there's no rational point in trying to walk in the foreseeable future. Stick to crawling. Perhaps in another 20,000 years, when obstructive attitudes like mine wane.
  15. No, I am saying that if we set them free to evolve, roaming the galaxy, while we stay put here stagnating, at some point we will stop being equal partners to them. All I am saying is that the "we stay put here stagnating" part is bad. And it is bad on its own, irrespective of sentient AIs.
  16. This is like saying you won't go to the gym until you grow muscles. Guess how long that will take. By postponing indefinitely 'till we are ready', we won't get ready. Ever. It's the same as the gym. The only way to get ready is by putting effort into trying.
  17. That was exactly the level of smugness I was expecting from you. The rest of your post can be summed up in three words: negativism, greed, and shortsightedness. And one more thing: there is an obvious rational, even economic, justification for not giving up on the rest of the universe. Its most pedestrian version goes "what is the expected accounting value of the rest of the universe?" It's just beyond the capability of shortsighted thinking.
  18. No, I never said that. Read carefully. It was the "humans giving up on the rest of the universe" part that I consider bad. Creating AIs and exploring the universe together is perfectly fine.
  19. Guess what? You cannot say "I want ... but there is no rational justification for ...". If you think there is no rational justification, then you do not want it; you want the sensible economic decision. Oh, and one more thing: you are talking about mere flags and footprints. Almost everyone else wants infinitely more. Permanent presence. Expansion.
  20. Actually, you've got it wrong. I want both, but Nibb rejects the latter.
  21. I am sorry, I failed to notice that even though I was talking about what would be necessary to explore beyond the distance at which the time delay makes things like better compression, data filtering, autonomous driving, and so on insufficient, you could not even imagine that far and thus couldn't be talking about that situation. Hard to make it obvious to you... so let's crank it up a little... Imagine a probe to Alpha Centauri. How could it gather data efficiently when the response won't come back until long after the mission ends? The only solution, methinks, is that the probe itself understands what it finds and adapts its further exploration accordingly. Anything less would lead to gross inefficiencies: first the probe catches a glimpse of something; it takes eight years to redirect it to examine it more closely; when that turns out to be something unexpected, it takes a further eight years to tell the probe to change its approach; and so on...
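The "eight years per question" figure above follows directly from the signal round-trip time. A minimal sketch, assuming the commonly cited distance of about 4.37 light-years to Alpha Centauri and signals traveling at light speed:

```python
# Round-trip command latency to an interstellar probe, illustrating why
# a probe at Alpha Centauri must interpret its own findings: every
# ask-and-wait loop with Earth costs close to a decade.

ALPHA_CENTAURI_LY = 4.37  # distance in light-years (approximate)

def round_trip_years(distance_ly: float) -> float:
    """Years for a signal to reach Earth and the reply to come back."""
    return 2 * distance_ly  # out and back, at light speed

print(round_trip_years(ALPHA_CENTAURI_LY))  # ~8.74 years per question
```

So a single observe-ask-redirect cycle costs roughly the length of an entire planetary flagship mission, which is the inefficiency the post describes.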
  22. You are comparing the incomparable: pipelines with the universe, simple tools with artificial sentient beings. I am perfectly willing to leave routine maintenance at the bottom of the sea to robots (we can go there any time we want anyway), but I am not willing to pass on the rest of the entire universe. And while today's simple tools are mere extensions of ourselves, future sentient AIs would be independent beings in their own right, not us. We may consider them our progeny, but they won't be us. You may be content with creating a species more resistant to the rigors of space travel and leaving the future to them, but I am not.
  23. No, they aren't at all. It's the 'leaving the rest of the universe to them' part of Nibb's version of our future that I don't like.
  24. No, we won't. You can't predict, control, and forever predestine the thinking of something as clever as you are. They would most likely not destroy us, because they would have no motivation to do so, but that would be beyond our control. Besides, the level of pre-analysis that would let a robot roam free and search, dealing with contingencies on its own while the earliest possible human reaction ranges from the next day to decades, would amount to a human level of understanding of what's going on. The truth is that short-term thinking is one of mankind's greatest scourges. If it weren't, we would already be firmly established on Mars and planning to move on to Jupiter next.
  25. Are we out to exterminate squirrels? No. I was not talking about AIs being deliberately speciescidal or even malevolent, just about them evolving past their initial programming and going their separate ways, to the point where we are just another species of local fauna to them.