Seret

Members
  • Posts

    1,859
  • Joined

  • Last visited

Everything posted by Seret

  1. I'm in the UK, but I suspect the issue is pretty widespread, if only because all the grid-tie inverters seem to include anti-islanding as a feature.
  2. Most of them are supposed to be on fire...
  3. Unmanned is the norm for spacecraft. 99% of everything we send into space even now is unmanned. There's even less reason to put people on a combat vehicle. Hell, give it a few decades and unmanned will be the norm for atmospheric fighters too. The days of the fighter jock are drawing to a close. I highly doubt it. Spacecraft are much more like aircraft than they are like ships. Size and weight need to be minimised. Combat spacecraft will be small, unmanned and unarmoured.
  4. Er, I think you might find air-breathing missiles less use than you would like in space. Besides, you don't really need a lot of rocket motor to hit targets on Earth from orbit, you can trade that much altitude into a lot of kinetic energy, which equals a lot of manoeuvre or cross-country range. To deliver a weapon from orbit all you'd need to do would be give it enough retro to de-orbit, re-enter, then pop some aerodynamic control surfaces. Even guided bombs dropped from regular aircraft can actually make pretty substantial horizontal distances, and they've only got little fins.
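The altitude-for-energy trade above can be put in rough numbers. A back-of-the-envelope sketch (illustrative figures only: a 400 km circular orbit, no drag or re-entry losses accounted for):

```python
import math

MU = 3.986e14      # Earth's standard gravitational parameter, m^3/s^2
R_EARTH = 6.371e6  # mean Earth radius, m

def orbital_speed(alt_m: float) -> float:
    """Circular orbital speed at a given altitude (vis-viva, circular case)."""
    return math.sqrt(MU / (R_EARTH + alt_m))

v = orbital_speed(400e3)          # roughly 7.7 km/s at 400 km
ke_per_kg = 0.5 * v**2            # specific kinetic energy, J/kg
tnt_equiv = ke_per_kg / 4.184e6   # kg of TNT equivalent per kg of projectile

print(f"orbital speed: {v/1000:.2f} km/s")
print(f"kinetic energy: {ke_per_kg/1e6:.1f} MJ/kg (~{tnt_equiv:.1f}x TNT by mass)")
```

Even before counting the energy released by descending, an orbiting projectile carries several times its own mass in TNT equivalent as kinetic energy, which is the budget available for cross-range manoeuvring on the way down.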
  5. I don't think long-duration spaceflight is terribly analogous to being housebound, I think the closest thing would be life on a submarine. In a house you have fresh air, relative quiet, lots of space, services like entertainment, communication and can have anything you want (from fresh food to hookers) delivered to your door. That's really not the same psychologically as being stuck inside a small, noisy, smelly tin can with limited contact with the outside world and painful death on the other side of the bulkhead. It's a genuine area of concern among flight surgeons, which is why they've run simulator experiments. The results did confirm a number of their fears. Participants did suffer psychological effects that reduced their effectiveness, such as depression and insomnia.
  6. I'd agree with that, Nuke. Only a subset of plugins work well on Linux. Personally I think Protractor is the only one I actually would find really useful, the other main ones like KER and Mechjeb work fine. As for RAM, well any more than you actually use is a waste really. I've still only got 4GB in my desktop as it's still enough. I'm as surprised about that as anyone really, especially since I use it quite aggressively (zero swappiness, /tmp in ramdisk, etc). Windows could probably do with a bit more, but I don't use it much and I'm never multitasking in it.
  7. That's a pretty big "what if". It might be possible one day, but that's not where the technology is actually at. While doing it that way would of course be preferable, the downside is that you will have to wait decades (or longer) for us to develop the required technology. I suspect that for stays of a few years or so it'd still be cheaper just to send the consumables, the heavy machinery required for full self-sufficiency would be extensive. At the moment any outpost on Mars would be getting all consumables from Earth. We're capable of fielding a semi-closed life support system, but all that will do is reduce the mass of life support consumables, it won't eliminate it. Fully closed life support is still a research topic, not actual tech. A permanent outpost would require regular and continuous resupply, which is where Mars One's business plan starts to look highly dubious IMO.
  8. Conventional weapons in space are allowed btw, it's only nuclear ones that are prohibited. The USAF has expressed considerable interest in a transatmospheric strike capability over the years. They like the idea of being able to hit anywhere on the globe with a vehicle based in the US that doesn't require hefty tanker support en route.
  9. Yep, like I said: it's useful, but it isn't magic.
  10. What kind of annoys me is that even though I've got a PV array the regulations say it has to switch itself off if it detects the grid disappearing. I know it's to stop me from zapping the lads doing repairs, but it still just doesn't seem right. I don't see why the regs don't allow me to install an islanding isolator if it was reasonably failsafe. So even though I've got a mini power station on my roof, if the grid goes down I have no power either. Stupid.
  11. To be fair, if you're going to try and correct someone else's grammar and then you stuff the grammar up yourself, you're fair game.
  12. There are physical reasons why increasing processing power is synonymous with miniaturisation. There's a limit to how big the hardware capable of running something like an AI in real time could be.
  13. It would be no more dangerous than a human having the same goal. I'm not suggesting that an AI should be programmed or instructed to maximise profits at the expense of everything else, that's a leap you've made there. However an AI working for a commercial operation would be expected to do its job competently. That means making money for the company. If you're expecting that anybody would invent an AI to do anything other than do its job as efficiently as possible then you're going to be disappointed. The money behind the research will want to see a return on investment. A desire for self-preservation is a pretty fundamental requirement for a machine intelligence. If they're our equals in intelligence, could we ethically do anything else? I'm quite happy with the idea of AIs being given rights if they proved themselves worthy, but I totally understand that a lot of people would have a huge problem with that. That's quite a pessimistic view of it. Anti-monopoly laws exist because we behave better and are more productive in an environment where there are balances on any one entity's power. It's the same reason we build political power structures that have balances. We would be extremely foolish not to balance strong AIs by similar methods. Competition is good for productivity anyway. I think you underestimate the ambition of people in the finance sector. On the contrary, it's the only valid measure. Hence the Turing Test. You can't ever know what another person thinks, or how they think. All you can ever know is what (as a thinking being) they do. What else is there? There are already various machines that can do most of what we can do. The use of machines to do work doesn't diminish the economy. On the contrary, it improves productivity and led to things like the Industrial Revolution. Machines have made us richer, there's no reason I can see to think why increasing automation would make us poorer.
Increasing efficiency by removing humans from hands-on work doesn't hurt the economy, because everybody else benefits from the increased efficiency. Flight deck crews on airliners used to be four, then three, now it's two. One day it'll be zero. That sucks if you're a pilot, but it's a bonus for all the passengers as they get cheaper tickets and more efficient planes. The net result is positive. Even if machines were able to replace every human job, the economy would still keep ticking just fine, the machines would just be running it all. That's not what I said. I said that extrapolating from past and current trends towards an asymptote is pretty much always wrong. Malthus got it wrong about population, Kurzweil is wrong about computers. In the real world exponential growth doesn't continue indefinitely, it always flattens off. There are always forces that curb the growth of anything before it becomes all-consuming, we're just too early in the curve to be able to detect the influence of what that'll be.
  14. Indeed. Pretty sure I've never launched anything with seven mainsails, although five was pretty standard.
  15. Why? An AI working for a bus company would be expected to maximise profits for the company. Doing otherwise could well be terminal for it. At the very least it would find itself out of a job and replaced by one that can do the job properly. AIs won't exist in a parallel universe with different rules, if they're to be granted any rights they'll be expected to shoulder the same responsibilities as humans. Part of that means obeying the law, including anti-monopoly laws. They'll also be expected to do the job they were created or hired for, just like the rest of us. Nope, it'd be no different to the world we live in now. My money is actually on the first real AIs coming out of the banking sector. They're already heavily invested in researching complex software for algorithmic trading. They've got deep pockets, are highly motivated, and have a lot of amazing talent. Machines that can understand immense complexity and interpret the actions and motivations of humans via media reports would be very lucrative. The only actual way to judge someone's thought process is functionally; by their behaviour. If their behaviour was similar to a human's then their thought process effectively is too. Yes, it is impossible, as I mentioned above. Ah, now you're shifting the goalposts. If you're abstracting the "universal doer" out to become any form of automation involved in labour then we're already most of the way there. For virtually any narrowly defined task there are already machines that can outperform a human, that's why we've automated so much already. However, it's not always practical or economic to automate everything, for example there are many car assembly lines that have switched from robots back to humans. Automation has affected the labour market, but this "universal doer" concept has not come to be, and I personally put it alongside other doom-mongering predictions of the future where someone extrapolates from a current trend and suggests it leads to an asymptote.
History shows us it never actually works like that.
  16. Indeed, I think that's exactly the kind of thing AIs will do if we manage to invent them. They'll run stuff for us. Machines are excellent at administration and analysis, once we have ones that are able to understand extreme levels of complexity and some of the nuances of human behaviour then there's no reason why they shouldn't be organising stuff. They'd probably be really good at it. They wouldn't be operating without oversight of course, any more than a human member of an organisation does. Everybody is accountable to somebody.
  17. Why do you think that? Why would two AIs running competing bus companies in the same city have no competing interests? Or for that matter, why would the AI controlling a fleet of bomber drones trying to level that city have the same interests as either of them? We can assume that some of them will be as alike to human psychology as we can make them. One of the goals of AI research is to create machines that can interact with humans in a way that humans find naturalistic and fulfilling. Even if they didn't actually think the same way as us, they would be made to seem as if they did. That still proves my point that the "universal doer" could only ever satisfy a segment of demand. I think I might actually understand it better than you. The thing is that it's not supposed to be a real entity. It's an abstract idea designed for a thought experiment. It's an idealised thing, like a frictionless surface or a perfect black body in a physics problem. Its ridiculous perfection is supposed to simplify the problem to allow discussion to focus on something else. In reality a universally optimised entity couldn't logically exist, because there are so many tasks with contradictory requirements. It can't be both very big and very small, or very light and very heavy. Summers isn't a technologist talking about something feasible, he's an economist who was trying to make a very meta point about high-level economics.
  18. It's well beyond our technology. 3D printing isn't suitable to replace every manufacturing process, despite what some of the more excited voices in the press might be saying. It's analogous to the invention of the CNC machine; significant and very welcome, but not a game-changer.
  19. Read this post from earlier in this thread.
  20. As people have said, astronauts know when to expect major events during ascent like engines firing. Also, it really doesn't matter. When you're lying on your back you can easily take the acceleration both in terms of jolt and absolute g, whether you anticipate it or not. I've ridden back seat in a combat aircraft. I think having control of the stick would probably help a bit, but your amount of g tolerance, the direction, rate and magnitude of g would count for more. I went green even though I had the g suit and was straining like a mother hubbard, there's only so much you can do even if you know what's coming!
  21. Yup. Unfortunately it made the pilots crane forward to get good visibility, and that's not healthy when you're hanging the weight of six heads off it. "Viper neck" is a real thing. That just refers to the maximum that the airframe is built to take (minus safety margin), you wouldn't normally be pulling that much even in ACM. Pull too much G and you have to ground the aircraft, which is why they've got it on the HUD.
  22. Not really, fighter pilots experience greater changes in acceleration of that kind all the time, and they're sitting up moving their heads around. Astronauts are going to be lying on their backs with their heads nicely restrained. Having said that, jet knucks do get strong necks. The reason astronauts are lying on their backs is that G is much easier to tolerate if it's acting along that axis, due to the fact the heart-to-brain distance along the acceleration vector is effectively zero. It's G acting vertically that sucks, especially negative G. I've taken about 5ish sustained positive G but felt that 1 negative was worse. Lying on your back you could take a ton of G for a sustained period and you'll be fine, especially if you don't have to do anything like move your arms. Jolt force (ie: rate of change of acceleration) in rockets isn't enough to do any harm either.
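The heart-to-brain point can be put in rough numbers. A minimal sketch of the hydrostatic pressure drop between heart and brain (using an assumed ~0.30 m seated heart-to-brain distance and textbook blood density, not flight-surgeon data):

```python
RHO_BLOOD = 1060.0  # approximate blood density, kg/m^3
G0 = 9.81           # standard gravity, m/s^2

def brain_pressure_drop_mmhg(g_load: float, heart_brain_m: float) -> float:
    """Hydrostatic pressure drop from heart to brain along the G vector:
    delta_P = rho * g0 * n * h, converted from pascals to mmHg."""
    pascals = RHO_BLOOD * G0 * g_load * heart_brain_m
    return pascals / 133.322

# Sitting upright at 5 G, heart-to-brain column ~0.30 m:
print(f"upright: {brain_pressure_drop_mmhg(5, 0.30):.0f} mmHg")
# Lying on your back, the column along the acceleration vector is ~0:
print(f"supine:  {brain_pressure_drop_mmhg(5, 0.0):.0f} mmHg")
```

At 5 G upright the drop works out to roughly 120 mmHg, on the order of the entire arterial pressure, which is why blood stops reaching the head; supine, the column length is near zero and the drop vanishes regardless of the G load.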
  23. It probably doesn't in nature. We made it, yay us! TBF it does seem to have some industrial uses, cleaning all the guff off the walls of CVD chambers.
  24. Do so, they're really good. Banks was a great author, he did a lot of good non-sci-fi as well. The characters in the books pursue a range of activities to give their life meaning. Some work for the sake of it, some travel, some choose to do work they consider important such as serving in the intelligence and foreign relations services where they interact with other civilisations. Some just play. I suspect people would still choose to do something they felt was meaningful. There was a fascinating TED talk about what motivates us to work. The bottom line is that people get very little of their motivation from financial reward, they mostly want to feel a sense of agency and give their life some meaning. So even if people didn't have to work to survive, most probably still would do something they felt was productive. Many of course wouldn't, but that's a problem we face in our pre-scarcity society too.
  25. If you read Iain M Banks' Culture novels (which are set in a post-scarcity society run by AIs) one of the criticisms outsiders make of The Culture is that its human population are essentially just pets. That doesn't actually stop the humans from living interesting, fulfilling lives. The starships for example, are highly intelligent and don't need crews, but humans volunteer to ride along in order to have adventures. The AIs in the books do seem to enjoy human company though, and if you think about it, why would any society develop AIs that didn't enjoy their company?