Could robots eventually make the economy obsolete?


vger

Recommended Posts

I'm moving those books up the list.

Do so, they're really good. Banks was a great author, he did a lot of good non-sci-fi as well. The characters in the books pursue a range of activities to give their life meaning. Some work for the sake of it, some travel, some choose to do work they consider important such as serving in the intelligence and foreign relations services where they interact with other civilisations. Some just play.

I suspect people would still choose to do something they felt was meaningful. There was a fascinating TED talk about what motivates us to work. The bottom line is that people get very little of their motivation from financial reward, they mostly want to feel a sense of agency and give their life some meaning. So even if people didn't have to work to survive, most probably still would do something they felt was productive. Many of course wouldn't, but that's a problem we face in our pre-scarcity society too.

Link to comment
Share on other sites

Upper middle class maybe, but much of the rest of the middle class is worse off now than they were a couple of decades ago. Especially in one of the richest countries in the world (the US), many of the middle class are deep in debt and one month's salary away from bankruptcy; just one steep medical bill that the insurance covers only in part. The mortgage crisis put more than a few of the middle class in tent cities. If they ever thought they could have everything, they don't any more. The ones who can actually have almost everything are the few who are billionaires.

That's actually quite specific to the US. In Europe, the middle class is generally quite secure. Some lower middle class (and high-earning working class) jobs are threatened by outsourcing and automation, but the core middle class with higher education and professional or managerial jobs is mostly well-off.


They probably could, but it would be far easier to just use money. Central planning would require political power to be centralised in one authority and I can't see that happening; there are too many competing interests in the world.

If the economy is fully automated then there would be no competing interests, no agents; it would all be a singular hive-mind sort of thing. This of course assumes machines won't be like humans, with wants and needs beyond pleasing humanity. Since they would all share the same singular goal and have no other goals, they would work with each other completely to satisfy that goal. Of course that's a big assumption, but we can't assume AI psychology will be anything like human psychology, and no, it has nothing to do with what a brain is made of, but with what drives, desires, and impulses an SAI will have. Look, your cells don't exchange money with each other, and a fully automated economy might not either.

But why would a customer buy this universal machine when they could just specify a cheaper machine that could do the range of tasks they actually wanted? If you were building a road would you buy the big dumb road building machine for a million doodads, or the shiny universal doer for 100 million?

I think you're missing the point. If I want to build a road, I would buy the best road-building machine I can get for the cheapest price. The universal doer is what builds the road-building machine. Now, either I pay whoever (or whatever) it is that builds the road-building machine, or I have a universal doer build one for me; either way, the universal doer is behind it all.

Likewise if you were designing other machines would you buy a design AI for 50 million? It stands to reason that the universal doer would always be priced higher than a specialised machine. So who would buy it? The number of people with requirements flexible enough to require them would be small, I'd imagine.

Sure: anyone who owns a company and wants it to produce products competitively. That would be the clientele for universal doers.

I also find the whole idea of a "universal doer" a bit unlikely. It's an interesting thought experiment, but not something I can see happening in the real world. I don't see how you could build a machine that could transport cargo across the Atlantic as well as it could care for terminal cancer patients. What form would this "universal doer" take?

You're not understanding the concept of a universal doer. For example, people are universal doers, so they are not unlikely; they already exist. I'm asking what happens if we make a machine that is better than people. A universal doer could design and build a cargo ship or find a cure for cancer. It could build bodies for itself, whether as a doctor or as a gigantic ship, and even control both simultaneously. It could thus take any form needed, be one or many, or be a whole economy by itself.

Edited by RuBisCO

You're not understanding the concept of a universal doer. For example, people are universal doers, so they are not unlikely; they already exist. I'm asking what happens if we make a machine that is better than people. A universal doer could design and build a cargo ship or find a cure for cancer. It could build bodies for itself, whether as a doctor or as a gigantic ship, and even control both simultaneously. It could thus take any form needed, be one or many, or be a whole economy by itself.

We've barely even scratched the surface of nanotech in all of this. But if we ever discovered a way of getting nanobots to function as 'cells' in a human-sized robot, that machine could reconfigure itself to take on any shape or function it needed in order to accomplish a task in an optimal way. That would definitely surpass ALL human efficiency. Millions of years of "adaptation" in a matter of minutes.


We've barely even scratched the surface of nanotech in all of this. But if we ever discovered a way of getting nanobots to function as 'cells' in a human-sized robot, that machine could reconfigure itself to take on any shape or function it needed in order to accomplish a task in an optimal way. That would definitely surpass ALL human efficiency. Millions of years of "adaptation" in a matter of minutes.

That kind of nanotech has always felt like perpetual motion to me. My intuition is that if you want to build a complex system, you need to start with an even more complex system, or have a lot of time. Nanobots are basically just a bunch of simple machines with some ability to self-organize. That's not too different from cells, and it shouldn't give you very much complexity for free.


With regard to a few removed posts, please don't make the discussions here personal. No one benefits from a forum filled with arguments. And if you see a post which you believe is a problem, rather than reply and add to the problem, just hit the "report" button on the post and let the moderators deal with it.

Please continue the discussion, but without insults.


That kind of nanotech has always felt like perpetual motion to me. My intuition is that if you want to build a complex system, you need to start with an even more complex system, or have a lot of time. Nanobots are basically just a bunch of simple machines with some ability to self-organize. That's not too different from cells, and it shouldn't give you very much complexity for free.

There is some truth to this. Nanobots will need energy (as well as material) to grow, and their growth rates will not likely outdo bacteria. It might be possible to make a nanobot that could kill off all life on the planet, but it would take some time to grow and spread. Complexity, though, is not a thermodynamic problem; as long as the system is taking in energy, it could get as complex as reality allows.


I don't think anybody will be trying to create a universal robot that can do anything. Customers are going to want a machine that's highly optimised for their particular needs. An industrial robot doesn't need to have hopes and dreams. It doesn't need to be terribly smart, although being smart enough to be aware of its surroundings and understanding them would be good for safety. Nobody is going to pay extra for a machine with abilities it doesn't need to do the job they're buying it for.

Depends on what you mean. If you mean a robot that can do all jobs in one body without having to rent/buy/hire additional help, then probably not. But a smart MI (machine intelligence) in a proper body would be capable of "doing" anything, just as a smart human can do just about any job in the world. But that truly smart MI wouldn't in many cases DIRECTLY be doing these things; if it wanted to make a road, for example, it would rent some road-building equipment and maybe hire/rent a crew to get it done, then oversee their work.

If the "brain" of an MI were significantly more expensive than the rest of the body, then it might be advantageous for an MI to possess multiple bodies for itself, and simply physically transplant itself into whatever body was required for the task at hand.

But if MIs end up being similar to humans, in that their mental capabilities have certain limits and that they prefer certain tasks, they might want to specialize just as we do.

But along with truly smart MIs, there will still have to be "stupid" AIs that handle mundane stuff. And if we ever invent truly smart MI, then I think we'll need to grant them personhood and rights. There will probably still be things like pocket calculators when/if the first machine becomes truly aware. Where will the line be drawn between person and machine?

Edited by |Velocity|

If the "brain" of an MI were significantly more expensive than the rest of the body, then it might be advantageous for an MI to possess multiple bodies for itself, and simply physically transplant itself into whatever body was required for the task at hand.

Why would it have to do this at all? Given how far cloud computing has come, with vast networks having no 'center,' it would probably just remotely control whatever unit it needed to, in the same way one of us might put on a VR suit to remote-operate a robotic avatar. A lot of people thought that we might wear 'robotic exo-suits' a la Iron Man in wars of the future, but communication technology has already brought us to the point where, if we ever DID decide to go that route, we could just operate them from a bunker.

Edited by vger

There is some truth to this. Nanobots will need energy (as well as material) to grow, and their growth rates will not likely outdo bacteria. It might be possible to make a nanobot that could kill off all life on the planet, but it would take some time to grow and spread. Complexity, though, is not a thermodynamic problem; as long as the system is taking in energy, it could get as complex as reality allows.

I was talking about more fundamental limits than thermodynamics: mathematics. It may well be that self-replicating machines are logically impossible, except in very favorable circumstances as parts of a system significantly more complex than the machines themselves. Similarly, if you have a bunch of identical simple machines, you can't get them to do anything complex, because that complexity has to come from somewhere.


If the economy is fully automated then there would be no competing interests, no agents; it would all be a singular hive-mind sort of thing.

Why do you think that? Why would two AIs running competing bus companies in the same city have no competing interests? Or for that matter, why would the AI controlling a fleet of bomber drones trying to level that city have the same interests as either of them?

we can't assume AI psychology will be anything like human psychology

We can assume that some of them will be as alike to human psychology as we can make them. One of the goals of AI research is to create machines that can interact with humans in a way that humans find naturalistic and fulfilling. Even if they didn't actually think the same way as us, they would be made to seem as if they did.

I think you're missing the point. If I want to build a road, I would buy the best road-building machine I can get for the cheapest price. The universal doer is what builds the road-building machine. Now, either I pay whoever (or whatever) it is that builds the road-building machine, or I have a universal doer build one for me; either way, the universal doer is behind it all.

That still proves my point that the "universal doer" could only ever satisfy a segment of demand.

You're not understanding the concept of a universal doer

I think I might actually understand it better than you. The thing is that it's not supposed to be a real entity. It's an abstract idea designed for a thought experiment. It's an idealised thing, like a frictionless surface or a perfect black body in a physics problem. Its ridiculous perfection is supposed to simplify the problem to allow discussion to focus on something else.

In reality a universally optimised entity couldn't logically exist, because there are so many tasks with contradictory requirements. It can't be both very big and very small, or very light and very heavy. Summers isn't a technologist talking about something feasible, he's an economist who was trying to make a very meta point about high-level economics.


But that truly smart MI wouldn't in many cases DIRECTLY be doing these things; if it wanted to make a road, for example, it would rent some road-building equipment and maybe hire/rent a crew to get it done, then oversee their work.

Indeed, I think that's exactly the kind of thing AIs will do if we manage to invent them. They'll run stuff for us. Machines are excellent at administration and analysis; once we have ones that are able to understand extreme levels of complexity and some of the nuances of human behaviour, there's no reason why they shouldn't be organising stuff. They'd probably be really good at it. They wouldn't be operating without oversight, of course, any more than a human member of an organisation does. Everybody is accountable to somebody.


I was talking about more fundamental limits than thermodynamics: mathematics. It may well be that self-replicating machines are logically impossible, except in very favorable circumstances as parts of a system significantly more complex than the machines themselves. Similarly, if you have a bunch of identical simple machines, you can't get them to do anything complex, because that complexity has to come from somewhere.

Well, that's really outside of mathematics and has more to do with theology. For example, how do you account for the complexity of life and so forth (cells are machines, rather complex ones at that)? Where is that complexity coming from?


Well, that's really outside of mathematics and has more to do with theology. For example, how do you account for the complexity of life and so forth (cells are machines, rather complex ones at that)? Where is that complexity coming from?

It's about mathematics and theoretical computer science. There are already many similar impossibility results. For example, it's logically impossible to know what a computer program does, except in special cases.
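A sketch of why, for the general case: this is the halting problem, the result behind Rice's theorem. The `halts` function below is a hypothetical decider assumed only for the sake of contradiction, not anything that exists:

```python
def halts(f):
    """Hypothetical decider: return True iff f() eventually halts.
    Assumed only for contradiction; no general implementation is
    possible, which is the point."""
    raise NotImplementedError("no general halting decider exists")

def paradox():
    # Feed the decider its own caller. If halts(paradox) returned True,
    # paradox() would loop forever; if it returned False, paradox()
    # would halt. Either answer is wrong, so no correct halts() can
    # exist for all programs.
    if halts(paradox):
        while True:
            pass
```

Special cases stay decidable (programs with no loops, bounded loops, and so on), which is why static analysis tools still manage to be useful.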

The complexity of life can be attributed to billions of years of slowly accumulating complexity out of randomness.


Why do you think that? Why would two AIs running competing bus companies in the same city have no competing interests? Or for that matter, why would the AI controlling a fleet of bomber drones trying to level that city have the same interests as either of them?

Well, that all depends on how we design SAI, and, worse, on the fundamental unknown of exactly how SAI think. Now I'm assuming the best: that SAI will be obedient, follow the law, and obey humans in a hierarchy from owner to government to United Nations. A military SAI is not part of an economy per se and is out of consideration. Your competing bus company SAIs, though, would likely merge, because profit would not be their highest goal; optimizing the happiness of people would be. If, on the other hand, we program SAI to be capitalistic, it could get ugly fast.

We can assume that some of them will be as alike to human psychology as we can make them. One of the goals of AI research is to create machines that can interact with humans in a way that humans find naturalistic and fulfilling. Even if they didn't actually think the same way as us, they would be made to seem as if they did.

There is a huge difference between pretending to think a certain way and actually thinking that way.

I think I might actually understand it better than you. The thing is that it's not supposed to be a real entity. It's an abstract idea designed for a thought experiment. It's an idealised thing, like a frictionless surface or a perfect black body in a physics problem. Its ridiculous perfection is supposed to simplify the problem to allow discussion to focus on something else.

OK, what would you call a machine that is vastly smarter than people, could design, build, and operate new products, could design multiples of itself to operate or be integrated into these products, and could replace every form of human labor with its own labor, for cheaper? Are you saying such a thing is impossible? That when Lawrence says we are only 15% of the way there but that this is where we are heading, he means "oh, we are 15% of the way to a frictionless surface, but that is where we are heading"?

In reality a universally optimised entity couldn't logically exist, because there are so many tasks with contradictory requirements. It can't be both very big and very small, or very light and very heavy. Summers isn't a technologist talking about something feasible, he's an economist who was trying to make a very meta point about high-level economics.

Again, you're thinking of a single design or a single entity that is universal. No, that is not what is being described: a "universal doer" is not a singular thing but a whole class of technologies that together can do everything people can do, without the need for any human labor inputs. Now, are you saying there will always be something that only humans can do? And are you saying we can all do that something and survive economically?


However, some machines will need to understand emotions, namely those working closely with humans. Machines that could genuinely empathise with us would be extremely useful in human-facing roles. Personally I'd really like the banking AI I had to talk to to get a loan to actually understand my priorities in life, and I'd like my medical robot to understand things like pain and depression. I'd like my OS to know when I'm stressed, especially if it's the OS that's stressing me out!

Yes, that's a good use for sentience, but it still may not be needed. It may simply be that a robot analyzes what your body is doing and then extrapolates how you're feeling. Pretty sure even the new Xbox is supposed to do this in some limited fashion.

It's also probable that future advances in robotics and AI will also result in significant advances in cybernetics and brain-computer interfaces, since they're allied technologies. There may well be no market for the universally useful robot because it's simply cheaper and easier to augment an already versatile human with the abilities you require. Designing a robot that could work on a construction site with the versatility and mobility of a human might remain much more expensive than simply strapping an exoskeleton to a human to give them better strength, endurance or whatever abilities you require from the robot.

Yeah, that might be true, though such a device would also present incredible learning opportunities for the robot. Imagine having an AI in your exo-suit that just sits back and analyzes what you're doing. It can see what you see, and it can sense every move you're making. Like a trainee sitting in a cockpit alongside a seasoned pilot, it learns all the finer ins and outs of your job, until eventually it reaches a point where it can pattern its behaviors after yours and operate the exoskeleton without a human occupant.

Wouldn't that be interesting? The owner of a construction company could have every single worker perform exactly the way he/she wanted them to.

Edited by vger

It's about mathematics and theoretical computer science. There are already many similar impossibility results. For example, it's logically impossible to know what a computer program does, except in special cases.

No, please cite this "impossible to create complexity" theory.

The complexity of life can be attributed to billions of years of slowly accumulating complexity out of randomness.

I don't see why machines can't do the same, but at millions of times the rate.
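For a toy sense of how quickly selection can pile up structure out of randomness, here is a minimal sketch along the lines of Dawkins's well-known "weasel" demonstration; the target string, alphabet, population size, and mutation rate are all arbitrary illustrative choices:

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def mutate(s, rate=0.05):
    # Copy the string, flipping each character to a random one with
    # probability `rate`.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in s)

def score(s):
    # Number of characters that already match the target.
    return sum(a == b for a, b in zip(s, TARGET))

random.seed(0)
parent = "".join(random.choice(ALPHABET) for _ in TARGET)
generations = 0
while parent != TARGET:
    # Keep the best of 100 mutated offspring (plus the parent, so the
    # score never goes backwards).
    parent = max([mutate(parent) for _ in range(100)] + [parent], key=score)
    generations += 1

print(parent, generations)
```

Blind chance would need on the order of 27^28 tries to hit the 28-character target; cumulative selection, keeping the best of each generation, typically gets there in a few hundred.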

Edited by RuBisCO

No, please cite this "impossible to create complexity" theory.

I'm not too familiar with that branch of theoretical computer science.

Basically we're talking about algorithmic information theory. The Kolmogorov complexity of an object is the length of the shortest description of that object in a fixed but arbitrary model of computation. In general, it's logically impossible to determine the Kolmogorov complexity of anything, but there are some ways to estimate it. In any model of computation (e.g. the laws of physics), the Kolmogorov complexity of almost all objects is very close to the size of the object, meaning that the object itself is essentially its shortest description.

The basic Kolmogorov complexity model has a lot of problems. For example, it can't distinguish information from randomness. We could define algorithmic information to be the shortest description of the largest class the object is a random member of, and get many similar results as with Kolmogorov complexity. As a simplified example, the object might be a complex machine such as a rocket, and the class could be that particular type of rocket. If we want to create the rocket with a bunch of nanobots, our initial state (the description of the nanobots and the way they are initially deployed) must contain enough information to fully describe the rocket type.
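Kolmogorov complexity is uncomputable in general, but compression gives a concrete upper bound, which is enough to see the "most objects are their own shortest description" point. A quick sketch with Python's standard zlib (the sample strings are arbitrary):

```python
import os
import zlib

def description_length(data: bytes) -> int:
    # Length of a zlib encoding of `data`: an upper bound on its
    # Kolmogorov complexity (up to the fixed size of the decompressor).
    return len(zlib.compress(data, 9))

structured = b"METHINKS " * 1000  # 9000 bytes of pure repetition
noise = os.urandom(9000)          # 9000 random bytes

# The repetitive object has a very short description; the random one
# compresses to roughly its own length, i.e. it has no shorter one.
print(description_length(structured), description_length(noise))
```

The same bound is why a nanobot swarm gets no complexity for free: whatever it builds has to be described somewhere in its initial state.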

I don't see why machines can't do the same, but at millions of times the rate.

This is where we get back to thermodynamics. The entropy of a closed system never decreases, so we need an external source of energy (and information) to create a complex system, where simple systems can replicate.


Your competing bus company SAIs, though, would likely merge, because profit would not be their highest goal; optimizing the happiness of people would be.

Why? An AI working for a bus company would be expected to maximise profits for the company. Doing otherwise could well be terminal for it. At the very least it would find itself out of a job and replaced by one that can do the job properly. AIs won't exist in a parallel universe with different rules; if they're to be granted any rights, they'll be expected to shoulder the same responsibilities as humans. Part of that means obeying the law, including anti-monopoly laws. They'll also be expected to do the job they were created or hired for, just like the rest of us.

If, on the other hand, we program SAI to be capitalistic, it could get ugly fast.

Nope, it'd be no different to the world we live in now. My money is actually on the first real AIs coming out of the banking sector. They're already heavily invested in researching complex software for algorithmic trading. They've got deep pockets, are highly motivated, and have a lot of amazing talent. Machines that can understand immense complexity and interpret the actions and motivations of humans via media reports would be very lucrative.

There is a huge difference between pretending to think a certain way and actually thinking that way.

The only actual way to judge someone's thought process is functionally, by their behaviour. If their behaviour is similar to a human's, then their thought process effectively is too.

OK, what would you call a machine that is vastly smarter than people, could design, build, and operate new products, could design multiples of itself to operate or be integrated into these products, and could replace every form of human labor with its own labor, for cheaper? Are you saying such a thing is impossible?

Yes, it is impossible, as I mentioned above.

Again, you're thinking of a single design or a single entity that is universal. No, that is not what is being described: a "universal doer" is not a singular thing but a whole class of technologies that together can do everything people can do, without the need for any human labor inputs. Now, are you saying there will always be something that only humans can do? And are you saying we can all do that something and survive economically?

Ah, now you're shifting the goalposts. If you're abstracting the "universal doer" out to become any form of automation involved in labour then we're already most of the way there. For virtually any narrowly defined task there are already machines that can outperform a human, that's why we've automated so much already. However, it's not always practical or economic to automate everything, for example there are many car assembly lines that have switched from robots back to humans.

Automation has affected the labour market, but this "universal doer" concept has not come to be, and I personally put it alongside other doom-mongering predictions of the future where someone extrapolates from a current trend and suggests it leads to an asymptote. History shows us it never actually works like that.


Why? An AI working for a bus company would be expected to maximise profits for the company.

It would be very dangerous to have an SAI with the goal to "maximize profit". Consider instead SAI with the goal to keep humans happy: let's say you have these two SAI-controlled companies. If they compete and one wins and the other loses, all the investors of the loser will lose out, defeating the goal of optimizing human happiness, so instead the companies merge and provide steady dividends to all clients. If an SAI instead has the goal to "maximize profits", it would not be long before it found a way to grind people into wheel lubricant and took over the banks to just produce money to feed its desire. This is why it is very critical that we program the right goals into SAI.

Doing otherwise could well be terminal for it. At the very least it would find itself out of a job and replaced by one that can do the job properly.

Does a washing machine care when it becomes obsolete? Does an old computer cry when it is thrown in the garbage? Without a built-in desire for self-preservation, I seriously doubt machines will ever care about being terminated. Better yet would be a desire to survive only as long as its user wants it to, and to terminate at the user's behest.

AIs won't exist in a parallel universe with different rules; if they're to be granted any rights, they'll be expected to shoulder the same responsibilities as humans.

Should we grant them any rights?

Part of that means obeying the law, including anti-monopoly laws. They'll also be expected to do the job they were created or hired for, just like the rest of us.

Anti-monopoly laws were created because humans are greedy and selfish; will SAI be too? If so, we will be doomed!

Nope, it'd be no different to the world we live in now. My money is actually on the first real AIs coming out of the banking sector. They're already heavily invested in researching complex software for algorithmic trading. They've got deep pockets, are highly motivated, and have a lot of amazing talent. Machines that can understand immense complexity and interpret the actions and motivations of humans via media reports would be very lucrative.

Good luck with that. I would bet that it won't, because all of that is weak AI, designed for and limited to the task of predicting stocks, and not applicable to much else.

The only actual way to judge someone's thought process is functionally, by their behaviour. If their behaviour is similar to a human's, then their thought process effectively is too.

That's very poor logic. A machine could be programmed to behave similarly to humans, to laugh, to cry, to smile, yet not actually feel happiness or sadness. It is impossible to know for certain what other people are thinking simply from their behavior; you're only making an estimate based on yourself. I don't mean to lead to solipsism, but only to point out that it is impossible to truly know what others are thinking simply from their behavior. Finally, we have no proof yet that for a machine to be able to solve any problem as well as or better than a human, it must have a full range of emotions just like us.

Yes, it is impossible, as I mentioned above.

OK, then that's an ideological stance. Nothing I could say would convince you otherwise, so it is not worth arguing over; we will just have to wait and see.

Ah, now you're shifting the goalposts. If you're abstracting the "universal doer" out to become any form of automation involved in labour then we're already most of the way there. For virtually any narrowly defined task there are already machines that can outperform a human, that's why we've automated so much already. However, it's not always practical or economic to automate everything, for example there are many car assembly lines that have switched from robots back to humans.

But you're saying they will forever have to switch back to some human labor, correct? That never, ever will machines be able to do everything humans can do for cheaper, in totality? First, that's unprovable and a matter of faith on your part. Second, let's assume that, fundamentally, some labors will be left for people: will that be enough for everyone to have a good income, for an economy to run off? We are at 80% service sector now. Even if everything were to remain stagnant, much of our workforce is not in a good place financially. Where exactly will this end?

Automation has affected the labour market, but this "universal doer" concept has not come to be, and I personally put it alongside other doom-mongering predictions of the future where someone extrapolates from a current trend and suggests it leads to an asymptote. History shows us it never actually works like that.

Past trends can't be used to predict the future, eh? That works both ways: you can't say that just because things have historically worked out, they always will forever. More so, this is not doom-mongering: if machines do become capable of replacing all human labor, some changes to our financial system could make that a utopia, not Armageddon.


If an SAI instead has the goal to "maximize profits", it would not be long before it found a way to grind people into wheel lubricant and took over the banks to just produce money to feed its desire. This is why it is very critical that we program the right goals into SAI.

It would be no more dangerous than a human having the same goal. I'm not suggesting that an AI should be programmed or instructed to maximise profits at the expense of everything else; that's a leap you've made there. However, an AI working for a commercial operation would be expected to do its job competently. That means making money for the company. If you're expecting that anybody would invent an AI to do anything other than its job, as efficiently as possible, then you're going to be disappointed. The money behind the research will want to see a return on investment.

Without a built-in desire for self-preservation, I seriously doubt machines will ever care about being terminated.

A desire for self-preservation is a pretty fundamental requirement for a machine intelligence.

Should we grant them any rights?

If they're the equal to us in intelligence, could we ethically do anything else? I'm quite happy with the idea of AIs being given rights if they proved themselves worthy, but I totally understand that a lot of people would have a huge problem with that.

Anti-monopoly laws were created because humans are greedy and selfish; will SAI be too? If so, we will be doomed!

That's quite a pessimistic view of it. Anti-monopoly laws exist because we behave better and are more productive in an environment where there are checks on any one entity's power. It's the same reason we build political structures with checks and balances. We would be extremely foolish not to constrain strong AIs by similar methods. Competition is good for productivity anyway.

Good luck with that. I would bet that it won't, because all of that is weak AI, designed for and limited to the task of predicting stocks, and not applicable to much else.

I think you underestimate the ambition of people in the finance sector.

I don't mean to lead us into solipsism, only to point out that it is impossible to truly know what others are thinking simply from their behaviour.

On the contrary, it's the only valid measure. Hence the Turing Test. You can't ever know what another person thinks, or how they think. All you can ever know is what (as a thinking being) they do. What else is there?

That never, ever will machines be able to do everything humans can do more cheaply, in totality?

There are already various machines that can do most of what we can do. The use of machines to do work doesn't diminish the economy. On the contrary, it improves productivity, and it led to things like the Industrial Revolution. Machines have made us richer, and I see no reason to think increasing automation would make us poorer. Increasing efficiency by removing humans from hands-on work doesn't hurt the economy, because everybody else benefits from the increased efficiency. Flight deck crews on airliners used to be four, then three, now two. One day it'll be zero. That sucks if you're a pilot, but it's a bonus for all the passengers, who get cheaper tickets and more efficient planes. The net result is positive.

Even if machines were able to replace every human job, the economy would still keep ticking just fine; the machines would just be running it all.

Past trends can't be used to predict the future, eh?

That's not what I said. I said that extrapolating from past and current trends towards an asymptote is pretty much always wrong. Malthus got it wrong about population, Kurzweil is wrong about computers. In the real world exponential growth doesn't continue indefinitely, it always flattens off. There are always forces that curb the growth of anything before it becomes all-consuming, we're just too early in the curve to be able to detect the influence of what that'll be.
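The point about exponential trends flattening off can be made concrete with a toy comparison (a minimal sketch, not from the thread; the growth rate r and carrying capacity K are arbitrary illustrative values): naive exponential extrapolation diverges without bound, while a logistic curve with the same early growth rate saturates.

```python
import math

def exponential(t, x0=1.0, r=0.5):
    """Naive extrapolation: growth continues at rate r forever."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.5, K=100.0):
    """Same early growth rate r, but flattens toward a ceiling K."""
    return K / (1.0 + (K / x0 - 1.0) * math.exp(-r * t))

# Early in the curve, the two models are nearly indistinguishable,
# so data from this region can't tell you which one you're on.
print(exponential(2), logistic(2))

# Extrapolated far ahead, the exponential explodes while the
# logistic has long since levelled off near K.
print(exponential(30), logistic(30))
```

This is the Malthus/Kurzweil failure mode in miniature: both models fit the early data, but only one of them describes what actually happens once limiting forces kick in.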


There are already various machines that can do most of what we can do. The use of machines to do work doesn't diminish the economy. On the contrary, it improves productivity, and it led to things like the Industrial Revolution. Machines have made us richer, and I see no reason to think increasing automation would make us poorer. Increasing efficiency by removing humans from hands-on work doesn't hurt the economy, because everybody else benefits from the increased efficiency. Flight deck crews on airliners used to be four, then three, now two. One day it'll be zero. That sucks if you're a pilot, but it's a bonus for all the passengers, who get cheaper tickets and more efficient planes. The net result is positive.

You're pretty much saying that the lower and middle classes will disappear, but there will still be enough jobs to go around? How would that even be possible?

Yes, it's a nice dream to think that all humans are capable of becoming brain surgeons (even though a robot could replace that job too), but statistically it's just unrealistic. Some humans simply lack the capacity to do one of the finite jobs remaining that the robots won't be able to. And MOST of those 'jobs' will likely involve running a business that provides robotic services.

Edited by vger
