I came to a similar conclusion from a different angle. Instead of the past, I considered the future, specifically the future of automation. There is a popular pessimistic scenario in which machines take over human jobs, leaving everyone (save for the tycoons who own the machines) unable to provide for themselves. This prediction is criticized by pointing out that past automation created better jobs to replace the ones it took away. That criticism is in turn countered by the claim that past automation mainly replaced our muscles, whereas we are now building automation that replaces our brains, which would make humans completely obsolete. And now that I’ve read this post, I realize that those better jobs created by automation still left many people behind, so wouldn’t better automation leave even more people behind?
So developing automation raises an ethical problem: even if it benefits society as a whole, is it really okay to sacrifice all these people to get those benefits?
My ethical framework is based on Pareto efficiency: solutions are only morally acceptable if they are Pareto improvements. I wouldn’t call it “fully consistent”, because it raises the question “Pareto improvement compared to what?”, and by cleverly picking the baseline you can make anything moral or immoral as you wish. But if you hand-wave that fundamental issue away, it yields this vague basic principle:
A solution where everyone benefits is preferable to a solution where some are harmed, even if the total utility of the latter is higher than that of the former.
Sometimes the difference in total utility is very big, and it seems like a waste to throw all that utility away. Luckily, real life is not a simple game-theory scenario with a fixed and very small set of strategies and outcomes. We have many tools to create new strategies or to modify existing ones. And if one outcome generates a huge surplus at the expense of some people, we can take part of that surplus and give it to them, creating a new outcome where we have it all: every individual is better off and the total utility is still greatly increased.
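To make that concrete, here is a minimal sketch with made-up utility numbers (the two-party split, the outcome names, and the figures are all hypothetical): automation alone raises total utility but fails the Pareto test against the status quo, while handing part of the surplus to the displaced workers passes it.

```python
# Toy illustration with hypothetical numbers -- not a model of any real economy.

def is_pareto_improvement(baseline, candidate):
    """True if no one is worse off and at least one person is better off."""
    return (all(c >= b for b, c in zip(baseline, candidate))
            and any(c > b for b, c in zip(baseline, candidate)))

# Utilities as (machine owners, displaced workers), purely illustrative.
status_quo               = (10, 10)
automation_no_transfer   = (40, 4)   # total jumps from 20 to 44, but workers lose
automation_with_transfer = (32, 12)  # part of the surplus is redistributed

print(is_pareto_improvement(status_quo, automation_no_transfer))    # False
print(is_pareto_improvement(status_quo, automation_with_transfer))  # True
print(sum(automation_with_transfer) > sum(status_quo))              # True: total utility still rises
```

The actual numbers don’t matter; the point is only that the transfer turns a utility-increasing outcome into a Pareto improvement.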
Even if a solution without the surplus division could result in more utility overall, I’d still prefer to divide the surplus, just so no one has to get hurt.
And this is where UBI comes in: use a small portion of the great utility surplus we get from automation to make sure that even the people who lose their jobs end up with a net benefit.
But if we apply this to the future, why not apply it to the present as well? Why not use the same principle for the people who were already hurt by automation?