The more I read about simulated humans, the more I’m convinced that a hard ban on simulating new humans and duplicating existing ones is a key part of what separates dystopias too horrible to even grasp, and hyper-existential failures, from sane futures, at least until we have aligned AI.
He’s even right that on utilitarian grounds, it’s hard to argue with an em era where everyone is really happy working eighteen hours a day for their entire lives because we selected for people who feel that way. But at some point, can we make the Lovecraftian argument of “I know my values are provincial and arbitrary, but they’re my provincial arbitrary values and I will make any sacrifice of blood or tears necessary to defend them, even unto the gates of Hell?”
I also think that if we don’t, we quickly run into what we might call… Cenobitical Existential Failures? (The Cenobites are the demons from Hellraiser, who regard excruciating pain as the best thing in the universe.)
Or into a lot of very tiny people who are really happy about hydrogen atoms (or about working overtime).
I’d also strongly argue for making this stand before we select for untold billions of people who don’t care whether they live or die, and they outcompete everyone who actually does care.
Now take it even further, and imagine this is what’s happened everywhere. There are no humans left; it isn’t economically efficient to continue having humans. Algorithm-run banks lend money to algorithm-run companies that produce goods for other algorithm-run companies and so on ad infinitum. Such a masturbatory economy would have all the signs of economic growth we have today. It could build itself new mines to create raw materials, construct new roads and railways to transport them, build huge factories to manufacture them into robots, then sell the robots to whatever companies need more robot workers. It might even eventually invent space travel to reach new worlds full of raw materials. Maybe it would develop powerful militaries to conquer alien worlds and steal their technological secrets that could increase efficiency. It would be vast, incredibly efficient, and utterly pointless. The real-life incarnation of those strategy games where you mine Resources to build new Weapons to conquer new Territories from which you mine more Resources and so on forever.
Economic growth has stopped correlating with nearly all measures of wellbeing for the population in first-world nations. It seems we are already more than halfway there.