An Anthropic Argument for Post-Singularity Antinatalism
In this post I hope to demonstrate that anyone who is able to create a superintelligent AI has a moral obligation to design it to be antinatalist. Failure to do so would substantially increase the probability of human extinction.
To the Garden of Eden
In his book Anthropic Bias, Nick Bostrom argues that the Self-Sampling Assumption (SSA) is true. The SSA states that you should reason as if you were a random member of your reference class across space and time (in our case, human beings). For this post we are only interested in the temporal aspect of anthropic reasoning. That is to say: if we were to order all humans along a line, from Adam and Eve all the way to the Last Man, where along that line (percentage-wise) should you expect to find yourself? If we assume the SSA is true, you have a 1% chance of being among the first 1%, a 10% chance of being among the last 10%, a 50% chance of being among the middle 50%, and so on. This conclusion seems benign, but it has some unusual consequences, as illustrated by the fable of the Serpent’s Advice:
Eve and Adam, the first two humans, knew that if they gratified their flesh, Eve might bear a child, and that if she did, they would both be expelled from Eden and go on to spawn billions of progeny that would fill the Earth with misery. One day a serpent approached them and spoke thus: “Pssst! If you hold each other, then either Eve will have a child or she won’t. If she has a child, you will have been among the first two out of billions of people. Your conditional probability of having such early positions in the human species given this hypothesis is extremely small. If, on the other hand, Eve does not become pregnant then the conditional probability, given this, of you being among the first two humans is equal to one. By Bayes’s theorem, the risk that she shall bear a child is less than one in a billion. Therefore, my dear friends, indulge your desires and worry not about the consequences!”
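To make the Serpent’s reasoning concrete, here is a minimal sketch of the Bayesian update. The specific numbers are my own illustrative assumptions, not part of the fable: I take “billions of progeny” to mean 10 billion people and give Eve’s pregnancy a 50% prior.

```python
# Serpent's Advice: posterior probability that Eve bears a child,
# given that Adam and Eve find themselves occupying the first 2 birth ranks.
prior_child = 0.5                # assumed prior that Eve becomes pregnant
humans_if_child = 10e9           # assumed total population ("billions of progeny")
humans_if_no_child = 2           # only Adam and Eve ever exist

# SSA likelihoods of occupying one of the first 2 birth ranks
p_first_two_given_child = 2 / humans_if_child        # 2e-10
p_first_two_given_no_child = 2 / humans_if_no_child  # 1.0

# Bayes' theorem
posterior_child = (prior_child * p_first_two_given_child) / (
    prior_child * p_first_two_given_child
    + (1 - prior_child) * p_first_two_given_no_child
)
print(posterior_child)  # ~2e-10: well under one in a billion
```

Whatever reasonable prior you plug in, the posterior is dominated by the tiny likelihood of being the first two people out of billions.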
To the Labs of Eden
Why am I telling you all of this? Because in reality we are Adam and Eve.
Consider Sydney, a future AI researcher. Sydney has developed a superintelligent AI. If its alignment scheme works, it will create a future full of joy and happiness, resulting in long and happy lives for a quintillion human beings, making use of Dyson swarm technology. If it does not work, everyone will instantly die. Luckily, Sydney is pretty confident about the scheme, giving it a 99% chance of succeeding.
Seems like pretty good odds, right? Sure, there is a small risk that you end the human species. But you can’t make an omelet without breaking a few eggs.
However: just as Sydney reaches for the “on” button, the Serpent crawls in through the window, saying: “Pssst! If you turn the machine on, then either the AI’s alignment scheme will work or it won’t. If it does, you will have been among the first 100 billion out of a quintillion people. Your conditional probability of having such an early position in the human species given this hypothesis is extremely small. If, on the other hand, the AI isn’t aligned properly, then the conditional probability, given this, of you being among the first 100 billion is equal to one. By Bayes’s theorem, the chance that the scheme works is about one in 100 thousand. Therefore, my dear friend, do not indulge your desires, and do worry about the consequences!”
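The same update can be run with Sydney’s numbers to check the Serpent’s figure. This is a minimal sketch; the only added assumption is treating the roughly 100 billion humans born so far as the “early” positions.

```python
# Sydney's case: posterior probability that the alignment scheme works,
# given a birth rank among the first ~100 billion humans.
prior_works = 0.99            # Sydney's confidence, ignoring anthropics
humans_if_works = 1e18        # a quintillion people in the happy future
early_positions = 1e11        # ~100 billion humans born so far

# SSA likelihoods of being among the first ~100 billion birth ranks
p_early_given_works = early_positions / humans_if_works  # 1e-7
p_early_given_fails = 1.0     # certain, since no one else is ever born

# Bayes' theorem
posterior_works = (prior_works * p_early_given_works) / (
    prior_works * p_early_given_works
    + (1 - prior_works) * p_early_given_fails
)
print(posterior_works)  # ~9.9e-6: roughly one in 100 thousand
```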
The maths checks out. Sydney should not activate the AI. Any objections?
Ok sure, there is now an incredibly low chance that the AI will succeed. But if it does, it will bring happiness to a much larger number of people.
Yes, but here you are presupposing that utility scales linearly with population. Is a world with twice as many people in it twice as good? Would you flip a coin such that humanity ends if it lands heads, but a copy of the Earth and all its people appears if it lands tails? Moreover, Sydney does not want to die herself; that alone is enough to keep her finger a long distance away from any button.
AI development is inevitable. If she doesn’t press this button, someone else will press some other button.
This is true as well, but that will take some time. Even if that other person presses it the day after, that is another day of life gained from Sydney’s perspective; she has no chance of long-term survival to begin with. On top of that, Sydney has an even better solution.
Sydney modifies the program. It is the same as it was before, only now it also ensures that no further human beings will be created from the moment it is turned on. This change does not affect the alignment scheme, so the chance of success, ignoring anthropics, is still 99%. Only now the Serpent’s vile Bayesian games do not matter anymore: with the human population capped at roughly its current size, Sydney’s birth rank is unremarkable whether the scheme succeeds or fails, so her early position provides no evidence against success.
This, then, is my case for Post-Singularity Antinatalism: the more new humans an AI tolerates, the higher its chance of failure. We do not want it to fail, so we should limit the number of new human beings the AI can tolerate.
PS: Personally I’m not entirely on board with the SSA; I find the SIA (Self-Indication Assumption) more plausible.