AI alignment isn’t the only problem. Most people’s values are sufficiently unaligned with my own that I find solving AI alignment unattractive as a goal. Even if I had a robust lever to push, such as donating to an AI alignment research org or a lobbying think tank, and it was actually cost-effective, the end result would still be values unaligned with mine being loaded. So there are two steps rather than one: first, you have to make sure the people who create AI have values aligned with yours, and then you have to make sure that the AI has values aligned with the people creating it.
Frankly, this is hopeless from my perspective. The first step alone is impossible. I know this from years of discussions and debates with my fellow human beings, and from observing politics. The most basic litmus test for me is whether they would force fates worse than death on people who explicitly disagree. In other words, whether suffering is mandatory, or whether people will respect other people’s right to choose painless death as an ultima ratio solution for their own selves (not forcing it on others). This is something so basic and trivial, and yet so existential, that I consider it a question where no room for compromise is possible from my perspective. And I observe that, even though public opinion robustly favors some forms of suicide rights, the governments of this world have completely botched the implementation. And that is just one source of disagreement, the one I choose as a litmus test because the morally correct answer is so obvious and non-negotiable from my perspective.
The upside opportunities from the alleged utopias we could achieve if we get the Singularity right also suffer from this problem. I used to think that if you can just make life positive enough, the downside risks might be worth taking. So we could implement (voluntary) hedonic enhancements, experience machines, and pleasure-wireheading offers to make it worthwhile for those people who want them. These could be so good that they would outweigh the risk, and investing in such a future life could be worth it. But of course those technologies are decried as “immoral” too, by the same types of “moralists” who decry suicide rights. To quote former LessWrong user eapache (https://www.lesswrong.com/posts/e2jmYPX7dTtx2NM8w/when-is-it-wrong-to-click-on-a-cow):
...the “stimmer”’s (the person with the brain-stimulating machine) is distinctly repugnant in a way that feels vaguely ethics-related.
...Anything that we do entirely without benefit to others is onanistic and probably wrong.
There is a lot of talk about “moral obligations” and “ethics” and very little about individual liberty and the ability to actually enjoy life to its fullest potential. People, especially the “moral” ones, demand Sacrifices to the Gods, and the immoral ones are just as likely to create hells as utopias. I see no value in loading their values into an AI, even if it could be done correctly and cost-effectively.
Luckily, I don’t care about the fate of the world in reflective equilibrium, so I can simply enjoy my life with lesser pleasures and die before AGI takes over. At least this strategy is robust and doesn’t rely on convincing hostile humans (outside of deterring more straightforward physical attacks in the near term, which I do with basic weaponry), let alone on solving the AGI problem. I “solve” climate change the same way.