Building every possible universe seems like a very direct way of purposefully creating one of the biggest possible S-risks. There are almost certainly vastly more dystopias of unimaginable suffering than there are universes resembling anything like a utopia.
So to me this seems like not just “a bad idea” but actively evil.
Fair enough; my writing was confusing, sorry. I didn't mean that anyone should purposefully create dystopias. I just think it's highly likely they will be created unintentionally, and the best mitigation is an instant switching mechanism between observers/verses plus an AI that genuinely wants to be changed. I'll edit the post to make this obvious; I don't want anyone to create dystopias.
I wrote a response; I'd be happy if you checked it out before I publish it as a separate post. Thank you! https://www.lesswrong.com/posts/LaruPAWaZk9KpC25A/rational-utopia-and-multiversal-ai-alignment-steerable-asi