I think something strange is going on here. Suppose we have an AI that will fill the universe with X. Now slide X along the continuous evolutionary tree. Do we get a huge update against an AI that fills the universe with slime mold? What about an AI that fills the universe with Neanderthals? Dolphins?
The only two consistent lines would be:
1) Put every detail you have into the selection. You believe you are randomly selected from the agents who are exactly like you in every last detail. So long as the AI doesn’t run history simulations, you don’t have to make any anthropic updates.
2) Put nothing into the reference class: you are randomly sampled from all the items in the universe. This leads to the strange conclusion that, given a fixed number of copies of you, you are probably in a universe that doesn't contain many other entities you could be. Given that our universe is full of stuff, this seems like a bad prediction. (I.e. this version of SSA is very surprised that you are a human, not a random hydrogen atom.)
And even granting the premise, suppose you build an AI that fills the universe with simulations of people who think they are putting the finishing touches on their ASI. You get a Bayes update of around a billion to one towards that working. (If it didn't work, most randomly chosen people don't think they are AI researchers, so my experience of doing AI research is surprising. But if it does work, almost every human in the universe has that experience.) Assuming the experience of a modern AI researcher is relatively pleasant, both compared to historical human experience and to nonexistence, then both total and average utilitarianism endorse this route. The utilitarian thing to do in this setting is to try to make AI, but also make sure to have a fun time doing it. (This feels like a conclusion that is also self-benefiting, so a motivated-reasoning warning applies here.)
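The billion-to-one figure above can be sketched as a simple likelihood ratio. This is an illustration only: the two observer fractions below are assumed numbers, not anything stated in the argument.

```python
# Sketch of the SSA-style update described above. Under each hypothesis,
# ask: what fraction of observers are "people who think they are finishing
# an ASI"? Both fractions here are illustrative assumptions.

frac_if_works = 0.99   # scenario works: nearly every observer is such a person
frac_if_fails = 1e-9   # scenario fails: assume roughly one observer in a billion

# Bayes factor in favour of "the scenario worked", given that you have
# this experience:
bayes_factor = frac_if_works / frac_if_fails
print(f"Update toward 'it worked': about {bayes_factor:.1e} to one")
```

Any pair of fractions with a ratio near 10^9 reproduces the "around a billion to one" claim; the force of the argument comes from the ratio, not the particular values.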