The main question for any AI is its relation to other AIs in the universe. So it should somehow learn whether any exist and, if not, why. The best way to do this is to model the development of AIs on different planets. I think this would include billions of simulations of near-singularity civilizations.
Any successful FAI would create many more simulated observers, in my scenario. Since FAI is possible, it’s much more likely that we are in a universe that generates it.
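As a toy illustration of why this follows, here is a self-indication-style update. All the numbers are assumptions invented for the sketch, not claims from the discussion: a universe that generates an FAI also contains the FAI's many simulated observers, so most observers find themselves in such a universe.

```python
# Toy anthropic update (all numbers are illustrative assumptions).
# Prior: a universe produces a successful FAI with probability p.
# A universe without an FAI hosts n_base observers; an FAI universe
# hosts those observers plus n_sim simulated ones.

p = 0.1          # assumed prior probability that a universe generates an FAI
n_base = 1e10    # assumed observer count without an FAI
n_sim = 1e15     # assumed simulated observers an FAI creates

# Weight each hypothesis by how many observers it contains (SIA-style):
w_fai = p * (n_base + n_sim)
w_no_fai = (1 - p) * n_base

posterior_fai = w_fai / (w_fai + w_no_fai)
print(f"P(we are in an FAI-generating universe) = {posterior_fai:.4f}")
```

Even with a modest prior (0.1 here), the huge number of simulated observers pushes the posterior close to 1; the qualitative conclusion is insensitive to the exact figures.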
Many extinction scenarios will be checked in such simulations, and even if they pass, they will be switched off.
But we will simply continue on in the simulations that weren’t switched off. These are more likely to be friendly, so it would end up the same.
But I don't understand why an FAI should model only people living near the singularity.
It doesn’t. People living post-singularity would be threatened by simulations, too. Assuming that new humans are not created (unlikely, given that each one has to be simulated countless times), most of them will have been born before it took place. Why not begin the simulation there?
After thinking more about the topic while working on the simulation map, I arrived at the following idea:
If infinitely many FAIs exist in an infinite world, none of them could change the landscape of the simulation distribution, because each one's share of all simulations is infinitely small.
So we would need acausal trade between an infinite number of FAIs to really change the proportion of simulations. I can't say that it is impossible, but it may be difficult.
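The dilution argument can be put in a minimal sketch (the simulation counts and the coalition fraction are assumptions for illustration): a single FAI's share of all simulations shrinks toward zero as the number of FAIs grows, while a coordinating coalition that includes a fixed fraction of all FAIs keeps a fixed share no matter how many there are.

```python
# Share of the simulation distribution controlled by one FAI versus a
# coordinating coalition, as the number of FAIs n grows.
# Assumes (for simplicity) that every FAI runs the same number of simulations.

def single_share(n: int) -> float:
    """One FAI's share of all simulations: s / (n * s) = 1 / n."""
    return 1 / n

def coalition_share(n: int, f: float) -> float:
    """Share held by a coalition comprising a fraction f of all FAIs:
    (f * n * s) / (n * s) = f, independent of n."""
    return f

for n in (10, 10_000, 10_000_000):
    print(f"n={n}: single FAI {single_share(n):.2e}, "
          f"coalition (f=0.3) {coalition_share(n, 0.3):.2f}")
```

The single share goes as 1/n and vanishes in the infinite limit, which is why only coordination (e.g. acausal trade) can move the overall proportion.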
OK, I agree with you that an FAI would invest in preventing the Boltzmann brains (BBs) problem by increasing the measure of existence of all humans (if it finds this useful and does not find a simpler method) - but in any case such an AI must dominate the measure landscape, as it exists somewhere.
In short, we are inside (one of the) AIs that try to dominate the total number of observers. And most likely we are inside the most effective of them (or a subset of them, as there are many). The most reasonable explanation for such a desire to dominate the total number of observers is Friendliness (as we understand it now).
So, do we have any problems here? Yes: we don't know what the measure of existence is. We also can't predict the landscape of all possible goals for AIs, and so we can only hope that the AI is Friendly, and that its friendliness is a really good one.
Sorry for taking such a long time to respond.