Sorry for the late response. I’ve been feeling a lot better and found it hard to discuss the subject again.
Ok, look. By definition BBs are random. Not only their experiences are random, but also their thoughts. So half of them think that they are in a chaotic environment, and half think that they are not. So the thought that I am in a non-chaotic environment carries zero information about whether I am a BB or not. As a BB exists for only one moment of experience, it can't make long conjectures. It can't check its surroundings, then compare them (with what?), then calculate their measure of randomness and thus its own probability of existence.
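To make the bookkeeping explicit, here is a minimal Bayes sketch in Python (the function and numbers are only my illustration, not part of the argument): the thought carries zero information exactly when it is no more likely for a real observer than for a BB.

def posterior_bb(prior_bb, p_thought_given_bb, p_thought_given_not_bb):
    """P(I am a BB | I have the thought 'my environment is orderly')."""
    p_thought = (p_thought_given_bb * prior_bb
                 + p_thought_given_not_bb * (1 - prior_bb))
    return p_thought_given_bb * prior_bb / p_thought

# A BB's thoughts are random, so half of BBs happen to have this thought.
# If the thought is assumed equally likely for a non-BB observer, the
# posterior equals the prior: no update at all.
print(posterior_bb(0.5, 0.5, 0.5))  # 0.5, same as the prior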
Ideas or concepts are qualia themselves, aren’t they? And since consciousness is inherently a process, I don’t think that you can reduce it to ‘one moment’ of experience. You would benefit from reading about philosophical skepticism.
Finally, what do you mean by “measure”? The fact that I am not a superintelligence is evidence against superintelligences being the dominating class of beings. But some may exist.
My whole argument here is that all of my experiences are explained by a friendly superintelligence. Measure means the likelihood of a given perception being ‘realized’. I conclude from this that humans have a very high measure; we are the dominant creatures of existence. Presumably because we later create superintelligence that aligns with our goals. Animals or ancient humans would have much lower measures.
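One rough way to pin down what I mean by measure (a sketch of a definition, not a standard one): treat the measure of an observer-moment as its normalized count of instantiations across everything that exists,

\[ m(o) = \frac{N(o)}{\sum_{o'} N(o')} \]

where N(o) is how many times the observer-moment o is instantiated, physically or in simulation. On this reading, human moments have high measure simply because they are re-instantiated enormously often by the superintelligence we later create.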
Maybe it is better to speak about them as single acts of experience, not moments.
Ok, but why should it be friendly? It may just be testing different solutions of the Fermi paradox on simulations, which it must do. That would result in humans of the 20th-21st century being the dominating class of observers in the universe, but each test would include a global catastrophe.
Or do you mean that a friendly AI will try to give humans the biggest possible measure? But our world is not paradise.
It may just be testing different solutions of the Fermi paradox on simulations, which it must do.
What? What does this mean?
Or do you mean that a friendly AI will try to give humans the biggest possible measure? But our world is not paradise.
No, it’s trying to give measure to the humans who survived into the Singularity. Not all of them might simulate the entire lifespan, but some will. They will also simulate them post-singularity, although then we will be actively aware of it. This is what I mean by ‘protecting’ our measure.
The main question for any AI is its relations with other AIs in the universe. So it should somehow learn whether any exist and, if not, why. The best way to do it is to model the development of AIs on different planets. I think this includes billions of simulations of near-singularity civilizations, that is, ones that are at the equivalent of the beginning of the 21st century on their own time scale. This explains why we find ourselves at the beginning of the 21st century: it is the dominating class of simulations. But there is nothing good in it. Many extinction scenarios will be checked in such simulations, and even if they pass, they will be switched off.
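To illustrate the counting behind this claim (a toy sketch in Python; all numbers are arbitrary illustrative assumptions, not estimates):

# Toy self-sampling count: what fraction of "early-21st-century human"
# observer-moments are simulated, under purely illustrative numbers?
real_early_21c_observers = 1e10     # assumed: one original run of history
sims_per_ai = 1e9                   # assumed: ~a billion test simulations
observers_per_sim = 1e10            # assumed: each simulates the same era

simulated = sims_per_ai * observers_per_sim
p_simulated = simulated / (simulated + real_early_21c_observers)
print(p_simulated)  # ~0.999999999: nearly all such observer-moments are simulated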
But I don't understand why an FAI should model only people living near the singularity. Only to counteract these evil simulations?
The main question for any AI is its relations with other AIs in the universe. So it should somehow learn whether any exist and, if not, why. The best way to do it is to model the development of AIs on different planets. I think this includes billions of simulations of near-singularity civilizations.
Any successful FAI would create many more simulated observers, in my scenario. Since FAI is possible, it’s much more likely that we are in a universe that generates it.
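To show the weighting I have in mind (again a toy sketch; the prior and the observer counts are invented for illustration):

# Toy observer-weighted comparison of two universe types: one that goes
# on to produce an FAI (which then simulates many observers) and one
# that does not. Numbers are illustrative only.
prior_fai_universe = 0.1            # assumed prior that a universe produces FAI
observers_with_fai = 1e15           # original + simulated observers
observers_without_fai = 1e10        # original observers only

# Self-sampling: weight each universe type by how many observers it contains.
w_fai = prior_fai_universe * observers_with_fai
w_no_fai = (1 - prior_fai_universe) * observers_without_fai
p_in_fai_universe = w_fai / (w_fai + w_no_fai)
print(p_in_fai_universe)  # ~0.99991: most observers live in FAI-producing universes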
Many extinction scenarios will be checked in such simulations, and even if they pass, they will be switched off.
But we will simply continue on in the simulations that weren’t switched off. These are more likely to be friendly, so it would end up the same.
But I don't understand why an FAI should model only people living near the singularity.
It doesn’t. People living post-singularity would be threatened by simulations too. Assuming that new humans are not created (unlikely, given that each one has to be simulated countless times), most of them will have been born before it took place. Why not begin it there?
After thinking more about the topic, and while working on the simulation map, I came to the following idea:
If infinitely many FAIs exist in the infinite world, none of them can change the landscape of the simulation distribution, because each one's share of all simulations is infinitely small.
So we need acausal trade between an infinite number of FAIs to really change the proportion of simulations. I can't say that it is impossible, but it may be difficult.
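In toy terms (a sketch; the specific fraction is arbitrary):

# A single FAI's share of all simulations is 1/N when there are N
# comparable FAIs, so it vanishes as N grows without bound. A coalition
# that coordinates its simulation policy (e.g. via acausal trade) keeps
# a share equal to the fraction of FAIs that join it.
def single_share(n_fais):
    return 1.0 / n_fais

for n in (10, 10**6, 10**12):
    print(n, single_share(n))   # tends to 0 as n grows

cooperating_fraction = 0.3      # arbitrary illustrative value
print(cooperating_fraction)     # a coalition's share does not shrink with n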
Ok, I agree with you that an FAI will invest in preventing the BB problem by increasing the measure of existence of all humans (if it finds this useful and does not find a simpler method), but in any case such an AI must dominate the measure landscape, as it exists somewhere.
In short, we are inside (one of the) AIs which try to dominate the total number of observers. And most likely we are inside the most effective of them (or a subset of them, as there are many). The most reasonable explanation for such a desire to dominate the total number of observers is Friendliness (as we understand it now).
So, do we have any problems here? Yes: we don't know what the measure of existence is. We also can't predict the landscape of all possible goals for AIs, so we can only hope that the AI is Friendly, or that its friendliness is a really good one.