We have additional evidence for BBs: the idea of eternal vacuum fluctuations after heat death, which may give us a very strong prior. Basically, if there are 10^100 BBs for each real mind, it will override the evidence from the non-randomness of our environment.
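If one takes the 10^100 figure at face value, the Bayesian arithmetic behind this can be sketched as follows. Every number here, including the 10^-50 likelihood of a BB seeing an orderly environment, is an illustrative assumption, not an established figure:

```python
# Sketch of the claimed prior (the 10^100 ratio is the discussion's
# assumption, not an established fact). Working in log10 to keep the
# numbers manageable.
log10_prior_odds_bb = 100.0          # log10(N_BB / N_real)

# Hypothetical likelihoods of observing an orderly, non-random
# environment (the 10^-50 figure is purely illustrative):
log10_p_orderly_given_bb = -50.0     # almost no BB sees sustained order
log10_p_orderly_given_real = 0.0     # a real mind essentially always does

# Bayes: posterior log-odds = prior log-odds + log likelihood ratio
log10_posterior_odds_bb = (log10_prior_odds_bb
                           + log10_p_orderly_given_bb
                           - log10_p_orderly_given_real)

print(log10_posterior_odds_bb)  # 50.0: even after observing order,
                                # the odds are still 10^50 : 1 for BB
```

Under these made-up numbers the orderliness evidence is swamped by the prior, which is the "override" being claimed.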
How? The proportion of chaotic minds to orderly minds will never change. Even if there are infinite BBs in the future, it doesn’t alter how likely it is that the ‘heat death’ model is simply mistaken, and that some infinite source of computing is found for us to use.
I agree that sapient beings are more probable because they have many more internal states. But it also means that you and I are in the middle of the IQ distribution in the universe, that is, that no superintelligence exists anywhere. This is grim. It is like a DA for intelligence, and it means that highly intelligent post-humans are impossible.
Whoa whoa whoa. I don’t think that sapient beings having more internal states makes them more likely to be selected. I was talking about the simulation argument I’ve advanced on this thread.
Our current model of the universe makes it seem easy and straightforward for superintelligence to exist. Even if we were to wipe ourselves out, the fact that we live in a Big World means that superintelligence will always be taking most of the measure. This is precisely what I argued on this thread.
Your long example is in fact about aliens who created the DA for themselves. My idea was that you may use mediocrity logic for any reference class from which you are randomly chosen, and you could belong to several such classes simultaneously. But the class of observers who know about the DA is a special class, because it will appear in any alien species and in any thought experiment. This class includes such observers from all possible species, so we may speak about their distribution in the universe. It is also the smallest such class, and it implies the soonest Doom in the DA. Even Carter, who created the DA in 1983, knew this, and as he was at that moment the only member of this class, he felt himself in danger.
Now I understand. But the fact that most humans do not comprehend the DA doesn’t neutralize its effects on humanity, does it?
(I’m beginning to realize what a nightmare anthropics is.)
Ok, look. By definition BBs are random. Not only their experiences but also their thoughts are random. So half of them think that they are in a chaotic environment, and half think that they are not. So the thought "I am in a non-chaotic environment" carries zero information about whether I am a BB or not.
As a BB exists for only one moment of experience, it can't make long conjectures. It can't check its surroundings, then compare them (with what?), then calculate their measure of randomness and thus its own probability of existence.
Finally, what do you mean by "measure"? The fact that I'm not a superintelligence is evidence against superintelligences being the dominant class of beings. But some may exist.
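The "zero information" claim amounts to a condition on a Bayes factor: a thought E tells you nothing about being a BB exactly when P(E | BB) = P(E | real). A minimal sketch, where both the 50/50 split above and the opposing value are labeled assumptions:

```python
def bayes_factor(p_e_given_bb, p_e_given_real):
    """Likelihood ratio for evidence E: > 1 favors the BB hypothesis,
    < 1 favors a real observer, exactly 1 means E is uninformative."""
    return p_e_given_bb / p_e_given_real

# This argument's assumption: a random BB mind produces the thought
# "my environment is orderly" half the time, and the thought is taken
# to be equally likely for a real observer, so it carries no information:
zero_info = bayes_factor(0.5, 0.5)

# The opposing view in this thread: real observers almost always have
# that thought, so the same thought is evidence *against* being a BB:
informative = bayes_factor(0.5, 0.99)

print(zero_info, informative)  # 1.0 (no information) vs ~0.505 (favors "real")
```

The disagreement in the thread is thus entirely about the second argument to `bayes_factor`, not about the Bayesian machinery itself.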
No, my DA version only makes it stronger. Doom is near.
Sorry for the late response. I’ve been feeling a lot better and found it hard to discuss the subject again.
> Ok, look. By definition BBs are random. Not only their experiences but also their thoughts are random. So half of them think that they are in a chaotic environment, and half think that they are not. So the thought "I am in a non-chaotic environment" carries zero information about whether I am a BB or not. As a BB exists for only one moment of experience, it can't make long conjectures. It can't check its surroundings, then compare them (with what?), then calculate their measure of randomness and thus its own probability of existence.
Ideas or concepts are qualia themselves, aren't they? And since consciousness is inherently a process, I don't think that you can reduce it to ‘one moment’ of experience. You would benefit from reading about philosophical skepticism.
> Finally, what do you mean by "measure"? The fact that I'm not a superintelligence is evidence against superintelligences being the dominant class of beings. But some may exist.
My whole argument here is that all of my experiences are explained by friendly superintelligence. Measure means the likelihood of a given perception being ‘realized’. I can conclude from this that humans therefore have a very high measure; we are the dominant creatures of existence. Presumably because we later create superintelligence that aligns with our goals. Animals or ancient humans would have much lower measures.
Maybe it is better to speak about them as single acts of experience, not moments.
Ok, but why should it be friendly? It may just test different solutions of the Fermi paradox in simulations, which it must do. That would result in humans of the 20th-21st century being the dominant class of observers in the universe, but each test will include a global catastrophe.
Or do you mean that a friendly AI will try to give humans the biggest possible measure? But our world is not paradise.
> It may just test different solutions of the Fermi paradox in simulations, which it must do.
What? What does this mean?
> Or do you mean that a friendly AI will try to give humans the biggest possible measure? But our world is not paradise.
No, it’s trying to give measure to the humans that survived into the Singularity. Not all of them might simulate the entire lifespan, but some will. They will also simulate them postsingularity, although we will be actively aware of this. This is what I mean by ‘protecting’ our measure.
The main question for any AI is its relations with other AIs in the universe. So it should somehow learn whether any exist and, if not, why. The best way to do this is to model the development of AIs on different planets. I think this includes billions of simulations of near-singularity civilizations, that is, ones that are at the equivalent of the beginning of the 21st century on their own time scale. This explains why we find ourselves at the beginning of the 21st century: it is the dominant class of simulations. But there is nothing good in this. Many extinction scenarios will be checked in such simulations, and even if they pass, they will be switched off.
But I don't understand why an FAI should model only people living near the singularity. Only to counteract these evil simulations?
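The "dominant class" point above is just observer counting under self-sampling. A toy version, where every count is a made-up placeholder rather than a figure from the discussion:

```python
# Illustrative self-sampling arithmetic; all counts are hypothetical
# placeholders chosen only to make the structure of the argument visible.
n_simulations = 1e9              # "billions" of near-singularity simulations
observers_per_simulation = 1e10  # hypothetical observers in each one
n_unsimulated_observers = 1e11   # hypothetical observers in base-level history

simulated = n_simulations * observers_per_simulation
p_in_near_singularity_sim = simulated / (simulated + n_unsimulated_observers)

print(p_in_near_singularity_sim)  # ~1.0: early-21st-century sims dominate
```

With any counts of this shape, a randomly chosen observer almost certainly finds themselves in a near-singularity simulation, which is the claimed explanation for our own temporal location.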
> The main question for any AI is its relations with other AIs in the universe. So it should somehow learn whether any exist and, if not, why. The best way to do this is to model the development of AIs on different planets. I think this includes billions of simulations of near-singularity civilizations.
Any successful FAI would create many more simulated observers, in my scenario. Since FAI is possible, it’s much more likely that we are in a universe that generates it.
> Many extinction scenarios will be checked in such simulations, and even if they pass, they will be switched off.
But we will simply continue on in the simulations that weren’t switched off. These are more likely to be friendly, so it would end up the same.
> But I don't understand why an FAI should model only people living near the singularity.
It doesn’t. People living postsingularity would be threatened by simulations, too. Assuming that new humans are not created (unlikely given that each one has to be simulated countless times) most of them will have been born before it took place. Why not begin it there?
After thinking more about the topic while working on the simulation map, I found the following idea:
If infinitely many FAIs exist in an infinite world, none of them could change the landscape of the simulation distribution, because its share of all simulations is infinitely small.
So we need acausal trade between an infinite number of FAIs to really change the proportion of simulations. I can't say that it is impossible, but it may be difficult.
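The "infinitely small share" point can be checked with a finite-N sketch (the doubling and the counts below are my own illustrative choices, not anything from the discussion):

```python
# Finite-N sketch of why one FAI's share vanishes. N FAIs each run
# k simulations; one of them doubles its output, and we ask how much
# of the total simulation distribution it now controls.
def share_of_boosted_fai(n_fais, k, boost=2.0):
    total = (n_fais - 1) * k + boost * k
    return boost * k / total

for n in (10, 10_000, 10_000_000):
    print(n, share_of_boosted_fai(n, k=100))
# The share falls roughly as 2/N: as N grows without bound, any single
# FAI's influence on the landscape goes to zero, which is why only
# coordinated (acausal) action by all of them could shift the proportions.
```

If instead every FAI doubles its output in a coordinated way, the total grows but each proportion is unchanged; only an agreement that changes the *mix* of simulations (e.g. more friendly ones per FAI) moves the distribution, which is the role acausal trade would have to play.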
Ok, I agree with you that an FAI will invest in preventing the BB problem by increasing the measure of existence of all humans (if it finds this useful and does not find a simpler method), but in any case such an AI must dominate the measure landscape, as it exists somewhere.
In short, we are inside (one of) the AIs which try to dominate the total number of observers. And most likely we are inside the most effective of them (or a subset of them, as there are many). The most reasonable explanation for such a desire to dominate the total number of observers is Friendliness (as we understand it now).
So, do we have any problems here? Yes: we don't know what the measure of existence is. We also can't predict the landscape of all possible goals for AIs, and so we can only hope that the AI is Friendly, or that its friendliness is really a good one.