Interesting. If you have time please elaborate.
I was planning to write a post about it one day…
Basically, the idea is that between ordinary BBs and real brains there exists a third class of objects. These objects appear temporarily from fluctuations, but are able to create a very large number of minds during their short existence. Such objects are more complex than an ordinary brain and thus rarer, but as each of them creates many minds, the minds inside these objects will dominate. At first I named these objects “Boltzmann typewriters”, but later I understood that such an object could simply be a computer with a program which is able to create minds. (And as a simulated mind is simpler than a biological brain, which includes all its neurons and atoms, such simple simulated minds must dominate.)
Another type of Boltzmann typewriter is a universe fine-tuned to create as many minds as possible (and even our universe is an example of this).
If we are in a Boltzmann typewriter or a Boltzmann supercomputer, this may have observable consequences, like small “mistakes in the matrix”. It may also come to an abrupt end.
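The domination claim above can be put as a back-of-the-envelope inequality (my own sketch, assuming the probability of a fluctuation falls exponentially with its complexity in bits; the threshold numbers are purely illustrative):

```python
import math

# Sketch: when do "Boltzmann typewriter" minds outnumber ordinary BB minds?
# Assume P(fluctuation) ~ 2**(-complexity_in_bits). A typewriter costs
# `extra_bits` more than an ordinary BB but yields `minds_per_typewriter` minds.
def typewriter_minds_dominate(extra_bits, minds_per_typewriter):
    # Expected typewriter minds ~ N * 2**-(C + extra); ordinary BB minds ~ 2**-C.
    # Typewriters dominate exactly when log2(N) > extra_bits.
    return math.log2(minds_per_typewriter) > extra_bits

print(typewriter_minds_dominate(100, 2**120))  # True: enough minds per typewriter
print(typewriter_minds_dominate(100, 2**80))   # False: too few minds per typewriter
```

So the typewriters win whenever the number of minds each one produces grows faster than their extra rarity penalty.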
You’re operating under the assumption that only humans count as observers, which is almost certainly not true and breaks the whole theory down.
(Btw, if such complicated things can exist in high-entropy environments, then why aren’t we able to survive there after heat death? Unless we’re talking about quantum permutations?)
In fact, I think that only humans who are able to understand the Doomsday Argument should be counted as observers… :) But where did I use this idea here?
Yes, maybe we can survive after heat death in such a fluctuation; this was suggested in my recent roadmap “How to survive the end of the Universe”.
All I’m saying is that out of all possible observers that would arise in a Boltzmann state, ours is a long way from the most common.
Why?
When I search for my position in the class of observers that are like me, the only thing that defines this class of observers is the ability to write down and understand this sentence. And I should not count the ones who are not able to understand it, because I already know that they are not me. In short: if one asks “Why am I not a worm?”, the answer is: because a worm can’t ask this question.
So in the case of BBs the right question would be: “Among all observers who could think that they are BBs, am I the most common kind or not?” The answer depends on how random our circumstances are. My surroundings seem not as random as TV-signal noise: I am sitting in my room.
The problem is that we can’t take for granted that BBs could judge the randomness of their surroundings adequately. For example, in a dream you may have a thought and think that it is very wise. But in the morning you will understand that it is bullshit.
So, in fact, we have a class of observers which is now defined by two premises: the thought “Am I a BB?” and the observation “My surroundings seem not random enough for a BB” (which may be untrue, but we still think so).
Now we could ask where the biggest part of this subset of observers is. And even for this subset of observers we still have to conclude that its biggest part is inside BBs.
Personally, I think that this is just a problem of our theory of reality, and if we move to another theory of reality, the problem will disappear. The next-level theory will be a theory of the qualia universe. But there may be other solutions: if we take a linear model of reality, then only information is the substrate of identity, not continuity, and so copies smoothly add up to one another.
But what if the question has nothing to do with whether or not you understand it? Taking the DA as our example, the only thing you ought to be concerned about is which human you are. I don’t see why comprehension of the DA is relevant to that.
And our knowledge of BBs comes solely from a long series of assumptions and inferences. If most observers are Boltzmann brains, then most observers, of whatever type, will experience chaos. If you’re going to say that that might not be true because BBs are deluded, I have to ask why the same doesn’t apply to the argument that we might be BBs. It’s a great deal more complicated than my own argument, which is that chaos is more common than order.
Why not assume an evil daemon, if we’re going to reason this way?
Look, the following two statements are both true: “Most observers who are not experiencing chaos are still BBs (if BBs exist)”, and “the fact that I am not experiencing chaos is an argument against the BB theory” (this is your point). The main question is which of the statements is stronger from a Bayesian point of view.
Let’s make a model: either only 1 real observer exists, or two worlds exist, and the second one exists with prior probability P. The second world (the BB one) includes 1 million observers, of which 1000 are non-chaos observers. Given that I am a non-chaos observer, what is the posterior probability of the second world? My evidence is 1000 times less likely there (0.001 versus 1), so for any moderate P the posterior is tiny, and you win.
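The toy model can be written out explicitly (a minimal sketch of the calculation above, assuming self-sampling within each world; not anyone's actual code):

```python
# World A: 1 real observer who always experiences order.
# World B: a Boltzmann-brain world with 1,000,000 observers,
#          of which only 1,000 experience a non-chaotic environment.
def posterior_world_b(prior_b):
    """Posterior probability of world B, given that I am a non-chaos observer."""
    p_order_given_a = 1.0                # the single real observer sees order
    p_order_given_b = 1_000 / 1_000_000  # 0.001 under self-sampling in world B
    numerator = prior_b * p_order_given_b
    return numerator / (numerator + (1 - prior_b) * p_order_given_a)

# With equal priors, orderly experience shifts belief ~1000:1 toward world A:
print(posterior_world_b(0.5))  # ≈ 0.000999
```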
The problem with this conclusion is that it relies on the ability of BBs to truly distinguish the type of reality they are in. If we somehow prove that most BBs are not able to recognize that they live in a random environment, then our reality check does not work.
EDITED: most people do not realize, during a dream, that they are dreaming, even though they are having quite random experiences there. So we can’t use the fact that I don’t think I am in a dream as proof that I am not in a dream. And we can rely on this ability even less in BBs.
Edited 2: If you were randomly picked from all possible observers, you should be a worm or some other simple creature. The fact that you are not a worm could then be used as proof that worms do not exist, which is false. You are not a random observer. You are randomly selected from the observers who could understand that they are observers. ))
And if we speak about the DA: generally it works for any reference class, but gives different end dates for different classes. It is natural to apply it only to those who understand it. The reference class of humans does not have anything special about it. Unfortunately, as the class of those who understand the DA is small, this implies a sooner end. But the end may not mean human extinction; it may mean only that the DA will be disproven or that people will stop thinking about it.
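One way to see why a smaller reference class implies a sooner end is a Gott-style confidence bound (my own illustration; the thread gives no formula, and the 1e6 figure for DA-aware people is a made-up assumption):

```python
# Gott-style bound: if I am a random member of my class, then with probability
# `confidence` the total number of members is less than past / (1 - confidence).
def max_total_members(past_members, confidence=0.95):
    return past_members / (1 - confidence)

# Class "all humans ever born": roughly 6e10 so far.
print(max_total_members(6e10))  # < 1.2e12 members total, at 95% confidence
# Class "people who know the DA": assume (hypothetically) 1e6 so far.
print(max_total_members(1e6))   # < 2e7 total: the class is exhausted much sooner
```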
But the whole chain of reasoning is still circular. You haven’t explained why being a Boltzmann brain is more plausible than being under a daemon’s spell.
Yes, and my argument here accounts for that: sapient beings will have many more instances of themselves and therefore much higher measure than animals.
Let’s take some aliens as our example. These aliens have intellects between a human’s and a chimpanzee’s. One in a hundred of them develops much greater intelligence than others (similar to Egan’s aliens in Incandescence). They consist of a single united herd, but are the size of bugs. After a hundred thousand years of wandering the desert, they come to a large lake, teeming with food and fresh water and devoid of any real predators. The elders expect that their race will soon number a millionfold of what they were.
But unknown to them, a meteorite is headed directly at the lake; the species will certainly be wiped out in a few months. The few aliens gifted with intellect reason that their observations are highly unlikely should the lake really multiply their numbers by a million. But the rest of the herd cannot comprehend these arguments, and care only for day-to-day survival.
Their selection is from their species, and they can make inferences from that. Why would it be any different?
We have additional evidence for BBs, namely the idea of eternal fluctuations of the vacuum after heat death, which may give us a very strong prior. Basically, if there are 10^100 BBs for each real mind, this will override the evidence from the non-randomness of our environment. (Bostrom wrote about similar logic in the Presumptuous Philosopher.) What I wanted to say is that efforts to disprove the existence of BBs by relying on the ability of BBs to distinguish chaotic from non-chaotic environments themselves look like circular logic)))
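In log-odds the point is one line of arithmetic (my own sketch, using the 10^100 prior and the 1000:1 likelihood ratio as the illustrative numbers from this thread):

```python
# Prior odds: 10**100 BBs per real mind. Likelihood ratio: an orderly
# experience is taken to be ~1000x more likely for a real mind than for a BB.
log10_prior_odds_bb = 100    # log10(BBs : real minds)
log10_likelihood_ratio = -3  # log10(P(order | BB) / P(order | real))
log10_posterior_odds = log10_prior_odds_bb + log10_likelihood_ratio
print(log10_posterior_odds)  # 97: the BB hypothesis still wins by ~10**97
```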
I agree that sapient beings are more probable because they have many more internal states. But it also means that you and I are in the middle of the IQ distribution in the universe, that is, no superintelligence exists anywhere. This is grim. It is like a DA for intelligence, and it means that high-intelligence post-humans are impossible. It may still allow some kind of mechanical superintelligence which uses completely different thinking procedures and lacks qualia.
Basically, the main meta-difference between your position and mine is that you want to return the world to normal, and I want it to be strange and to exploit its strangeness. :))
Your long example is in fact about aliens who created the DA for themselves. My idea was that you may use mediocrity logic for any reference class from which you are randomly chosen, and you could belong to several such classes simultaneously. But the class of observers who know about the DA is a special class, because it will appear in any alien species and in any thought experiment. This class includes such observers from all possible species, so we may speak about their distribution in the universe. Also, this class is the smallest one and implies the soonest Doom in the DA. Even Carter, who created the DA in 1983, knew this, and as he was at that moment the only one in this class, he felt himself in danger.
In your example you also have a subclass of aliens who know all this, and it will not exist for long: it will be killed by the meteorite in several months. )) This subclass is smaller and its time span is shorter, but the result is the same: extinction.
How? The proportion of chaotic minds to orderly minds will never change. Even if there are infinite BBs in the future, it doesn’t alter how likely it is that the ‘heat death’ model is simply mistaken, and that some infinite source of computing is found for us to use.
Whoa whoa whoa. I don’t think that sapient beings having more internal states makes them more likely to be selected. I was talking about the simulation argument I’ve advanced on this thread.
Our current model of the universe makes it seem easy and straightforward for superintelligence to exist. Even if we were to wipe ourselves out, the fact that we live in a Big World means that superintelligence will always be taking most of the measure. This is precisely what I argued on this thread.
Now I understand. But the fact that most humans do not comprehend the DA doesn’t neutralize its effects on humanity, does it?
(I’m beginning to realize what a nightmare anthropics is.)
Ok, look. By definition BBs are random. Not only their experiences but also their thoughts are random. So half of them think that they are in a chaotic environment, and half think that they are not. Thus the thought “I am in a non-chaotic environment” carries zero information about whether I am a BB or not. As a BB exists for only one moment of experience, it can’t make long chains of reasoning. It can’t check its surroundings, then compare them (with what?), then calculate their measure of randomness and thus its own probability of existence.
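The independence claim can be illustrated with a quick simulation (my own sketch, under the stated assumption that a BB's thoughts and its surroundings are both fair coin flips):

```python
import random

# Among simulated BBs, "thinks the environment is orderly" and "the
# environment actually is orderly" are independent coin flips, so the
# thought carries no information about the environment.
random.seed(0)
brains = [(random.random() < 0.5, random.random() < 0.5)  # (thinks_orderly, is_orderly)
          for _ in range(100_000)]
p_orderly = sum(is_o for _, is_o in brains) / len(brains)
thinkers = [is_o for thinks, is_o in brains if thinks]
p_orderly_given_thought = sum(thinkers) / len(thinkers)
print(abs(p_orderly - p_orderly_given_thought))  # small: the thought tells us nothing
```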
Finally, what do you mean by “measure”? The fact that I’m not a superintelligence is evidence against superintelligences being the dominating class of beings. But some may exist.
No, my version of the DA only makes it stronger. Doom is near.
Sorry for the late response. I’ve been feeling a lot better and found it hard to discuss the subject again.
Ideas or concepts are qualia themselves, aren’t they? And since consciousness is inherently a process, I don’t think that you can reduce it to ‘one moment’ of experience. You would benefit from reading about philosophical skepticism.
My whole argument here is that all of my experiences are explained by friendly superintelligence. Measure means the likelihood of a given perception being ‘realized’. I can conclude from this that humans therefore have a very high measure; we are the dominant creatures of existence. Presumably because we later create superintelligence that aligns with our goals. Animals or ancient humans would have much lower measures.
Maybe it is better to speak about them as single acts of experience, not moments.
Ok, but why should it be friendly? It may just be testing different solutions of the Fermi paradox in simulations, which it must do. This would result in humans of the 20th-21st centuries being the dominating class of observers in the universe, but each test would include a global catastrophe. Or do you mean that a friendly AI will try to give humans the biggest possible measure? But our world is not paradise.
What? What does this mean?
No, it’s trying to give measure to the humans that survived into the Singularity. Not all of them might simulate the entire lifespan, but some will. They will also simulate them postsingularity, although we will be actively aware of this. This is what I mean by ‘protecting’ our measure.
The main question for any AI is its relations with other AIs in the universe. So it should somehow learn whether any exist, and if not, why not. The best way to do this is to model the development of AIs on different planets. I think this includes billions of simulations of near-singularity civilizations, that is, ones that are at the equivalent of the beginning of the 21st century on their own time scale. This explains why we find ourselves at the beginning of the 21st century: it is the dominating class of simulations. But there is nothing good in this. Many extinction scenarios will be checked in such simulations, and even if they pass, they will be switched off.
But I don’t understand why an FAI should model only people living near the singularity. Only to counteract these evil simulations?
Sorry for taking such a long time to respond.
Any successful FAI would create many more simulated observers, in my scenario. Since FAI is possible, it’s much more likely that we are in a universe that generates it.
But we will simply continue on in the simulations that weren’t switched off. These are more likely to be friendly, so it would end up the same.
It doesn’t. People living postsingularity would be threatened by simulations, too. Assuming that new humans are not created (unlikely given that each one has to be simulated countless times) most of them will have been born before it took place. Why not begin it there?
After thinking more about the topic while working on the simulation map, I found the following idea: if in an infinite world there exist infinitely many FAIs, none of them can change the landscape of the simulation distribution, because its share of all simulations is infinitely small. So we need acausal trade between an infinite number of FAIs to really change the proportion of simulations. I can’t say that it is impossible, but it may be difficult.
Ok, I agree with you that an FAI will invest in preventing the BB problem by increasing the measure of existence of all humans (if it finds this useful and does not find a simpler method), but in any case such an AI must dominate the measure landscape, as it exists somewhere.
In short, we are inside (one of) the AIs which try to dominate the total number of observers. And most likely we are inside the most effective of them (or a subset of them, as there are many). The most reasonable explanation for such a desire to dominate the total number of observers is Friendliness (as we understand it now).
So, do we have any problems here? Yes: we don’t know what the measure of existence is. We also can’t predict the landscape of all possible goals for AIs, so we can only hope that the AI is Friendly, or that its friendliness is a really good one.
So our laws of physics seem consistent because this requires less code.