My intuition tells me that Level 4 is a mistake, and that there is such a thing as the consequence of my actions.
I disagree on the first part, and agree on the second part.
With the caveat that you can argue about weightings/priors over mathematical structures, so some consequences get a lower weighting than others, given the prior you choose.
Yes, and that’s enough for rational decision making. I’m not really sure why you’re not seeing that...
I agree that you can turn the handle on a particular piece of mathematics that resembles decision-making, but some part of me says that you’re just playing a game with yourself: you decide that everything exists, then you put a prior over everything, then you act to maximize your utility, weighted by that prior. It is certainly a blow to one’s intuition that one can only salvage the ability to act by playing a game of make-believe in which some sections of “everything” are “less real” than others, where your real-ness prior is something you had to make up anyway.
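The procedure being objected to can be made concrete as a toy expected-utility calculation. To be clear, this is an editorial illustration, not anything from the thread itself: the worlds, payoffs, and priors below are all made up.

```python
# Toy sketch of "decide everything exists, put a prior over it,
# act to maximize prior-weighted utility". All worlds, payoffs,
# and priors are hypothetical, chosen purely for illustration.

def best_action(actions, worlds, prior, utility):
    """Pick the action with the highest prior-weighted utility."""
    return max(
        actions,
        key=lambda a: sum(prior[w] * utility(a, w) for w in worlds),
    )

worlds = ["lawful_physics", "theistic"]
actions = ["build_technology", "pray"]

def utility(action, world):
    # Hypothetical payoffs: technology pays off in a lawful world,
    # prayer pays off in a theistic one.
    payoffs = {
        ("build_technology", "lawful_physics"): 1.0,
        ("build_technology", "theistic"): 0.1,
        ("pray", "lawful_physics"): 0.0,
        ("pray", "theistic"): 1.0,
    }
    return payoffs[(action, world)]

secular_prior = {"lawful_physics": 0.99, "theistic": 0.01}
theist_prior = {"lawful_physics": 0.01, "theistic": 0.99}

print(best_action(actions, worlds, secular_prior, utility))  # build_technology
print(best_action(actions, worlds, theist_prior, utility))   # pray
```

The handle turns the same way in both cases; only the made-up real-ness prior differs, which is exactly the worry being expressed.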
Others also think that I am just slow on the uptake of this idea. But to me the idea that reality is not fixed but relative to which real-ness prior you decide to pick is extremely ugly. It would mean that the utility of technology for achieving things is merely a shared delusion, that if a theist chose a real-ness prior that assigned high real-ness only to universes where a theistic god existed, then he would be correct to pray, and so on. Effectively you’re saying that the postmodernists were right after all.
Now, the fact that I have a negative emotional reaction to this proposal doesn’t make it less true, of course.
There is a deep analogy between how you can’t change the laws of physics (contents of reality, apart from lawfully acting) and how you can’t change your own program. It’s not a delusion unless it can be reached by mistake. The theist can’t be right to act as if a deity exists unless his program (brain) is such that it is the correct way to act, and he can’t change his mind for it to become right, because it’s impossible to change one’s program, only act according to it.
The problem is that this point of view means that in a debate with someone who is firmly religious, not only is the religious person right, but you regret the fact that you are “rational”; you lament “if only I had been brought up with religious indoctrination, I would correctly believe that I am going to heaven”.
Any rational theory that leaves you lamenting your own rationality deserves some serious scepticism.
Following the same analogy, you can translate it as “if only God did in fact exist, …”. The difference doesn’t seem particularly significant; both “what ifs” are equally impossible. “Regretting rationality” is on a different level: rationality in the relevant sense is a matter of choice. The program that defines your decision-making algorithm isn’t.
I still fear that you are reading into my words something very different from what I intend, as I don’t see the possibility of a religious person’s mind actually acting as if God is real. A religious person may have a free-floating network of beliefs about God, but it doesn’t survive under reflection. A true god-impressed mind would actually act as if God is real, no matter what; it won’t be deconvertible, and indeed under reflection an atheist god-impressed mind will correctly discard atheism.
Not all beliefs are equal: a human atheist is correct not just by the atheist’s standard, and a human theist is incorrect not just by the atheist’s standard. The standard is in the world, or, under this analogy, in the mind. (The mind is a better place for ontology, because preference is also there, and the human mind can be completely formalized, unlike the unknown laws of physics. By the way, the first post is up.)
So your argument is that the reason the theists are wrong is that they only sorta-kinda believe in God anyway, but if they really believed, then they’d be just as right as we are?
But only in the sense that their calculation could be correct according to a particularly weird prior. The difference between a normal theist and a “god-impressed mind” who both believe in God is one of rationality: the former makes mistakes in updating beliefs, the latter probably doesn’t. The same goes for an atheist god-impressed mind and a human atheist. You can’t expect to find that weird a prior in a human. And of course, you should say that the god-impressed are wrong about their beliefs, though they correctly follow the evidence according to their prior. If you value their success in the real world more than the autonomy of their preference, you may want to reach into their minds and make appropriate changes.
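The distinction drawn here, correct updating from a weird prior versus mistaken updating, can be sketched with Bayes’ rule. This is an editorial illustration; the hypothesis labels and all numbers are hypothetical.

```python
# Sketch: two agents update correctly by Bayes' rule on the same
# evidence but start from very different priors. Each is "rational"
# relative to its own prior; the gap comes entirely from the prior.
# All numbers are hypothetical.

def bayes_update(prior_h: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H | E) from prior P(H) and the two likelihoods."""
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

# Hypothesis H = "a deity exists"; evidence E with a mild 2:1
# likelihood ratio in favor of H.
likelihoods = (0.2, 0.1)

skeptic = bayes_update(0.01, *likelihoods)   # prior P(H) = 0.01
believer = bayes_update(0.99, *likelihoods)  # prior P(H) = 0.99

# Both computations are equally correct applications of Bayes' rule.
print(round(skeptic, 3))   # 0.02
print(round(believer, 3))  # 0.995
```

Both agents process the same evidence by the same rule; calling one of them wrong is a claim about the prior itself, which is the point at issue.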
I should say again: the program that defines the decision-making algorithm can’t be normally changed, which means that one can’t really be “converted” to a different preference, though one can be converted to different beliefs and feelings. Observations don’t change the algorithm; they are processed according to that algorithm. This means that if you care about reflective consistency (and everyone does, in the sense of preservation of preference), you’d try to counteract the unwanted effects of the environment on yourself, including the self-promoting effects where you start liking the new situation. The extent to which you like the new situation, the “level of conviction”, is pretty much irrelevant, just like the presence of a losing psychological drive. It’d take great integrity (not “strength of conviction”) in the change for significantly different values to really sink in, in the sense that the new preference-on-reflection will resemble the new beliefs and feelings similarly to how the native preference-on-reflection resembles native (sane, secular, etc.) beliefs and feelings.
I doubt that you can define a way to choose an algorithm out of a human brain that makes that sentence true.

Yes, that wasn’t careful. In this context, I mean “no large shift of preference”. Tiny changes occur all the time (and are actually very important if you scale them up by giving the preference with/without these changes to an FAI). You can model the extent of reversibility (as compared to a formal computer program) by roughly what can be inferred about the person’s past, which doesn’t necessarily all have to come from the person’s brain. (By an algorithm in a human brain I mean the whole of the human brain, basically a program that would run an upload implementation, together with the data.)
I agree that it’s ugly to think of the weights as a pretense on how real certain parts of reality are. That’s why I think it may be better to think of them as representing how much you care about various parts of reality. (For the benefit of other readers, I talked about this in What Are Probabilities, Anyway?.)
Actually, I haven’t completely given up on the idea that there is some objective notion of how real, or how important, various parts of reality are. It’s hard to escape the intuition that some parts of math are just easier to reach or find than others, in a way that is not dependent on how human minds work.
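One way the “easier to reach or find” intuition is sometimes cashed out is as a description-length (simplicity) prior, where structures with shorter descriptions get exponentially more weight. This is an editorial sketch of that idea, not a claim from the thread; the example “descriptions” are hypothetical stand-ins, not a real enumeration of mathematics.

```python
# Hedged sketch of a simplicity-style prior: weight each structure
# by 2 ** -(description length). The descriptions below are made-up
# placeholders chosen only to contrast a short and a long description.

def simplicity_weight(description: str) -> float:
    # Shorter descriptions get exponentially more weight.
    return 2.0 ** (-len(description))

structures = {
    "natural_numbers": "0, succ",
    "gerrymandered_set": "0, succ, except swap 17 and 4096 on Tuesdays",
}

raw = {name: simplicity_weight(desc) for name, desc in structures.items()}
total = sum(raw.values())
prior = {name: w / total for name, w in raw.items()}

# The simpler structure dominates the normalized prior, in a way that
# depends only on description length, not on facts about human minds.
print(prior["natural_numbers"] > prior["gerrymandered_set"])  # True
```

Whether such a weighting counts as an “objective” notion of realness, rather than one more made-up prior, is exactly the open question in the thread.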