Subjective anticipation is a concept that a lot of people rest their axiology on. But it looks like subjective anticipation is an artefact of our cognitive algorithms, and all kinds of Big World theories break it. For example, MW QM means that subjective anticipation is nonsense.
Personally, I find this extremely problematic, and in practice I think that I am just ignoring it.
I think mind copying technology may be a better illustration of the subjective anticipation problem than MW QM, but I agree that it’s a good example of the ontology problem. BTW, do you have a reference for where the ontology problem was first stated, in case I need to reference it in the future?
I mentioned it on my blog in August 2008, in the post “ontologies, approximations and fundamentalists”.
Peter de Blanc invented it independently, and I think that either Eliezer or Marcello probably did too.
I invented it sometime around the dawn of time; I don’t know whether Marcello did in advance or not.
Actually, I don’t know if I could have claimed to invent it; there may be prior art in science fiction.
Thanks for the pointer, but I think the argument you gave in that post is wrong. You argued that an agent smaller than the universe has to represent its goals using an approximate ontology (and therefore would have to later re-phrase its goals relative to more accurate ontologies). But such an agent can represent its goals/preferences in compressed form, instead of using an approximate ontology. With such compressed preferences, it may not have the computational resources to determine with certainty which course of action best satisfies its preferences, but that is just a standard logical uncertainty problem.
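To make the distinction concrete, here is a minimal sketch (the names and the toy scoring rule are hypothetical, not taken from the discussion): the preferences are stored as a short, exact program over complete world descriptions, and the agent’s limited resources show up only in how well it can estimate that program’s output, not in a rewritten, approximate goal.

```python
import random

def true_utility(world_history):
    """Compressed preference: a short, exact program scoring a complete world
    description. Here a 'world history' is just a bit sequence standing in
    for a full microphysical description."""
    return sum(world_history) / len(world_history)  # toy scoring rule

def sample_world_histories(action, n=1000, length=64):
    """A bounded agent cannot enumerate every history consistent with an
    action, so it samples from its (limited) model of the world."""
    rng = random.Random(action)
    return [[rng.randint(0, 1) for _ in range(length)] for _ in range(n)]

def estimated_utility(action):
    """Bounded estimate of the exact utility of an action: the goal itself is
    never restated in an approximate ontology, only the evaluation is
    approximate."""
    samples = sample_world_histories(action)
    return sum(true_utility(h) for h in samples) / len(samples)

if __name__ == "__main__":
    for action in ("a1", "a2"):
        print(action, round(estimated_utility(action), 3))
```

In this picture, what changes when the agent learns better physics is the model it samples from, not the statement of what it wants.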
I think the ontology problem is a real problem, but it may just be a one-time problem, where we or an AI have to translate our fuzzy human preferences into some well-defined form, instead of a problem that all agents must face over and over again.
Yes, if it has compressible preferences, which in reality is the case for e.g. humans and many plausible AIs.
In reality, the cases where this really bites are those where you discover that your preferences are stated in terms of an incorrect ontology, e.g. souls or anticipated future experience.
I think that depends upon the structure of reality. Maybe there will be a series of philosophical shocks as severe as the physicality of mental states, Big Worlds, quantum MWI, etc. Suspicion should definitely be directed at what horrors will be unleashed upon a human or AI that discovers a correct theory of quantum gravity.
Just as Big World cosmology can erode aggregative consequentialism, maybe the ultimate nature of quantum gravity will entirely erode any rational decision-making; perhaps some kind of ultimate ensemble theory already has.
On the other hand, the idea of a one-time shock is also plausible.
The reason I think it can just be a one-time shock is that we can extend our preferences to cover all possible mathematical structures. (I talked about this in Towards a New Decision Theory.) Then, no matter what kind of universe we turn out to live in, whichever theory of quantum gravity turns out to be correct, the structure of the universe will correspond to some mathematical structure which we will have well-defined preferences over.
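A rough way to write this down (illustrative notation only, not taken from that post): let M be a class of mathematical structures, let U be a utility function defined directly on structures, and let w be a weighting over structures. The value of an action a is then

\[
  V(a) \;=\; \sum_{m \in M} w(m)\, U(m \mid a),
\]

where $U(m \mid a)$ is the utility of structure $m$ given that the agent’s instances embedded in $m$ take action $a$. Whatever theory of quantum gravity turns out to be correct, the actual universe corresponds to some $m \in M$, so $V$ stays well defined.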
I addressed this issue a bit in that post as well. Are you not convinced that rational decision-making is possible in Tegmark’s Level IV Multiverse?
The next few posts on my blog are going to be basically about approaching this problem (and given the occasion, I may as well commit to writing the first post today).
You should read [*] to get a better idea of why I see “preference over all mathematical structures” as a bad call. We can’t say what “all mathematical structures” is; any given foundation only covers a portion of what we could invent. Like the real world, the mathematics that we might someday encounter can only be completely defined by the process of discovery (but if you capture this process, you may need nothing else).
--
[*] S. Awodey (2004). “An Answer to Hellman’s Question: ‘Does Category Theory Provide a Framework for Mathematical Structuralism?’”. Philosophia Mathematica 12(1): 54–64.
The idea that ethics depends upon one’s philosophy of mathematics is intriguing.
By the way, I see no post about this on the causality relay!
Hope to finish it today… I won’t talk about philosophy of mathematics in this sub-series, though; I’m just going to reduce the ontological confusion about preference and laws of physics to a (still somewhat philosophical, but taking place in a comfortably formal setting) question of static analysis of computer programs.
Great to hear. Looking forward to reading it.
Yes, talking about “preference over all mathematical structures” does gloss over some problems in the philosophy of mathematics, and I am sympathetic to anti-foundationalist views like Awodey’s.
Also, in general I agree with Roko on the need for an AI that can do philosophy better than any human, so in this thread I was mostly picking a nit with a specific argument that he had.
(I was going to remind you about the missing post, but I see Roko already did. :)
I define the following structure: if you take action a, all logically possible consequences will follow, i.e. all computable sensory I/O functions, generated by all possible computable changes in the objective physical universe. This holds for all a. This is facilitated by the universe creating infinitely many copies of you every time you take an action, and there being literally no fact of the matter about which one is you.
Now if you have already extended your preferences over all possible mathematical structures, you presumably have a preferred action in this case. But the preferred action is really rather unrelated to your life before you made this unsettling discovery. Beings that had different evolved desires (such as seeking status versus maximizing offspring) wouldn’t produce systematically different preferences; they’d essentially have to choose at random.
If Tegmark Level 4 is, in some sense, “true”, this hypothetical example is not really so hypothetical—it is very similar to the situation that we are in, with the caveat that you can argue about weightings/priors over mathematical structures, so some consequences get a lower weighting than others, depending on the prior you choose.
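A toy numerical version of this point (the actions, outcomes and weights below are all made up): with a uniform, action-independent weighting over “everything”, every action gets the same expected value, and only a non-uniform weighting over structures turns this back into a decision problem.

```python
# Toy illustration: every logically possible consequence follows every action;
# only the weighting over structures can distinguish actions.

outcomes = {"world_A": 10.0, "world_B": -10.0, "world_C": 0.0}

def expected_value(action, weight):
    return sum(weight(action, o) * u for o, u in outcomes.items())

def uniform(action, outcome):
    # Everything is equally real: the weight cannot depend on the action,
    # so every action ties and choice is arbitrary.
    return 1.0 / len(outcomes)

def lopsided(action, outcome):
    # A made-up non-uniform weighting standing in for a complexity-based prior.
    table = {
        ("a1", "world_A"): 0.6, ("a1", "world_B"): 0.2, ("a1", "world_C"): 0.2,
        ("a2", "world_A"): 0.1, ("a2", "world_B"): 0.6, ("a2", "world_C"): 0.3,
    }
    return table[(action, outcome)]

for action in ("a1", "a2"):
    print(action,
          "uniform:", expected_value(action, uniform),
          "lopsided:", expected_value(action, lopsided))
```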
My intuition tells me that Level 4 is a mistake, and that there is such a thing as the consequence of my actions. However, mere MW quantum mechanics casts doubt on the idea of anticipated subjective experience, so I am suspicious of my anti-multiverse intuition. Perhaps what we need is the equivalent of a theory of Born probabilities for Tegmark Level 4 - something in the region of what Nick Bostrom tried to do in his book on anthropic reasoning (though it looks like Nick simply added more arbitrariness into the mix in the form of reference classes).
I disagree on the first part (that Level 4 is a mistake), and agree on the second part (that there is such a thing as the consequence of your actions).
Yes, and the fact that some consequences get a lower weighting than others is enough for rational decision making. I’m not really sure why you’re not seeing that...
I agree that you can turn the handle on a particular piece of mathematics that resembles decision-making, but some part of me says that you’re just playing a game with yourself: you decide that everything exists, then you put a prior over everything, then you act to maximize your utility, weighted by that prior. It is certainly a blow to one’s intuition that one can only salvage the ability to act by playing a game of make-believe that some sections of “everything” are “less real” than others, where your real-ness prior is something you had to make up anyway.
Others also think that I am just slow on the uptake of this idea. But to me the idea that reality is not fixed but relative to whichever real-ness prior you decide to pick is extremely ugly. It would mean that the utility of technology for achieving things is merely a shared delusion, and that if a theist chose a real-ness prior that assigned high real-ness only to universes where a theistic god existed, then he would be correct to pray, etc. Effectively you’re saying that the postmodernists were right after all.
Now, the fact that I have a negative emotional reaction to this proposal doesn’t make it less true, of course.
There is a deep analogy between how you can’t change the laws of physics (the contents of reality, apart from acting lawfully) and how you can’t change your own program. It’s not a delusion unless it can be reached by mistake. The theist can’t be right to act as if a deity exists unless his program (brain) is such that that is the correct way to act, and he can’t change his mind for it to become right, because it’s impossible to change one’s program, only to act according to it.
The problem is that this point of view means that in a debate with someone who is firmly religious, not only is the religious person right, but you regret the fact that you are “rational”; you lament “if only I had been brought up with religious indoctrination, I would correctly believe that I am going to heaven”.
Any rational theory that leaves you lamenting your own rationality deserves some serious scepticism.
Following the same analogy, you can translate it as “if only God did in fact exist, …”. The difference doesn’t seem particularly significant—both “what ifs” are equally impossible. “Regretting rationality” is on a different level—rationality in the relevant sense is a matter of choice. The program that defines your decision-making algorithm isn’t.
I still fear that you are reading into my words something very different from what I intend, as I don’t see the possibility of a religious person’s mind actually acting as if God is real. A religious person may have a free-floating network of beliefs about God, but it doesn’t survive under reflection. A true god-impressed mind would actually act as if God is real, no matter what; it won’t be deconvertible, and indeed under reflection an atheist god-impressed mind will correctly discard atheism.
Not all beliefs are equal: a human atheist is correct not just according to the atheist’s standard, and a human theist is incorrect not just by the atheist’s standard. The standard is in the world, or, under this analogy, in the mind. (The mind is a better place for ontology, because preference is also there, and the human mind can be completely formalized, unlike the unknown laws of physics. By the way, the first post is up.)
So your argument is that the reason that the theists are wrong is because they only sorta-kinda believe in God anyway, but if they really believed, then they’d be just as right as we are?
But only in the sense that their calculation could be correct according to a particularly weird prior. The difference between a normal theist and a “god-impressed mind” who both believe in God is one of rationality: the former makes mistakes in updating beliefs, the latter probably doesn’t. The same goes for an atheist god-impressed mind and a human atheist. You can’t expect to find that weird a prior in a human. And of course, you should say that the god-impressed are wrong about their beliefs, though they correctly follow the evidence according to their prior. If you value their success in the real world more than the autonomy of their preference, you may want to reach into their minds and make appropriate changes.
I should say again: the program that defines the decision-making algorithm can’t normally be changed, which means that one can’t really be “converted” to a different preference, though one can be converted to different beliefs and feelings. Observations don’t change the algorithm; they are processed according to that algorithm. This means that if you care about reflective consistency (and everyone does, in the sense of preservation of preference), you’d try to counteract the unwanted effects of the environment on yourself, including the self-promoting effects where you start liking the new situation. The extent to which you like the new situation, the “level of conviction”, is pretty much irrelevant, just like the presence of a losing psychological drive. It’d take great integrity (not “strength of conviction”) in the change for significantly different values to really sink in, in the sense that the new preference-on-reflection will resemble the new beliefs and feelings similarly to how the native preference-on-reflection will resemble native (sane, secular, etc.) beliefs and feelings.
I doubt that you can define a way to choose an algorithm out of a human brain that makes that sentence true.
Yes, that wasn’t careful. In this context, I mean “no large shift of preference”. Tiny changes occur all the time (and are actually very important if you scale them up by giving the preference with/without these changes to a FAI). You can model the extent of reversibility (as compared to a formal computer program) by roughly what can be inferred about the person’s past, which doesn’t necessarily all have to come from the person’s brain. (By an algorithm in a human brain I mean all of the human brain: basically a program that would run an upload implementation, together with the data.)
I agree that it’s ugly to think of the weights as a pretense on how real certain parts of reality are. That’s why I think it may be better to think of them as representing how much you care about various parts of reality. (For the benefit of other readers, I talked about this in What Are Probabilities, Anyway?.)
Actually, I haven’t completely given up the idea that there is some objective notion of how real, or how important, various parts of reality are. It’s hard to escape the intuition that some parts of math are just easier to reach or find than others, in a way that does not depend on how human minds work.
I believe even personal identity falls under this category of incorrect ontology. A lot of moral intuitions work with the-me-in-the-future object, as marked on the map. To follow these intuitions, it is very important for us to have a good idea of where the-me-in-the-future is, to have a good map of this thing. When you get to weird thought experiments with copying, this epistemic step breaks down, because if there are multiple future copies, the-me-in-the-future is a pattern that is absent. As a result, the moral intuitions that indirectly work through this mark on the map get confused and start giving wrong answers as well. This can readily be observed, for example, in the preferential inconsistency over time expected in such thought experiments (you precommit to teleporting-with-delay, but then your copy that is to be destroyed starts complaining).
Personal identity is (in general) a wrong epistemic question asked by our moral intuition. Only if preference is expressed in terms of the territory (or rather in a form flexible enough to follow all possible developments), including the parts currently represented in moral intuition in terms of the-me-in-the-future object in the territory, will the confusion with expectations and anthropic thought experiments go away.