Couldn’t you ask the same question of any UDT agent?
UDT has answers. “Probability” there plays a particular role in the decision algorithm, and you could name that element with a different word while keeping the role: these are probabilities of a given program producing a given execution history, inferred under the assumption that the agent (as a program) performs a given action. An answer of this form would clarify your usage.
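A minimal toy sketch of that role, with everything (the actions, the weights over histories, and the utilities) invented purely for illustration; the weights stand in for the inferred probabilities of a given execution history conditional on the agent-as-program taking a given action:

```python
# Toy sketch (not UDT itself): the weights play the role of probabilities
# of a given execution history, conditional on the agent-as-program
# taking a given action. All names and numbers are invented.

def choose(actions, histories_given_action, utility):
    """Pick the action whose induced weighting of histories scores best."""
    def score(action):
        return sum(w * utility[h] for h, w in histories_given_action(action))
    return max(actions, key=score)

def histories_given_action(action):
    if action == "a":
        return [("outcome-1", 0.9), ("outcome-2", 0.1)]
    return [("outcome-3", 0.9), ("outcome-4", 0.1)]

utility = {"outcome-1": 10.0, "outcome-2": 0.0,
           "outcome-3": 1.0, "outcome-4": 11.0}

print(choose(["a", "b"], histories_given_action, utility))  # -> a
```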
I can control logical facts, insofar as they depend on my thoughts
Also physical facts, which is the way you actually think about physical facts: through observations of an otherwise opaque logical specification.
I have a preference over the possible observer-moments which I personally may experience
This is unclear, since you also have preferences about what happens in the world; why, then, treat the arbitrary boundary that is “you” as special?
I can consider two probability distributions over observer-moments and decide which one I would rather experience. A probability distribution is a mental construct which I need to give as an input to my preference ordering.
Very unclear. Why would you privilege “experiencing” something as a criterion for decision-making? Why do you expect a preference that needs an argument of this particular form?
For example, how do you express preferences about other human-like agents in the worlds you can influence? What about situations like the absent-minded driver, where many of the usual uses of probabilities break down?
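For concreteness, a toy version of the absent-minded driver; the payoffs (0 for exiting at the first intersection, 4 at the second, 1 for driving past both) are the standard ones for this puzzle, and the sketch only shows the planning-stage calculation:

```python
# Toy absent-minded driver: two indistinguishable intersections.
# Exiting at the first pays 0, at the second pays 4; driving past
# both pays 1. The only policy available is "continue with probability
# p at any intersection", since the driver can't tell them apart.

def planning_value(p):
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

best_p = max((i / 1000 for i in range(1001)), key=planning_value)
print(round(best_p, 3), round(planning_value(best_p), 3))  # 0.667 1.333

# The trouble starts when the driver, sitting at an intersection, tries
# to assign a probability to "this is the first intersection" and update
# on it: the naive calculation can disagree with the plan above even
# though no new information has arrived.
```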
This is unclear, since you also have preferences about what happens in the world; why, then, treat the arbitrary boundary that is “you” as special?
I believe my only preferences are over experiences. I care nothing about a world without observers, and I care about our world exactly insofar as the world has an effect on the experiences of certain observers.
I am biologically given an intuitive preference function over probability distributions over moments that I would personally experience. The basis of my morality is to use this preference to try to make interpersonal utility comparisons, although of course there are many, many problems. Putting probability distributions over observer moments is a critical first step in this program.
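A placeholder sketch of that first step, pretending (only for the sketch) that the intuitive preference can be summarized as an expected value; the moments, probabilities, and values are all invented:

```python
# Compare two probability distributions over observer-moments by an
# expected value; the moments, probabilities, and values are invented.

def preferred(dist_a, dist_b, value):
    expected = lambda dist: sum(p * value[m] for m, p in dist.items())
    return "first" if expected(dist_a) >= expected(dist_b) else "second"

value = {"pleasant afternoon": 1.0, "dull meeting": 0.2, "toothache": -1.0}

dist_a = {"pleasant afternoon": 0.5, "dull meeting": 0.5}   # expected 0.6
dist_b = {"pleasant afternoon": 0.3, "toothache": 0.7}      # expected -0.4

print(preferred(dist_a, dist_b, value))  # -> first
```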
Putting probability distributions over observer moments is a critical first step in this program.
There are some problems with the notion of ‘observer moments’. I’m inclined to think they are unresolvable, but perhaps you have some ideas for how to tackle them.
I’ve already mentioned the problem of the ‘boundary’ of a subjective state. For instance, consider an old memory which it would take you a long time to ‘dredge up’. How is that different in principle from having some information written down somewhere in a notebook in front of you? Is that old memory part of your ‘observer moment’? (But then how about a slightly fresher memory?) Is the notebook? (But then how about that thick reference book on your shelf?) It seems obvious to me that there’s no principled line to be drawn here.
Then there’s the problem of whether a given system is an observer or not. For instance, Dennett is notorious for (correctly!) attributing ‘intentional states’ to a thermostat: you can view it as an agent who believes the temperature is x and wants it to be y. But is a thermostat an observer? Presumably not, but again it seems that there’s no principled line to be drawn between thermostats and people.
And then there’s the problem of ‘how many observers’. E.g. Is a split-brain patient two observers or one? How about an ordinary person? How much of the corpus callosum needs to be severed to get two observers?
Finally, if A is much, much cleverer, more alert, and more knowledgeable than B, should A then have a greater density of ‘observer moments’ than B? But exactly how much greater? The idea that there’s a principled way of determining this seems overoptimistic.
(Going a little off topic: I’ve actually been thinking along vaguely similar lines to you just lately. I’ve been trying to devise an approach to the mind-body problem in terms of “perspectives”. My overall goal was to try to ‘do justice to’ the common intuition that a subjective point of view must be determinate and cannot only half-exist, while maintaining the substance of Dennett’s position which implies that there may be no fact of the matter as to whether a person is conscious and what they’re conscious of. My key idea is that (a) there exist genuine facts of the form “From perspective P, such-and-such would be consciously experienced” but (b) the perspectives themselves do not exist—they’re not part of the “state of the world”. (Instead, they’re “perspectives from which the state of the world manifests itself”). An analogy would be how, in mathematical logic, we have this duality between theories and models, or more broadly, between syntax and semantics. The “syntax side” enables you to state and prove things about stuff that exists, but the syntax itself doesn’t exist. (In the same way that there’s no such set as ZFC.)
My ‘perspectives’ are essentially the same as your ‘pointers-to-observers’. However, I’d want to stress the fact that perspectives are ubiquitous—e.g. you can take the perspective of a rock, or a thermostat. And you can take the perspective of a person in many different ways, with no fact of the matter about which is right, or even whether it’s right to take one. (But from any given perspective, all the subjective facts are nice and ‘determinate’.)
It never occurred to me to try to consider the Kolmogorov complexities of perspectives. It’s an interesting idea, but it’s hard to wrap one’s head around given that there are an unlimited number of ways of defining the same person’s perspective.)
I believe my only preferences are over experiences. I care nothing about a world without observers, and I care about our world exactly insofar as the world has an effect on the experiences of certain observers.
How do you know this? (What do you think you know, and how do you think you know it?) Your brain might contain this belief. Should you conclude from this fact that what the belief claims is true? Presumably, if your brain contained a belief that 15+23=36, you would have a process that would not stop at accepting the claim just because your brain claims it’s so; you would be able to do better. There is no magical truth-machine in your mind: everything should be held suspect, understood, and believed more strongly only when further reflection permits.
What “certain observers”? You were talking only about your own experiences. How do you take into account the experiences of other agents? (Or even of yourself at other moments, of yourself given alternative observations, or of imperfect copies.)
How do I know what I care about? I don’t know. How would I learn that I care about things other than observers? Where could I find such a fact other than in my mind? I think this has basically wandered into a metaethics discussion. It is a fact that I feel nothing about worlds without observers. Perhaps if you gave me an argument to care, I would.
By “certain observers” I only meant to refer to “observers whose experiences are affected by the world”.
Then you shouldn’t be certain. Beliefs of unknown origin tend to be unreliable. They also make it hard to figure out what exactly they mean.
How would I learn that I care about things other than observers? Where could I find such a fact other than in my mind?
Consider the above example of believing that 15+23=36. You can find the facts allowing you to correct the error elsewhere in your mind; they are just beliefs other than the one that claims 15+23 to be 36. You can also consult a calculator, something not normally considered part of your mind. You can even ask me.
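For what it’s worth, the calculator step amounts to nothing more than:

```python
# The external check described above: the sum is 38, not 36.
print(15 + 23)        # 38
print(15 + 23 == 36)  # False
```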
By “certain observers” I only meant to refer to “observers whose experiences are affected by the world”.
This doesn’t help. I don’t see how your algorithm discussed in the post would represent caring about anything other than exactly its own current state of mind, which was my question.
This is awesome—the exchange reminds me of this dialogue by Smullyan. :-)
I’m certainly not. Like I said, if you have any arguments I expect they could change my opinion.
I don’t see how your algorithm discussed in the post would represent caring about anything other than exactly its own current state of mind, which was my question.
There are other observers with low complexities (for example, other humans). I can imagine the possibility of being transformed into one of them with probability depending on their complexity, and I can use my intuitive preferences to make decisions which make that imagined situation as good as possible. In what other sense would I care about anything?
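A rough sketch of that calculation, using compressed description length as a crude, computable stand-in for Kolmogorov complexity; the observers, descriptions, and values are invented, and nothing here claims this is the right way to measure the complexity of an observer:

```python
import zlib

# Weight each candidate observer by 2^(-complexity), with compressed
# description length as a crude, computable stand-in for Kolmogorov
# complexity. Observers, descriptions, and values are invented.

observers = {
    "me tomorrow":    b"the same brain, one day older, with one extra day of memories",
    "a close friend": b"a different but familiar human brain with overlapping history",
    "a stranger":     b"an unfamiliar human brain with an unrelated personal history",
}

def weight(description):
    return 2.0 ** (-len(zlib.compress(description)))

total = sum(weight(d) for d in observers.values())
chance_of_becoming = {name: weight(d) / total for name, d in observers.items()}

# A made-up intuitive preference over "what being that observer is like".
value = {"me tomorrow": 1.0, "a close friend": 0.9, "a stranger": 0.8}

expected = sum(p * value[name] for name, p in chance_of_becoming.items())
print(chance_of_becoming)
print(round(expected, 3))
```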
I’m certainly not. Like I said, if you have any arguments I expect they could change my opinion.
No, you are talking about a different property of beliefs: lack of stability to new information. I claim that, because you lack a reflective understanding of the belief’s origins, you currently shouldn’t be certain; this holds even without any additional object-level arguments pointing out specific problems or arguing for an incompatible position.
There are other observers with low complexities (for example, other humans). I can imagine the possibility of being transformed into one of them with probability depending on their complexity, and I can use my intuitive preferences to make decisions which make that imagined situation as good as possible.
I see. I think this whole line of investigation is very confused.
No, you are talking about a different property of beliefs: lack of stability to new information. I claim that, because you lack a reflective understanding of the belief’s origins, you currently shouldn’t be certain; this holds even without any additional object-level arguments pointing out specific problems or arguing for an incompatible position.
I don’t quite understand. I am not currently certain, in the way I use the term. The way I think about moral questions is by imagining some extrapolated version of myself, who has thought for long enough to arrive at stable beliefs. My confidence in a moral assertion is synonymous with my confidence that it is also held by this extrapolated version of myself. Then I am certain of a view precisely when my view is stable. In what other way can I be certain or uncertain?
You can come to different conclusions depending on future observations, for example, in which case further reflection would not move your level of certainty: the belief would be stable, and yet you’d remain uncertain. Consider your belief about the outcome of a future coin toss: it is stable under reflection, but doesn’t claim certainty.
Generally, there are many ways in which you can (or should) make decisions or come to conclusions; your whole decision problem, all the heuristics that make up your mind, can have a hand in deciding how any given detail of your mind should be.
(Also, being certain for the reason that you don’t expect to change your mind sounds like a bad idea: it could license arbitrary beliefs, since the future process of potentially changing your mind that you’re thinking about could be making the same calculation, locking you into a belief with no justification other than itself. The only reason this doesn’t obviously happen is that you retain other, healthy reasons for reaching conclusions, so this particular wrong ritual washes out.)