I believe my only preferences are over experiences. I care nothing about a world without observers, and I care about our world exactly insofar as the world has an effect on the experiences of certain observers.
How do you know this? (What do you think you know, and how do you think you know it?) Your brain might contain this belief. Should you conclude from that fact that what the belief claims is true? Presumably, if your brain contained a belief that 15+23=36, you’d have a process that would not stop at accepting the claim just because your brain says it’s so; you’d be able to do better. There is no magical truth-machine in your mind: everything should be suspect, understood and believed more strongly only when further reflection permits.
What “certain observers”? You were talking only about your own experiences. How do you take into account the experiences of other agents? (Or even of yourself at other moments, of yourself given alternative observations, or of imperfect copies.)
How do I know what I care about? I don’t know. How would I learn that I care about things other than observers? Where could I find such a fact other than in my mind? I think this has basically wandered into a metaethics discussion. It is a fact that I feel nothing about worlds without observers. Perhaps if you gave me an argument to care, I would.
By “certain observers” I only meant to refer to “observers whose experiences are affected by the world”.
How do I know what I care about? I don’t know.
Then you shouldn’t be certain. Beliefs of unknown origin tend to be unreliable. They also make it hard to figure out what exactly they mean.
How would I learn that I care about things other than observers? Where could I find such a fact other than in my mind?
Consider the above example of believing that 15+23=36. You can find the facts that allow you to correct the error elsewhere in your mind; they are just beliefs other than the one that claims 15+23 to be 36. You can also consult a calculator, something not normally considered part of your mind. You can even ask me.
By “certain observers” I only meant to refer to “observers whose experiences are affected by the world”.
This doesn’t help. I don’t see how your algorithm discussed in the post would represent caring about anything other than exactly its own current state of mind, which was my question.
I’m certainly not. Like I said, if you have any arguments I expect they could change my opinion.
I don’t see how your algorithm discussed in the post would represent caring about anything other than exactly its own current state of mind, which was my question.
There are other observers with low complexities (for example, other humans). I can imagine the possibility of being transformed into one of them with probability depending on their complexity, and I can use my intuitive preferences to make decisions which make that imagined situation as good as possible. In what other sense would I care about anything?
I’m certainly not. Like I said, if you have any arguments I expect they could change my opinion.
No, you are talking about a different property of beliefs: lack of stability to new information. I claim that, because you lack a reflective understanding of the belief’s origins, you currently shouldn’t be certain, even without any additional object-level arguments pointing out specific problems or arguing for an incompatible position.
There are other observers with low complexities (for example, other humans). I can imagine the possibility of being transformed into one of them with probability depending on their complexity, and I can use my intuitive preferences to make decisions which make that imagined situation as good as possible.
I see. I think this whole line of investigation is very confused.
No, you are talking about a different property of beliefs: lack of stability to new information. I claim that, because you lack a reflective understanding of the belief’s origins, you currently shouldn’t be certain, even without any additional object-level arguments pointing out specific problems or arguing for an incompatible position.
I don’t quite understand. I am not currently certain, in the way I use the term. The way I think about moral questions is by imagining some extrapolated version of myself who has thought for long enough to arrive at stable beliefs. My confidence in a moral assertion is synonymous with my confidence that it is also held by this extrapolated version of myself. Then I am certain of a view precisely when my view is stable. In what other way can I be certain or uncertain?
You can, for example, come to different conclusions depending on future observations, in which case further reflection would not move your level of certainty: the belief would be stable, and yet you’d remain uncertain. Consider your belief about the outcome of a future coin toss: it is stable under reflection, but doesn’t claim certainty.
Generally, there are many ways in which you can (or should) make decisions or come to conclusions; your whole decision problem, all the heuristics that make up your mind, can have a hand in deciding how any given detail of your mind should be.
(Also, being certain for the reason that you don’t expect to change your mind sounds like a bad idea: it could license arbitrary beliefs, since the future process of potentially changing your mind that you’re thinking about could be making the same calculation, locking in a belief with no justification other than itself. The only reason it doesn’t obviously play out this way is that you retain other, healthy reasons for drawing conclusions, so this particular wrong ritual washes out.)
This is awesome—the exchange reminds me of this dialogue by Smullyan. :-)