Before I continue with the discussion, I have to say that this depth of analysis does not seem relevant to the practical applications of rationality that I asked about in the original post. The LessWrong wiki states under the entry for ‘truth’:
‘truth’ is a very simple concept, understood perfectly well by three-year-olds, but often made unnecessarily complicated by adults.
This, ‘relative truth’ as I call it, would have been perfectly adequate for our discussion before we started philosophising.
Nevertheless, philosophising is good fun! :)
So...
Let’s take a statement like 2+2=4. It’s not a statement about nature. It’s a statement about how abstract mathematical entities, which are independent of nature, relate to each other.
I assume you are using ‘nature’ in the same way I use ‘reality’? If yes, it is absurd to say that these entities are independent of nature. Everything is part of nature. Nature is everything there is. It includes you. Your brain. Mathematics. The questions you can ask are: Why do abstractions have these properties? Why do they sometimes describe other parts of reality? Does every mathematical truth have a correspondence to this other part of reality we call the physical world? These are all valid and fascinating questions.
You can’t claim this and at the same time say that someone’s probability is false or wrong. You just defined truth in a way that makes it an attribute of a different kind of claim.
I clearly stated that ‘relative truth’ can approximate/model parts of ‘absolute truth’ in a way that can be useful.
Dropping the concept (in LW-speak, tabooing it) is another way.
You definitely convinced me to be more careful when I use the word. Seriously.
If your objection to believing that “X happens with probability P” is no longer that this might be false, what is the objection about?
That is my objection. You think there is a conflict because you are not distinguishing between ‘absolute’ and ‘relative’ when you follow my definition. In the original post, I was just observing the situation we are in when we use rational assessment with incomplete data. I am interested to see if we can find ways to calibrate for such distortions. I will expand in future posts.
Finally, I wouldn’t want to give you the impression that I am certain about the view of ‘truth’ I am presenting here. And I hope you are not sure of your assessments either. But this is where I currently am. This is my belief system.
Finally, I wouldn’t want to give you the impression that I am certain about the view of ‘truth’ I am presenting here.
I didn’t have the impression.
Saying truth is about correspondence to reality is quite different from saying that it is about reality.
In Bayesianism, a probability that a person associates with an event should be a reflection of the information available to that person. Different people should come to the same probability when subject to exactly the same information, but to the extent that different people in the real world are always exposed to different information, it’s subjective.
In the frequentist idea of probability there’s the notion that the probability is independent of the observer and the information that the observer has, but that assumption isn’t there in the Bayesian notion of probability.
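As a minimal sketch of that point (the hypothesis, prior, and likelihoods below are all invented numbers): two agents who start from the same prior and condition on exactly the same evidence necessarily compute the same posterior.

```python
# Two Bayesian agents given the same prior and the same evidence
# must arrive at the same posterior probability.

def bayes_update(prior, p_evidence_if_h, p_evidence_if_not_h):
    """Posterior P(H | E) via Bayes' theorem."""
    p_evidence = p_evidence_if_h * prior + p_evidence_if_not_h * (1 - prior)
    return p_evidence_if_h * prior / p_evidence

# Hypothetical setup: H = "the coin is biased towards heads",
# and the shared evidence is a single observed head.
prior = 0.5               # shared starting belief in H
p_head_if_biased = 0.8    # likelihood of the observation under H
p_head_if_fair = 0.5      # likelihood of the observation under not-H

alice = bayes_update(prior, p_head_if_biased, p_head_if_fair)
bob = bayes_update(prior, p_head_if_biased, p_head_if_fair)
assert alice == bob       # same information, same probability
print(f"Shared posterior: {alice:.3f}")  # ~0.615
```

The subjectivity enters only through the inputs: give either agent different evidence and their outputs diverge.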
Different people should come to the same probability when subject to exactly the same information, but to the extent that different people in the real world are always exposed to different information, it’s subjective.
Right, and what my original post explores is that different people should come to the same inaccurate probability when subject to exactly the same incomplete information.
Indeed, this seems to be something people are aware of, judging from the answer by MrMind and this one from Vaniver. Vaniver in particular pointed me towards an attempt to model the issue in order to mitigate it, but it presupposes a computable universe and, most importantly, an agent with logical omniscience and an infinite amount of time. This puts it outside the realm of practical rationality. A brief description of further attempts to mitigate the issues left me, for now, unconvinced.
Right, and what my original post explores is that different people should come to the same inaccurate probability when subject to exactly the same incomplete information.
The idea of accuracy presupposes that you can compare a value to a reference.
If I say that Alice is 1.60m tall but she’s 1.65m, that’s inaccurate. If, however, I say that there’s a 5% chance that Alice is taller than 1.60m, that’s not inaccurate. My ability to predict height might be badly calibrated, giving me a bad Brier score or log score.
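For concreteness, a minimal sketch of how those two scores are computed (the forecasts and outcomes below are invented). Note that both are averages over many predictions:

```python
import math

def brier_score(probs, outcomes):
    """Mean squared gap between stated probabilities and what happened.
    0 is perfect; a constant 50% forecast scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

def log_score(probs, outcomes):
    """Mean log-probability assigned to the actual outcome.
    Higher (closer to 0) is better."""
    return sum(math.log(p if o else 1 - p) for p, o in zip(probs, outcomes)) / len(probs)

# Hypothetical forecasts, e.g. "5% chance Alice is taller than 1.60m".
probs = [0.05, 0.70, 0.90, 0.30]
outcomes = [0, 1, 1, 0]  # 1 = the event happened

print(f"Brier score: {brier_score(probs, outcomes):.3f}")  # 0.048
print(f"Log score:   {log_score(probs, outcomes):.3f}")    # -0.218
```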
I am using ‘inaccurate’ as equivalent to ‘badly calibrated’ here. Why do you feel it is important to make the distinction? I understand why it is important when dealing with clearly quantified data. But in everyday life do you really mentally attempt to assign probabilities to all variables?
I am using ‘inaccurate’ as equivalent to ‘badly calibrated’ here.
To determine whether or not a person is well calibrated, you have to look at multiple predictions by that person. Calibration is an attribute of the heuristic that generates the predictions, not of any single one.
On the other hand, a single statement such as ‘Alice is 1.60m tall’ might be inaccurate. Being inaccurate is a property of the statement itself and not just a property of how the statement was generated.
But in everyday life do you really mentally attempt to assign probabilities to all variables?
Assigning probabilities to events takes effort. As such, it’s not something you can do for two thousand statements in a day. To be able to assign probabilities, it’s also important to precisely define the belief.
If I take a belief like “All people who clicked on ‘Going’ will come to the event tonight”, I can assign a probability. The exercise of assigning that probability makes me think more clearly about the likelihood of it happening.
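To illustrate why the precise definition matters, a back-of-the-envelope sketch with invented numbers: even if each individual is very likely to show up, the belief “all of them will come” can be fairly unlikely.

```python
# Hypothetical: 12 people clicked 'Going', and each shows up
# independently with probability 0.9. "All of them will come"
# is then far less likely than any single attendance.
per_person = 0.9
clicked_going = 12
p_all_come = per_person ** clicked_going
print(f"P(all {clicked_going} come) = {p_all_come:.2f}")  # ~0.28
```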
Thanks for the clarifications. One last question, as I am sure all of these topics will come up again and again as I interact with the community.
Can you give me a concrete example of a complex, real-life problem or decision where you used the assignment of probabilities to your beliefs to an extent that you found satisfactory for making the decision? I am curious to see the mental process of really using this way of thinking. I assume it is a process happening through sound in the imagination, and more specifically through language (the internal dialogue). Could you reproduce it for me in writing?
I applied for a job. There was uncertainty around whether or not I would get the job. Having an accurate view of the probability of getting the job informs the decision of how important it is to spend additional effort.
I basically came up with a number and then asked myself whether I would be surprised if the event happened or didn’t happen.
I currently don’t have a more systematic process.
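To make the underlying reasoning explicit, a minimal decision sketch; all the numbers are hypothetical stand-ins:

```python
# Hypothetical decision: is extra interview preparation worth it?
p_job_baseline = 0.30     # estimated chance of getting the job as-is
p_job_with_effort = 0.45  # estimated chance after extra preparation
value_of_job = 10_000     # subjective value of the job (arbitrary units)
cost_of_effort = 500      # subjective cost of preparing (same units)

expected_gain = (p_job_with_effort - p_job_baseline) * value_of_job
print(f"Expected gain from effort: {expected_gain:.0f}")  # 1500
print("worth it" if expected_gain > cost_of_effort else "not worth it")
```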
I remember a conversation with a CFAR trainer. I said “I think X is a key skill”. They responded with: “I think it is likely that X is a key skill, but I don’t know that it has to be a key skill.”
We didn’t put numbers on it, but having probabilities in the background meant we were able to discuss our disagreement even though we both think “X is likely a key skill”.
I have never had someone outside of this community tell me “you are likely right, but I don’t see why I should believe that what you are saying is certain”.
The kind of mindset that produces a statement like this is about taking different probabilities seriously.
My thought is:
‘I have reached this mindset through studying views of assumptions and beliefs from other sources. Maybe this is another way to make the realisation.’
My doubt is:
‘Maybe I am missing something that the use of probabilities adds to this realisation.’
Hope to continue the discussion in the future.
It’s more than just a mindset. In this case the result was a concrete discursive practice. There are quite a few people who profess to have a mindset that separates shades of gray. The number of people who will voice disagreement when you tell them something they believe is likely true, about an issue that matters to them, is much lower.
Can you think of the last time you cared about an issue and someone professed to believe what you believed was likely true, yet you still disagreed with them? And can you flesh out the example?
Can you think of the last time you cared about an issue and someone professed to believe what you believed was likely true, yet you still disagreed with them?
Do I need to express it in numbers? In my mind I follow and practice, among others, the saying: “Study the assumptions behind your actions. Then study the assumptions behind your assumptions.”
Having said that, I cannot think of an example of applying that in a situation where I was in agreement. I am thinking that ‘I would not be in agreement without a reason regarding a belief that I have examined’, but I might be rationalising here. I will try to observe myself on that. Thanks!
I am thinking that ‘I would not be in agreement without a reason regarding a belief that I have examined’ but I might be rationalising here
We both had reasons for believing it to be true. On the other hand, humans believe things that are wrong. If you ask a Republican and a Democrat whether Trump is good for America, they might both have reasons for their beliefs, but they still disagree. That means for each of them there’s a chance of being wrong despite having reasons for their beliefs.
The reasons he had in his mind pointed to the belief being true, but they didn’t provide him with certainty that it’s true.
It was a belief that was important enough to him that he wanted to actually be right, and not merely to have reasons for holding it.
The practice of putting numbers on a belief forces you to be precise about what you believe.
Let’s say that you believe: “It’s likely that Trump will get impeached.” If Trump actually gets impeached, you will tell yourself “I correctly predicted it, I was right”. If he doesn’t get impeached, you are likely to think “When I said likely, I meant that there was a decent chance that he gets impeached, but I didn’t mean to say that the chance was more than 50%.”
The number forces precision. The practice of forcing yourself to be precise allows the development of more mental categories.
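A toy illustration of that ambiguity (the ranges below are invented; surveys of how people read these words vary widely): a verbal label covers a wide, overlapping band of probabilities, while a number pins the belief down.

```python
# Hypothetical mapping from verbal labels to probability ranges.
# Overlaps are what allow "likely" to be reinterpreted after the fact.
verbal_ranges = {
    "unlikely": (0.05, 0.35),
    "decent chance": (0.25, 0.55),
    "likely": (0.55, 0.90),
}

stated = 0.55  # a number is a point; a word is a band
for label, (low, high) in verbal_ranges.items():
    if low <= stated <= high:
        print(f"{stated:.0%} could be reported as '{label}'")
# Prints both 'decent chance' and 'likely' for the same belief.
```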
When Elon Musk started SpaceX, he reportedly thought that it had a 10% chance of success. Many people would read a 10% chance of success as “it’s highly unlikely that the company succeeds”. Elon, on the other hand, thought that given the high stakes, a 10% chance of success was enough to found SpaceX.
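The implicit calculation, sketched with made-up stakes: a 10% chance of success can still carry positive expected value when the payoff is large relative to the cost of trying.

```python
# Hypothetical stakes on an arbitrary scale.
p_success = 0.10
value_if_success = 100  # the upside, e.g. a transformative company
cost_of_trying = 5      # money, time, and reputation invested

expected_value = p_success * value_if_success - cost_of_trying
print(expected_value)   # 5.0: positive despite the low probability
```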
The number forces precision. The practice of forcing yourself to be precise allows the development of more mental categories.
I will have to explore this further. At the moment the method seems to me to just give an illusion of precision, which I am not sure is effective. I could say that, to represent my belief, I assign a 5% probability that the practice is useful. I will now keep interacting with the community and update my belief according to the evidence I see from people who are using it. Is this the right approach?
The word “useful” itself isn’t precise, and as such a figure as exact as 5% might be more precise than is warranted.
Otherwise, having your number and then updating it according to what you see from people using it is the Bayesian way.
How would you express the belief?