“I don’t know” means “I can’t predict the outcome better than a dart throwing monkey”. Doctors usually know more than that.
The example I chose is a bit misleading. I am just using it to indicate the problem though. You are thinking of ‘doctors’ as the doctors in our culture but I was hypothesising a situation in which the field is much more primitive. It is interesting to me to see how rationality is applied to a field where data is unavailable, scarce or inconclusive.
You seem to have a lot of black and white terms like “right hypothesis”, “accepting explanations as truths” and “I don’t know” in your thinking.
I don’t see the problem with saying ‘I don’t know’ when the data is obviously insufficient for me to judge. I would actually consider it counterproductive to give a probability in this case, as it might create the illusion that I know more than I actually do. In this sense my ‘I don’t know’ is not a value of 0 but an admission that there is no reason to use rational jargon at all.
The terms ‘right hypothesis’ and ‘truth’ are used, in the above comment, in the context of what is considered a good enough truth in science. You are right that this can be confusing if we get into the epistemological details, though I thought it was sufficient for the purpose of communicating in this thread. I can change it to ‘scientific fact’ and we should be ok? Does that make sense?
After Popper science isn’t about establishing truth or the right hypothesis.
I would actually consider it counterproductive to give a probability in this case, as it might create the illusion that I know more than I actually do.
If you can do better than a random guess (the dart throwing monkey) then you have knowledge in the Bayesian sense.
There could be situations where you really don’t know more than the dart throwing monkey, and where it thus doesn’t make sense to speak about probability, but in most cases we know at least a little bit.
After Popper science isn’t about establishing truth or the right hypothesis.
I am not familiar with Popper but I would agree anyway. I will be more careful with my terms. Would ‘scientific fact’ work though? I think it does but I am open to being corrected.
If you can do better than a random guess (the dart throwing monkey) then you have knowledge in the Bayesian sense.
[1] What if a rational assessment of inconclusive data weighs you towards the wrong direction? Wouldn’t you then start doing worse than the dart throwing monkey?
There could be situations where you really don’t know more than the dart throwing monkey, and where it thus doesn’t make sense to speak about probability, but in most cases we know at least a little bit.
I would challenge your ‘in most cases’ statement. I would also challenge the contention that a little bit is better than nothing according to [1].
No. Everything in science is falsifiable and open to challenge.
[1] What if a rational assessment of inconclusive data weighs you towards the wrong direction?
It’s certainly possible to be completely deceived by reality.
Whenever you act where an outcome matters to you, you will take the expected outcomes into account. Even if you say “I don’t know” you still have to make decisions about what you do about an issue.

Maybe http://slatestarcodex.com/2013/08/06/on-first-looking-into-chapmans-pop-bayesianism/ is worth reading for you. The kind of “I don’t know” that you advocate is what Scott calls Anton-Wilsonism.
No. Everything in science is falsifiable and open to challenge.
I understand and agree with that. I am just trying to find the term I can use when discussing scientific results. I thought ‘scientific fact’ was ok because it includes ‘scientific’ which implies all the rest. But yes the word ‘fact’ is misleading. Should we just call it ‘scientific result’? What do you recommend?
I can’t stress enough how useful that link is to me as a new LW user. My criticisms are quite close to what David Chapman is saying and it is really nice to see how someone representative of LW responds to this.
The kind of “I don’t know” that you advocate is what Scott calls Anton-Wilsonism.
Discussing on LW is giving me the impression that I have to learn to talk in a new language. I have to admit that, at the moment, all the corrections you guys have indicated are an improvement on my previous way of expressing myself. Very exciting!
But this is a great opportunity to deepen my understanding of the language by practising it. Let me try to reformulate my ‘I don’t know’ in the Bayesian language. So, what I mean by ‘I don’t know’ is that you should use a uniform distribution. For example, you have attached the label of ‘Anton-Wilsonism’ to me based on what I have expressed so far. I could assume, if you are literally using this way of thinking, that you went through the process of weighing the probability that I am an exact match for what Scott is describing and decided that, based on your current evidence, I am. This also implies that you have, now or in the past, assigned ratings to all the assumptions and conclusions made in Scott’s two paragraphs (there are quite a few) and that you are applying all of these to my model. So:
Did you really quantify all that, or were these labels applied, as commonly happens with humans, automatically?
Do you think that my recommendation of building your model using uniform distributions would be useful as we go through the process of getting more evidence about each other?
It is a trivial example, but indicative of an attitude: using my approach, your action could change from writing (but, most importantly, thinking):
“The kind of ‘I don’t know’ that you advocate is what Scott calls Anton-Wilsonism.”
to
“Is the kind of ‘I don’t know’ you advocate what Scott calls Anton-Wilsonism?”
When doing credence calibration I don’t get results that indicate that I should label all claims 50⁄50.
I just learned (see comments below) that “I don’t know” is not 50⁄50 but a uniform distribution. Could you give me a few examples of credence calibration as it happens from your perspective?
Whenever you act where an outcome matters to you, you will take the expected outcomes into account. Even if you say “I don’t know” you still have to make decisions about what you do about an issue.
Indeed. This is practical. All I am saying is that we shouldn’t confuse the fact that we need to decide when we need to decide with the belief that our ratings express truth. I think it perfectly possible to be forced by circumstances into making an action-related decision but to return the conceptualisation of the underlying assumptions to a uniform distribution for the purpose of further exploration. It is just being aware that you have a belief system, that you need it, but not fully believing in it.
My criticisms are quite close to what David Chapman is saying and it is really nice to see how someone representative of LW responds to this.
No, I don’t think what you are saying is close to what Chapman is arguing. Chapman doesn’t argue that we should say “I don’t know” instead of pinning probability on statements where we have little knowledge.
I understand and agree with that. I am just trying to find the term I can use when discussing scientific results. I thought ‘scientific fact’ was ok because it includes ‘scientific’ which implies all the rest.
There are enough people who use terms like “scientific fact” without thinking in terms of falsificationism that it’s not clear what’s implied.
All I am saying is that we shouldn’t confuse the fact that we need to decide when we need to decide with the belief that our ratings express truth.
To me your sentence sounds like you have a naive idea of what the word “truth” is supposed to mean. A meaning that you learned as a child.
There are some intuitions that come with that view of the world. Some of those intuitions will come into conflict if you come into contact with more refined ideas of what truth happens to be and how epistemology should work. There are various philosophers like Popper who have put forward more refined concepts.
Eliezer Yudkowsky has put forward his own concepts on lesswrong.
I just learned (see comments below) that “I don’t know” is not 50⁄50 but a uniform distribution. Could you give me a few examples of credence calibration as it happens from your perspective?

For binary yes/no predictions the uniform distribution leads to 50⁄50. https://www.metaculus.com, http://predictionbook.com/ and https://www.gjopen.com/ do have plenty of examples.
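To make the uniform-distribution point concrete, here is a minimal Python sketch (the outcome labels are made up for illustration): a uniform “I don’t know” prior gives each mutually exclusive outcome the same share, which for a binary claim is exactly 50⁄50.

```python
# A uniform "I don't know" prior spreads probability evenly over the
# mutually exclusive outcomes; for a binary claim that is exactly 50/50.

def uniform_prior(outcomes):
    """Assign equal probability to each mutually exclusive outcome."""
    return {outcome: 1.0 / len(outcomes) for outcome in outcomes}

print(uniform_prior(["claim is true", "claim is false"]))
# {'claim is true': 0.5, 'claim is false': 0.5}

print(uniform_prior([f"die shows {n}" for n in range(1, 7)]))
# each of the six faces gets ~0.167
```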
No, I don’t think what you are saying is close to what Chapman is arguing. Chapman doesn’t argue that we should say “I don’t know” instead of pinning probability on statements where we have little knowledge.
Sorry. I meant my general criticisms (which I haven’t expressed), not in the sense of our current discussion. I wasn’t very clear.
To me your sentence sounds like you have a naive idea of what the word “truth” is supposed to mean. A meaning that you learned as a child. There are some intuitions that come with that view of the world. Some of those intuitions will come into conflict if you come into contact with more refined ideas of what truth happens to be and how epistemology should work. There are various philosophers like Popper who have put forward more refined concepts. Eliezer Yudkowsky has put forward his own concepts on lesswrong.
I am not sure where you are getting that I “have a naive idea of what the word ‘truth’ is supposed to mean”. Stating it is no justification. Pointing towards Popper or Yudkowsky is not justification either. You would need to take my statements that point towards my ‘naivety’ and deconstruct them so we can learn. I have from my side offered arguments and examples for the value of the ‘I don’t know’ mentality and why it is useful but I feel you haven’t engaged.

I’m afraid I am not in a position to argue about this as I have only partially understood it. You can read here.
I meant my general criticisms (which I haven’t expressed), not in the sense of our current discussion.
Chapman doesn’t reject rationality but advocates transcending it. It’s a different standpoint. You need to first adopt a framework to later transcend it. In Chapman’s view, LW-type rationality is useful for people who move from Kegan 3 to Kegan 4.
I am not sure where you are getting that I “have a naive idea of what the word ‘truth’ is supposed to mean”. Stating it is no justification.
If you actually refined your concept of truth, there’s a good chance that you could point to philosophers that influenced it. This would allow me to address their arguments to the extent that I’m familiar with their notions of truth and how they relate to LW rationality.
I have from my side offered arguments and examples for the value of the ‘I don’t know’ mentality and why it is useful but I feel you haven’t engaged.
I see your argument as “But this isn’t truth” without any deep argument of what you mean by “truth” or signs that you went through the process of refining a notion of what it means for yourself.
You speak about “not fully believing” when the whole point of putting probabilities on statements is that you don’t fully know what’s going to happen. There’s the general mantra of “Strong opinions, loosely held.” Stating a probability means that this is the likelihood that the information available at this moment warrants. It in no way implies that the probability will stay the same if other information becomes available in the next moment. Constantly updating the probability as new information becomes available is part of the ideal of Bayesian rationality.
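As a minimal sketch of that updating process (all likelihood numbers here are invented for illustration), Bayes’ rule revises the stated probability each time a new piece of evidence arrives:

```python
# Bayes' rule applied step by step: the stated probability reflects only the
# information available so far and shifts as soon as new evidence arrives.
# All likelihood numbers are invented for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) given a prior and the two likelihoods."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

belief = 0.5  # starting from "I don't know" on a binary hypothesis
for p_if_true, p_if_false in [(0.8, 0.3), (0.6, 0.5), (0.2, 0.7)]:
    belief = bayes_update(belief, p_if_true, p_if_false)
    print(round(belief, 3))
# prints 0.727, then 0.762, then 0.478: up on supporting evidence, down on opposing
```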
Chapman doesn’t reject rationality but advocates transcending it.
I do not reject rationality either. Why would I be here if I did so? I think you are misreading my contrarian approach as rejection.
If you actually refined your concept of truth, there’s a good chance that you could point to philosophers that influenced it.
Talking in terms of references bears the danger of attaching labels to each other. It is much more accurate to expand on points. I respect it if you don’t have the time for that, of course.
Since you are asking, though, let’s see where this comparison of readings takes us. In terms of most Western philosophy, I find the verbosity and the tangle of self-constructed concepts to be unbearable (though I have to clarify that the two pages of Popper that I read were perfectly clear). Wittgenstein’s Philosophical Investigations is, I believe, a good antidote to a lot of the above. My study nowadays is focused on self-observation, psychology, sociology and neuroscience, as well as Eastern philosophy (I am not religious). For example, you can find a perspective on truth by reading the full corpus of teaching stories/anecdotes of Mullah Nasrudin.
I see your argument as “But this isn’t truth” without any deep argument of what you mean by “truth” or signs that you went through the process of refining a notion of what it means for yourself.
A deep definition? Truth is reality. It cannot be reached or expressed through rationality but parts of it can be approximated/modelled in a way that can be useful. For discussing practical rationality, though, isn’t it enough to say that truth is a belief that corresponds to reality?
By this time I feel we have kind of lost the focus of our conversation. To recap, this was all about my comment on the possibility of rationally arrived beliefs being false due to the assessment of insufficient evidence. It was not meant as an attack on rationality but as constructive criticism and possibly a way for me to be introduced to solutions.
That’s evading the question. Or pretending that it doesn’t exist.
I can’t spend time going deep into your argument when you use a naive definition of a term that can be stated in three words.
To recap, this was all about my comment on the possibility of rationally arrived beliefs being false due to the assessment of insufficient evidence.
Saying that there’s a chance that it’s false is no new issue but it’s an issue that’s already addressed. The whole point of putting a probability on a belief is that there’s a chance that it’s false.
It’s not any new concern. Probability is made for situations where you don’t know whether a belief is true or false but have uncertainty about it.
That’s evading the question. Or pretending that it doesn’t exist.
Interesting. I guess your approach of just saying that you understand something but not expressing it in the discussion is not evasive at all.
I will keep an open mind for future conversations because, believe it or not, I am open to learn from you, if there is something you can teach me. But at the moment you are just demonstrating a tendency towards ‘name dropping’ and ‘name calling’ which is not constructive at all.
I’m speaking about processes of reasoning.

Saying “Truth is reality” tells me nothing about what it means for a probability to be true. It leaves everything important implicit.
In the Kegan model that Chapman uses, handling concepts this way happens at level three. The step to level four is to actually dig deeper and refine one’s notions to be more specific. To use notions that come from an internally consistent system instead of the naive notions of the concepts of the kind that people take up while being in high school.
Chapman then goes and says: on the one hand, I have the system of reasoning with probability and qualifying my uncertainty with probability. On the other hand, I also want to use predicate calculus, and the system in which I can use predicate calculus is not the one that’s ruled by Bayes’ rule.
But that doesn’t mean that one system is more true or more in line with reality. Both are ways to model the world.
Thank you for engaging :)

For the next paragraphs I would like you to exercise humility and restrain your assessment of what I am saying until I have finished saying it. You can then assess it as a whole.
My definition of truth was not three words. It was a small paragraph. Let me break it down:
Truth is reality.
Now why is this useful? The first three words ‘Truth is reality’ acknowledge that there is existence, and that this X, which you can call the world, nature, the whole or reality (the one I used on this occasion as I found it cleaner of associations in our context), is inevitably equivalent to truth. Rational discussion, having as its basis the manipulation of symbols, is an abstraction of X and thus not X. Thus absolute truth is outside the realm of rationality. If you think this is what you describe as ‘naive notions of the concepts of the kind that people take up while being in high school’, I can only say in my turn that your understanding seems naive to me.
It cannot be reached or expressed through rationality but parts of it can be approximated/modelled in a way that can be useful.
Here I state that, although the first three words describe the deepest level of truth, we can focus on truth as it can be expressed in rational terms (mathematics is included in this) because it is demonstrably useful. But we should not confuse this truth with reality. We can call it ‘relative truth’ vs ‘absolute truth’, or truth vs Truth, or whatever you fancy, as long as we clarify our terms so we can talk.
For discussing practical rationality, though, isn’t it enough to say that truth is a belief that corresponds to reality?
And here we are in the domain where I can learn from you. The domain of being efficient at using rationality. Notice that this sentence is a question where you can respond with a better conceptualisation. Rereading this sentence I already think I see its shortcomings. How about: ‘For discussing practical rationality should we say that truth is a belief of which we can observe or demonstrate its relation to reality?’. Or should we just stop using the word truth for now? I am up for that.
So to return to your statement:
Saying “Truth is reality” tells me nothing about what it means for a probability to be true.
Indeed. That is why I did not move the conversation towards this direction, you did. I would invite you to reread my original post.
Now why is this useful? The first three words ‘Truth is reality’ acknowledge that there is existence, and that this X, which you can call the world, nature, the whole or reality (the one I used on this occasion as I found it cleaner of associations in our context), is inevitably equivalent to truth.
Let’s take a statement like 2+2=4. It’s not a statement about nature. It’s a statement about how abstract mathematical entities, which are independent from nature, relate to each other.
I can reason about whether certain statements about Hilbert’s hotel are true even though there’s no such thing as Hilbert’s hotel in reality.
What you are saying looks to me like you didn’t go through edge cases like this and decide whether you think statements surrounding Hilbert’s hotel shouldn’t be called true. Going through edge cases leads to a refinement of concepts.
It rather sounds to me like you think that those edge cases don’t really matter and the intuitions you have of the concept of truth should count.
Thus absolute truth is outside the realm of rationality.
You can’t claim this and say at the same time that someone’s probability is false or wrong. You just defined truth in a way that it’s an attribute of different claims.
How about: ‘For discussing practical rationality should we say that truth is a belief of which we can observe or demonstrate its relation to reality?’. Or should we just stop using the word truth for now?
One way of dealing with a concept like this is to reference refined concepts like the concept of truth that Eliezer developed in the sequences.
Dropping the concept (in LW speak tabooing it) is another way. If your objection to believing that “X happens with probability P” isn’t anymore that this might be false, what’s the objection about?
Before I continue with the discussion I have to say that this depth of analysis does not seem relevant to the practical applications of rationality that I asked about in the original post. The LessWrong wiki states under the entry for ‘truth’:
‘truth’ is a very simple concept, understood perfectly well by three-year-olds, but often made unnecessarily complicated by adults.
This, ‘relative truth’ as I call it, would be perfectly adequate for our discussion before we started philosophising.
Nevertheless, philosophising is good fun! :)
So...
Let’s take a statement like 2+2=4. It’s not a statement about nature. It’s a statement about how abstract mathematical entities, which are independent from nature, relate to each other.
I assume you are using ‘nature’ in the same way I use ‘reality’? If yes, it is absurd to say that these entities are independent of nature. Everything is part of nature. Nature is everything there is. It includes you. Your brain. Mathematics. The question you can ask is: why does abstraction have these properties? Why does it sometimes describe other parts of reality? Does every mathematical truth have a correspondence to this other part of reality we call the physical world? These are all valid and fascinating questions.
You can’t claim this and say at the same time that someone’s probability is false or wrong. You just defined truth in a way that it’s an attribute of different claims.
I clearly stated that ‘relative truth’ can approximate/model parts of ‘absolute truth’ in a way that can be useful.
Dropping the concept (in LW speak tabooing it) is another way.
You definitely convinced me to be more careful when I use the word. Seriously.
If your objection to believing that “X happens with probability P” isn’t anymore that this might be false, what’s the objection about?
That is my objection. You think there is a conflict because you are not distinguishing between ‘absolute’ and ‘relative’ when you follow my definition. In the original post, I was just observing the situation we are in when we use rational assessment with incomplete data. I am interested to see if we can find ways to calibrate for such distortions. I will expand in future posts.
Finally, I wouldn’t want to give you the impression that I am certain about the view of ‘truth’ I am presenting here. And I hope you are not sure of your assessments either. But this is where I currently am. This is my belief system.
Finally, I wouldn’t want to give you the impression that I am certain about the view of ‘truth’ I am presenting here.
I didn’t have the impression.
Saying truth is about correspondence to reality is quite different than saying that it is about reality.
In Bayesianism a probability that a person associates with an event should be a reflection of the information available to the person. Different people should come to the same probability when subject to exactly the same information, but to the extent that different people in the real world are always exposed to different information, it’s subjective.
In the frequentist idea of probability there’s the notion that the probability is independent from the observer and the information that the observer has, but that assumption isn’t there in the Bayesian notion of probability.
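A small sketch of that contrast, assuming invented likelihood numbers: two observers who start from the same prior and use the same update rule assign different probabilities only because they have seen different evidence.

```python
# Same prior, same update rule: identical information gives identical
# probabilities, extra information gives a different (equally legitimate)
# probability. The likelihood pairs are invented for illustration.

def posterior(prior, evidence):
    """Fold (P(e|H), P(e|not H)) pairs into P(H | all evidence seen)."""
    p = prior
    for p_if_true, p_if_false in evidence:
        p = p * p_if_true / (p * p_if_true + (1 - p) * p_if_false)
    return p

shared_evidence = [(0.9, 0.4)]
extra_evidence = [(0.3, 0.6)]  # only observer B happens to see this

print(round(posterior(0.5, shared_evidence), 3))                   # observer A: 0.692
print(round(posterior(0.5, shared_evidence + extra_evidence), 3))  # observer B: 0.529
```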
Different people should come to the same probability when subject to exactly the same information, but to the extent that different people in the real world are always exposed to different information, it’s subjective.
Right, and what my original post explores is that different people should come to the same inaccurate probability when subject to exactly the same incomplete information.
Indeed, this seems to be something people are aware of, from what I gathered from the answer of MrMind and this one from Vaniver. Vaniver in particular pointed me towards an attempt to model the issue in order to mitigate it, but it presupposes a computable universe and, most importantly, an agent with logical omniscience and an infinite amount of time. This puts it out of the realm of practical rationality. A brief description of further attempts to mitigate the issues left me, for now, unconvinced.
Right, and what my original post explores is that different people should come to the same inaccurate probability when subject to exactly the same incomplete information.
The idea of accuracy presupposes that you can compare a value to another reference.
If I say that Alice is 1,60m tall but she’s 1,65m, that’s inaccurate. If I, however, say that there’s a 5% chance that Alice is taller than 1,60m, that’s not inaccurate. My ability to predict height might be badly calibrated and I might have a bad Brier score or log score.
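For concreteness, a minimal sketch of a Brier score with invented predictions and outcomes: each forecast is a probability, each outcome is 1 or 0, and the score is the mean squared difference (lower is better).

```python
# Brier score: mean squared difference between stated probabilities and what
# actually happened (1 = the event occurred, 0 = it did not); lower is better.
# Predictions and outcomes below are invented for illustration.

def brier_score(predictions, outcomes):
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

predictions = [0.05, 0.70, 0.90, 0.30]  # e.g. "5% chance Alice is taller than 1,60m"
outcomes = [0, 1, 1, 0]                 # what turned out to be the case

print(round(brier_score(predictions, outcomes), 3))  # 0.048
```

Calibration, by contrast, only shows up across many such predictions: among all the claims you gave 70%, roughly 70% should have turned out true.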
I am using ‘inaccurate’ as equivalent to ‘badly calibrated’ here. Why do you feel it is important to make the distinction? I understand why it is important when dealing with clearly quantified data. But in everyday life do you really mentally attempt to assign probability to all variables?
I am using ‘inaccurate’ as equivalent to ‘badly calibrated’ here.
To determine whether a person is well calibrated or isn’t, you have to look at multiple predictions from the person. It’s an attribute of a heuristic for decision making.
On the other hand, a single statement such as “Alice is 1,60m” might be inaccurate. Being inaccurate is a property of a statement and not just a property of how the statement was generated.
But in everyday life do you really mentally attempt to assign probability to all variables?
Assigning probabilities to events takes effort. As such, it’s not something you can do for two thousand statements in a day. To be able to assign probabilities it’s also important to precisely define the belief.
If I take a belief like “All people who clicked on ‘Going’ will come to the event tonight”, I can assign a probability. The exercise of assigning that probability makes me think more clearly about the likelihood of it happening.
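A hedged sketch of why precisely defining that belief matters, under the simplifying assumption that each person who clicked ‘Going’ shows up independently with the same probability (the 0.8 rate and the group sizes are invented):

```python
# "All people who clicked 'Going' will come" under a deliberately simple model:
# each person shows up independently with the same probability. The 0.8 rate
# and the group sizes are invented; the point is how quickly "all of them"
# becomes improbable as the group grows.

def p_everyone_shows(n_people, p_show=0.8):
    return p_show ** n_people

for n in (3, 10, 25):
    print(n, round(p_everyone_shows(n), 3))
# prints 3 0.512, 10 0.107, 25 0.004
```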
Thanks for the clarifications. One last question, as I am sure all these will come up again and again as I am interacting with the community.
Can you give me a concrete example of a complex, real-life problem or decision where you used the assignment of probabilities to your beliefs to an extent that you found satisfactory for making the decision? I am curious to see the mental process of really using this way of thinking. I assume it is a process happening through sound in the imagination and, more specifically, through language (the internal dialogue). Could you reproduce it for me in writing?
I applied for a job. There was uncertainty around whether or not I would get the job. Having an accurate view of the probability of getting the job informs the decision of how important it is to spend additional effort.
I basically came up with a number and then asked myself whether I would be surprised if the event happened or didn’t happen.
I currently don’t have a more systematic process.
I remember a conversation with a CFAR trainer. I said “I think X is a key skill”. They responded with: “I think it is likely that X is a key skill but I don’t know that it has to be a key skill.”
We didn’t put numbers on it, but having probabilities in the background resulted in us being able to discuss our disagreement even though we both think “X is likely a key skill”.
I had never had someone outside of this community tell me “you are likely right but I don’t see why I should believe that what you are saying is certain”.
The kind of mindset that produces a statement like this is about taking different probabilities seriously.
My thought is:

‘I have reached this mindset through studying views of assumptions and beliefs from other sources. Maybe this is another way to make the realisation.’

My doubt is:

‘Maybe I am missing something that the use of probabilities adds to this realisation.’

Hope to continue the discussion in the future.
It’s more than just a mindset. In this case the result was a concrete discursive practice. There are quite a few people who profess to have a mindset that separates shades of gray. The number of people who will voice disagreement when you tell them something they believe is likely to be true and that is important is much lower.
Can you think of the last time you cared about an issue and someone professed to believe what you believed to be likely true, and you disagreed with them? And could you spell out the example?
Can you think of the last time you cared about an issue and someone professed to believe what you believed to be likely true, and you disagreed with them?
Do I need to express it in numbers? In my mind I follow and practice, among others, the saying: “Study the assumptions behind your actions. Then study the assumptions behind your assumptions.”
Having said that, I cannot think of an example of applying that in a situation where I was in agreement. I am thinking that ‘I would not be in agreement without a reason regarding a belief that I have examined’ but I might be rationalising here. I will try to observe myself on that. Thanks!
I am thinking that ‘I would not be in agreement without a reason regarding a belief that I have examined’ but I might be rationalising here
We both had reasons for believing it to be true. On the other hand, humans believe things that are wrong. If you ask a Republican and a Democrat whether Trump is good for America, they might both have reasons for their beliefs but they still disagree. That means for each of them there’s a chance of being wrong despite having reasons for their beliefs.
The reasons he had in his mind pointed to the belief being true but they didn’t provide him with the certainty that it’s true.
It was a belief that was important enough for him to want to be right, and not only to have reasons for holding his belief.
The practice of putting numbers on a belief forces you to be precise about what you believe.
Let’s say that you believe: “It’s likely that Trump will get impeached.” If Trump actually gets impeached you will tell yourself “I correctly predicted it, I was right”. If he doesn’t get impeached you are likely to think “When I said ‘likely’ it meant that there was a decent chance that he gets impeached, but I didn’t mean to say that the chance was more than 50%.”
The number forces precision. The practice of forcing yourself to be precise allows the development of more mental categories.
When Elon Musk started SpaceX he reportedly thought that it had a 10% chance of success. Many people would think of a 10% chance of success as: it’s highly unlikely that the company succeeds. Elon, on the other hand, thought that given the high stakes, a 10% chance of success was enough to found SpaceX.
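A rough expected-value sketch of that reasoning, with entirely made-up numbers: a 10% chance can still dominate the decision when the potential upside is large relative to the cost of trying.

```python
# Why "only a 10% chance" can still justify acting: weigh the chance of the
# upside against the cost of trying. Both dollar figures are invented purely
# to illustrate the structure of the reasoning.

p_success = 0.10
value_if_success = 5_000_000_000  # hypothetical value of the venture succeeding
cost_of_trying = 100_000_000      # hypothetical cost lost if it fails

expected_value = p_success * value_if_success - (1 - p_success) * cost_of_trying
print(expected_value)      # 410000000.0
print(expected_value > 0)  # True: worth trying despite the 10% odds, on these numbers
```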
The number forces precision. The practice of forcing yourself to be precise allows the development of more mental categories.
I will have to explore this further. At the moment the method seems to me to just give an illusion of precision, which I am not sure is effective. I could say that I assign a 5% probability that the practice is useful, to represent my belief. I will now keep interacting with the community and update my belief according to the evidence I see from people that are using it. Is this the right approach?

The word “useful” itself isn’t precise, and as such the precision of 5% might be more precise than warranted.

Otherwise, having your number and then updating it according to what you see from people using it is the Bayesian way.

How would you express the belief?
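As one hedged sketch of what “start with a number and update on the evidence you see” could look like mechanically (the observation counts are hypothetical), a Beta distribution over “how often the practice helps” can encode the initial ~5% estimate and absorb new observations:

```python
# A Beta-Binomial sketch of "come up with a number, then update on what you
# see from people using the practice". The observation counts are hypothetical;
# the update itself is the standard conjugate one.

alpha, beta = 1.0, 19.0      # prior roughly encoding the initial ~5% estimate
helped, did_not_help = 7, 3  # hypothetical observations gathered over time

alpha += helped
beta += did_not_help

print(round(alpha / (alpha + beta), 3))  # updated point estimate: 0.267
```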
What if a rational assessment of inconclusive data weighs you towards the wrong direction? Wouldn’t you then start doing worse than the dart throwing monkey?
Sure. In other words, if you get fed bad enough data then you have (so to speak) anti-knowledge. Surely this isn’t surprising?
No, not really surprising.

I would just clarify, though, that the data does not need to be ‘bad’ in the sense that it is false. We might have data that are accurate but misinterpret them by generalising to a larger context or mistakenly transposing them to a different one.