A line of thought that I want to explore: a lot of times when people appear to be close-minded, they aren’t actually being (too) close-minded. This line of thought is very preliminary and unrefined.
It’s related to Aumann’s Agreement Theorem. If you happen to have two perfectly Bayesian agents who are able to share their information, then yes, they will end up agreeing. In practice, people aren’t 1) perfectly Bayesian or 2) able to share all of their information. I think (2) is a huge problem, and a huge reason why it’s hard to convince people of things.
Well, I guess what I’m getting at isn’t really close-mindedness. It’s just… suppose you disagree with someone on something. You list out a bunch of arguments for why the other person is wrong, and why they should adopt your belief. Argument A, B, C, D, E… so on and so forth. It feels like you’ve listed out so many things, and they’re being stubborn in not changing their mind and admitting that you’re right. But actually, given the information they have, they’re often correct in not adopting your belief. Even if they were a perfect Bayesian, your arguments A through E just aren’t nearly enough. You’d need perhaps 100, maybe even 1,000 times more arguments to get a perfectly open-minded and Bayesian agent to start from the point where the other person started and end up agreeing with you.
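A toy calculation in Python, just to make that intuition concrete. The prior odds and the strength of each argument are made-up numbers for illustration only; the point is that against a strong prior, a handful of decent arguments still leaves a rational agent well short of agreeing with you.

```python
# Toy illustration (made-up numbers): how far a handful of arguments
# moves a Bayesian who starts out strongly doubting your claim.

prior_odds = 1 / 1000        # they start out at roughly 0.1% on your claim
likelihood_ratio = 2.0       # each argument is twice as likely if you're right

def posterior_probability(n_arguments):
    """Posterior probability after n independent arguments of equal strength."""
    odds = prior_odds * likelihood_ratio ** n_arguments
    return odds / (1 + odds)

for n in (5, 10, 20):
    print(f"{n:>2} arguments -> posterior {posterior_probability(n):.1%}")

# 5 arguments  ->  ~3%   (still reasonable for them not to budge)
# 10 arguments -> ~51%   (only now on the fence)
# 20 arguments -> ~99.9%
```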
Maybe it’s related to the illusion of transparency. There are all of these premises that you are assuming to be true. All of these data points. Subtle life experiences. Stuff like that. All of these things inform your priors. And it’s easy to assume that the other person shares the same data informing their priors. But they don’t. And so providing these data points is part of your job in arguing for your position. But it is often difficult to realize that this is part of your job.
Wait a minute: I think I’m basically trying to say the same thing as Expecting Short Inferential Distances. Sigh. Yeah, I think that’s pretty much it.
This is a pretty good example of something that happens a lot to me on LessWrong. I have some vague idea about something. Then I realize that someone on LessWrong (frequently Eliezer) has a great blog post about it that does a great job of crystallizing it, articulating it, and filling in the gaps for me. Usually it’s a very exciting and satisfying experience. Right now I’m a little a) disappointed in myself for not realizing this to begin with and b) disappointed that I don’t actually have a useful new thought to share. I’m also c) a little frustrated that I am experiencing (b).
You’d need perhaps 100, maybe even 1,000 times more arguments to get a perfectly open-minded and Bayesian agent to start from the point where the other person started and end up agreeing with you.
Modelling humans as Bayesian agents seems wrong.
For humans, I think the problem usually isn’t the number of arguments or the number of angles from which you attack the problem, but whether you have hit on the few significant cruxes for that person. This is especially because humans are quite far from being perfect Bayesians. For relatively small disagreements (i.e. not at the scale of convincing a Christian that God doesn’t exist), usually people just have a few wrong assumptions or cached thoughts. If you can accurately hit those cruxes, then you can convince them. It is very, very hard to know which arguments will hit those cruxes, though, which is why one viable strategy is to keep throwing arguments until one of them works.
(Also, unlike with Bayesian agents, where you can argue for W->X, X->Y, and Y->Z in any order, with humans you sometimes need to argue things in the correct order.)
Suppose you identify a single crux A. Now you need to convince them of A. But convincing them of A requires you to convince them of A.1, A.2, and A.3.
Ok, no problem. You get started trying to convince them of A.1. But then you realize that in order to convince them of A.1, you need to first convince them of A.1.1, A.1.2, and A.1.3.
I think this sort of thing is often the case, and is how large inferential distances are “shaped”.
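A quick sketch of that shape in Python, with a made-up branching factor: if every claim rests on a few sub-claims, the number of things you have to establish grows geometrically with the depth of the crux tree.

```python
# Toy model of a "crux tree" (made-up branching factor): each claim
# rests on `branching` sub-claims, down to the given depth.

def subclaims_to_establish(branching=3, depth=3):
    """Total sub-claims in a crux tree of the given branching and depth."""
    return sum(branching ** level for level in range(1, depth + 1))

for depth in (1, 2, 3, 4):
    total = subclaims_to_establish(branching=3, depth=depth)
    print(f"depth {depth}: {total} sub-claims to establish")

# depth 1:   3  (A.1, A.2, A.3)
# depth 2:  12  (plus A.1.1 through A.3.3)
# depth 3:  39
# depth 4: 120
```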