As far as I can tell, most people, while engaging in real-time conversations, do not feel this discomfort of having insufficient time and resources to verify the other participant’s claims (or for that matter, to make sure that one’s own speech is not erroneous). Is it because they are too credulous, and haven’t developed an instinctive skepticism of every new idea that they hear? Or do they just not take the other person’s words seriously (i.e., “in one ear, out the other”)?
If you aren’t afraid of making mistakes you can learn and grow MUCH faster than if you are.
If you aren’t afraid of noticing when you have made mistakes you can learn and grow MUCH MUCH faster than if you are.
The main thing though is that once you have learned an average amount the more you learn the less typical your thought patterns will be. If you bother to learn a lot your thought patterns will be VERY atypical. Once this happens, it becomes wildly unlikely that anyone talking with you for more than a minute without feedback will still be saying anything useful. Only conversation provides rapid enough feedback to make most of what the other person says relevant. (think how irrelevant most of the info in a typical pop-science book is because you can’t indicate to the author every ten seconds that you understand and that they can move on to the next point)
If you aren’t afraid of making mistakes you can learn and grow MUCH faster than if you are.
If you aren’t afraid of noticing when you have made mistakes you can learn and grow MUCH MUCH faster than if you are.
I’m afraid of making mistakes, but I’m not afraid of “noticing” my mistakes. Actually I’m mainly afraid of making mistakes and not noticing them. I think this psychological drive is in part responsible for whatever productivity I have in philosophy (or for that matter, in crypto/programming). Unless I can get some assurance that’s not the case, I wouldn’t want to trade it for increased speed of learning and growth.
Even aside from that, what is the point of learning faster, if you end up learning a lot of facts and ideas that aren’t true?
Only conversation provides rapid enough feedback to make most of what the other person says relevant. (think how irrelevant most of the info in a typical pop-science book is because you can’t indicate to the author every ten seconds that you understand and that they can move on to the next point)
I’ve gotten quite good at skimming books and blogs. This seems like a relatively easy skill to pick up.
“Even aside from that, what is the point of learning faster, if you end up learning a lot of facts and ideas that aren’t true?”. Your Bayes Score goes up on net ;-)
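One way to unpack the "Bayes Score goes up on net" quip: the log score rewards assigning high probability to truths. A rough sketch (my illustration, not from the thread): suppose conversation feeds you claims that are each true with probability 0.9. Adopting credence 0.9 in them beats staying at an ignorant 0.5, even though a tenth of the absorbed "facts" are false.

```python
import math

def expected_log_score(credence: float, p_true: float) -> float:
    """Expected log score for assigning `credence` to claims true w.p. p_true."""
    return p_true * math.log(credence) + (1 - p_true) * math.log(1 - credence)

# Believing mostly-true claims at 0.9 vs. remaining agnostic at 0.5:
gain = expected_log_score(0.9, 0.9) - expected_log_score(0.5, 0.9)
print(round(gain, 3))  # ≈ 0.368 nats per claim: a net gain despite 10% errors
```

The numbers (0.9 reliability, 0.9 credence) are arbitrary assumptions chosen to make the point; the qualitative conclusion holds whenever your credence tracks the actual reliability better than ignorance does.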
I agree that fearing making and not noticing mistakes is much better than not minding mistakes you don’t notice, but you should be able to notice mistakes later, when other people disagree with you or when you can’t get your model of the world to reach a certain level of coherence. This is much faster than actively checking every belief. If a belief is wrong, and you have good automatic processes that propagate its implications and that draw attention to incoherence when belief nodes get pushed back and forth by the conflicting implications of your other beliefs, you don’t even need people to criticize you, and especially to criticize you well, though both still help. I also think that simply wanting true beliefs, without fearing untrue ones, can produce the desired effect. A lot of people try to accomplish with negative emotions things that could be accomplished better with positive emotions. Positive emotions really do produce a greater risk of wireheading and of only wanting to believe your beliefs are correct, in the absence of proper controls, but they don’t cost nearly as much mental energy per unit of effort. Increased emotional self-awareness reduces the wireheading risk, as you are more likely to notice the emotional impact of suppressed awareness of errors. Classic meditation techniques, yoga, varied life experience, and physical exercise boost emotional self-awareness and have positive synergies. I can discuss this more, though once again unfortunately mostly only in person; I can take long pauses in the conversation if reminded.
Perhaps the difference here is one of risk sensitivity—similarly to the way a gambler going strictly for long-term gains over the largest number of iterations will use the Kelly criterion, Michael Vassar optimizes for being the least wrong when scores are tallied up at the end of the game. Wei Dai would prefer to minimize the volatility of his wrongness instead, taking smaller but steadier gains in correctness.
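For readers unfamiliar with the analogy being invoked: for a repeated bet paying b-to-1 that is won with probability p, the Kelly criterion stakes the fraction f* = (b·p − (1 − p)) / b of the bankroll, which maximizes long-run log-wealth growth at the cost of short-run volatility. A minimal sketch (my addition, purely illustrative):

```python
def kelly_fraction(p: float, b: float) -> float:
    """Log-growth-optimal fraction of bankroll to stake on a b-to-1
    payoff won with probability p (negative result means don't bet)."""
    return (b * p - (1.0 - p)) / b

# Example: a 60% chance of winning an even-money (1-to-1) bet.
print(kelly_fraction(0.6, 1.0))  # 0.2, i.e. stake 20% of bankroll
```

The Kelly bettor accepts large swings to maximize the long-run total, which is the aggressive-learner side of the analogy; a more risk-averse bettor stakes a smaller fraction and grows more slowly but more steadily.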
you should be able to notice mistakes later when other people disagree with you or when you can’t get your model of the world to reach a certain level of coherence. This is much faster than actively checking every belief.
I doubt that’s the case if you take into account the difficulty of changing one’s mind after noticing other people disagreeing, and the difficulty of seeing inconsistencies in one’s own beliefs after they’ve settled in for a while. Obviously we can strive to be better at both, but even the best would-be rationalists among us are still quite bad at these skills, when measured on an absolute scale.
Similarly, I suggest that in most cases, it’s better to be underconfident than to be overconfident, because of the risk that if you believe something too much, you might get stuck with that belief and fail to update if contrary evidence comes along.
In general, I’m much more concerned about not getting stuck with a false belief than maximizing my Bayes score in the short run. It just seems like learning new knowledge is not that hard, but I see a lot of otherwise intelligent people apparently stuck with false beliefs.
ETA: To return to my original point, why not write your conversation ideas down as blog posts? Then I don’t have to check them myself: I can just skim the comments to see if others found any errors. It seems like you can also reach a much bigger audience with the same effort that way.
I don’t think, at a first approximation, that written communication much less careful than Eliezer’s sequences can successfully communicate the content of surprising ideas to very many people at all.
I see lots of intelligent people who are not apparently stuck with false beliefs. Normatively, I don’t even see myself as having ‘beliefs’ but rather integrated probabilistic models. One doesn’t occasionally have to change those because one was wrong. Rather, the laws of inference require that you change them in response to every piece of information you encounter, whether the new info is surprising or unsurprising. This crude normative model doesn’t reflect an option for a human mind, given how a human mind works, but neither, I suspect, does the sort of implicit model it is being contrasted with, at least if that model is cashed out in detail at its current level of development.
Let me try a Hansonian explanation: conversation is not about exchanging information. It is about defining and reinforcing social bonds and status hierarchies. You don’t chit-chat about the weather because you really want to consider how recent local atmospheric patterns relate to long-run trends, you do it to show that you care about the other person. If you actually cared about the weather, you would excuse yourself and consult the nearest meteorologist.
Written communication probably escapes this mechanism—the mental machinery for social interaction is less involved, and the mental machinery for analytical judgment has more room to operate. This probably happens because there was no written word in the evolutionary context, so we didn’t evolve to apply our social interaction machinery to it. A second reason is that written communication is relatively easily divorced from the writer—you can encounter a written argument over vast spatial or temporal separation—so the cues that kick the social brain into gear are absent or subdued. The result, as you point out, is that it is easier to critically engage with a written argument than a spoken one.
You don’t chit-chat about the weather because you really want to consider how recent local atmospheric patterns relate to long-run trends, you do it to show that you care about the other person.
No, you chat about the weather because it allows both parties to become comfortable and pick up the pace of the conversation to something more interesting. Full-on conversations don’t start in a vacuum. In a worst case scenario, you talk about the weather because it’s better than both of you staring at the ground until someone else comes along.
You are certainly correct, and I think what you say reinforces the point. Building comfort is a social function rather than an information exchange function, which is why you don’t particularly care whether or not your conversation leads to more accurate predictions for tomorrow’s weather.
You seem to have an oddly narrow view of human communication. Have you considered the following facts?
In many sorts of cooperative efforts, live conversation (possibly aided by manual writing and drawing) enables rapid exchange of ideas that will converge onto the correct conclusion more quickly than written communication. Think e.g. solving a math problem together with someone.
In many cases, human conversations have the goal of resolving some sort of conflict, in the broad Schellingian sense of the term. Face-to-face communication, with all the clues it provides to people’s inner thoughts and intentions, can greatly facilitate the process of finding and agreeing upon a solution acceptable to all parties.
A good bullshit detector heuristic is usually more than enough to identify claims that can’t be taken at face value, and even when red flags are raised, often it’s enough to ask your interlocutor to provide support for them and see if the answer is satisfactory. You’ll rarely be in a situation where your interlocutors are so hostile and deceptive that they would be lying to your face about the evidence they claim to have seen. (Even in internet discussions, it’s not often that I have to consult references to verify other people’s claims. Most of my googling consists of searching for references to support my own claims that I expect others could find suspicious or unclear, so I could link to the supporting material preemptively.)
Various signaling elements of live communication are highly entertaining, especially when coupled with eating, drinking, and other fun activities that go pleasantly with a conversation. This aspect is impossible to reproduce in writing. Of course, this can be distracting when topics are discussed that require a great level of concentration and logical rigor, though even then the fun elements can make it easier to pull off the hard mental effort. But when it comes to less mentally demanding topics, it’s clearly a great plus.
Finally, when the conversation isn’t about solving some predetermined problem, the environment around you can provide interesting topics for discussion, which is clearly impossible if you’re just sitting and staring at the monitor.
Yes, I agree there are some situations where live conversation is helpful, such as the first two bullet points in your list. I was mainly talking about conversations like the ones described in Kaj’s post, where the participants are just “making conversation” and do not have any specific goals in mind.
A good bullshit detector heuristic is usually more than enough to identify claims that can’t be taken at face value
I typically find myself wanting to verify every single fact or idea that I hadn’t heard of before, and say either “hold on, I need to think about that for a few minutes” or “let me check that on Google/Wikipedia”. In actual conversation I’d suppress this because I suspect the other person will quickly find it extremely annoying. I just think to myself “I’ll try to remember what he’s saying and check it out later”, but of course I don’t have such a good memory.
You’ll rarely be in a situation where your interlocutors are so hostile and deceptive that they would be lying to your face about the evidence they claim to have seen.
It’s not that I think people are deceptive but I don’t trust their memory and/or judgment. Asking for evidence isn’t that helpful because (1) they may have misremembered or misheard from someone else and (2) there may be a lot more evidence in the other direction that they’re not aware of and never thought of looking up.
Various signaling elements of live communication are highly entertaining
I think we covered that in an earlier discussion. :)
the environment around you can provide interesting topics for discussion
But why do people find random elements in the environment interesting?
I typically find myself wanting to verify every single fact or idea that I hadn’t heard of before, and say either “hold on, I need to think about that for a few minutes” or “let me check that on Google/Wikipedia”.
But this seems to me, at the very least, irrationally inefficient. You have a finite amount of time, and it can surely be put to use much more efficiently than verifying every single new fact and idea. (Also, why stop there? Even after you’ve checked the first few references that come up on Google, there is always some non-zero chance that more time invested in research could unearth relevant contrary evidence. So clearly there’s a time-saving trade-off involved.)
It’s not that I think people are deceptive but I don’t trust their memory and/or judgment. Asking for evidence isn’t that helpful because (1) they may have misremembered or misheard from someone else and (2) there may be a lot more evidence in the other direction that they’re not aware of and never thought of looking up.
Sometimes, yes. But often it’s not the case. There are good heuristics to determine if someone really knows what he’s talking about. If these heuristics give a positive result, what you’ve been told in a live conversation is only marginally less reliable than what a reasonable time spent googling will tell you. This is an immensely useful and efficient way of saving time.
Also, many claims are very hard to verify by googling. For example, if someone gives you general claims about the state of the art in some area, based on generalizations from his own broad knowledge and experience, you must judge the reliability of these claims heuristically, unless you’re willing to take a lot of time and effort to educate yourself about the field in question so you can make similar conclusions yourself. Google cannot (yet?) be asked to give such judgments from the indexed evidence.
I think we covered that [signaling elements of live communication] in an earlier discussion. :)
Yes, but you’ve asked about the motivations of typical people. For everyone except a very small number of outliers, this is a highly relevant factor.
But why do people find random elements in the environment interesting?
Are you asking for an answer in everyday human terms, or an evolutionary explanation?
In this particular context, it should be noted that human conversations whose purpose is fun, rather than achieving a predetermined goal, typically have a natural and seemingly disorganized flow, jumping from one topic to another in a loose sequence. Comments on various observations from the environment can guide this flow in interesting fun-enhancing ways, which is not possible when people are just exchanging written messages at a distance.
I typically find myself wanting to verify every single fact or idea that I hadn’t heard of before, and say either “hold on, I need to think about that for a few minutes” or “let me check that on Google/Wikipedia”.
But this seems to me, at the very least, irrationally inefficient. You have a finite amount of time, and it can surely be put to use much more efficiently than verifying every single new fact and idea.
My solution is to try to not have any opinion on most subjects, other than background ignorance, despite having heard various specific claims. (And I sometimes argue with people that they, too, should have no opinion, given the evidence they are aware of!)
But this seems to me, at the very least, irrationally inefficient. You have a finite amount of time, and it can surely be put to use much more efficiently than verifying every single new fact and idea.
You’re right, that would be highly inefficient. Now that you mention this, I realize part of what is attractive about reading blogs is that popular posts will tend to have lots of comments, and many of those will point out possible errors in the post, so I can get a higher certainty of correctness with much less work on my part.
Are you asking for an answer in everyday human terms, or an evolutionary explanation?
I guess what I’m really interested in is whether I’m missing out on something really great by not participating in more live conversations (that aren’t about solving specific problems).
I was mainly talking about conversations like the ones described in Kaj’s post, where the participants are just “making conversation” and do not have any specific goals in mind.
Always have a goal. “Just making conversation” doesn’t count. That’s a high-level description of the activity that leaves out the goal, not a description of something that actually has no goal. Your goal might be “learn from this person”, “let this person learn from me”, “get to know this person”, “get an introduction to this person’s friends”, “get into bed with this person”, or many other things, or even at the meta-level, “find out if this is an interesting person to know”. Unless your efforts are about something, the whole activity will seem pointless, because it is.
I typically find myself wanting to verify every single fact or idea that I hadn’t heard of before, and say either “hold on, I need to think about that for a few minutes” or “let me check that on Google/Wikipedia”.
Have you ever been in a conversation with someone who had the same urge?
In actual conversation I’d suppress this because I suspect the other person will quickly find it extremely annoying.
One of the nicest things anyone’s done in conversation with me is say “hold on, I need a few minutes to think about that,” actually go off and think for several minutes, and then come back to the conversation with an integrated perspective. I felt deeply respected as a mind.
People who don’t appreciate this sort of thing aren’t trying to make themselves understood about something surprising, so I expect that by your values you should care less about making them happy to talk with you, except as a way of getting something else from them.
I seriously wouldn’t mind the verification effort if done by a fast googler, and quietly thinking for a few minutes regularly is Awesome for conversation.
most people, while engaging in real-time conversations, do not feel this discomfort of having insufficient time and resources to verify the other participant’s claims (or for that matter, to make sure that one’s own speech is not erroneous).
This relates back to what was mentioned higher up about people having differing goals for their conversations. The default goal is to strengthen, weaken, or confirm status positions; non-status information is often considered incidental. Also note that hardly anyone is conscious of this.
Wow. I get involved in interesting conversations with some frequency; I don’t think it’s because I avoid verification or am too credulous. I think your explanations are a false dichotomy.
First, a lot of conversations involve expertise that I don’t have, and I find interesting. Jobs that are not mine are often interesting; I usually try to ask about what things about someone else’s job are fun or interesting.
I’m always happy to talk about my job; being a prosecutor means you’ve got a storehouse of stories.
In conversations where I am relatively equally situated with my counterpart as far as knowledge, it’s pretty easy to disagree while having a great conversation. I met a guy in September of ’08 after internet discussions on a topic unrelated to politics, and we ended up discussing Biden-Palin for two hours. It was a really fantastic conversation, and we voted opposite ways in the election.
We did this because we conceded points that were true, and we weren’t on The Only Right Team of Properness; we were talking about ideas and facts that we mostly both knew. We also didn’t have our heads in the sand. And when one of us gave a factual statement outside the other’s knowledge, the other tended to accept it (I told the story of the missing pallets of hundred dollar bills, which he hadn’t heard.)
Now, I’ve certainly corrected false statements of fact in conversation (ranging in tone from, “Are you sure about that?” to “That’s verifiably false.”) I try not to make false statements of fact, but I have been wrong, and I make it a point to admit wrongness when I’m wrong. (In some circles, given my general propensity for being right and my assertion of a general propensity for being right, this leads to much rejoicing, on the order of Sir Robin’s minstrels getting eaten.)
But there’s something really fun about electric conversations that I think you’re missing here. Fun and funny conversations.… I couldn’t live well without them. And I’m not too credulous. And I take other people—well, many other people—seriously.
And when one of us gave a factual statement outside the other’s knowledge, the other tended to accept it
But you’re sure to accept a lot of false statements that way. Why are you not worried about it?
But there’s something really fun about electric conversations that I think you’re missing here.
Thinking about why conversations might be fun, I can see two reasons:
The “game” aspect (i.e., signaling/status/alliance). I tried to explain earlier why this aspect doesn’t hold much interest for me.
Obtaining novel information. Once I realized how unreliable most people’s beliefs are, the anxiety of accepting false information interferes too much with this “fun”. Also, I can get a much bigger “information high” from reading something like this.
Is there some other element of fun conversation that I might be missing?
I think there’s a lot more to insight than true or false.
Hearing a perspective or a personal experience does broaden your knowledge. In the same way that reading fiction can be enlightening—you are still learning, but using the part of your mental equipment designed for subconscious and tacit social exchange. In my experience, most of the occasions when I changed my mind for the better resulted from hearing someone else’s point of view and feeling empathy for it.
Indeed. I find that often (though by no means always) it’s interesting to find out why and how someone comes to believe something that, to me, is obviously wrong. The transition from “people are mad and stupid” to “there’s method to this madness” is interesting and useful, even if it doesn’t lead to “fixing the mind” of your immediate interlocutor. At the very least, it gives you a subject to think about later, to try and find out ways of fixing the beliefs of others, in future conversations.
(I often have insights on the correct, or at least a good, way of answering a fallacy quite a while after having a conversation. I can cache them for later, and sometimes get to use them in later conversations. Gathering such pre-cached insights can make you seem deep, which at least makes people more attentive to what you say.)
Once I realized how unreliable most people’s beliefs are, the anxiety of accepting false information interferes too much with this “fun”.
Are you sure that you’re not being biased here? If people really are so unreliable, even when they are serious and upfront, how do they ever get anything done in practice?
Or could it be that you’re failing to employ the standard heuristics for judging the reliability of people’s claims? (Note that this also involves judging whether what’s been said was even meant to be said authoritatively. People often say things without implying that they believe them firmly and on good evidence.)
Is there some other element of fun conversation that I might be missing?
What’s the fun element in the board game called “go”? I find that particular game really fun to play, and really interesting, but it seems rather pointless to try to argue whether it’s “objectively” interesting or fun, or even what specific aspects make it fun and interesting to me. It just is.
You can replace “go” with any fun and entertaining thing that you do. How would you defend your fun thing against someone who came along and wanted to know, just as you do now, why and how that fun thing is really fun?
Is it because they are too credulous, and haven’t developed an instinctive skepticism of every new idea that they hear? Or do they just not take the other person’s words seriously (i.e., “in one ear, out the other”)?
Also, willingness to humor the claims the other makes for the sake of conversation isn’t on that list, as it’s neither “not taking the other seriously” nor “being too credulous”.
What’s the fun element in the board game called “go”? I find that particular game really fun to play, and really interesting, but it seems rather pointless to try to argue whether it’s “objectively” interesting or fun, or even what specific aspects make it fun and interesting to me. It just is.
If other people find some activity fun but I don’t, it might be that I’m doing it wrong, and with the correct understanding I can make it fun for myself.
On the other hand it might be that others only find it fun because they’re being insufficiently reflective. Maybe if they understood better what they’re really doing, they wouldn’t find it fun anymore, and would spend the time furthering some other goal instead (hopefully one that better matches my own purposes, like working to answer scientific/philosophical questions that I’m interested in, or reducing existential risk :)
I’d like to understand my values, and human values in general, both for the purpose of FAI theory, and to satisfy my philosophical interests. “Fun” is obviously a part of that.
Maybe if they understood better what they’re really doing, they wouldn’t find it fun anymore, and would spend the time furthering some other goal instead (hopefully one that better matches my own purposes, like working to answer scientific/philosophical questions that I’m interested in, or reducing existential risk)
I have this weird problem, based on the way my utility function seems to be set up—I want people to do what they really enjoy, even at the cost of them not working on my favorite projects.
So, on the one hand, I would like people to be sufficiently reflective to figure out what they really enjoy doing. On the other hand, if reflection just destroys people’s existing, flawed sources of fun without providing an alternative source of fun, then I wouldn’t want to encourage it.
Imagine a 50-something small business owner with a community college education—maybe he runs a fast food restaurant, or a bike repair shop—who really likes his local sports team. He goes to or watches most of their home games with a few other friends/fans and gets really excited about it and, on balance, has a lot of fun. If I could somehow motivate him to reflect on what professional spectator sports are like, he might not enjoy it as much, or at all.
But what good would that do him? Wouldn’t he be equally likely to plow his new-found surplus energy into, say, watching TV, as to suddenly discover existential risks? Even if he did work on existential risks, is there any reason to think that he’d enjoy it? I feel like differences in what people choose to do for fun might reflect differing theories about what is fun, and not just a failure to reflect on one’s activities. Even if the masses’ theories about what is fun are philosophically indefensible, they may nevertheless be real descriptions of what the masses find to be fun, and so I have trouble justifying an attempt to take away that fun without letting go of my commitment to egalitarianism.
I think it would depend on how his pleasure in spectator sports is eliminated. Does he simply find out that spectator sports are pointless, or does he find out that his leisure time can have more to it than spectator sports?
I assume it would be the former, no? Aren’t most people aware that they have a choice of hobbies, even if they don’t realize why/that the one they’ve chosen is particularly banal?
I have the same problem, but funnily enough, I see it as a problem with myself and not a problem with real-time conversation. The ability to consider complicated ideas and follow chains of reasoning without having to verify any of the individual dependencies is a skill I would like to pick up.
“Most people do something that I do not do. Is it because there’s something wrong with them?”
This is perhaps unfairly uncharitable, but it does seem to be the point you’re getting at. Obvious popular alternatives include that you’re not credulous enough, or that people are capable of evaluating other people’s claims sans wikipedia.
“Most people do something that I do not do. Is it because there’s something wrong with them?”
This is perhaps unfairly uncharitable, but it does seem to be the point you’re getting at. Obvious popular alternatives include that you’re not credulous enough, or that people are capable of evaluating other people’s claims sans wikipedia.
As far as I can tell, most people, while engaging in real-time conversations, do not feel this discomfort of having insufficient time and resources to verify the other participant’s claims (or for that matter, to make sure that one’s own speech is not erroneous). Is it because they are too credulous, and haven’t developed an instinctive skepticism of every new idea that they hear? Or do they just not take the other person’s words seriously (i.e., “in one ear, out the other”)?
If you aren’t afraid of making mistakes you can learn and grow MUCH faster than if you are.
If you aren’t afraid of noticing when you have made mistakes you can learn and grow MUCH MUCH faster than if you are.
The main thing though is that once you have learned an average amount the more you learn the less typical your thought patterns will be. If you bother to learn a lot your thought patterns will be VERY atypical. Once this happens, it becomes wildly unlikely that anyone talking with you for more than a minute without feedback will still be saying anything useful. Only conversation provides rapid enough feedback to make most of what the other person says relevant. (think how irrelevant most of the info in a typical pop-science book is because you can’t indicate to the author every ten seconds that you understand and that they can move on to the next point)
I’m afraid of making mistakes, but I’m not afraid of “noticing” my mistakes. Actually I’m mainly afraid of making mistakes and not noticing them. I think this psychological drive is in part responsible for whatever productivity I have in philosophy (or for that matter, in crypto/programming). Unless I can get some assurance that’s not the case, I wouldn’t want to trade it for increased speed of learning and growth.
Even aside from that, what is the point of learning faster, if you end up learning a lot of facts and ideas that aren’t true?
I’ve gotten quite good at skimming books and blogs. This seems like a relatively easy skill to pick up.
“Even aside from that, what is the point of learning faster, if you end up learning a lot of facts and ideas that aren’t true?”. Your Bayes Score goes up on net ;-)
I agree that fearing making and not noticing mistakes is much better than not minding mistakes you don’t notice, but you should be able to notice mistakes later when other people disagree with you or when you can’t get your model of the world to reach a certain level of coherence. This is much faster than actively checking every belief. If a belief is wrong and you have good automatic processes that propagate it and that draw attention to incoherence from belief nodes being pushed back and forth from the propogation of the implications of some of your beliefs pushing in conflicting directions, you don’t even need people to criticize you, and especially to criticize you well, though both still help. I also think that simply wanting true beliefs without fearing untrue ones can produce the desired effect. A lot of people try to accomplish a lot of things with negative emotions that could be accomplished better with positive emotions. Positive emotions really do produce a greater risk of wireheading and only wanting to believe your beliefs are correct, in the absence of proper controls, but they don’t cost nearly as much mental energy per unit of effort. Increased emotional self-awareness reduces the wireheading risk, as you are more likely to notice the emotional impact of suppressed awareness of errors. Classic meditation techniques, yoga, varied life experience and physical exercise boost emotional self-awareness and have positive synergies. I can discuss this more, but once again, unfortunately mostly only in person, but I can take long pauses in the conversation if reminded.
Perhaps the difference here is one of risk sensitivity—similarly to the way a gambler going strictly for long term gains over the largest number of iterations will use the Kelly Criterion, Michael Vassar optimizes for becoming the least wrong when scores are tallied up at the end of the game. Wei Dai would prefer to minimize the volatility of his wrongness instead, taking smaller but steadier gains in correctness.
I doubt that’s the case if you take into account the difficulty of changing one’s mind after noticing other people disagreeing, and the difficulty of seeing inconsistencies in one’s own beliefs after they’ve settled in for a while. Obviously we can strive to be better at both, but even the best would-be rationalists among us are still quite bad at these skills, when measured on an absolute scale.
Similarly, I suggest that in most cases, it’s better to be underconfident than to be overconfident, because of the risk that if you believe something too much, you might get stuck with that belief and fail to update if contrary evidence comes along.
In general, I’m much more concerned about not getting stuck with a false belief than maximizing my Bayes score in the short run. It just seems like learning new knowledge is not that hard, but I see a lot of otherwise intelligent people apparently stuck with false beliefs.
ETA: To return to my original point, why not write your conversation ideas down as blog posts? Then I don’t have to check them myself: I can just skim the comments to see if others found any errors. It seems like you can also reach a much bigger audience with the same effort that way.
I don’t think, at a first approximation, that written communication much less careful than Eliezer’s sequences can successfully communicate the content of surprising ideas to very many people at all.
I see lots of intelligent people who are not apparently stuck with false beliefs. Normatively, I don’t even see myself as having ‘beliefs’ but rather integrated probabilistic models. One doesn’t occasionally have to change those because one was wrong. Rather, the laws of inference require that you change them in response to every piece of information you encounter, whether the new info is surprising or unsurprising. This crude normative model doesn’t reflect an option for a human mind, given how a human mind works, but neither, I suspect, does the sort of implicit model it is being contrasted with, at least if that model is cashed out in detail at its current level of development.
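The "change them in response to every piece of information, surprising or unsurprising" claim is just Bayes' rule applied uniformly. A minimal sketch (the probabilities are made-up illustrations): surprising evidence moves the posterior a lot, while unsurprising evidence still moves it, just slightly.

```python
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Posterior probability of a hypothesis after one observation, via Bayes' rule."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Surprising evidence (much likelier if the hypothesis is true) moves belief a lot...
strong = bayes_update(0.5, 0.9, 0.1)

# ...but unsurprising evidence still shifts it, just slightly.
weak = bayes_update(0.5, 0.55, 0.45)

assert strong > weak > 0.5
```

There is no special case for "belief revision": the same rule applies to every observation, and a nonzero update happens whenever the observation is even marginally more likely under one hypothesis than the other.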
just chiming in two years after the fact to remark that this is EXACTLY why I hate reading most pop science books.
just chiming in ten years after the fact to remark that you could flip the page when this happens.
Let me try a Hansonian explanation: conversation is not about exchanging information. It is about defining and reinforcing social bonds and status hierarchies. You don’t chit-chat about the weather because you really want to consider how recent local atmospheric patterns relate to long-run trends, you do it to show that you care about the other person. If you actually cared about the weather, you would excuse yourself and consult the nearest meteorologist.
Written communication probably escapes this mechanism—the mental machinery for social interaction is less involved, and the mental machinery for analytical judgment has more room to operate. This probably happens because there was no written word in the evolutionary context, so we didn’t evolve to apply our social interaction machinery to it. A second reason is that written communication is relatively easily divorced from the writer—you can encounter a written argument over vast spatial or temporal separation—so the cues that kick the social brain into gear are absent or subdued. The result, as you point out, is that it is easier to critically engage with a written argument than a spoken one.
No, you chat about the weather because it allows both parties to become comfortable and pick up the pace of the conversation to something more interesting. Full-on conversations don’t start in a vacuum. In a worst case scenario, you talk about the weather because it’s better than both of you staring at the ground until someone else comes along.
You are certainly correct, and I think what you say reinforces the point. Building comfort is a social function rather than an information exchange function, which is why you don’t particularly care whether or not your conversation leads to more accurate predictions for tomorrow’s weather.
These are difficult concepts for those of us who work regularly with meteorological data!
You seem to have an oddly narrow view of human communication. Have you considered the following facts?
In many sorts of cooperative efforts, live conversation (possibly aided by manual writing and drawing) enables rapid exchange of ideas that will converge onto the correct conclusion more quickly than written communication. Think e.g. solving a math problem together with someone.
In many cases, human conversations have the goal of resolving some sort of conflict, in the broad Schellingian sense of the term. Face-to-face communication, with all the clues it provides to people’s inner thoughts and intentions, can greatly facilitate the process of finding and agreeing upon a solution acceptable to all parties.
A good bullshit detector heuristic is usually more than enough to identify claims that can’t be taken at face value, and even when red flags are raised, often it’s enough to ask your interlocutor to provide support for them and see if the answer is satisfactory. You’ll rarely be in a situation where your interlocutors are so hostile and deceptive that they would be lying to your face about the evidence they claim to have seen. (Even in internet discussions, it’s not often that I have to consult references to verify other people’s claims. Most of my googling consists of searching for references to support my own claims that I expect others could find suspicious or unclear, so I could link to the supporting material preemptively.)
Various signaling elements of live communication are highly entertaining, especially when coupled with eating, drinking, and other fun activities that go pleasantly with a conversation. This aspect is impossible to reproduce in writing. Of course, this can be distracting when topics are discussed that require a great level of concentration and logical rigor, though even then the fun elements can make it easier to pull off the hard mental effort. But when it comes to less mentally demanding topics, it’s clearly a great plus.
Finally, when the conversation isn’t about solving some predetermined problem, the environment around you can provide interesting topics for discussion, which is clearly impossible if you’re just sitting and staring at the monitor.
Yes, I agree there are some situations where live conversation is helpful, such as the first two bullet points in your list. I was mainly talking about conversations like the ones described in Kaj’s post, where the participants are just “making conversation” and do not have any specific goals in mind.
I typically find myself wanting to verify every single fact or idea that I hadn’t heard of before, and say either “hold on, I need to think about that for a few minutes” or “let me check that on Google/Wikipedia”. In actual conversation I’d suppress this because I suspect the other person will quickly find it extremely annoying. I just think to myself “I’ll try to remember what he’s saying and check it out later”, but of course I don’t have such a good memory.
It’s not that I think people are deceptive, but that I don’t trust their memory and/or judgment. Asking for evidence isn’t that helpful because (1) they may have misremembered or misheard from someone else, and (2) there may be a lot more evidence in the other direction that they’re not aware of and never thought of looking up.
I think we covered that in an earlier discussion. :)
But why do people find random elements in the environment interesting?
Wei_Dai:
But this seems to me, at the very least, irrationally inefficient. You have a finite amount of time, and it can surely be put to use much more efficiently than verifying every single new fact and idea. (Also, why stop there? Even after you’ve checked the first few references that come up on Google, there is always some non-zero chance that more time invested in research could unearth relevant contrary evidence. So clearly there’s a time-saving trade-off involved.)
Sometimes, yes. But often it’s not the case. There are good heuristics to determine if someone really knows what he’s talking about. If they give a positive result, what you’ve been told in a live conversation is only marginally less reliable than what a reasonable time spent googling will tell you. This is an immensely useful and efficient way of saving time.
Also, many claims are very hard to verify by googling. For example, if someone gives you general claims about the state of the art in some area, based on generalizations from his own broad knowledge and experience, you must judge the reliability of these claims heuristically, unless you’re willing to take a lot of time and effort to educate yourself about the field in question so you can make similar conclusions yourself. Google cannot (yet?) be asked to give such judgments from the indexed evidence.
Yes, but you’ve asked about the motivations of typical people. For everyone except a very small number of outliers, this is a highly relevant factor.
Are you asking for an answer in everyday human terms, or an evolutionary explanation?
In this particular context, it should be noted that human conversations whose purpose is fun, rather than achieving a predetermined goal, typically have a natural and seemingly disorganized flow, jumping from one topic to another in a loose sequence. Comments on various observations from the environment can guide this flow in interesting fun-enhancing ways, which is not possible when people are just exchanging written messages at a distance.
My solution is to try to not have any opinion on most subjects, other than background ignorance, despite having heard various specific claims. (And I sometimes argue with people that they, too, should have no opinion, given the evidence they are aware of!)
You’re right, that would be highly inefficient. Now that you mention this, I realize part of what is attractive about reading blogs is that popular posts will tend to have lots of comments, and many of those will point out possible errors in the post, so I can get a higher certainty of correctness with much less work on my part.
I guess what I’m really interested in is whether I’m missing out on something really great by not participating in more live conversations (that aren’t about solving specific problems).
Always have a goal. “Just making conversation” doesn’t count. That’s a high-level description of the activity that leaves out the goal, not a description of something that actually has no goal. Your goal might be “learn from this person”, “let this person learn from me”, “get to know this person”, “get an introduction to this person’s friends”, “get into bed with this person”, or many other things, or even at the meta-level, “find out if this is an interesting person to know”. Unless your efforts are about something, the whole activity will seem pointless, because it is.
Have you ever been in a conversation with someone who had the same urge?
One of the nicest things anyone’s done in conversation with me is say “hold on, I need a few minutes to think about that,” actually go off and think for several minutes, and then come back to the conversation with an integrated perspective. I felt deeply respected as a mind.
People who don’t appreciate this sort of thing aren’t trying to make themselves understood about something surprising, so I expect that by your values you should care less about making them happy to talk with you, except as a way of getting something else from them.
I seriously wouldn’t mind the verification effort if done by a fast googler, and quietly thinking for a few minutes regularly is Awesome for conversation.
Conversation is not about information.
Conversation is not only/mostly about information. FTFY
this relates back to what was mentioned higher up about people having differing goals for their conversation. the default goal is to strengthen, weaken, or confirm status positions. non-status information is often considered incidental. also note that hardly anyone is conscious of this.
Wow. I get involved in interesting conversations with some frequency; I don’t think it’s because I avoid verification or am too credulous. I think your explanations are a false dichotomy.
First, a lot of conversations involve expertise that I don’t have, and I find interesting. Jobs that are not mine are often interesting; I usually try to ask about what things about someone else’s job are fun or interesting.
I’m always happy to talk about my job; being a prosecutor means you’ve got a storehouse of stories.
In conversations where I am relatively equally situated with my counterpart as far as knowledge, it’s pretty easy to disagree while having a great conversation. I met a guy in September of ’08 after internet discussions on a topic unrelated to politics, and we ended up discussing Biden-Palin for two hours. It was a really fantastic conversation, and we voted opposite ways in the election.
We did this because we conceded points that were true, and we weren’t on The Only Right Team of Properness; we were talking about ideas and facts that we mostly both knew. We also didn’t have our heads in the sand. And when one of us gave a factual statement outside the other’s knowledge, the other tended to accept it (I told the story of the missing pallets of hundred dollar bills, which he hadn’t heard.)
Now, I’ve certainly corrected false statements of fact in conversation (ranging in tone from, “Are you sure about that?” to “That’s verifiably false.”) I try not to make false statements of fact, but I have been wrong, and I make it a point to admit wrongness when I’m wrong. (In some circles, given my general propensity for being right and my assertion of a general propensity for being right, this leads to much rejoicing, on the order of Sir Robin’s minstrels getting eaten.)
But there’s something really fun about electric conversations that I think you’re missing here. Fun and funny conversations.… I couldn’t live well without them. And I’m not too credulous. And I take other people—well, many other people—seriously.
--JRM
But you’re sure to accept a lot of false statements that way. Why are you not worried about it?
Thinking about why conversations might be fun, I can see two reasons:
The “game” aspect (i.e., signaling/status/alliance). I tried to explain earlier why this aspect doesn’t hold much interest for me.
Obtaining novel information. Once I realized how unreliable most people’s beliefs are, the anxiety of accepting false information interferes too much with this “fun”. Also, I can get a much bigger “information high” from reading something like this.
Is there some other element of fun conversation that I might be missing?
I think there’s a lot more to insight than true or false.
Hearing a perspective or a personal experience does broaden your knowledge. In the same way that reading fiction can be enlightening—you are still learning, but using the part of your mental equipment designed for subconscious and tacit social exchange. In my experience, most of the occasions when I changed my mind for the better resulted from hearing someone else’s point of view and feeling empathy for it.
Indeed. I find that often (though by no means always) it’s interesting to find out why and how someone comes to believe something that, to me, is obviously wrong. The transition from “people are mad and stupid” to “there’s method to this madness” is interesting and useful, even if it doesn’t lead to “fixing the mind” of your immediate interlocutor. At the very least, it gives you a subject to think about later, to try and find out ways of fixing the beliefs of others, in future conversations.
(I often have insights on the correct, or at least a good, way of answering a fallacy quite a while after having a conversation. I can cache them for later, and sometimes get to use them in later conversations. Gathering such pre-cached insights can make you seem deep, which at least makes people more attentive to what you say.)
Are you sure that you’re not being biased here? If people really are so unreliable, even when they are serious and upfront, how do they ever get anything done in practice?
Or could it be that you’re failing to employ the standard heuristics for judging the reliability of people’s claims? (Note that this also involves judging whether what’s been said was even meant to be said authoritatively. People often say things without implying that they believe them firmly and on good evidence.)
What’s the fun element in board game called “go”? I find that particular game really fun to play, and really interesting, but it seems rather pointless to try to argue if it’s “objectively” interesting or fun, or even what specific aspects make it fun and interesting to me. It just is.
You can replace “go” with any fun and entertaining thing that you do. How would you defend your fun thing against someone who came along and wanted to know, just as you do now, why and how that fun thing is really fun?
http://lesswrong.com/lw/1yz/levels_of_communication/
Also, willingness to humor the claims the other person makes for the sake of conversation isn’t on that list, as it’s neither “not taking the other seriously” nor “being too credulous”.
If other people find some activity fun but I don’t, it might be that I’m doing it wrong, and with the correct understanding I can make it fun for myself.
On the other hand it might be that others only find it fun because they’re being insufficiently reflective. Maybe if they understood better what they’re really doing, they wouldn’t find it fun anymore, and would spend the time furthering some other goal instead (hopefully one that better matches my own purposes, like working to answer scientific/philosophical questions that I’m interested in, or reducing existential risk :)
I’d like to understand my values, and human values in general, both for the purpose of FAI theory, and to satisfy my philosophical interests. “Fun” is obviously a part of that.
I have this weird problem, based on the way my utility function seems to be set up—I want people to do what they really enjoy, even at the cost of them not working on my favorite projects.
So, on the one hand, I would like people to be sufficiently reflective to figure out what they really enjoy doing. On the other hand, if reflection just destroys people’s existing, flawed sources of fun without providing an alternative source of fun, then I wouldn’t want to encourage it.
Imagine a 50-something small business owner with a community college education—maybe he runs a fast food restaurant, or a bike repair shop—who really likes his local sports team. He goes to or watches most of their home games with a few other friends/fans and gets really excited about it and, on balance, has a lot of fun. If I could somehow motivate him to reflect on what professional spectator sports are like, he might not enjoy it as much, or at all.
But what good would that do him? Wouldn’t he be equally likely to plow his new-found surplus energy into, say, watching TV, as to suddenly discover existential risks? Even if he did work on existential risks, is there any reason to think that he’d enjoy it? I feel like differences in what people choose to do for fun might reflect differing theories about what is fun, and not just a failure to reflect on one’s activities. Even if the masses’ theories about what is fun are philosophically indefensible, they may nevertheless be real descriptions of what the masses find to be fun, and so I have trouble justifying an attempt to take away that fun without letting go of my commitment to egalitarianism.
I think it would depend on how his pleasure in spectator sports is eliminated. Does he simply find out that spectator sports are pointless, or does he find out that his leisure time can have more to it than spectator sports?
I assume it would be the former, no? Aren’t most people aware that they have a choice of hobbies, even if they don’t realize why/that the one they’ve chosen is particularly banal?
I don’t think most people are good at breaking habits to find what they’d be enthusiastic about.
Most people do not practice ninja-level rationality in any part of their life. Why would conversation be any different?
I have the same problem, but funnily enough, I see it as a problem with myself and not a problem with real-time conversation. The ability to consider complicated ideas and follow chains of reasoning without having to verify any of the individual dependencies is a skill I would like to pick up.
“Most people do something that I do not do. Is it because there’s something wrong with them?”
This is perhaps unfairly uncharitable, but it does seem to be the point you’re getting at. Obvious popular alternatives include that you’re not credulous enough, or that people are capable of evaluating other people’s claims sans wikipedia.