First of all, let me say that rationalist-to-rationalist conversations don’t really have this problem. This is all about what happens when we talk to people who think less in “far-mode”, if you will. What I’ve found is that when talking with non-rationalists, I have to consciously switch into a different mindset to get “flow”. Let me give a personal example.
I was once at a bar where I met some random people, and one of them told me something that a rationalist would consider “woo”. He explained that he’d read that an atomic bomb was a particularly terrible thing, because unlike when you die normally, the radiation destroys the souls. I paused for a moment, swallowed all my rationalist impulses, and thought: “Is there any way what he said could be meaningful?” I responded: “Well, the terrible thing about an atomic explosion is that it kills not just a person in isolation, but whole families, whole communities… if just one person dies, their friends and family can respect that person’s death and celebrate their memory, but when that many people die all at once, their entire history of who they are, their souls, are just erased in an instant”. He told me that was deep, and bought me a drink.
Did I feel dishonest? Not really. I decided what was relevant and what was not. Obviously the bit he said about radiation didn’t make scientific sense, but I didn’t feel he’d brought up the idea because he wanted a science lesson. Similarly, I could have asked: “Well, exactly what do you mean by a ‘soul’?” Instead I chose an interpretation that seemed agreeable to both of us. Now, had he specifically asked me for an analytical opinion, I would absolutely have given him one. But for now, what I’d done had earned me some credibility, so that later in the conversation, if I wanted to persuade him of an important “rationalist” opinion, I’d actually be someone worth listening to.
Yes, of course you should not habitually divert conversations into lectures on how some specific thing someone said might be in error. For example, when I listen to an academic talk I do not speak up about most of the questionable claims made—I wait until I can see the main point of the talk, and which questionable claims might actually matter for that main point. Those are the points I consider raising. Always keep in mind the purpose of your conversation.
Depending on the purpose of the conversation, do you think the dark arts are sometimes legitimate? Or perhaps a more interesting question for an economist: can you speculate as to the utility (let’s say some measure of persuasive effectiveness) of the dark arts, depending on the conversation type? (e.g. a State of the Union Address and a conversation on Less Wrong would presumably be polar opposites.)
I’d rather ask the question without the word “sometimes”. Because what people do is use that word “sometimes” as a rationalization. “We’ll only use the Dark Arts in the short term, in the run-up to the Singularity.” The notion is that once everybody becomes rational, we can stop using them.
I’m skeptical that will happen. As we become more complex reasoners, we will develop new bugs and weaknesses in our reasoning for more-sophisticated dark artists to exploit. And we will have more complicated disagreements with each other, with higher stakes; so we will keep justifying the use of the Dark Arts.
As we become more complex reasoners, we will develop new bugs and weaknesses in our reasoning for more-sophisticated dark artists to exploit.
Are we expecting to become more complex reasoners? It seems the opposite to me. We are certainly moving in the direction of reasoning about increasingly complex things, but by all indications, the mechanisms of normal human reasoning are much more complex than they should be, which is why they have so many bugs and weaknesses in the first place. Becoming better at reasoning, in the LW tradition, appears to consist entirely of removing components (biases, obsolete heuristics, bad epistemologies and cached thoughts, etc.), not adding them.
If the goal is to become perfect Bayesians, then the goal is simplicity itself. I realize that is probably an impossible goal — even if the Singularity happens and we all upload ourselves into supercomputer robot brains, we’d need P=NP in order to compute all of our probabilities to exactly where they should be — but every practical step we take, away from our evolutionary patchwork of belief-acquisition mechanisms and toward this ideal of rationality, is one less opportunity for things to go wrong.
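To be concrete about what I mean by “simplicity itself” (this gloss is mine, and a rough sketch): writing H for a hypothesis and E for the evidence, the entire ideal is just Bayes’ theorem,

$$P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{\sum_{H'} P(E \mid H')\,P(H')},$$

and the difficulty is purely one of scale, not of the rule: applying it exactly over a hypothesis space built from n interacting propositions means, in the worst case, tracking a joint distribution with on the order of 2^n entries. The rule never gets more complicated; only the bookkeeping does, which is why practical steps toward the ideal look like pruning our approximations rather than bolting on new machinery.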
As we become more complex reasoners, we will develop new bugs and weaknesses in our reasoning for more-sophisticated dark artists to exploit. And we will have more complicated disagreements with each other, with higher stakes; so we will keep justifying the use of the Dark Arts.
This is exactly the chain of reasoning I had in mind in my original post when I referred to the “big if”.