It is a wrong question, because reality is never that simple and clear-cut, and no rationalist should expect it to be. And as with all wrong questions, the thing to do to resolve the confusion is to take a step back and ask yourself what is actually happening in factual terms:
A more accurate way to describe emotion, much like personality, is in terms of multiple dimensions. One dimension is the intensity of the emotion. Another is the type of experience it offers. Love and hate are both high in intensity, and in that sense they are similar, but they are totally opposite in the way they make you feel. They are also totally opposite in the effect they have on your preferences: Thinking well vs. thinking poorly of someone (ignoring the fact that there are multiple types of hate and love, and the 9999 other added complexities).
Ordinary people notice that hate and love are totally opposite in several meaningful ways, and say as much. Then along comes a contrarian who wants to show how clever he is, and he picks up on the one way in which love and hate are similar and can even go well together: The intensity of emotion towards someone or something. And so the contrarian declares that love and hate are really the same and that indifference is the opposite of both (somehow), which can cause people who aren’t any good at mapping complex subjects along multiple axes in their head to throw out their useful heuristic and award status to the contrarian for his fake wisdom.
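To make the multi-dimensional picture concrete, here is a toy sketch in code (the axis names and all the numbers are my own, purely for illustration): love and hate sit together on the intensity axis but at opposite ends of the valence axis, while indifference differs from both mainly in intensity.

    from collections import namedtuple

    # Toy model: an emotion as a point along two (made-up) axes.
    # valence runs from -1 (hostile) to +1 (warm).
    Emotion = namedtuple("Emotion", ["intensity", "valence"])

    love = Emotion(intensity=0.9, valence=0.9)
    hate = Emotion(intensity=0.9, valence=-0.9)
    indifference = Emotion(intensity=0.0, valence=0.0)

    def distance_on(axis, a, b):
        """How far apart two emotions are along a single axis."""
        return abs(getattr(a, axis) - getattr(b, axis))

    print(distance_on("intensity", love, hate))          # 0.0 -> similar
    print(distance_on("valence", love, hate))            # 1.8 -> opposite
    print(distance_on("intensity", love, indifference))  # 0.9 -> the contrarian's favourite axis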
I’m a bit disappointed that Eliezer fell for the number one danger of rationalists everywhere: Too much eagerness to throw out common sense in favour of cleverness.
(Eliezer if you are reading this: You are awesome and HPMOR is awesome. Please keep writing it and don’t get discouraged by this criticism)
I’m surprised how strongly you’re reacting to this, given that you seem to be aware that the whole “emotions having opposites” system is really just a word game anyway.
Why is it important that you prioritise the “effect on preferences” axis and Eliezer prioritises the “intensity” axis, except insofar as it is a bit embarrassing to see an intelligent person presenting one of these as wisdom? Perhaps Eliezer simply considers apathy to be a more dangerous affliction than hatred, and is thus trying to shift his readers’ priorities accordingly. Insofar as there are far more people in the world moved to inaction through apathy than there are people moved to wrong action through hatred, perhaps there’s something to that.
Hm, I didn’t think I was reacting that strongly… If I was, it’s probably because I am frustrated in general by people’s inability to just take a step back and look at an issue for what it actually is, instead of superimposing their own favourite views on top of reality. I remember I recently got frustrated by some of the most rational people I know claiming that sunburn was caused by literal heat from the sun rather than by UV light. Once they had formed the hypothesis, they could only look at the issue through the ‘eyes’ of that view. And I see the same mistake made on Less Wrong all the time. I guess it’s just frustrating to see EY do the same thing. I don’t get why everyone, even practising rationalists, finds this most elementary skill so hard to master.
Could you describe this skill in more detail please? If it is one I do not possess, I would like to learn.
Your attitude makes me happy, thank you. :)
It’s the most basic rationalist skill there is, in my opinion, but for some reason it’s not much talked about here. I call it “thinking like the universe” as opposed to “thinking like a human”. It means you remove yourself from the picture, forget all about your favourite views, stop caring about the implications of your answer (since those should not affect the truth of the matter), and describe the situation in purely factual terms. You don’t follow any specific chain of logic towards an answer: You let the answer flow naturally from the facts.
It means you don’t ask “Which facts argue in favour of my view and which against?” but “What are the facts?”
It means you don’t ask “What is my hypothesis?” but “Which hypotheses flow naturally from the facts?”
It means you don’t ask “What do I believe?” but “What would an intelligent person believe given these facts?”
It means you don’t ask “Which hypothesis do I believe is true?” but “How does the probability mass naturally divide itself over the competing hypotheses, given the evidence?” (there is a rough sketch of this at the end of this comment)
It means you don’t ask “How can I test this hypothesis?” but “Which test would maximally distinguish between the competing hypotheses?”
It means you never, ever ask who has the “burden of proof”.
And so on and so forth. I see it as the most fundamental skill because it allows you to ask the right questions, and if you start with the wrong question it really doesn’t matter what you do with it afterwards.
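To make the “probability mass” question above concrete, here is a minimal sketch of the underlying arithmetic (plain Bayes’ theorem, with hypotheses and numbers made up for illustration, borrowing the sunburn example from earlier in the thread):

    # Dividing probability mass over competing hypotheses with Bayes' theorem.
    # Hypotheses for "what causes sunburn?" -- numbers are purely illustrative.
    priors = {"UV light": 0.5, "literal heat": 0.5}

    # P(evidence | hypothesis) for the observation
    # "people get sunburned on cold but sunny days":
    likelihoods = {"UV light": 0.9, "literal heat": 0.1}

    unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalised.values())
    posteriors = {h: mass / total for h, mass in unnormalised.items()}

    print(posteriors)  # -> UV light: ~0.9, literal heat: ~0.1
    # A maximally distinguishing test is one whose likelihoods differ most
    # between the hypotheses, so the posterior moves sharply either way.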
I think I understand now, thank you.
Do you follow any specific practices in order to internalise this approach, or do you simply endeavour to apply it whenever you remember?
The primary thing I seem to do is to remind myself to care about the right things. I am irrelevant. My emotions are irrelevant. Truth is not influenced by what I want to be true. I am frequently amazed by the degree to which my emotions are influenced by subconscious beliefs. For example, I notice that the people who make me most angry when they’re irrational are the ones I respect the most. People who get offended usually believe at some level that they are entitled to be offended. People who are bad at getting to the truth of a matter usually care more about how they feel than about what is actually true. (This is related to the fundamental optimization problem: The truth will always sound less truthful than the most truthful-sounding falsehood.) Noticing that kind of thing is often more effective than trying to control emotions the hard way.
Secondly, you want to pay attention to your thoughts as much as possible. This is just meditation, really. If you become conscious of your thoughts, you gain a degree of control over them. Notice what you think, when you think it, and why. If a question makes you angry, don’t just suppress the anger, ask yourself why.
For the rest it’s just about cultivating a habit of asking the right questions. Never ask yourself what you think, since the universe doesn’t care what you think. Instead say “Velorien believes X: How much does this increase the probability of X?”.
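As a rough illustration of that last question (everything here is invented for the example): treating someone’s stated belief as evidence is just another Bayesian update, and the size of the update depends on how much likelier they are to believe X when it is true than when it is false.

    # Treating "Velorien believes X" as evidence about X (made-up numbers).
    prior = 0.30              # P(X) before hearing the belief
    p_belief_if_true = 0.80   # P(Velorien believes X | X is true)
    p_belief_if_false = 0.20  # P(Velorien believes X | X is false)

    posterior = (prior * p_belief_if_true) / (
        prior * p_belief_if_true + (1 - prior) * p_belief_if_false
    )
    print(round(posterior, 2))  # 0.63 -- the belief raised P(X) from 0.30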
Bertrand Russell gets it right, of course
The truth will always sound less truthful than the most truthful-sounding falsehood.
This needs to be on posters and T-shirts if it isn’t already. Is it a well-known principle?
Thank you for the explanation. This overall idea (of the relationship between belief and reality, and the fact that it only goes one way) is in itself not new to me, but your perspective on it is, and I hope it will help me develop my ability to think objectively.
Also thanks for the music video. Shame I can’t upvote you multiple times.
Sadly not. I keep meaning to post an article about this, but it’s really hard to write an article about a complex subject in such a way that people really get it (especially if the reader has little patience/charity), so I keep putting it off until I have the time to make it perfect. I have some time this weekend though, so maybe...
I think the Fundamental Optimization Problem is the biggest problem humanity has right now, and it explains everything that’s wrong with society: doing what’s good will always feel less good than doing what feels good; people who optimize for altruism will always be seen as more selfish than people who optimize for being seen as altruistic; the people who get into power will always be the ones whose skills are optimized for getting into power, not for knowing what to do once they get there; and the people who yell about truth the most are the biggest liars. It’s also why “no good deed goes unpunished”. Despite what Yoda claims, the dark side really is stronger.
Unfortunately there’s no good post about this on LW AFAIK, but Yvain’s post about Moloch is related and is really good (and really long).
Aww shucks. ^_^
people’s inability to just take a step back and look at an issue for what it actually is, instead of superimposing their own favourite views on top of reality.
I think that people who fully possess such a skill are usually described as having “achieved enlightenment” and, um, are rare :-) The skill doesn’t look “elementary” to me.
Heheh, fair point. I guess a better way of putting it is that people fail to even bother to try this in the first place, or heck even acknowledge that this is important to begin with.
I cannot count the number of times I see someone try to answer a question by coming up with an explanation and then defending it, utterly failing to grasp that that’s not how you answer a question. (In fact, I may be misremembering, but I think you do this a lot, Lumifer.)
I see someone try to answer a question by coming up with an explanation and then defending it
The appropriateness of that probably depends on what kind of question it is...
I think my hackles got raised by the claim that your perception is “what it actually is”—and that’s a remarkably strong claim. It probably works better phrased like something along the lines of “trying to take your ego and preconceived notions out of the picture”.
Any links to egregious examples? :-)
The appropriateness of that probably depends on what kind of question it is...
I guess it is slightly more acceptable if it’s a binary question. But even so it’s terrible epistemology, since you are giving undue attention to a hypothesis just because it’s the first one you came up with.
An equally awful method of doing things: Reading through someone’s post and trying to find anything wrong with it. If you find anything --> post criticism, if you don’t find anything --> accept conclusion. It’s SOP even on Less Wrong, and it’s not totally stupid but it’s really not what rationalists are supposed to do.
I think my hackles got raised by the claim that your perception is “what it actually is”—and that’s a remarkably strong claim. It probably works better phrased like something along the lines of “trying to take your ego and preconceived notions out of the picture”.
Yes, that is a big part of it, but it’s more than that. It means you stop seeing things from one specific point of view. Think of how confused people get about issues like free will. Only once you stop thinking about the issue from the perspective of an agent, and instead ask what is actually happening from the perspective of the universe, can you resolve the confusion.
Or, if you want to see some great examples of people who get this wrong all the time, go to the James Randi forums. There’s a whole host of people there who will say things during discussions like “Well it’s your claim so you have the burden of proof. I am perfectly happy to change my mind if you show me proof that I’m wrong.” and who think that this makes them rationalists. Good grief.
I have spent some time going through your posts but I couldn’t really find any egregious examples. Maybe I got you confused with someone else. I did notice that where politics were involved you’re overly prone to talking about “the left” even though the universe does not think in terms of “left” or “right”. But of course that’s not exactly unique to you.
One other instance I found:
Otherwise, I still think you’re confused between the model class and the model complexity (= degrees of freedom), but we’ve set out our positions and it’s fine that we continue to disagree.
It’s not a huge deal but I personally would not classify ideas as belonging to people, for the reasons described earlier.
In principle I agree with you.
In practice I think “X has the burden of proof” generally means something similar to “The position X is advancing has a rather low prior probability, so substantial evidence would be needed to make it credible, and in particular if X wants us to believe it then s/he would be well advised to offer substantial evidence.” Which, yes, involves confusion between an idea and the people who hold it, and might encourage an argument-as-conflict view of things that can work out really badly—but it’s still a convenient short phrase, reasonably well understood by many people, that (fuzzily) denotes something it’s often useful to say.
So, yeah, issuing such challenges in such terms is a sign of imperfect enlightenment and certainly doesn’t make the one who does it a rationalist in any useful sense. But I don’t see it as such a bad sign as I think you do.
Yeah, the concept of burden of proof can be a useful social convention, but that’s all it is. The thing is that taking a sceptical position and waiting for someone to prove you wrong is the opposite of what a sceptic should do. If you ever see two ‘sceptics’ taking turns posting ‘you have the burden of proof’, ‘no, you have the burden of proof!’… you’ll see what I mean. Actual rationality isn’t supposed to be easy.
I guess it is slightly more acceptable if it’s a binary question.
No, that’s not what I had in mind. For example, there are questions which explicitly ask for an explanation, and answering them with an explanation is fine. Or, say, there are questions which are wrong (as questions), so you answer them with an explanation of why they don’t make sense.
It means you stop seeing things from one specific point of view.
I don’t think you can. Or, rather, I think you can see things from multiple specific points of view, but you cannot see them without any point of view. Yes, I understand you talk about looking at things “from the perspective of the universe”, but this expression is meaningless to me.
“I am perfectly happy to change my mind if you show me proof that I’m wrong.”
That may or may not be a reasonable position to take. Let me illustrate how it can be reasonable: people often talk in shortcuts. The sentence quoted could be a shortcut expression for “I have evaluated the evidence for and against X and have come to the conclusion Y. You are claiming that Y is wrong, but your claim by itself is not evidence. Please provide me with actual evidence and then I will update my beliefs”.
even though the universe does not think in terms of “left” or “right”
But humans do, and I’m talking to humans, not to the universe.
A more general point—you said in another post
I am irrelevant. My emotions are irrelevant. Truth is not influenced by what I want to be true.
This is true when you are evaluating physical reality. But it is NOT true when you are evaluating social reality—it IS influenced by emotions and what people want to be true.
but I personally would not classify ideas as belonging to people
I don’t quite understand you here.
I suppose “elementary” in the sense of “fundamental” or “simple” or “not relying on other skills before you can learn it”, rather than in the sense of “easy” or “widespread”.
Contrast literacy. Being able to read and write one’s own language is elementary. It can be grasped by a small child, and has no prerequisites other than vision, reasonable motor control and not having certain specific brain dysfunctions. Yet one does not have to cast one’s mind that far back through history to reach the days in which this skill was reserved for an educated minority, and most people managed to live their whole lives without picking it up.