A very interesting and thought-provoking post—I especially like the Q&A format.
I want to quibble with one bit:
How can I tell there aren’t enough people out there, instead of supposing that we haven’t yet figured out how to find and recruit them?
Basically, because it seems to me that if people had really huge amounts of epistemic rationality + competence + caring, they would already be impacting these problems. Their huge amounts of epistemic rationality and competence would allow them to find a path to high impact; and their caring would compel them to do it.
There is an empirical claim about the world that is implicit in that statement, and it is this claim I want to disagree with. Namely: I think having a high impact on the world is really, really hard. I would suggest it requires more than just rationality + competence + caring; for one thing, it requires a little bit of luck.
It also requires a good ability to persuade others who are not thinking rationally. Many such people respond to unreasonable confidence, emotional appeals, salesmanship, and other rhetorical tricks which may be more difficult to produce the more you are used to thinking things through rationally.
I would suggest it requires more than just rationality + competence + caring; for one thing, it requires a little bit of luck
To the extent that it does require luck, that simply means it’s important to have more people with rationality + competence + caring. If you have many people, some will get lucky.
Many such people respond to unreasonable confidence
I think the term “unreasonable confidence” can be misleading. It’s possible to very confidently say “I don’t know”.
At the LW Community Camp in Berlin, I consider Valentine of CFAR to have been the most charismatic person in attendance. When I spoke with Valentine, he said things like: “I think it’s likely that what you are saying is true, but I don’t see a reason why it has to be true.” He also very often told people that he might be wrong and that they shouldn’t trust his judgements as strongly as they do.
may be more difficult to produce the more you are used to thinking things through rationally.
I think you might be pattern-matching to straw-Vulcan rationality, which is distinct from what CFAR wants to teach.
I think you might be pattern-matching to straw-Vulcan rationality, which is distinct from what CFAR wants to teach.
I don’t think that’s true. In my experience spending time with rationalists and studying aspects of rationality myself, I have found that rationalists separate themselves from the general population in many ways which would make it hard for them to convince non-rationalists. These are traits that rationalists cultivate partially in an effort to improve their thinking, but also in order to signal membership in the rationalist tribe (rationalists are human, after all). They are not things that rationalists can easily turn on and off. I can identify three general groups of traits that many rationalists seem to have:
1) The use of esoteric language. Rationalists tend to use a lot of language that is unfamiliar to others. Rationalists “update” their beliefs. They fight “akrasia”. They “install” new habits. If you spend any time in rationalist circles, you will have heard those terms used in those ways very frequently. This is of course not bad in and of itself. But it marks one as a member of the rationalist tribe, and even someone who does not know about rationalists will be able to identify a speaker who uses this terminology as alien and “weird”. My first encounter with rationalists was indeed of this type. All I knew was that they seemed to speak in a very strange manner.
2) Rationalists, at least the ones in this community, hold a variety of unusual beliefs. I actually find it hard to identify those beliefs because I hold many of them. Nonetheless, a chat with most other human beings regarding theory of mind, metaphysics, morality, etc. will reveal gaps the size of the Grand Canyon between the average rationalist and the average person. Maybe at some level there is agreement, but when it comes to object-level issues, the disagreement is immense.
3) Rationalists think very differently from the way most other people think. That is, after all, the point. However, it means that arguments that convince rationalists will frequently fail to convince an average person. For instance, arguing that the effects of brain damage show that there is no soul in the conventional sense will get you nowhere with an average person, while many rationalists see this as a very persuasive if not conclusive argument.
I claim that to convince another human being, you must be able to model their cognitive processes. As many rationalists realize, humans have a tendency to model other humans as similar to themselves. Doing otherwise is incredibly difficult, and it increases in difficulty exponentially with your difference from that other human. This is, after all, unsurprising. If you are modeling an identical copy of yourself, you need only fake sensory inputs and see what the output would be. If you are modeling someone different from yourself, you need to basically replicate their brain within your brain. This is obviously very effortful and error-prone. It is hard enough that it is difficult for you to replicate even the processes that led you to believe something you no longer believe. And you had access to the brain which held those now-discarded beliefs!
I do not claim it is an impossible task. But I do claim that the better you are at rationality, the worse you will be at understanding non-rationalists and how to convince them of anything. If anything, as a good rationalist, you will have learned to flinch away from lines of reasoning that are the result of common cognitive errors. But of course, cognitive errors are an integral part of the way most people live their lives. So if you flinch away from such things, you will miss lines of reasoning that would be very fruitful for convincing others of the correctness of your beliefs.
Let me provide an example. I recently discussed abortion with a non-rationalist but very intelligent friend. I pointed out that within the context of fetuses being humans deserving of rights, abortion is obviously murder, and that he was missing the point of his opponents. The responses I got were riddled with fallacies. Most interestingly, the idea that science has determined that fetuses are not humans. I tried to explain that science can certainly tell us what is going on at various stages of development, but that it cannot tell us what is a “human deserving of rights”, as that is a purely moral category. This was to no avail. People (even very intelligent people) hang their beliefs and actions on such fallacy-riddled lines of reasoning all the time. If you train yourself to avoid such lines of reasoning, you will have great difficulty in convincing others without first turning them into yourself.
My first encounter with rationalists was indeed of this type.
If I’m chatting with other rationalists I will use a term like akrasia, but in other contexts I will say procrastination. I’m perfectly able to use different words in different social contexts.
In my experience spending time with rationalists and studying aspects of rationality myself
There are ways of studying rationality that do have those effects. But I don’t think going to a CFAR workshop is going to make a person less able to convince the average person.
I tried to explain that science can certainly tell us what is going on at various stages of development, but that it cannot tell us what is a “human deserving of rights”, as that is a purely moral category. This was to no avail.
Convincing a person who believes in scientism that science doesn’t work that way is similar to trying to convince a theist that there’s no god. Both are hard problems that you can’t easily solve by making emotional appeals, even if you are good at making emotional appeals.
I claim that to convince another human being, you must be able to model their cognitive processes.
I don’t believe that to be true. In many cases it’s possible to convince other people by making generalized statements that different human beings will interpret differently and where it’s not important that you know which interpretation the other person chooses.
In NLP that principle is called the Milton model.
As many rationalists realize, humans have a tendency to model other humans as similar to themselves.
I think it would be more accurate to say humans have a tendency to model other humans as they believe themselves to be.
I pointed out that within the context of fetuses being humans deserving of rights, abortion is obviously murder, and that he was missing the point of his opponents.
From an LW perspective, I think “abortion is obviously murder” is an argument with little substance because it’s about the definitions of words. My reflex from rationality training would be to taboo “murder”.
I actually find it hard to identify those beliefs because I hold many of them. Nonetheless, a chat with most other human beings regarding theory of mind, metaphysics, morality, etc. will reveal gaps the size of the Grand Canyon between the average rationalist and the average person. Maybe at some level there is agreement, but when it comes to object-level issues, the disagreement is immense.
I don’t think that the average rationalist has the same opinion on any of those subjects. There’s a sizeable portion of EA people in this community, but not everybody agrees with the EA frame.
I had a Hamming circle at our local LW meetup where pride was very important to the person being circled. He chose actions because he wanted to achieve results that make him feel pride. For myself, pride is not an important concept or emotion; it’s not an emotion that I seek.
The Hamming circle allowed me a perspective into the workings of a mind that is, in that regard, significantly different from my own. Hamming circles are a good way to learn to model people different from yourself.
There are people in this community who focus on analytical reasoning and as a result are bad at modeling normal people. I think those people would get both more rational and better at modeling normal people if they frequently engaged in Hamming circles.
I think the same is true for practicing techniques like goal factoring and urge propagation.
If you train Focusing, you can speak from that place to make a stronger emotional appeal than you could otherwise.
To the extent that it does require luck, that simply means it’s important to have more people with rationality + competence + caring. If you have many people, some will get lucky.
The “little bit of luck” in my post above was something of an understatement; actually, I’d suggest it requires a lot of luck (among many other things) to successfully change the world.
I think you might be pattern-matching to straw-Vulcan rationality, which is distinct from what CFAR wants to teach.
Not sure if I am, but I believe I am making a correct claim about human psychology here.
Being rational means many things, but surely one of them is making decisions based on some kind of reasoning process as opposed to recourse to emotions.
This does not mean you don’t have emotions.
You might, for example, have very strong emotions about matters pertaining to fights between your perceived in-group and out-group, but you try to put those aside and make judgments based on some sort of fundamental principles.
Now if, in the real world, the way you persuade people is by emotional appeals (and this is at least partially true), this will be more difficult the more you get in the habit of rational thinking, even if you have an accurate model about what it takes to persuade someone -- emotions are not easy to fake and humans have strong intuitions about whether someone’s expressed feelings are genuine.
Being rational means many things, but surely one of them is making decisions based on some kind of reasoning process as opposed to recourse to emotions.
No. CFAR rationality is about aligning System 1 and System 2. It’s not about declaring that System 1 outputs should be ignored in favor of System 2 outputs.
You might, for example, have very strong emotions about matters pertaining to fights between your perceived in-group and out-group, but you try to put those aside and make judgments based on some sort of fundamental principles.
The alternative is working towards feeling more strongly for the fundamental principles than caring about the fights.
emotions are not easy to fake and humans have strong intuitions about whether someone’s expressed feelings are genuine.
A person who cares strongly for his cause doesn’t need to fake emotions.
Sure, you can work towards feeling more strongly about something, but I don’t believe you’ll ever be able to match the emotional fervor the partisans feel -- I mean here the people who stew in their anger and embrace their emotions without reservations.
As a (rather extreme) example, consider Hitler. He was able to sway a great many people with what were appeals to anger and emotion (though I acknowledge there is much more to the phenomenon of Hitler than this). Hypothetically, if you were a politician from the same era, say a rational one, and you understood that the way to persuade people is to tap into the public’s sense of anger, I’m not sure you’d be able to match him.
“The best lack all conviction, and the worst / Are full of passionate intensity”—W B Yeats
“The fundamental cause of the trouble is that in the modern world the stupid are cocksure while the intelligent are full of doubt”—Bertrand Russell
Julian Assange was one of the first speakers to bring tears to my eyes when I saw him live. At the same time, Julian’s manifesto is rational to the extent that it makes its case with graph theory.
Interestingly, the “We Lost The War” speech that articulated the doctrine that we need to make life easier for whistleblowers, by providing a central venue to which they can post their documents, was given 10 years ago. A week ago there was a “Ten Years After ‘We Lost The War’” talk at this year’s CCC congress.
Rop Gonggrijp closes by describing the new doctrine as:
Know that there will probably not be a revolution magically manifesting itself next Friday, and probably also no zombie apocalypse, but still we need to be ready for rapid and sizable changes of all sorts and kinds. The only way to be effective in this, and probably our mission as a community, is to play for the long term: develop a culture that is more fun and attractive to more people, develop infrastructure, and turn around and offer that infrastructure to people that need it. That is not a thing we do as a hobby anymore; that’s also something we do for the people that need this infrastructure. Create a culture that is capable of putting up a fight, that gives its inhabitants a sense of purpose, self-worth, usefulness, and then launch that culture over time till it becomes a viable alternative to the status quo.
I think that’s the core strategy. We don’t want Eternal September, so it’s no problem if the core community uses language that’s not understood by outsiders.
We can have our cuddle piles and feel good with each other. Cuddle piles produce different emotions than anger, but they also create emotions that produce strong bonds.
If we really need strong charismatic speakers who are world-class at persuasion, I think that Valentine is currently at that level (as is Julian Assange in the hacker community). It’s not CFAR’s mission to maximize for charisma, but nothing that CFAR does prevents people from maximizing charisma.
If someone wants to develop themselves into that role, Valentine wrote down his body-language secrets in http://lesswrong.com/lw/mp3/proper_posture_for_mental_arts/ .
A great thing about the prospects of our community is that there’s money seeking Effective Altruist uses. As EA grows, there might be an EA person running for office in a few years. If other EA people consider his run to have prospects for making a large positive impact, he can raise money from them. But as Rop says in the speech, we should play for the long term. We don’t need a rationalist to run for office next year.
No. CFAR rationality is about aligning System 1 and System 2. It’s not about declaring that System 1 outputs should be ignored in favor of System 2 outputs.
I believe you are nitpicking here.
If your reason tells you 1+1=2 but your emotions tell you that 1+1=3, being rational means going with your reason. If your reason tells you that ghosts do not exist, you should believe this to be the case even if you really, really want there to be evidence of an afterlife.
CFAR may teach you techniques to align your emotions and reason, but this does not change the fundamental fact that being rational involves evaluating claims like “is 1+1=2?” or empirical facts about the world such as “is there evidence for the existence of ghosts?” based on reason alone.
Just to forestall the inevitable objections (which always come in droves whenever I argue with anyone on this site): this does not mean you don’t have emotions; it does not mean that your emotions don’t play a role in determining your values; it does not mean that you shouldn’t train your emotions to be an aid in your decision-making, etc etc etc.
Being rational involves evaluating various claims and empirical facts, using the best evidence that you happen to have available. Sometimes you’re dealing with a domain where explicit reasoning provides the best evidence, sometimes with a domain where emotions provide the best evidence. Both are information-processing systems that have evolved to make sense of the world and orient your behavior appropriately; they’re just evolved for dealing with different tasks.
This means that in some domains explicit reasoning will provide better evidence, and in some domains emotions will provide better evidence. Rationality involves figuring out which is which, and going with the system that happens to provide better evidence for the specific situation that you happen to be in.
Sometimes you’re dealing with a domain where explicit reasoning provides the best evidence, sometimes with a domain where emotions provide the best evidence.
And how should you (rationally) decide which kind of domain you are in?
Answer: using reason, not emotions.
Example: if you notice that your emotions have been a good guide in understanding what other people are thinking in the past, you should trust them in the future. The decision to do this, however, is an application of inductive reasoning.
Sure.
but this does not change the fundamental fact that being rational involves evaluating claims like “is 1+1=2?” or empirical facts about the world such as “is there evidence for the existence of ghosts?” based on reason alone.
One of the claims is analytic: 1+1=2 is true by definition of what 2 means. There’s little emotion involved.
When it comes to an issue such as “is there evidence for the existence of ghosts?”, neither the rationality of Eliezer’s Sequences nor CFAR argues that emotions play no role. Noticing when you feel the emotion of confusion because your map doesn’t really fit is important.
The beauty of mathematical theories is a guiding light for mathematicians.
Basically, any task that doesn’t need emotions or intuitions is better done by computers than by humans. To the extent that humans outcompete computers, there’s intuition involved.
Russell and Whitehead would beg to differ.
“True by definition” is not at all the same as “trivial” or “easy”. In PM, the fact that 1+1=2 does in fact follow from R&W’s definitions of the terms involved, but proving it takes a substantial amount of work.
I learned math with the Peano axioms, and we considered the symbol 2 to refer to 1+1, 3 to (1+1)+1, and so on. However, even if you consider it to be more complicated, it still remains an analytic statement and isn’t a synthetic one.
If you define 2 differently, what’s the definition of 2?
When you write “1+1” you may mean two things: “the result of applying the addition operation to 1 and 1”, and “the successor of 1”. It just happens that we use “+1” to denote both of those. The fact that successor(1) = add(1,1) isn’t completely trivial.
Principia Mathematica, though, takes a different line. IIRC, in PM “2” means something like “the property a set has when it has exactly two elements” (i.e., when it has an element a and an element b, and a=b is false, and for any element x we have either x=a or x=b) and similarly for “1” (with all sorts of complications because of the hierarchy of kinda-sorta-types PM uses to try to avoid Russell-style paradoxes). And “m+n” means something like “the property a set has when it is the union of two disjoint subsets, one of which has m and the other of which has n”. Proving 1+1=2 is more cumbersome, then. And PM begins from a very early point, devoting quite a lot of space to introducing propositional calculus and predicate calculus (in an early, somewhat clunky form).
One popular definition (at least, among that small class of people who need to define 2) is { { }, { { } } }.
Another, less used nowadays, is { z : ∃x,y. x∈z ∧ y∈z ∧ x ≠ y ∧ ∀w∈z.(w=x ∨ w=y) }.
In surreal numbers, 2 is { { { | } | } | }.
In type theory and some fields of logic, 2 is usually defined as (λf.λx.f (f x)); essentially, the concept of doing something twice.
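To make the “analytic but not trivial” point concrete, here is a minimal sketch in Lean 4 (my own illustration, not anyone’s comment in this thread; the names N, add, one, two, and churchTwo are all mine). Under a Peano-style definition where 2 is the successor of 1, the claim 1+1=2 holds by pure computation, and for contrast, the Church-numeral reading of 2, (λf.λx.f (f x)), is just “apply f twice”:

```lean
-- Peano-style natural numbers: 2 is *defined* via successors,
-- so 1 + 1 = 2 is an analytic, definitional fact.
inductive N where
  | zero : N
  | succ : N → N

-- Addition by recursion on the second argument.
def add : N → N → N
  | n, N.zero   => n
  | n, N.succ m => N.succ (add n m)

def one : N := N.succ N.zero
def two : N := N.succ one

-- Both sides compute to N.succ (N.succ N.zero), so `rfl` closes the goal.
theorem one_plus_one : add one one = two := rfl

-- The Church-numeral reading of 2: the operation of doing something twice.
def churchTwo {α : Type} (f : α → α) (x : α) : α := f (f x)

#eval churchTwo (· + 3) 0  -- 6: adding 3 twice
```

The `rfl` succeeding is exactly the sense in which the statement is “true by definition” here, whereas in PM’s set-theoretic encoding the same statement requires genuine proof work.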
It also requires a good ability to persuade others who are not thinking rationally. Many such people respond to unreasonable confidence, emotional appeals, salesmanship, and other rhetorical tricks which may be more difficult to produce the more you are used to thinking things through rationally.
Really good point! In fact, there is a specific challenge in that the rationality community itself lashes back against rationalists who use such tactics, as I experienced myself. So this is a particularly challenging area for impacting the world.