I know a lot of people through a shared interest in truth-seeking and epistemics. I also know a lot of people through a shared interest in trying to do good in the world.
I think I would have naively expected that the people who care less about the world would be better at having good epistemics. For example, people who care a lot about particular causes might end up getting really mindkilled by politics, or might end up strongly affiliated with groups that have false beliefs as part of their tribal identity.
But I don’t think that this prediction is true: I think that I see a weak positive correlation between how altruistic people are and how good their epistemics seem.
----
I think the main reason for this is that striving for accurate beliefs is unpleasant and unrewarding. In particular, having accurate beliefs involves doing things like actively trying to step outside the frame you’re currently using, looking for ways you might be wrong, and maintaining constant vigilance against disagreeing with people just because they’re annoying and stupid. It’s hard to sustain that kind of effort unless something you care about depends on it.
Altruists often seem to me to do better than people who value epistemics for its own sake; I think this is because valuing epistemics instrumentally has some attractive properties compared to valuing it terminally. One reason is that you’re less likely to stop being rational when being rational stops being fun. For example, I find many animal rights activists very annoying, and if I didn’t feel tied to them by virtue of our shared interest in the welfare of animals, I’d be tempted to sneer at them instead of taking their arguments seriously.
Another reason is that if you’re an altruist, you find yourself interested in subjects that aren’t the ones you would have learned about for fun, so you have less opportunity to only ever think in your default ways. I think it might be healthy that altruists are forced by the world to learn about subjects further from their natural inclinations.
----
I think it’s indeed true that altruistic people sometimes end up mindkilled. But truth-seeking enthusiasts seem to get mindkilled at around the same rate. One major mechanism here is that truth-seekers often come to really hate opinions that they regularly hear bad arguments for, and end up rationalizing their way into dumb contrarian takes.
I think it’s common for altruists to avoid saying unpopular true things because they don’t want to get in trouble; I think that this isn’t actually that bad for epistemics.
----
I think that EAs would have much worse epistemics if EA weren’t pretty strongly tied to the rationalist community, and I’d be pretty worried about weakening those ties. My claim here is that being altruistic seems to make you overall a bit better at using rationality techniques, rather than substantially worse.
----
“For example, people who care a lot about particular causes might end up getting really mindkilled by politics, or might end up strongly affiliated with groups that have false beliefs as part of their tribal identity.”
These both seem pretty common, so I’m curious about the correlation that you’ve observed. Is it mainly based on people you know personally? In that case I expect the correlation not to hold amongst the wider population.
Also, a big effect which probably doesn’t show up much amongst the people you know: younger people seem more altruistic (or at least signal more altruism) and also seem to have worse epistemics than older people.
Caring about things seems to make you interact with the world in more diverse ways (because you do this in addition to the things other people do, not instead of them); some of that translates into more experience and better models. But it also brings tribal identity, mindkilling, a refusal to see the reasons why your straightforward solution would not work, and uncritical contrarianism.
Now I think about a group of people I know who care strongly about improving the world, in the one or two areas they focus on. They have done a few amazing things and gained lots of skills: they publish books, organize big conferences, and have created a network of like-minded people in other countries. Some of their activities are profitable, and for others they apply for various grants and often get them, so some of them improve the world as a full-time job. They also believe that covid is a hoax, and have lots of less fringe but still quite irrational beliefs. However… this depends on how you calculate “total rationality”, but it seems to me that their gains in near mode outweigh their losses in far mode, and in some sense I would call them more rational than the average population.
Of course I dream about a group that would have all the advantages and none of the disadvantages.
It seems like the more people you know, the less likely this is.
Of both? (This sentence didn’t have a clear object.)
Ah. I meant, I would like to see a group that has the sanity level of a typical rationalist, and the productivity level of these super-agenty irrationalists. (Instead of having to choose between “sane with lots of akrasia” and “awesome but insane”.)
Hm. Maybe there’s something to be gained from navigating ‘trade-offs’ differently? I thought perpetual motion machines were impossible (because thermodynamics), aside from ‘launch something into space, pointed away from stuff it would crash into’. I’d read that ‘trying to do so is a good way to learn about physics’, but I didn’t really try because I thought it’d be pointless. And then this happened.