Some other random thoughts as they come to me (epistemic status: musing).
1. All the elements in the Rationalsphere are related to some form of rationality; the question is which aspect, and why.
Impact and Human are basically the two instrumental-rationality focuses, split loosely by whether they face outward or inward. Truthseeking is about epistemic rationality for its own sake.
2. All three focus areas can agree that the same things are important, but disagree on whether they're terminally or instrumentally valuable.
If you literally had an Impact/Human-focused group that didn’t care about honesty and truth at all, I’d say “that’s not a rationality community, no matter how they frame things.” But I think we can (and do) have Impact/Human-focused groups that see Truthseeking as instrumentally important for those goals.
Whereas pure Truthseeking tends to be like “reality is neat and forming deep models of things is neat, and we were going to do that anyway, and if it so happens that this actually helps someone I guess that’s cool.”
3. The three focuses are not precisely about values, but about orientation.
The “human” cluster essentially means “I want to focus on me and the people around me”, which sometimes means “form good relationships” and sometimes means “be personally successful at my career or craft.” The Impact cluster could be about EA type stuff, AI type stuff, or even truthseeking-related stuff like “reform academia” or “improve discourse norms in general society.”
I think there’s a lot of tension (particularly in EA spaces) between Impact- and Truthseeking-oriented people. Both sides (in this context) agree that both action and truth are important. But you have something like...
Impact people, who a) notice that truthseekers tend to spend a lot of time talking and not enough doing, and b) believe that actually_doing is the limiting reagent for effective change, and that excessive truthseeking-oriented norms tend to distract from or penalize doing. (For example, encouraging criticism tends to result in people not wanting to try to do things.)
vs
Truthseeking people, who notice that lots of people have tried to change things but consistently get things really wrong, and consistently get their epistemics corrupted as organizations mature and get taken over by Exploiter/Parasite/Sociopath types. And the truthseekers see the Rationality/EA alliance as this really rare, precious thing that’s still young enough not to have been corrupted in the usual ways things get intellectually corrupted.
And I think the thing I’ve been gradually orienting towards over the past 6 months is something like
“Truthseeking and Agency are BOTH incredibly rare and precious and we don’t have nearly enough of both of them. If we’re fighting over the mindshare of which types of norms are winning out, we’re already lost because the current size of the mindshare-pie and associated Truth and Agency skills are not sufficient to accomplish The Things.”
(I think the same principle ends up applying to Truthseeker/Human conflicts and Human/Impact conflicts)
Reading this again several months later, after having developed related thoughts more and seeing Viliam’s comment below, caused a strong negative reaction to the line “If we’re fighting over the mindshare of which types of norms are winning out, we’re already lost.”
I have the instinctive sense that when people say “We can’t be fighting over this,” it’s often because they are fighting over it and don’t want the other side fighting BACK. The implicit argument is: “We’ve already pre-committed to fighting, so if you fight back we’re gonna have to fight for real, so why not simply let me win? I’m already winning. We’re actively trying to recruit your people and promote our message over your message. We can’t afford to then have you trying to recruit our people and promote your message over ours. What we do is good and right; what you do is causing conflict.”
Thus, you have a project about moving more toward the Human/Impact focuses, arguing that they deserve larger mindshare. Fair enough! There’s certainly a case to be made there, but making that case while also arguing we can’t afford to be arguing over such cases sets off my alarm bells. Especially since ‘arguing over what should get more attention’ is itself a truthseeking mindshare activity, and there are Human/Impact activities (ones we have to do to some extent) that can be negative for truthseeking rather than simply neutral.
So I’d be more in a ‘you can’t afford not to’ camp than a ‘you can’t afford to’ camp, and I think that if we view such an activity as fighting, and as negative rather than positive, that’s itself a sign of further problems.
If we’re fighting over the mindshare of which types of norms are winning out, we’re already lost
Yep. And most people will continue doing what fits them better anyway… so the whole debate would mostly contribute to making one group feel less welcome.
Also, I suspect that healthy communities are not homogeneous, whereas debates about whether X is better than Y silently assume that homogeneity is the desired outcome: we only need to choose the right template for everyone to copy.