I basically agree with this approach. I have sometimes said that if I could change one thing about EA, it would be for more EAs to feel like their job is to understand the world and how it works (rationalists are overall better on this dimension, though they have other problems).
[Note: I’m currently training a practice of noticing when what I say, or what others say, aligns with our personal (social) incentives. The statement above aligns with my incentives insofar as I like figuring things out, and apparently can do it. So if the statement above were true, it would imply that I am doing the “right thing” more than others who are doing other work.]
I’m curious to hear more detail about what you imagine for step 4. What sort of “nice things” do you have in mind? What kind of plans?