I’m not sure why all the people who think harder than I do about the field aren’t testing their “how to get alignment” theories on humans first…
Some of us are!
I mean, I don’t know you, so I don’t know if I’ve thought harder about the field than you have.
But FWIW, there are a lot of us chewing on exactly this, running experiments of various sizes, and we have some tentative conclusions.
It just tends to drift away from LW in social flavor. You'll find a lot of this work in places LW-type folk tend to label “post-rationalist”.