This is a short self-review, but with a bit of distance, I think understanding ‘limits to legibility’ is among the top 5 things an aspiring rationalist should deeply understand, and a lack of this understanding leads to many bad outcomes in both the rationalist and EA communities.
In very brief form: maybe the most common cause of EA problems and stupidities is the attempt to replace illegible S1 boxes able to represent human values, such as ‘caring’, with legible, symbolically described, verbal moral reasoning subject to memetic pressure.
Maybe the most common cause of rationalist problems and difficulties with coordination is people replacing illegible smart S1 computations with legible S2 arguments.