This response is specific to AI/AI alignment, right? I wasn’t “sub-tweeting” the state of AI alignment, and was more thinking of other endeavours (quantified self, paradise engineering, forecasting research).
In general, the bias towards framing can be swamped by other considerations.