Or do independent thinkers try to find new frames because the ones on offer are insufficient? I think this is roughly what people mean when they say that AI is “pre-paradigmatic,” i.e., we don’t yet have the frames for filling to be very productive. Given that, I’m more sympathetic to framing posts on the margin than I am to filling ones, although I hope (and expect) that filling-type work will become more useful as we gain a better understanding of AI.
This response is specific to AI/AI alignment, right? I wasn’t “sub-tweeting” the state of AI alignment; I was thinking more of other endeavours (quantified self, paradise engineering, forecasting research).
In general, the bias towards framing can be swamped by other considerations.