Trying to draw some general lessons from this:
We are bad at governance, even on problems that emerge and change slowly relative to human thinking (unlike, e.g., COVID-19). I think people who are optimistic about x-risk governance should be a bit more pessimistic in light of this track record.
Nobody thought ahead of time about status dynamics in relation to fertility and parental investment; academic theories here lag the empirical phenomena by a lot. What important dynamics will we miss with AI? (Nobody seems to be thinking about status and AI, which is one obvious candidate.)