I like the definition of eucatastrophe; I think it’s useful to look at both sides of the coin when assessing risk.
Far-out example: we receive a radio transmission from an alien craft that passed by our solar system a few thousand years ago looking for intelligent life. If we fire a narrow-beam message back at them in the next 10 years they might turn back; after that they’ll be out of range. Do we call them back? It’s quite possible that they would destroy Earth, but we also need to consider the chance that they’ll “pull us up” to their level of civilization, which would be a eucatastrophe.
More relevant example: a child is growing up whose g factor may be the highest ever measured, and he’s taking his first computer science class at 8 years old. If anyone in our generation is going to give the critical push toward AGI, it’s likely to be him. But what if he’s not interested in AI friendliness and doesn’t want to hear about values or ethics?