Since black swans are difficult to predict, Taleb recommends being resilient to them rather than trying to predict them.
I don’t think that strategy is effective in the context of AGI. Instead, I think we should imagine a wide range of scenarios to turn unknown unknowns into known unknowns.
I agree completely, and I’m currently looking for the most public and concise platform where these scenarios are mapped. Or, as I think of them, recipes. I think there is a finite set of ingredients that results in extremely volatile situations: software with unknown goal formation, widely distributed with no single kill switch, with the ability to create more computational power, etc. We have already basically created the first two, but we should be thinking about what it would take for the third ingredient to be added.