I find the article well written, and it hits one nail on the head after another regarding the potential scope of what’s to come, but the overarching question of the black swan is a bit distracting. To greatly oversimplify, I would say a black swan is a category of massive event, on par with “catastrophe” and “miracle”; it just carries overtones of financial investors having hedged their bets properly, or not, to prepare for it (that was the context of Taleb’s book, iirc).
Imho, the more profound point you started to address was our denial of these events: that we only in fact understand them retroactively. I think there is some inevitability to that, given that we can’t be living perpetually in hypothetical futures.
I did read the book many years ago, but I forget Taleb’s prognosis: what are the strategies for preparing for unknown unknowns?
Since black swans are difficult to predict, Taleb recommends being resilient to them rather than trying to predict them.
I don’t think that strategy is effective in the context of AGI. Instead, I think we should imagine a wide range of scenarios to turn unknown unknowns into known unknowns.
I agree completely, and I’m currently looking for the most public and concise platform where these scenarios are mapped. Or, as I think of them, recipes. I think there is a finite set of ingredients that results in extremely volatile situations: software with unknown goal formation, widely distributed with no single kill switch, with the ability to create more computational power, etc. We have already basically created the first two, but we should be thinking about what it would take for the third ingredient to be added.