So, then she fell back instead on the familiar (paraphrasing again) “OK, but you must admit there’s a non-zero risk of such an AGI destroying humanity, so we should be very careful—when the stakes are so high, better safe than sorry!”
Yeah, Pascal’s mugging. That’s what it all comes down to in the end.
There is no formal mapping from the mugging to AGI existential risk.
Not sure what you mean. It seems clear that both mugging and x-risk have a Pascalian flavor. (’Course, I personally think that the original wager wasn’t fallacious...)
The mugging has a referential aspect to it, referring either to your elicited prior or to your universal prior. Whatever probability you give, or whatever the universal prior assigns, the mugger solves for x in the obvious inequality (x times that probability > 1) and claims the mugging yields x utility.
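To make that arithmetic concrete, here is a toy sketch of the inequality as I read it (the numbers and the "wallet cost" normalization are mine, not anything from the thread):

```python
# Toy sketch of the mugger's move: given whatever prior probability p the
# victim assigns to the mugger's claim, pick a payoff x with x * p greater
# than the cost of handing over the wallet, so naive expected utility says pay.

def muggers_claim(prior_p: float, wallet_cost: float = 1.0) -> float:
    """Smallest payoff the mugger needs to claim to tip the naive EV positive."""
    return (wallet_cost / prior_p) * 1.01   # any x with x * prior_p > wallet_cost will do

p = 1e-12                    # victim's elicited prior that the mugger is honest
x = muggers_claim(p)         # the mugger solves for x only after hearing p
print(x, x * p > 1.0)        # ~1.01e12 claimed utility; the naive check passes
```

The referential part is that x is chosen as a function of the stated prior, so no fixed level of skepticism helps.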
(Not that it matters, but my own belief is that the solution is to have a prior that takes the claimed magnitude into account, avoiding the complexity issue by bounding the loss and identifying that loss with the description prefix you need in the Kolmogorov setting.)
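My reading of that parenthetical (and it is only my reading): if describing a payoff of magnitude 2^k already costs about k bits of the hypothesis, then its universal-prior weight is at most about 2^-k, and the product of payoff and prior stays bounded rather than growing with the size of the claim. A toy numerical check:

```python
# Toy check of the "bounded loss" reading above (my illustration, not the
# commenter's construction): charge k bits of description for a payoff of
# magnitude 2**k, so the prior weight shrinks as fast as the payoff grows.

for k in (10, 100, 1000):
    payoff = 2 ** k           # utility the mugger claims
    prior = 2.0 ** -k         # weight left after paying the k-bit "prefix" for that magnitude
    print(k, payoff * prior)  # 1.0 every time: the claimed loss is bounded in expectation
```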
In contrast, there is no such referentiality in assessing AGI existential risk that I’ve heard of. FAIs don’t offer to bootstrap themselves into existence with greater probability or… something like that. I’m really not sure how one would construct an x-risk argument isomorphic to the mugging.
(Possibly XiXi experienced a brainfart and meant the more common accusation that the general argument for existential risk prevention is Pascal’s wager with finite stakes and more plausible premises.)
Ah, I was assuming that everyone was assuming that XiXiDu had had a brainfart.
Big engineering projects with lots of lives at stake need big safety margins and research into safety issues. I think that’s just a fact. I’m not necessarily denying that some Pascal’s mugging is going on here, but safety is still an issue.