With log utility, the models are remarkably unconcerned with existential risk, suggesting that large consumption gains that A.I. might deliver can be worth gambles that involve a 1-in-3 chance of extinction.
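For concreteness, the mechanism behind that claim is roughly the following (using my own normalization, which may not match the paper's exact setup): take utility $u(c) = \bar{u} + \log c$ with extinction normalized to utility $0$, and consider a gamble that multiplies consumption by a factor $G$ with probability $1-p$ and causes extinction with probability $p$. The gamble is accepted whenever

$$(1-p)\,\bigl(\bar{u} + \log(Gc)\bigr) \;\ge\; \bar{u} + \log c \quad\Longleftrightarrow\quad \log G \;\ge\; \frac{p}{1-p}\,\bigl(\bar{u} + \log c\bigr),$$

and because $\log G$ is unbounded above, a big enough $G$ clears that bar even at $p = 1/3$.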
I mean, math is nice and all, but this feels remarkably like Not How Humans Actually Function. Not to mention that the assumption of AI benefiting a generic N humans uniformly seems laughably naïve: unless you really had some kind of all-loving egalitarian FAI, odds are the benefits of AI would go to some humans far more than others.
This paper seems to me to belong to the class of arguments that don't actually use math to reach any new conclusions, but only to formalize a point so vague and qualitative that it could be made just as well in words. There's no real added value in putting numbers on it when so many of the important coefficients are unknown or straight up matters of opinion (for example, whether existential risk should carry an additional negative utility penalty beyond everyone's utility simply going to zero). I don't feel like this will persuade anyone or shift the discussion.
In fact, the verbal tl;dr seems to be that if you don't care much about enormously wonderful or terrible futures (because of bounded or sufficiently concave utility functions), then you won't pay much to achieve or avoid them.
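To make that tl;dr concrete, here's a toy numerical sketch. The normalization and parameters (the `U_BAR` constant, the choice of `gamma`, treating extinction as utility 0) are my own assumptions, not the paper's calibration; the point is just the qualitative contrast between an unbounded log utility and a bounded CRRA one.

```python
import math

P_DOOM = 1 / 3        # assumed extinction probability from the quoted claim
C0 = 1.0              # baseline consumption (arbitrary units)
U_BAR = 5.0           # assumed additive constant so that u(C0) > 0 (extinction = 0)

def u_log(c):
    """Log (unbounded-above) utility, with extinction normalized to 0."""
    return U_BAR + math.log(c)

def u_bounded(c, gamma=2.0):
    """CRRA utility with gamma > 1, bounded above by U_BAR."""
    return U_BAR - c ** (1 - gamma) / (gamma - 1)

for G in (10.0, 1e3, 1e6, 1e9):
    for name, u in (("log", u_log), ("bounded", u_bounded)):
        status_quo = u(C0)
        # The extinction branch contributes 0 utility under this normalization.
        gamble = (1 - P_DOOM) * u(G * C0)
        verdict = "take gamble" if gamble > status_quo else "refuse"
        print(f"G={G:9.0e}  {name:7s}  status quo={status_quo:5.2f}  "
              f"E[gamble]={gamble:5.2f}  -> {verdict}")
```

Running it, the log-utility agent starts accepting the 1-in-3 extinction gamble once the consumption multiplier G gets large enough, while the bounded-utility agent refuses at every G, since its upside is capped below what it stands to lose.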