Er… wouldn’t it be vastly preferable for the AI to /not/ slap people in the face to avoid 1/3↑↑↑3 probability events, for reasons that have nothing to do with how the multiplication works out? Building an AGI that acts on 1/3↑↑↑3 probabilities is building a god that, to outsiders, comes across as both arbitrarily capricious and overwhelmingly interventionist. Even if the net effect on the AGI’s well-defined utility function is positive, I’d wager modern humans, or even extrapolated transhumans, wouldn’t like being slapped in the face lest they so much as consider running the next-next-gen version of the LHC in their AGI’s favorite universe. You don’t even need Knuth’s up-arrow notation for the problem to show up: a 1/10^100^100 or even a 1/10^100 event already makes the point.
Even from a practical viewpoint, that kind of sensitivity seems incredibly prone to oscillation. There’s a reason we don’t set air conditioners to hold local temperatures to five nines of precision, and 1/3↑↑↑3 is, to resort to understatement, a lot of sensitivity past that.
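To make the oscillation worry concrete, here’s a minimal sketch (entirely my own toy model, not anything from the thread): a bang-bang thermostat whose tolerance is cranked down far enough toggles on almost every timestep, while an ordinary deadband barely switches at all. The dynamics, constants, and function name are illustrative assumptions.

```python
# Toy illustration (assumptions throughout): count how often a bang-bang
# cooler switches state for a given deadband around the target temperature.
def simulate_thermostat(deadband, steps=1000, target=20.0):
    temp = target
    cooling = False
    switches = 0
    for _ in range(steps):
        # Crude dynamics: ambient heat drifts the room up, the cooler pulls it down.
        temp += 0.05                    # heat leaking in each step
        if cooling:
            temp -= 0.1                 # cooler removes heat while running
        # Bang-bang rule: act as soon as temperature leaves the deadband.
        if temp > target + deadband and not cooling:
            cooling = True
            switches += 1
        elif temp < target - deadband and cooling:
            cooling = False
            switches += 1
    return switches

print(simulate_thermostat(deadband=1.0))    # a handful of switches: stable behaviour
print(simulate_thermostat(deadband=1e-5))   # hundreds of switches: constant oscillation
```

Push the same dial toward 1/3↑↑↑3-level sensitivity and the controller simply never stops twitching.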
This is incoherent because qualitative boundaries are naturally incoherent: why is one 2/10^100 risk worth spending processing power on, while six separate 1/10^100 risks are not worth processing, to give the blunt version of the sorites paradox? That’s a major failing from a philosophical standpoint, where incoherence is functionally the same as being incorrect. But an AGI isn’t pure philosophy: there are strong secondary benefits to an underinterventionist AGI, and human neurological biases already trend toward underintervention.
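As a back-of-the-envelope illustration of that incoherence (again my own toy, not something from the original question): a hard per-risk probability cutoff happily processes the single 2/10^100 risk while discarding all six 1/10^100 risks, even though the discarded set is collectively the bigger danger. The threshold value below is a made-up assumption.

```python
# Hypothetical "too unlikely to bother with" cutoff (assumption for illustration).
THRESHOLD = 1.5e-100

one_big_risk    = [2e-100]        # a single 2/10^100 risk
six_small_risks = [1e-100] * 6    # six separate 1/10^100 risks

def worth_processing(risks, threshold=THRESHOLD):
    """Naive rule: only keep risks that individually clear the threshold."""
    return [p for p in risks if p >= threshold]

print(worth_processing(one_big_risk))     # [2e-100] -> gets processed
print(worth_processing(six_small_risks))  # []       -> all ignored
print(sum(six_small_risks) > sum(one_big_risk))  # True: the ignored set was the larger total risk
```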
Of course, in an AGI situation you have to actually program it, and even formally defining the difference between slapping one person 50 times and slapping 100 people once each is programmatically difficult enough on its own.
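For what I mean by “programmatically difficult”: the ranking of those two scenarios flips depending on how per-person harm is aggregated. The two aggregation rules below are hypothetical stand-ins of my own, not anything proposed in the thread.

```python
def linear(slaps_per_person):
    """Total harm = total slap count."""
    return sum(slaps_per_person)

def convex(slaps_per_person):
    """Repeated harm to one person counts super-linearly (squared here, as an arbitrary choice)."""
    return sum(n ** 2 for n in slaps_per_person)

one_person_fifty    = [50]        # one person slapped 50 times
hundred_people_once = [1] * 100   # 100 people slapped once each

print(linear(one_person_fifty), linear(hundred_people_once))  # 50 vs 100: the crowd fares worse
print(convex(one_person_fifty), convex(hundred_people_once))  # 2500 vs 100: the individual fares worse
```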
Edit: never mind; retracting this as off-topic and a misunderstanding of the question.