Sure. Admittedly, when there are 3^^^3 humans around, torturing me for fifty years is also such a negligible amount of suffering relative to the total lived human experience that it, too, has an expected cost that rounds to zero in the calculations of any agent with imperfect knowledge - unless they have some particular reason to care about me, which in that world is vanishingly unlikely.
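A rough back-of-the-envelope, assuming (purely for illustration) an average lifespan of ~80 years, makes the "rounds to zero" claim concrete:

\[
  \frac{\text{suffering at stake}}{\text{total lived experience}}
  \approx \frac{50\ \text{person-years}}{3\uparrow\uparrow\uparrow 3 \times 80\ \text{person-years}}
  \ll 10^{-10^{100}},
\]

since 3^^^3 is a power tower of 3s of height 3^^3 = 7,625,597,484,987, so the denominator dwarfs any bound one could usefully write down.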
Heh.
When put like that, my original post and arguments sure seem not to have been thought through as thoroughly as I'd believed.
Now, rather than considering the solution obvious, I'm leaning towards the idea that this eventually reduces to the problem of building a good utility function: one that also assigns the right value to the expected utilities computed by other beings from their own (known or unknown) utility functions, which may or may not irrationally assign disproportionate disutility to particular hedon-values.
Otherwise, it's rather obvious that a perfect superintelligence might find a way to make the tortured victim enjoy the torture, be enhanced by it, and remain a productive member of society throughout all fifty years (or some other completely ideal solution we can't even remotely imagine) - though, depending on interpretation and definitions, this may directly contradict the implicit premise that the torture is inherently bad.
EDIT: Which, after reading through more of the old comments on the issue, seems fairly close to the general consensus back in late 2007.