> Real-life animals can and do die of shock, which seems *like* it might be some maximum ‘pain’ threshold being exceeded.
In theory, would it not be possible for, say, a malevolent superintelligence to “override” any possibility of a “shock” reaction, and prevent the brain from shutting down? Wouldn’t that allow for ridiculous amounts of agony?
It seems plausible to me that a sufficiently powerful agent could create some form of ever-growing agony by expanding subjects’ pain centres to maximise pain; and if the only limit is the point where most of the matter in the universe is part of someone’s pain centre, that seems incredibly scary. I sincerely hope there’s good reason to believe that a hypothetical “evil” superintelligence would run into diminishing returns quite quickly.
(You need a space between the `>` and the text being quoted to format it as a quote in Markdown.)
Sure, we can assume a malevolent super-intelligence could prevent people from going into shock and thus cause much more pain than otherwise.
But it’s not clear how (or even whether) we can quantify pain (or suffering). From the perspective of information processing (or something similar), it seems like there would probably be a maximum amount of non-disabling pain, i.e. a point at which pain acts as a ‘maximum priority override’ that focuses all energy and other resources on escaping it as quickly as possible. It also seems unclear why evolution would result in creatures able to experience pain more intensely than such a maximum.
Let’s assume pain has no maximum – I’d still expect a reasonable utility function to cap the (dis)utility of pain. If it didn’t, the (possible) torture of just one creature capable of experiencing arbitrary amounts/degrees/levels of pain would effectively be ‘Pascal’s hostage’ (something like a utility monster under the control of a malevolent super-intelligence).
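To make the ‘cap’ concrete, here is a minimal sketch of what a bounded disutility function could look like; the function name `capped_disutility` and the particular constants `cap` and `k` are purely illustrative assumptions, not anything principled. The point is just that, no matter how large the pain input gets, the disutility it contributes never exceeds a fixed bound:

```python
import math

# A hypothetical bounded disutility function: however large `pain` gets,
# the disutility it contributes never exceeds `cap` in magnitude.
def capped_disutility(pain: float, cap: float = 100.0, k: float = 0.01) -> float:
    return -cap * (1 - math.exp(-k * pain))

print(capped_disutility(10.0))     # ~ -9.5
print(capped_disutility(1_000.0))  # ~ -100.0 (approaching the cap)
print(capped_disutility(1e12))     # still ~ -100.0, not ~ -1e12
```

With something like this, even a being capable of ‘infinite pain’ can contribute at most `cap` to the overall calculus, so it can’t single-handedly dominate every other consideration.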
But yes, a malevolent super-intelligence, or even just one that’s not perfectly ‘friendly’, would be terrible and the possibility is incredibly scary to me too!
> I’d still expect a reasonable utility function to *cap* the (dis)utility of pain. If it didn’t, the (possible) torture of just one creature capable of experiencing arbitrary amounts/degrees/levels of pain would effectively be ‘Pascal’s hostage’
I suppose I never thought about that, but I’m not entirely sure how it’d work in practice. Since the AGI could never be 100% certain that the pain it’s causing is at its maximum, it might further increase pain levels, just to *make sure* that it’s hitting the maximum level of disutility.
> It also seems unclear why evolution would result in creatures able to experience pain more intensely than such a maximum.
I think part of what worries me is that, even if there were a “maximum” amount of pain, it’d be hypothetically possible for humans to be rewired to remove that maximum. I think I’d still be the same person experiencing the same consciousness *after* being rewired, which is somewhat troubling.
If the pain a superintelligence can cause scales linearly (or faster) with computational power, then the thought is even more terrifying.
Overall, you make some solid points that I wouldn’t have considered otherwise.
My point about ‘capping’ the (dis)utility of pain was that one (i.e. a person or mind that isn’t itself a malevolent super-intelligence) wouldn’t want to be vulnerable to being ‘held hostage’ were something like a malevolent super-intelligence in control of some other mind that could experience ‘infinite pain’. You probably wouldn’t want to sacrifice everything for a tiny chance of preventing the torture of a single being, even if that being were capable of experiencing infinite pain.
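As a toy illustration of why an uncapped disutility gets ‘held hostage’ by tiny probabilities: the numbers below are made up purely for illustration, and the 100.0 bound is just the illustrative cap from the hypothetical sketch above.

```python
# Made-up numbers, purely for illustration.
p_prevent = 1e-9           # tiny probability that sacrificing everything actually helps
huge_pain = 1e18           # stands in for "arbitrarily large" pain
everything_else = 1_000.0  # value of all the ordinary good we'd have to give up

# With uncapped (linear) disutility, the hostage scenario swamps everything:
uncapped_stake = p_prevent * huge_pain  # 1e9, far larger than everything_else
# With a capped disutility (bounded at, say, 100), it no longer can:
capped_stake = p_prevent * 100.0        # 1e-7, negligible next to everything_else

print(uncapped_stake > everything_else)  # True
print(capped_stake > everything_else)    # False
```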
I don’t think it’s possible, or even makes sense, for a mind to experience an infinite amount/level/degree of pain (or suffering). Infinite pain might be possible over an infinite amount of time, but that seems (at least somewhat) implausible, e.g. given that the universe doesn’t seem to be infinite, seems to contain a finite amount of matter and energy, and seems likely to end in an eventual heat death (and thus to be unable to support life or computation indefinitely).
Even granting that a super-intelligence could rewire human minds just to increase the amount of pain they can experience, a reasonable generalization is a super-intelligence creating (e.g. simulating) new minds (human or otherwise) with whatever capacity for pain it likes. That seems to me to be the same (general) moral/ethical catastrophe as your hypothetical(s).
But I don’t think these hypotheticals really alter the moral/ethical calculus with respect to our decisions; i.e. the possibility of the torture of minds that can experience infinite pain doesn’t automatically imply that we should avoid developing AGI or super-intelligences entirely. (For one, if infinite pain is possible, infinite joy/happiness/satisfaction might be too.)