Didn’t you realize this yourself back in 2012?
I didn’t realize then that disutility of human-built AI can be much larger than utility of FAI, because pain is easier to achieve than human utility (which doesn’t reduce to pleasure). That makes the argument much stronger.
This argument doesn’t actually seem to be in the article that Kaj linked to. Did you see it somewhere else, or come up with it yourself? I’m not sure it makes sense, but I’d like to read more if it’s written up somewhere. (My objection is that “easier to achieve” doesn’t necessarily mean the maximum value achievable is higher. It could be that it would take longer or more effort to achieve the maximum value, but the actual maximums aren’t that different. For example, maybe the extra stuff needed for human utility (aside from pleasure) is complex but doesn’t actually cost much in terms of mass/energy.)
The argument somehow came to my mind yesterday, and I’m not sure it’s true either. But do you really think human value might be as easy to maximize as pleasure or pain? Pain is only about internal states, and human value seems to be partly about external states, so it should be way more expensive.
One of the more crucial points, I think, is that positive utility is – for most humans – complex and its creation is conjunctive. Disutility, in contrast, is disjunctive. Consequently, the probability of creating the former is smaller than that of creating the latter – all else being equal (of course, all else is not equal).
In other words, the scenarios leading towards the creation of (large amounts of) positive human value are conjunctive: to create a highly positive future, we have to eliminate (or at least substantially reduce) physical pain and boredom and injustice and loneliness and inequality (at least certain forms of it) and death, etc. etc. etc. (You might argue that getting “FAI” and “CEV” right would accomplish all those things at once (true), but getting FAI and CEV right is, of course, a highly conjunctive task in itself.)
In contrast, disutility is much more easily created and essentially disjunctive. Many roads lead towards dystopia: sadistic programmers, failing AI safety wholesale (or “only” value-loading, or extrapolation, or stable self-modification), a totalitarian regime taking over, etc.
It’s also not a coincidence that even the most untalented writer with the most limited imagination can conjure up a convincing dystopian society. Envisioning a true utopia in concrete detail, on the other hand, is nigh impossible for most human minds.
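To make the conjunctive/disjunctive asymmetry concrete, here is a toy calculation (my own illustration, not an estimate: the counts and probabilities are arbitrary placeholders, and the independence assumption is unrealistic):

```python
# Toy model of the asymmetry: a highly positive future requires getting *all* of
# n requirements right (conjunctive), while a dystopia only requires *one* of m
# failure modes to occur (disjunctive). Numbers are arbitrary placeholders.

p_requirement = 0.9   # chance of handling any single requirement (pain, boredom, death, ...)
n_requirements = 10   # how many such requirements there are

q_failure = 0.1       # chance of any single failure mode (sadistic programmers, botched value loading, ...)
m_failures = 10       # how many such failure modes there are

p_highly_positive = p_requirement ** n_requirements     # all must succeed
p_dystopia = 1 - (1 - q_failure) ** m_failures          # any one suffices

print(f"P(highly positive future) ≈ {p_highly_positive:.2f}")  # ≈ 0.35
print(f"P(dystopia)               ≈ {p_dystopia:.2f}")         # ≈ 0.65
```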
Footnote 10 of the above-mentioned s-risk article makes a related point (emphasis mine):
“[...] human intuitions about what is valuable are often complex and fragile (Yudkowsky, 2011), taking up only a small area in the space of all possible values. In other words, the number of possible configurations of matter constituting anything we would value highly (under reflection) is arguably smaller than the number of possible configurations that constitute some sort of strong suffering or disvalue, making the incidental creation of the latter ceteris paribus more likely.”
Consequently, UFAIs such as paperclippers are more likely to incidentally create large amounts of disutility than utility (factoring out acausal considerations), e.g. because creating simulations is instrumentally useful for them.
Generally, I like how you put it in your comment here:
“In terms of utility, the landscape of possible human-built superintelligences might look like a big flat plain (paperclippers and other things that kill everyone without fuss), with a tall sharp peak (FAI) surrounded by a pit that’s astronomically deeper (many almost-FAIs and other designs that sound natural to humans). The pit needs to be compared to the peak, not the plain. If the pit is more likely, I’d rather have the plain.”
Yeah. In a nutshell, supporting generic x-risk-reduction (which also reduces extinction risks) is in one’s best interest if and only if one’s own normative trade-ratio of suffering vs. happiness is less suffering-focused than one’s estimate of the ratio of expected future happiness to suffering (feel free to replace “happiness” with utility and “suffering” with disutility). If one is more pessimistic about the future, or if one needs large amounts of happiness to trade off small amounts of suffering, one should focus on s-risk-reduction instead. Of course, this simplistic analysis leaves out issues like cooperation with others, neglectedness, tractability, moral uncertainty, acausal considerations, etc.
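A minimal sketch of that condition as I’d formalize it (the names and numbers are mine, purely illustrative):

```python
# Sketch of the trade-ratio condition above. "normative_tradeoff" is how many units of
# happiness you require to outweigh one unit of suffering; the expected values are your
# empirical estimates about the future. Ignores cooperation, tractability, moral
# uncertainty, acausal considerations, etc., as noted above.

def prefer_generic_x_risk_reduction(normative_tradeoff: float,
                                    expected_happiness: float,
                                    expected_suffering: float) -> bool:
    empirical_ratio = expected_happiness / expected_suffering
    return empirical_ratio > normative_tradeoff

# Someone who needs 10 units of happiness to offset 1 unit of suffering, but expects the
# future to contain 100x as much happiness as suffering, supports generic x-risk-reduction:
print(prefer_generic_x_risk_reduction(10.0, 100.0, 1.0))  # True
# With a more pessimistic forecast (only 5x as much happiness), they should focus on
# s-risk-reduction instead:
print(prefer_generic_x_risk_reduction(10.0, 5.0, 1.0))    # False
```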
Do you think that makes sense?
Yeah, I also had the idea about utility being conjunctive and mentioned it in a deleted reply to Wei, but then realized that Eliezer’s version (fragility of value) already exists and is better argued.
On the other hand, maybe the worst hellscapes can be prevented in one go, if we “just” solve the problem of consciousness and tell the AI what suffering means. We don’t need all of human value for that. Hellscapes without suffering can also be pretty bad in terms of human value, but not quite as bad, I think. Of course solving consciousness is still a very tall order, but it might be easier than solving all philosophy that’s required for FAI, and it can lead to other shortcuts like in my recent post (not that I’d propose them seriously).
Some people at MIRI might be thinking about this under the heading of nonperson predicates. (Eliezer’s view on which computations matter morally is different from the one endorsed by Brian, though.) And maybe it’s important not to limit FAI options too much by preventing mindcrime at all costs – if there are benefits against other very bad failure modes (or – cooperatively – just increased controllability for the people who care a lot about utopia-type outcomes), maybe some mindcrime in the early stages to ensure goal-alignment would be the lesser evil.
Human disutility includes more than just pain too. Destruction of humanity (the flat plain you describe) carries a great deal of negative utility for me, even if I disappear without feeling any pain at all. There’s more disutility if all life is destroyed, and more still if the universe as a whole is destroyed… I don’t think there’s any fundamental asymmetry. Pain and pleasure are the most immediate ways of affecting value, and probably the ones that can be achieved most efficiently in computronium, so external states probably don’t come into play much at all if you take a purely utilitarian view.
Our values might say, for example, that a universe filled with suffering insects is very undesirable, but a universe filled with happy insects isn’t very desirable. More generally, if our values are a conjunction of many different values, then it’s probably easier to create a universe where one is strongly negative and the rest are zero than a universe where all are strongly positive. I haven’t seen the argument written up; I’m trying to figure it out now.
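One toy way I’m picturing it (purely illustrative, my own construction): let the positive side of value be bottlenecked by the weakest component, while negative components count on their own:

```python
# Toy value function for the conjunction idea: the upside is capped by the weakest
# component (all values must be satisfied at once), while any strongly negative
# component produces disvalue by itself.

def toy_value(components):
    positive_part = min(max(c, 0.0) for c in components)   # conjunctive upside
    negative_part = sum(c for c in components if c < 0)    # disjunctive downside
    return positive_part + negative_part

print(toy_value([1.0, 1.0, 1.0, 1.0]))   #  1.0 -- every component has to be high
print(toy_value([1.0, 1.0, 1.0, 0.0]))   #  0.0 -- one missing component zeroes the upside
print(toy_value([0.0, 0.0, 0.0, -1.0]))  # -1.0 -- one bad component is enough for disvalue
```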