Not being able to do that thing, yes, and you shouldn’t do it—unless you can obviate the harm.
The relevant scenario is one in which all possible actions lead to some harm somewhere. Suppose the AGI designed to cure cancer uses electrical power to run a molecular simulation; then it’s causing someone to die of respiratory illness from inhaling coal dust, or to die falling off a ladder while installing a solar panel, or to die in a mine, and so on. Suppose it doesn’t; then people are dying of cancer, and it’s abandoning the duty it cares deeply about.
Typically, this problem gets solved by not thinking about it, by rounding small numbers to zero, or by taxes. Consequentialist restrictive moralities operate by “taxes”: if you want to use that electricity that someone died for, you need to be using it for something good enough to offset that cost.
For example, coal costs about a hundred lives per TWh; American per capita power consumption is about 1.7 kW, and so we’re looking at about one death every 670 person-years of energy consumption. It’s a small number, but there are clear problems with rounding it down to zero: if we do, half of zero is still zero, and there’s no impetus to reduce the pollution or switch to something cleaner. And not thinking about it is even more dangerous!
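(A quick back-of-the-envelope check of that figure, as a sketch that takes the ~100 deaths/TWh and ~1.7 kW numbers above at face value; both are rough estimates, not precise data.)

```python
# Back-of-the-envelope check: person-years of average American consumption
# per coal-attributed death, using the rough figures quoted above.
deaths_per_twh = 100   # assumed mortality rate for coal generation
per_capita_kw = 1.7    # assumed average per-capita power draw

kwh_per_person_year = per_capita_kw * 24 * 365    # ~14,900 kWh per year
twh_per_person_year = kwh_per_person_year / 1e9   # ~1.5e-5 TWh per year
deaths_per_person_year = deaths_per_twh * twh_per_person_year

print(f"~{1 / deaths_per_person_year:.0f} person-years per death")  # ~670
```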
Only by the “Good Samaritan” moral code, in which this society is so dolefully steeped. I prefer Caveat emptor.
Do you think it is sensible to leave sharp knives around unattended infants? If so, yikes, and if not, why not?
Clearly the infant’s choices led to it cutting itself, but we wouldn’t call that the infant’s informed consent, because we don’t think infants can provide informed consent, because we don’t think they can be informed of the consequences of their actions. Most libertarian reasoning assumes that we are not dealing with infants, but instead with “responsible adults,” who can reason about their situations and make choices and “deserve what they get.”
But when we’re designing systems, there’s a huge information asymmetry, and the system designer often picks what information the user is looking at. To replace the field of human factors research with caveat emptor is profoundly anti-life and anti-efficiency.
And note that we haven’t touched at all on the Good Samaritan issue: the underlying relationship here is different from both the bystander-victim relationship and the merchant-merchant relationship. The designer-user relationship is categorically different, and we can strongly endorse moral obligations there without also endorsing them in the other two scenarios. (Hayek’s point about information costs seems relevant.)
(The relevance is that most AI-human interactions will be closer to designer-user or parent-infant than merchant-merchant, and what constitutes trickery and harm might look very different.)