Doesn’t a concept such as “mortality” require some general intelligence to understand in the first place?
There’s no XML tag attached to living beings that says whether they are “alive” or “dead” at any given moment. We think it’s a simple binary judgment whether something is alive or dead, mainly because our brains have modules for distinguishing “animate” objects from “inanimate” objects (modules whose functioning we don’t understand and can’t yet replicate in computer code). But doctors know that the dividing line between “alive” and “dead” is far murkier than deciding whether a beam of laser light is closer to 500 nm or 450 nm in wavelength (a task that a narrow-intelligence AI could probably handle). Already the concept of “mortality” is a bit too advanced for any “narrow AI.”
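To make that contrast concrete, here is a minimal sketch in Python (the function names are hypothetical illustrations, not anything from a real system): the wavelength question reduces to a one-line rule, while the “alive or dead” question has no comparably crisp rule to write down.

```python
def closer_to_500nm(wavelength_nm: float) -> bool:
    """Trivial narrow task: is the measured wavelength nearer to 500 nm
    than to 450 nm? A one-line rule settles it."""
    return abs(wavelength_nm - 500.0) < abs(wavelength_nm - 450.0)


def is_alive(organism) -> bool:
    """There is no comparably crisp rule to put here: heartbeat, brain
    activity, and cellular metabolism can all disagree in edge cases,
    which is exactly the murkiness described above."""
    raise NotImplementedError("'alive' vs. 'dead' has no clean boundary to encode")
```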
It’s a bit like wanting to design a narrow intelligence to tackle the problem of mercury pollution in freshwater streams, and phrasing the command in the simplest way you can think of: “Computer: reduce the number of mercury atoms in the following three-dimensional GPS domain (say, the state of Ohio, from 100 feet below ground to 100 feet up in the air), while leaving all other factors untouched.”
The computer might respond with something to the effect of, “I cannot accomplish that because any method of reducing the number of mercury atoms in that domain will require re-arranging some other atoms upstream (such as the atoms comprising the coal power plant that is belching out tons of mercury pollution).”
So then you tell the narrow AI, “Okay, try to figure out how to reduce the number of mercury atoms in the domain, and you can modify SOME other atoms upstream, but nothing IMPORTANT.” Well, then we are back to the problem of needing a general intelligence to interpret things like the word “important.”
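Framed as code, the exchange above looks something like the sketch below. It is purely illustrative; `mercury_count`, `naive_constraint`, `relaxed_constraint`, and `is_important` are made-up names, not a real API. The naive constraint is physically unsatisfiable, and the relaxed one quietly moves all of the difficulty into the `is_important` predicate.

```python
from typing import Callable, Dict

# World states are represented here as plain dicts of facts -> values,
# just to make the structure of the two constraints visible.

def mercury_count(world: Dict[str, object]) -> int:
    """The quantity the narrow AI is told to reduce."""
    return int(world.get("mercury_atoms", 0))


def naive_constraint(before: Dict[str, object], after: Dict[str, object]) -> bool:
    """'Leave all other factors untouched': every non-mercury fact must be
    exactly the same afterward -- which no physically possible plan satisfies."""
    return all(after.get(k) == v for k, v in before.items() if k != "mercury_atoms")


def relaxed_constraint(before: Dict[str, object], after: Dict[str, object],
                       is_important: Callable[[str], bool]) -> bool:
    """'You can modify SOME other atoms, but nothing IMPORTANT': now the
    entire problem lives inside is_important(), a general, value-laden
    judgment we don't know how to write down."""
    changed = (k for k in before if after.get(k) != before[k])
    return not any(is_important(k) for k in changed)
```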
This is why we can’t just build an Oracle AI and command it, “Tell us a cure for X disease, leaving all other things in the world unchanged.” The computer might say, “I can’t keep everything else in the world the same and change just that one thing. To make the medical devices that you will need to cure this disease, you are going to have to build a factory, and you are going to have to employ workers to work in that factory, and you are going to have to produce the food to feed those workers, and you are going to have to transport that food, and you are going to have to divert some gasoline supplies to the transportation of that food, and that is going to change the worldwide price of gasoline by an average of 0.005 cents, which will produce a 0.000006% chance of a revolution in France....” and so on.
So you tell the computer, “Okay, just come up with a plan for curing X disease, and change as little as possible, and if you do need to change other things, try not to change things that are IMPORTANT, that WE HUMANS CARE ABOUT.”
And we are right back to the problem of having to be sure that we have successfully encoded all of human morality and values into this Oracle AI.
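One way to see where the difficulty concentrates is to write the relaxed command as a scoring rule. This is a hypothetical sketch, not anyone’s actual proposal: the `human_value_of` weights, which say how much each side effect matters, are doing all the work, and supplying them correctly amounts to encoding human values in full.

```python
from typing import Dict

def plan_score(cures_disease: bool,
               side_effects: Dict[str, float],
               human_value_of: Dict[str, float]) -> float:
    """Reward curing the disease, penalize disturbing things humans care
    about. Filling in human_value_of correctly for every possible side
    effect is the unsolved part."""
    benefit = 1.0 if cures_disease else 0.0
    disturbance = sum(human_value_of.get(effect, 0.0) * abs(delta)
                      for effect, delta in side_effects.items())
    return benefit - disturbance
```

A plan that nudges the worldwide gasoline price by a fraction of a cent should barely register in the penalty term, but the weights have to get every such case right in advance, which is just the value-encoding problem restated.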
Doesn’t a concept such as “mortality” require some general intelligence to understand in the first place?
Concepts are generally easier to encode for narrow intelligences. For an AI that designs drugs, “the heart stopped/didn’t stop” is probably a sufficient stand-in for mortality, because it is very hard for the system to get cunning with that definition within the narrow limits of drug design.
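As a toy illustration of that point (hypothetical names, no real simulator or trial data behind it), the proxy is blunt but serviceable precisely because a drug-design system has little room to game it:

```python
def acceptable_candidate(trial_result: dict) -> bool:
    """Crude mortality proxy for a drug-design AI: reject any candidate
    whose trial stopped the patient's heart. Within the narrow action
    space of proposing molecules, there is no obvious way for the system
    to get cunning with this definition."""
    return not trial_result.get("heart_stopped", False)
```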