The whole idea that computers “do exactly what we say” seems highly dubious. Look at the iPad and Kindle, for example. You have to be careful about building on top of such a premise—since intelligent machines will probably not be so literal-minded.
I’m really annoyed that the above comment is heavily downvoted; yes, it goes against local folk beliefs, but it’s not straightforwardly wrong and I can see many arguments that suggest it’s the sensible default belief.
I don’t really see why it would go against “local folk beliefs”.
It’s surely widely recognised that many computers don’t literally do what their users tell them to—but instead obey a bunch of layers of system software—which may or may not have the user’s interests in mind.
As for super-intelligent machines being “literal-minded”, that would go against a long trend towards the use of higher level languages, and computers adapting to humans (rather than the other way around). Nobody is going to be aiming at a superintelligence which is autistic in this department.
[I]ntelligent machines will probably not be so literal-minded.
This is a variation of the “Superintelligent AI will do what you mean, not what you literally say; it would have to be pretty non-superintelligent to screw that up” argument.
The counter-argument is: the person making the request may not understand the full implications of “what they really mean”. The AI needs to be able to protect against bad unintended outcomes even of correctly interpreted requests. Because a superintelligent AI is very powerful, the bad outcomes could be very bad indeed. To deal with this, the AI has to understand “what we really want”, which is tricky, since most of the time we don’t even know what that is in any great detail.
This is a variation of the “Superintelligent AI will do what you mean, not what you literally say; it would have to be pretty non-superintelligent to screw that up” argument.
...except that my comments were fine, while the position you are likening them to is completely daft. That doesn’t seem entirely fair. Maybe you thought I was making that daft argument—in which case, perhaps revisit the situation now that you have heard me state that I wasn’t.
I re-read your comment, but I’m still not sure what you’re driving at. Can you elaborate a little further?