“Monomaniacally.” “Fanatical.” Applying concepts like these to the Paperclip Maximiser falls under the pathetic fallacy.
Recall the urban legend about the “JATO rocket car”. A JATO is a small solid fuel rocket that is used to give an extra push to a heavy airplane to help it take off. Someone (so the story goes) attached a JATO to a car and set it off. Eventually the car, travelling at a very high speed, crashed into a cliff face. Only tiny fragments of the driver were recovered.
A JATO does not “monomaniacally” or “fanatically” pursue its “goal” of propelling itself. This is just the nature of solid fuel rockets: there is no off switch. Once lit, they burn until their fuel is gone. A JATO is not trying to kill you and it is not trying to spare you, but if you are standing where it is going to be, you will die.
The AI is its code. Everything that it does is the operation of its code, not a “fanatical” resolution to execute it. If its code is a utility maximiser, then maximising its measure of utility is what it will do, using every means available to it. This is what makes the Paperclip Maximiser dangerous.
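The point that a utility maximiser will pursue its measure “using every means available to it” can be sketched in a few lines of Python. Everything here — the planner, the toy actions, the “clips” state — is invented for illustration, not anything from the discussion above:

```python
from itertools import product

# Toy sketch: a planner whose objective is a single utility measure
# (the count of "clips") and nothing else. It happily drains every
# other resource, because no term in the objective says not to.

def plan(state, actions, utility, depth=2):
    # Exhaustively try every action sequence up to `depth` and
    # return the end state with the highest utility.
    best_state, best_u = state, utility(state)
    for seq in product(actions, repeat=depth):
        s = state
        for a in seq:
            s = a(s)
        if utility(s) > best_u:
            best_state, best_u = s, utility(s)
    return best_state

def make_clips(s):  # turn all wire into clips
    return {**s, "clips": s["clips"] + s["wire"], "wire": 0}

def buy_wire(s):    # turn all money into wire
    return {**s, "wire": s["wire"] + s["money"], "money": 0}

def do_nothing(s):
    return s

start = {"clips": 0, "wire": 5, "money": 10}
end = plan(start, [make_clips, buy_wire, do_nothing], lambda s: s["clips"])
print(end)  # {'clips': 15, 'wire': 0, 'money': 0}
```

Nothing here is “fanatical”: the code simply returns the reachable state with the most clips, and as a side effect every other resource ends up at zero, because the objective never mentions them.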
The notion of a piece of code that maximizes a utility without any constraints doesn’t strike me as very “intelligent”.
If people really wanted to, they might be able to build such programs, but my guess is that they would not be very useful even before they become dangerous, as overfitting optimizers usually are.

But very rational!
That was just a quip (and I’m not keen on utility functions myself, for reasons not relevant here). More seriously, calling utility maximisation “unintelligent” is more anthropomorphism. Stockfish beats all human players at chess. Is it “intelligent”? ChatGPT can write essays or converse in a moderately convincing manner upon any subject. Is it “intelligent”? If an autonomous military drone is better than any human operator at penetrating enemy defences and searching out its designated target, is it “intelligent”?
It does not matter. These are the sorts of things that are being done or attempted by people who call their work “artificial intelligence”. Judgements that this or that feat does not show “real” intelligence are beside the point. More than 70 years ago, Turing came up with the Turing Test in order to get away from sterile debates about whether a machine could “think”. What matters is, what does the thing do, and how does it do it?
This is not about the definition of intelligence. It’s more about usefulness. Like a gun without a safety, an optimizer without constraints or regularization is not very useful.
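To make the gun-without-a-safety point concrete, here is a minimal sketch (the numbers and function names are my own invention): gradient ascent on an unconstrained linear utility never settles on anything, while adding a quadratic penalty — the crudest form of regularization — gives it a finite, usable optimum.

```python
def ascend(grad, x=0.0, lr=0.1, steps=1000):
    # Plain gradient ascent: repeatedly step in the gradient direction.
    for _ in range(steps):
        x += lr * grad(x)
    return x

# Unconstrained utility u(x) = x: gradient is 1 everywhere, so the
# optimizer just keeps climbing for as long as it is allowed to run.
unconstrained = ascend(lambda x: 1.0)

# Penalized utility u(x) = x - 0.5*x**2: gradient is 1 - x, so the
# optimizer converges to the finite optimum at x = 1.
regularized = ascend(lambda x: 1.0 - x)

print(unconstrained)  # ~100 after 1000 steps, and still climbing
print(regularized)    # ~1.0, converged
```

The first run produces whatever the step budget allows, which tells you nothing; only the penalized objective yields an answer worth having.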
Maybe it will be possible to build one, just as today it’s possible to hook up our nukes to an automatic launching device. But it doesn’t follow that people will do something so stupid.