The notion of a piece of code that maximizes a utility without any constraints doesn’t strike me as very “intelligent”.
If people really wanted to, they might be able to build such programs, but my guess is that they would not be very useful even before they became dangerous, as overfitting optimizers usually are.
But very rational!
That was just a quip (and I’m not keen on utility functions myself, for reasons not relevant here). More seriously, calling utility maximisation “unintelligent” is more anthropomorphism. Stockfish beats all human players at chess. Is it “intelligent”? ChatGPT can write essays or converse in a moderately convincing manner upon any subject. Is it “intelligent”? If an autonomous military drone is better than any human operator at penetrating enemy defences and searching out its designated target, is it “intelligent”?
It does not matter. These are the sorts of things that are being done or attempted by people who call their work “artificial intelligence”. Judgements that this or that feat does not show “real” intelligence are beside the point. More than 70 years ago, Turing came up with the Turing Test in order to get away from sterile debates about whether a machine could “think”. What matters is, what does the thing do, and how does it do it?
This is not about the definition of intelligence. It’s more about usefulness. Like a gun without a safety, an optimizer without constraints or regularization is not very useful.
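The overfitting point can be illustrated with a minimal sketch in NumPy (the data, polynomial degree, and penalty strength here are all illustrative choices, not anything from this discussion): an unconstrained least-squares fit with as many parameters as data points drives training error to essentially zero by memorising the noise, while the same fit with a small L2 (ridge) penalty gives up that perfect fit and generalises better.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: a smooth target observed with noise at 10 points.
x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train) + 0.3 * rng.standard_normal(10)
x_test = np.linspace(0.0, 1.0, 200)
y_test = np.sin(2 * np.pi * x_test)

# Degree-9 polynomial: 10 parameters for 10 points.
degree = 9
X_train = np.vander(x_train, degree + 1)
X_test = np.vander(x_test, degree + 1)

# Unconstrained least squares: interpolates the noisy data exactly.
w_plain, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# Ridge regression: the same fit plus an L2 penalty on the weights.
lam = 1e-3
w_ridge = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(degree + 1),
    X_train.T @ y_train,
)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

print("train MSE (unconstrained):", mse(w_plain, X_train, y_train))
print("test MSE  (unconstrained):", mse(w_plain, X_test, y_test))
print("test MSE  (ridge):        ", mse(w_ridge, X_test, y_test))
```

The unconstrained optimizer “wins” on the objective it was given (training error near zero) while losing on the thing we actually care about (test error), which is the sense in which it is powerful without being useful.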
Maybe it will be possible to build such a thing, just as today it’s possible to hook our nukes up to an automatic launching device. But it doesn’t follow that people will do anything so stupid.