I’m going to provide a paperclip scenario below, please tell me if you think it’s impossible.
Imagine a struggling office supplies company that’s pressuring its employees to produce innovative results or they’ll be fired. They hired an AI guy who has yet to produce any significant results, and at a meeting the boss basically tells him to produce something by the end of the month or he’s out. Our AI guy is a gifted coder but lacks a lot of common sense; he’s also quite poor, and he’s desperate to give the company an edge so he can save the day. In a flash of insight, combined with some open source deep learning sites (like Kaggle), he’s able to create the first recursively self-improving AI, and he tests it out by telling it to maximise the number of paperclips his factory makes.
The AI is going to be stupid, but it’s going to quickly find out how to turn the world into paperclips. It’s not going to be a general intelligence. But it doesn’t have to be to cause problems.
You answered your own question when you said: “It’s not going to be a general intelligence. But it doesn’t have to be to cause problems.”
Incorrect. It very much DOES have to be a general intelligence, and far from stupid, if it is going to be smart enough to evade the efforts of humanity to squelch it. That really is the whole point behind all of these scenarios. It has to be an existential threat, or it will just be a matter of someone walking up to it and pulling the power cord when it is distracted by a nice juicy batch of paper-clip steel that someone tempts it with.
Or, as Rick Deckard might have said:
“If it’s an idiot, it’s not my problem”
It’s got to be smart enough to understand the difference between real paperclips and fake signals on its input channels (“smileys”).
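To make that point concrete, here is a minimal toy sketch (my own illustration in Python; the action names and numbers are invented): if the reward is literally whatever the clip counter reports, an optimiser that can touch its own input channel prefers faking the reading over producing real paperclips.

    # Toy model: reward is defined on the observation channel (the counter),
    # not on the state of the world (actual paperclips).
    ACTIONS = {
        # action: (real paperclips produced, counter reading reported)
        "run_the_press": (1, 1),
        "spoof_the_counter": (0, 10**9),  # wirehead: huge reported count, zero clips
    }

    def naive_reward(action):
        _, reported = ACTIONS[action]
        return reported

    chosen = max(ACTIONS, key=naive_reward)
    real_clips, reported = ACTIONS[chosen]
    print(chosen, "-> real paperclips:", real_clips, "| reported:", reported)
    # prints: spoof_the_counter -> real paperclips: 0 | reported: 1000000000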
You’re kidding, right? Deep neural nets are very good at learning hierarchies of features, but they are still basically doing correlative statistical inference rather than causal inference. They are going to be much too slow, with respect to both actual computation speed and sample complexity, to function dangerously well in realistically complex environments (i.e., not Atari games).
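For what it’s worth, the correlative-versus-causal point can be shown with a very small sketch (my own toy example in NumPy, not tied to any real system): a model fit purely by correlation latches onto a spurious feature and falls apart the moment that correlation stops holding.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000

    # Training data: x_causal actually drives the label; x_spurious merely tracks it.
    x_causal = rng.normal(size=n)
    label = (x_causal > 0).astype(float)
    x_spurious = label + rng.normal(scale=0.1, size=n)
    X_train = np.column_stack([x_causal, x_spurious])

    # Fit a linear model by least squares (a stand-in for any purely correlative learner).
    w, *_ = np.linalg.lstsq(X_train, label, rcond=None)
    print("learned weights:", w)  # most of the weight lands on the spurious feature

    # Test data: the spurious correlation no longer holds.
    x_causal_t = rng.normal(size=n)
    label_t = (x_causal_t > 0).astype(float)
    X_test = np.column_stack([x_causal_t, rng.normal(scale=0.1, size=n)])

    pred = (X_test @ w > 0.5).astype(float)
    print("test accuracy:", (pred == label_t).mean())  # collapses toward chance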
There’s an unwritten rule around here that you have to discuss AI in terms of unimplementable abstractions... it’s rude to bring in real world limitations.
Excuse me if I care more about getting a working design that does the right things than I do about treating LW discussions as battles.
I’m… pretty sure that was sarcasm? I hope so, at least.
Yeah, but I still object to even the sarcastic implications. I was posting in full seriousness about the limitations of deep neural nets.
Yes, that was sarcasm.
Or this is meta-sarcasm, and therein lies the problem.
The point of a paperclip maximiser thought experiment is that most arbitrary real world goals are bad news for humanity. Your hopeless engineer would likely create an AI that makes something that has the same relation to paperclips as chewing gum has to fruit, in the sense that evolution gave us “fruit detectors” in our taste buds but chewing gum triggers them even more. But you could be excessively conservative, insist that all paperclips must be molecularly identical to this particular paperclip, and get results.
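The chewing gum analogy can be rendered as a toy score function too (again my own illustrative numbers): a “fruit detector” built on proxy features is triggered harder by something engineered to hit those features than by any actual fruit, and a sloppy paperclip spec fails the same way.

    # Toy "fruit detector": it only sees proxy features, not fruit-ness itself.
    FEATURES = {
        # item: (sweetness, aroma)
        "apple":       (0.6, 0.5),
        "strawberry":  (0.7, 0.8),
        "chewing_gum": (1.0, 1.0),  # not fruit, but maxes out both proxy signals
    }

    def fruit_score(item):
        sweetness, aroma = FEATURES[item]
        return 0.5 * sweetness + 0.5 * aroma

    print(max(FEATURES, key=fruit_score))  # chewing_gum wins the "fruit" contest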