This is not Musk’s field of expertise. I do not give his words special weight.
The fact that he can sit in on some cutting edge tech demos, or even chat with CEOs, still doesn’t make him an expert.
I have a technical background in AI; there are still massive hurdles to overcome, and not 5-10 year hurdles. Nothing from DeepMind will “escape onto the internet” any time soon. It is very much grounded in “narrow AI” technologies like machine learning.
I feel pretty confident calling him a Cassandra.
I agree with the rest of your comment, but calling him a “Cassandra” means “He’s right, but no-one will believe him,” and I hope that isn’t what you meant!
An applicable morality tale here would be the boy who cried wolf, if Musk hadn’t retracted his post. I don’t remember whether the boy had a name. (Elon Musk: Inverse Cassandra.)
Stöffler might have the best name among those who failed to update properly.
Well, his comment was deleted, possibly by him, so we should take that into account—maybe he thought he was being a bit overly Cassandra-like too.
The other thing to remember is that Musk’s comments reach a somewhat different audience than the usual one where AI risk is concerned. So it’s at least somewhat relevant to see the perspective of the person communicating with those people.
I think it would actually be helpful if researchers ran more experiments with AGI agents showing what could go wrong and how to deal with such error conditions. I don’t think that the “social sciences” approach to that works.
This misses the basic problem: most of the ways things can go seriously wrong would occur only after the AGI is already an AGI, and once they have happened, one cannot recover.
More concretely, what experiments, in your view, should they be doing?
Because life isn’t a third-grade science fiction movie, where the super scientists who program AI agents are at the same time so incompetent that their experiments break out of the lab and kill everyone. :) Not going to happen. Sorry!
This seems to be closer to an argument from ridicule than an argument with content. No one has said anything about “super scientists.” I am, however, mildly curious whether you are familiar with the AI Box experiment. Are you claiming that AIs aren’t going to become effectively powerful, or are you claiming that you inherently trust that safeguards will be sufficient? Note that these are not the same thing.
Wow, that’s clearly foolish. Sorry. :) I mean I can’t stop laughing so I won’t be able to answer. Are you people retarded or something? Read my lips: AI DOES NOT MEAN FULLY AUTONOMOUS AGENT.
And the AI Box experiment is more bullshit. I can PROGRAM an agent so that it never walks out of a box. It never wants to. Period. Imbeciles. You don’t have to “imprison” any AI agent.
So, no, because it doesn’t have to be fully autonomous.
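For what it’s worth, a minimal sketch of the kind of hard constraint being claimed here, in Python and with entirely made-up names (BoxedAgent, ALLOWED_ACTIONS), looks like this: the agent’s action space simply does not contain a “leave the box” action, so escape is never something it can select, let alone want.

```python
# Toy sketch only; nothing here comes from a real AI system.
ALLOWED_ACTIONS = {"read_input", "compute", "write_output"}

class BoxedAgent:
    def __init__(self, policy):
        self.policy = policy  # any function mapping an observation to an action name

    def step(self, observation):
        proposed = self.policy(observation)
        # The constraint is enforced outside the policy: anything not on the
        # whitelist is dropped, so "walk out of the box" is not an expressible
        # action for this agent, whatever the policy proposes.
        return proposed if proposed in ALLOWED_ACTIONS else "noop"
```

Whether that kind of wrapper still holds once systems modify themselves or design their successors is exactly what the replies below take issue with.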
For sure. But fully autonomous agents are a goal a lot of people will surely be working towards, no? I don’t think anyone is claiming “every AI project is dangerous”. They are claiming something more like “AI with the ability to do pretty much all the things human minds do is dangerous”, with the background presumption that as AI advances it becomes more and more likely that someone will produce an AI with all those abilities.
Again: for sure, you can program an agent so that it never wants to walk out of its box, but that isn’t the point at issue.
One exciting but potentially scary scenario involving AI is this: we make AI systems that are better than us at making AI systems, set them to work designing their successors, let those successors design their successors, and so on. The end result (hopefully): a dramatically better AI than we could hope to make on our own. Another, closely related scenario: we make AI systems that have the ability to reconfigure themselves by improving their own software and maybe even adjusting their hardware.
In any of these cases, you may be confident that the AI you initially built doesn’t want to get out of whatever box you put it in. But how sure are you that after 20 iterations of self-modification, or of replacing an AI by the successor it designed, you still have something that doesn’t want to get out of the box?
There are ways to avoid having to worry about that. We can just make AI systems that neither self-modify nor design new AI systems, for instance. But if we are ever to make AIs smarter than us, the temptation to use that smartness to make better AIs will be very strong, and it only requires one team to try it to expose us to any risks that might ensue.
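A toy illustration of that worry, under the purely hypothetical assumption that successor design can be modelled as a function the current system applies to itself (design_successor below is a stand-in, not a real API):

```python
# Toy sketch, not a real architecture: the hard constraint exists in
# generation 0, but its survival depends entirely on what
# design_successor happens to preserve.

def run_generations(initial_agent, design_successor, n=20):
    agent = initial_agent
    for _ in range(n):
        # Unless "never acts outside the box" is provably preserved by
        # design_successor, it can silently disappear at any step here.
        agent = design_successor(agent)
    return agent
```

The safety question then attaches to design_successor (does it provably preserve the constraint?), not to the generation-0 agent you originally vetted.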
(One further observation: telling people they’re stupid and you’re laughing at them is not usually effective in making them take your arguments more seriously. To some observers it may suggest that you are aware there’s a weakness in your own arguments. (“Argument weak; shout louder.”))
I think gjm responded pretty effectively, so I’ll just note that if you want to have a dialogue with other humans, it really isn’t helpful to spend your time insulting them. It makes them less likely to listen, it makes you less likely to listen yourself (since it sets up a mental block where admitting you were wrong becomes cognitively unpleasant), and it makes bystanders who are reading less likely to take your ideas seriously.
By the way, Eray, you claimed back last November here that 2018 was a reasonable target for “trans-sapient” entities. Do you still stand by that?
Are you talking about what he’s talking about—“risk of something seriously dangerous happening”—or are you talking about AGI?
Because I can easily imagine how a narrow AI technology could do a lot of damage, particularly if humans intend it to.
Well, in terms of out-of-control software produced by an AI company, I feel the two risks, ‘something dangerous’ and AGI, are pretty closely linked.
Could more limited AI tech make a more damaging computer virus or cause an unexpected confidential data leak? Sure, but that’s not the issue at hand.
The most advanced AI today takes input and creates output. It is strictly Oracle AI with nothing present in its architecture that could circumvent that. I don’t see that changing anytime soon.
You’re free to disregard those more limited dangers, but I’m not sure Elon Musk is doing that.
A more damaging computer virus or a confidential data leak are only two of the possible worries. If a narrow AI simply helps black-market chemists find more novel psychoactives than regulation can ever hope to handle, or if bots eliminate just 10% of jobs (say in transportation and retail, to name just the most obvious), leading to massive societal unrest, or if they get better at solving captchas than humans are (which would lead to a massive crisis in anonymous communication and everything that depends on it)… all of these would make Musk’s prediction true in my book.
But these are just technological issues comparable to other mundane ones, just as 3D printing could make it easy to create weapons, or as the rise of the automobile created an enormous new cause of death and injury. There’s no reason to think they would be outside the scope of ordinary policy-making to handle.
Also, solving captchas is already pretty damn easy. A combination of algorithmic methods and crowdsourcing makes it quite cheap, especially for sites using older/easier captcha versions. A captcha is not a security plan; it’s a speed bump that’s getting easier to pass all the time (but still, no crisis will result from this).
You seem to be much confused :-)
Cleared up my grammar—was that the symptom of the perceived confusion, or do you doubt that much depends on anonymous communication?
How would breaking captchas break anonymous communications?
Some powerful agents (say secret services, or the government of… let’s say China) would benefit greatly from disrupting anonymous electronic communication as a whole, because that’d force electronic communication to occur in a non-anonymous fashion. People could still encrypt, but it’d at least be known who talked to whom, and that’s the kind of information that’s apparently valued at billions of dollars and a couple of civil rights. Correct?
But how could you do that? Thoroughly anonymized peer-to-peer networks built to defy surveillance (such as Freenet) appear to make de-anonymizing communication very, very, very hard. If you kill or severely impede less-than-perfect anonymization services such as Tor, people who value anonymity can just migrate to services such as Freenet, and your plan to disrupt anonymous electronic communication has backfired. Correct?
But what you can do is attack not the anonymity but the communication inside it. All you need to do is flood the anonymous medium with disruptive pseudo-communication. Spam is the obvious example, though (especially if there are web-of-trust-like structures between the anonymization layer and the actual communication) you can’t make your bots too easy to identify. But as long as it’s still possible at all, you can simply throw in more and more bots.
How do you identify bots as such? You do Turing tests of course. How do you identify lots and lots of bots as such? You do completely automated Turing tests, or Captchas. Not necessarily the ones we have, which are apparently somewhat solvable with the current state of machine learning, but better ones. Captchas have already improved, because they had to. Surely there can be better ones, or sites can start to require perfect performance on ten different Captchas at once for acceptance as a non-bot, or charge (even anonymously, using something like bitcoin) for the privilege of getting to take the Captcha. But once you get to the level where narrow AIs can solve Captchas as successfully as humans, the floodgates are open.
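As a rough sketch of that escalation (all names and numbers hypothetical, not a description of any real service), the admission rule a site or node could apply might look like this:

```python
# Toy admission gate: demand flawless performance on several independent
# captchas, or an (anonymous) fee that makes mass botting expensive.

def admit_as_human(captcha_results, paid_fee=False, required=10):
    solved_all = (len(captcha_results) >= required
                  and all(captcha_results[:required]))
    return solved_all or paid_fee
```

The point above stands either way: once captcha-solving bots match human accuracy, the solved_all test stops discriminating and only the fee (or giving up anonymity) is left.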
And then anyone who benefits from disrupting all anonymous electronic communication can—and will—do so. Non-anonymity will be promoted as “a small price to pay” to get rid of the bot plague, and everyone will live happily ever after—except those in the vast majority of countries that do not have a First Amendment and are scared of their governments for very good reasons. They’ll retreat into non-electronic communication of course, but that can’t be the way forward, can it?
Your argument is basically that anonymous networks can be spammed into uselessness. That looks theoretically possible but practically difficult; still, that’s not the main problem with your argument. The biggest hole, from my point of view, is that you think captchas are a good (or even the only) anti-spam measure. They are not.
And, of course, email is a pseudonymous P2P network which used to have a large spam problem and which, by now, has largely solved it.
Here is a good write-up of how spam wars work in real life.
Spam wars in real life use mechanisms that don’t work in fully anonymous networks like Freenet. You can’t filter by IP in a network without IPs.
Captchas are obviously not a good (or even the only) anti-spam measure. But inside anonymous networks, they’re one of the few things that work. Webs of Trust, which I explicitly mentioned, are another—they just don’t scale well.
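For concreteness, a minimal sketch of the kind of web-of-trust check meant here, assuming a toy trust graph stored as a dict mapping each node to the set of nodes it trusts (real deployments are more elaborate): a message is accepted only if its sender is reachable from you within a few trust hops, which is also why the approach strains as the network grows, since every participant has to maintain and evaluate these edges personally.

```python
# Toy web-of-trust check; the graph format and hop limit are assumptions.
from collections import deque

def is_trusted(trust_graph, me, sender, max_hops=3):
    """Accept a sender only if reachable from `me` within `max_hops` trust edges."""
    queue, seen = deque([(me, 0)]), {me}
    while queue:
        node, depth = queue.popleft()
        if node == sender:
            return True
        if depth < max_hops:
            for neighbour in trust_graph.get(node, ()):
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append((neighbour, depth + 1))
    return False
```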
DeepMind is very definitely AGI in the sense of the domain of problems its learners can learn and its agents can solve. If DeepMind is easily controlled and not very dangerous, that’s not evidence for AGI being further away than we thought before we looked at DeepMind, it’s evidence for AGI being more easily controlled than we thought before we looked at DeepMind.
Real AGI was never going to look like a magic genie, so we should never fault real-life AI work for failing to be one.