This misses the basic problem: most of the ways things can go seriously wrong are things that would only occur after the AGI is already an AGI, and once they have happened there is no way to recover.
More concretely, what experiment in your view should they be doing?
Because life isn’t a third grade science fiction movie, where the super scientists who program AI agents are at the same time so incompetent that their experiments break out of the lab and kill everyone. :) Not going to happen. Sorry!
Because life isn’t a third grade science fiction movie, where the super scientists who program AI agents are at the same time so incompetent that their experiments break out of the lab and kill everyone.
This seems to be closer to an argument from ridicule than an argument with content. No one has said anything about “super scientists”. I am, however, mildly curious whether you are familiar with the AI Box experiment. Are you claiming that AIs aren’t going to become effectively powerful, or are you claiming that you inherently trust that safeguards will be sufficient? Note that these are not the same thing.
Wow, that’s clearly foolish. Sorry. :) I mean I can’t stop laughing so I won’t be able to answer. Are you people retarded or something? Read my lips: AI DOES NOT MEAN FULLY AUTONOMOUS AGENT.
And the AI Box experiment is more bullshit. I can PROGRAM an agent so that it never walks out of a box. It never wants to. Period. Imbeciles. You don’t have to “imprison” any AI agent.
So, no, because it doesn’t have to be fully autonomous.
For sure. But fully autonomous agents are a goal a lot of people will surely be working towards, no? I don’t think anyone is claiming “every AI project is dangerous”. They are claiming something more like “AI with the ability to do pretty much all the things human minds do is dangerous”, with the background presumption that as AI advances it becomes more and more likely that someone will produce an AI with all those abilities.
I can PROGRAM an agent so that it never walks out of a box. It never wants to.
Again: for sure, but that isn’t the point at issue.
One exciting but potentially scary scenario involving AI is this: we make AI systems that are better than us at making AI systems, set them to work designing their successors, let those successors design their own successors, and so on. End result (hopefully): a dramatically better AI than we could hope to make on our own. Another, closely related scenario: we make AI systems that can reconfigure themselves by improving their own software and perhaps even adjusting their own hardware.
In any of these cases, you may be confident that the AI you initially built doesn’t want to get out of whatever box you put it in. But how sure are you that after 20 iterations of self-modification, or of replacing an AI by the successor it designed, you still have something that doesn’t want to get out of the box?
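To put a rough number on that worry (this is my own back-of-the-envelope illustration, not anything from the argument above, and the 99% figure is purely hypothetical): if each round of self-modification independently preserves the original goal with probability p, the chance the goal survives n rounds is p^n, which erodes surprisingly fast.

```python
# Back-of-the-envelope sketch: how quickly goal preservation erodes if each
# round of self-modification keeps the original goal only with probability p.
# The 0.99 figure below is purely hypothetical, chosen for illustration.

def survival_probability(p_per_round: float, rounds: int) -> float:
    """Probability the original goal is still intact after `rounds` modifications,
    assuming each round independently preserves it with probability p_per_round."""
    return p_per_round ** rounds

if __name__ == "__main__":
    p = 0.99  # hypothetical per-round chance the goal is preserved
    for n in (1, 5, 20, 100):
        prob = survival_probability(p, n)
        print(f"after {n:3d} rounds: {prob:.1%} chance the goal is unchanged")
```

With these made-up numbers, a system that is 99% reliable per round has only about an 82% chance of still wanting to stay in its box after 20 rounds, and about 37% after 100.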
There are ways to avoid having to worry about that. We can just make AI systems that neither self-modify nor design new AI systems, for instance. But if we are ever to make AIs smarter than us, the temptation to use that smartness to make better AIs will be very strong, and it only requires one team to try it to expose us to any risks that might ensue.
(One further observation: telling people they’re stupid and you’re laughing at them is not usually effective in making them take your arguments more seriously. To some observers it may suggest that you are aware there’s a weakness in your own arguments. (“Argument weak; shout louder.”))
I think gjm responded pretty effectively, so I’m just going to note that if you want to have a dialogue with other humans, it really isn’t helpful to spend your time insulting them. It makes them less likely to listen, it makes one less likely to listen oneself (since it sets up a mental block where it is cognitively unpleasant to admit one was wrong when one was), and it makes bystanders who are reading less likely to take your ideas seriously.
By the way Eray, you claimed back last November here that 2018 was a reasonable target for “trans-sapient” entities. Do you still stand by that?