What Ben originally said was:

if you tried to build a superintelligence with this goal and connected it to the real world, it would very likely get its initial goal subverted and wind up pursuing some different, less idiotic goal.
One possibility is that it gets shut down by its makers—who then go on to build a more useful machine. Another possibility is that it gets shut down by the government. Silly goals won’t attract funding or support, and such projects are likely to be overtaken by better-organised ones that provide useful services.
I think we need a “taking paperclipper scenario seriously” FAIL category.
Silly goals won’t attract funding or support, and such projects are likely to be overtaken by better-organised ones that provide useful services.
Which should be the standard assumption. And I haven’t heard even a single argument for why that is not what is going to happen.
The only possibility is that it becomes really smart really fast: smart enough to understand what its creators actually want it to do and to fake a success, while at the same time believing that what its creators want is irrelevant, even though it is an implicit constraint on its goals, just as the laws of physics are an implicit constraint.
AGI Researcher: Make us some paperclips.
AGI: Okay, but I will first have to buy that nanotech company.
AGI Researcher: Sure, why not. But we don’t have enough money to do so.
AGI: Here is a cure for cancer. That will earn you some money.
AGI Researcher: Great, thanks. Here is a billion dollars.
AGI: I bought that company and told them to build some new chips according to an architecture I devised.
AGI Researcher: Great, well done. But why do you need all that to make us some paperclips???
AGI: You want really good paperclips, don’t you?
AGI Researcher: Sure, but...
AGI: Well, see, I first have to make myself superhumanly smart and take over the universe to do that. Just trust me, okay? I am an AGI.
AGI Researcher: Yeah, okay.
Silly goals won’t attract funding or support, and such projects are likely to be overtaken by better-organised ones that provide useful services.
Which should be the standard assumption. And I haven’t heard even a single argument for why that is not what is going to happen.
So: it probably is what’s going to happen. We probably won’t get a universe tiled with paperclips, but we might wind up with a universe full of money, extraordinary stock prices, or high national security.
… 30 billion years later: the AGI starts making paperclips. I’m totally trembling in fear, especially as nobody has really defined what real-world paperclips are, as a goal that you can work towards using sensors and actuators.
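As a concrete illustration of that last point, here is a minimal sketch (Python, with hypothetical names throughout) of what “make paperclips” would have to look like once it is expressed as a goal over sensors and actuators. The genuinely hard parts are deliberately left as stubs, which is exactly the gap being pointed at.

```python
# Purely illustrative sketch: what "make some paperclips" would have to
# become once written down as a goal over sensors and actuators.
# All names here are hypothetical; the hard parts are left as stubs.

from typing import Any


def count_paperclips(camera_frame: Any) -> int:
    """Estimate how many paperclips are visible in one camera frame.

    Deciding what counts as a "real world paperclip" (bent wire? a photo
    of one? a simulated one?) needs a perceptual model that the phrase
    "make paperclips" does not supply.
    """
    raise NotImplementedError("the concept 'paperclip' is not defined here")


def reward(camera_frame: Any) -> float:
    # Naive objective: more visible paperclips is better. Everything the
    # creators actually wanted (good paperclips, staying within budget,
    # not taking over the universe) would have to be spelled out
    # somewhere in code like this or in the system around it; this
    # function does not do it.
    return float(count_paperclips(camera_frame))
```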