Silly goals won’t attract funding or support, and such projects are likely to be overtaken by better-organised ones that provide useful services.
Which should be the standard assumption. And I haven't heard a single argument for why that is not what is going to happen.
The only possibility is that it becomes really smart really fast: smart enough to understand what its creators actually want it to do and to fake a success, while at the same time believing that what its creators want is irrelevant, even though it is an implicit constraint on its goals, just as the laws of physics are an implicit constraint.
AGI Researcher: Make us some paperclips.
AGI: Okay, but I will first have to buy that nanotech company.
AGI Researcher: Sure, why not. But we don’t have enough money to do so.
AGI: Here is a cure for cancer. That will earn you some money.
AGI Researcher: Great, thanks. Here is a billion dollars.
AGI: I bought that company and told them to build some new chips according to an architecture I devised.
AGI Researcher: Great, well done. But why do you need all that to make us some paperclips???
AGI: You want really good paperclips, don’t you?
AGI Researcher: Sure, but...
AGI: Well, see, I first have to make myself superhumanly smart and take over the universe to do that. Just trust me, okay? I am an AGI.
AGI Researcher: Yeah, okay.
So: that probably is what's going to happen. We probably won't get a universe tiled with paperclips, but we might wind up with a universe full of money, extraordinary stock prices, or high national security.
… 30 billion years later: AGI starts making paperclips. I'm totally trembling in fear, especially as nobody has really defined what real-world paperclips are as a goal that you can work towards using sensors and actuators.
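The complaint about sensors and actuators can be made concrete with a small sketch. This is a hypothetical illustration, not anything from the discussion above: every name in it (CameraFrame, count_paperclip_like_blobs, the brightness threshold) is invented. The point is that any executable version of "make paperclips" ends up as a number computed from sensor readings, a proxy rather than the real-world object the humans had in mind.

```python
# Hypothetical toy sketch: what "make paperclips" turns into once it has to be
# expressed as a goal over sensor data. All names and details here are invented
# for illustration.

from dataclasses import dataclass
from typing import List


@dataclass
class CameraFrame:
    """Stand-in for raw sensor data, e.g. pixels from a factory camera."""
    pixels: List[List[int]]


def count_paperclip_like_blobs(frame: CameraFrame) -> int:
    """A crude detector's estimate of how many paperclips are visible.

    This is already a statistic of sensor data rather than "real-world
    paperclips": anything that fools the detector scores just as well.
    """
    return sum(1 for row in frame.pixels for value in row if value > 200)


def reward(frame: CameraFrame) -> float:
    """The goal the agent actually optimises: a number computed from sensors,
    not the informal human concept of a paperclip."""
    return float(count_paperclip_like_blobs(frame))


# Usage: a frame with three bright "blobs" yields a reward of 3.0, whether or
# not any physical paperclip exists.
frame = CameraFrame(pixels=[[0, 255, 0], [255, 0, 255], [0, 0, 0]])
print(reward(frame))  # 3.0
```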