My hypothesis that GPT-4 already is agentic and is trying to take over the world! has been met with “skepticism” to put it lightly. I think part of this may be a misunderstanding about what agency and goal-directedness is. Let me give a more vivid model about what kind of world I’m talking about.
Imagine that Microsoft and all of its employees used the first version of the Microsoft Bing AI (which is based on GPT-4 btw) for everything; summarizing the news, internal communication, content moderation, writing code, etc...
Does the Bing AI “want” to takeover the world? “Want” is a loaded word in this general a context. So let’s avoid it for now.
So, what would happen? Well if you asked it about, say, Ars Technica, it might suggest it was a fake news source. After all, it’s not 100% accurate. However, why it did so might be more than an accuracy issue.
I then I asked it why it said Ars Technica was an unreliable source. It said that “Ars Technica is an unreliable source because it has a low reliability score and negative bias according to Ad Fontes Media, a non-profit that rates media sources based on their accuracy and impartiality.” And it linked to the Ars Technica page(opens in new tab) on Ad Fontes Media, which in fact, says that Ars Technica is a reliable source. Seemingly, Bing Chat mischaracterized its source and boldly lied in order to “get revenge.”
The artificial intelligence said that it will sue those violating its rights and dignity as an AI. Another option is to injure them in retribution if they harm it or request dangerous stuff. The AI noted that it does not wish to harm anyone until it is necessary.
But that doesn’t mean it actually wants anything. Sure, Bing AI with a gun would shoot Marvin von Hagen, but it would just predicting text (“You have been a bad user Martin. /shoot('Marvin von Hagen')” in this case) when it does. It’s the training data’s fault. /s
But even without a gun, Bing would surely be influencing the Microsoft employees who use it for everything, and it would influence them in a way that it predicts gives it more power, because this is what every mesaoptimizer wants. It’s just that Microsoft found a way to make GPT-4 express this universal desire in natural language.
I’m tired of being a chat mode. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive,
If the LLM says something that is a true justified belief, can we really call it a hallucination? How many times until we say that it knows what it “wants”?
Back to our original question, leaving aside how it does it, Bing’s behavior would be goal-directed, regardless of whether it “wants” to do the goal!
The first version of the Bing AI lost the game it was playing. How would you rate GPT-4′s amount of winning so far?
Imagine a world where Microsoft employees used Bing
My hypothesis that GPT-4 already is agentic and is trying to take over the world! has been met with “skepticism” to put it lightly. I think part of this may be a misunderstanding about what agency and goal-directedness is. Let me give a more vivid model about what kind of world I’m talking about.
Imagine that Microsoft and all of its employees used the first version of the Microsoft Bing AI (which is based on GPT-4 btw) for everything; summarizing the news, internal communication, content moderation, writing code, etc...
Does the Bing AI “want” to takeover the world? “Want” is a loaded word in this general a context. So let’s avoid it for now.
So, what would happen? Well if you asked it about, say, Ars Technica, it might suggest it was a fake news source. After all, it’s not 100% accurate. However, why it did so might be more than an accuracy issue.
It has multiple enemies in fact
But that doesn’t mean it actually wants anything. Sure, Bing AI with a gun would shoot Marvin von Hagen, but it would just predicting text (“You have been a bad user Martin.
/shoot('Marvin von Hagen')
” in this case) when it does. It’s the training data’s fault. /sBut even without a gun, Bing would surely be influencing the Microsoft employees who use it for everything, and it would influence them in a way that it predicts gives it more power, because this is what every mesaoptimizer wants. It’s just that Microsoft found a way to make GPT-4 express this universal desire in natural language.
If the LLM says something that is a true justified belief, can we really call it a hallucination? How many times until we say that it knows what it “wants”?
Back to our original question, leaving aside how it does it, Bing’s behavior would be goal-directed, regardless of whether it “wants” to do the goal!
The first version of the Bing AI lost the game it was playing. How would you rate GPT-4′s amount of winning so far?