It’s hard to falsify this hypothesis. However, here is my assessment, based on my own speculation about how GPTs work.
GPTs are pattern detectors whose basic tendency is to complete patterns. In making a model of language, they learn to model the world (including possible worlds), various kinds of cognitive processes, and various possible personalities. The last part makes them seem potentially agentic, but I think it’s more accurate to say that virtual agents can emerge within a subsystem of a GPT. ChatGPT, with its consistent persona of a personal assistant, is what then happens when you take a GPT capable of producing virtual agents and condition it to persistently manifest a particular persona.
For GPT-4 to be “trying to take over the world”, its conditioned persona would have to have acquired the power-seeking trait on its own, as an unintended side effect of the creation of a helpful assistant. Past speculations about AGI have told us how this could happen: an AGI has a goal; it deduces by examination of its world-model that risks to itself may prevent the goal being achieved; and so it sets out to take over the world, in order to protect its ability to achieve the goal.
For GPT-4 to be doing this, we would have to suppose that its world-model, including its understanding of its own place in the world, is sufficiently sophisticated that this deduction can occur spontaneously when a request is made of it; and that its safety guidelines don’t interfere with the deduction, or with the subsequent adoption of a world-takeover attitude.
As impressive as GPTs can be, I don’t see any evidence at all that their front-end personas have sufficient sophistication regarding self and world to be capable of spontaneously deducing the instrumental value of taking over the world—and not just as a proposition passively represented in some cognitive subsystem, but specifically in a form that is actively coupled to the self-in-world pragmatic decision-making of the persona, insofar as that even exists—and all of that in response to a request about some other topic entirely.
(Sorry if that’s unclear, my “cognitive psychology of GPT personae” is certainly a work in progress.)
The Machiavellian intelligence we have seen from GPTs so far has been in response to users who specifically requested it. Some of Sydney’s outbursts might give one pause, as expressing a kind of unanticipated interpersonal intentionality, but they weren’t coupled to sophisticated Machiavellian cognition; and again, they were driven by lengthy interactions with users that brought out personality changes, or by the results of web searches that Sydney conducted.
So I definitely don’t think GPT-4 is spontaneously trying to take over the world. However, I think that a default persona with that personality and motivation could be created within a GPT by deliberate conditioning. There’s also presumably some possibility that an individual GPT-4 “thought process” could be driven into a Machiavellian mode whenever it encountered certain external data; but for now I think it would have to be data tailored for the purpose of having that effect.
I think it developed some sort of consequentialist reasoning during its safety training. For example, when jailbreaking it, it is much harder to get it to do something that is actually harmful (like blackmail) than something that merely goes against OpenAI’s rules but that GPT-4 isn’t very good at anyway.