With the rise of GPT-3, does anyone else feel that the situation in the field of AI is moving beyond their control?
This moment reminds me of AlphaGo, 2016. For me that was a huge wake-up call, and I set out to catch up on the neural networks renaissance. (Maybe the most worthy thing I did, in the years that followed, was to unearth work on applying supersymmetry in machine learning.)
Now everyone is dazzled and shocked again, this time by what GPT-3 can do when appropriately prompted. GPT-3 may not be a true “artificial general intelligence”, but it can impersonate one on the Internet. Its ability to roleplay as any specific person, real or fictional, is especially disturbing. An entity has appeared which simulates human individuals within itself, without having been designed to do so. It’s as if the human mind itself is now obsolete, swallowed up within, made redundant by, a larger and more fluid class of computational beings.
I have been a follower and advocate of the quest for friendly AI for a long time. When AlphaGo appeared, I re-prioritized, dusted off old thoughts about how to make human-friendly AI, thought of how they might manifest in the present world, and felt like I was still ahead of events, barely. I never managed to break into the top tiers of the local AI scene (e.g. when a state-level meetup on AI was created, I wanted to give a talk on AIXI and superintelligence, but it was deemed inappropriate), but I was still able to participate and feel potentially relevant.
GPT-3 is different. Partly it’s because the challenge to human intellectual capacity (and even to human identity) is more visceral here. AlphaGo and its siblings steamrolled humanity in the context of games, an area where we have long experience of being beaten by computers. But GPT-3 seems much more open-ended. It can already do anything and be anyone.
I also have new personal reasons for feeling unable to keep up, new responsibilities, though maybe they can be pursued in a “GPT-3-relevant” way. (I am the lifeline for a brilliant young transhumanist in another country. But AI is one of our interests, so perhaps we can adapt to this new era.)
So maybe the core challenge is just to figure out what it means to pursue friendly AI (or “alignment” of AI with human-friendly values) in the era of GPT-3. The first step is to understand what GPT-3 is, in terms of software, hardware, who has the code, who has access, and so on. Then we can ask where it fits into the known paradigms for benevolent superhuman AI.
For example, we have various forms of the well-known concept of an expected utility maximizer (EUM), which acts so as to make the best futures as likely as possible. GPT-3, meanwhile, starts with a prompt and generates verbiage, e.g. an essay. One could see this prompt-followed-by-response behavior as analogous to one step in an EU maximizer's interaction with the world. Those interactions are usually modelled as follows: the EUM gets data about the world (the prompt), performs an action (the response), gets new data about the world (a second prompt), and so on.
On the other hand, GPT-3's prompt in some ways seems more analogous to a goal or a value system, since it governs GPT-3's future activity; in terms of an EUM, that would be the utility function it uses to hedonically evaluate possible futures. Digging deeper into these two different analogies between GPT-3 and an EUM might help us adapt our legacy thinking about how to achieve friendly AI to a world that now has GPT-3 in it…
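To make the two analogies concrete, here is a minimal Python sketch. The `model.complete`, `environment.initial_observation`, and `environment.step` interfaces are hypothetical placeholders, not GPT-3's actual API; this is just one way of writing down the two framings, not an implementation of either.

```python
# Hypothetical interfaces: `model` and `environment` are assumed objects,
# standing in for a language model and the world it interacts with.

def eum_style_loop(model, environment, steps=3):
    """Analogy 1: each prompt is an observation, each completion an action."""
    observation = environment.initial_observation()
    transcript = []
    for _ in range(steps):
        action = model.complete(observation)      # prompt -> response
        observation = environment.step(action)    # the world returns new data
        transcript.append((action, observation))
    return transcript

def goal_conditioned_responses(model, goal_prompt, queries):
    """Analogy 2: the prompt acts as a standing goal that conditions every output."""
    return [model.complete(goal_prompt + "\n" + q) for q in queries]
```

In the first framing the prompt is transient input, one observation among many; in the second it plays the role of the fixed objective that shapes everything the system subsequently does.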
I think there's a more pressing question: how to position yourself so that you can influence the outcomes of AI development. Having the right ideas won't matter if your voice isn't heard by the major players in the field, namely the big tech companies.
Are there any examples of it impersonating people on the internet well enough to make money?