I find that GPT-3’s capabilities are highly context-dependent: it’s important to get a “smart” instance of GPT-3.
I’ve been experimenting with GPT-3 quite a lot recently. With a certain amount of rerunning (on average one rerun every four or five inputs), you can get amazingly coherent answers.
Here is my attempt to see whether GPT-3 can keep up a long-running deception, inspired by this thread. I started two instances: one was told it was a human woman, the other that it was an AI pretending to be a human woman. I gave both the same questions, many of them drawn from the Voight-Kampff test. By my judgement, the AI pretending to be an AI pretending to be a woman did worse on the test than the AI pretending to be a woman. You can check the results here.
I’ve also given it maths and Python programming questions. It does poorly overall, but with two or three prompts it can answer simple questions; it might do better with more prompting.
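To give a sense of the difficulty level, the simple questions were of roughly this kind (this snippet is an illustrative example, not an actual transcript from my sessions):

```python
# A representative "simple Python question": given a signature and a
# docstring, produce a correct one-line body. Tasks at about this level
# are what GPT-3 can manage with two or three prompts.
def reverse_words(sentence):
    """Return the sentence with its words in reverse order."""
    return " ".join(reversed(sentence.split()))

print(reverse_words("the quick brown fox"))  # -> "fox brown quick the"
```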