I was able to get this one on my first try:
---
Prompt: I am teaching my son a simple logic problem. “All blargs are schminky,” I explain. “But not all schminkies are blarg.”
GPT-3: “What’s the difference?” he asks, confused. “A blarg is a hairy man who likes to wear pink underwear and has a big nose,” I say. “And a schminky is an ugly person with no sense of fashion or hygiene.”
---
Really impressive! “Blarg” and “schminky” are both words that appear on the internet, but the relationship between them is completely fictional. GPT-3 understands the logic perfectly and invents novel definitions consistent with it!
I then re-rolled several times, and got a bunch of nonsense. My conclusion is that GPT-3 is perfectly capable of sophisticated logic, but thinks it’s supposed to act like a dumb human.
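(If you want to reproduce this, “re-rolling” just means re-sampling the same prompt. Here’s a minimal sketch using the legacy GPT-3-era OpenAI Python API; the model name, temperature, and sample count are illustrative guesses, not the exact settings used.)

```python
# Minimal sketch of the "re-roll" experiment, assuming the legacy
# GPT-3-era OpenAI Python client (openai.Completion.create).
# Model name and sampling settings are illustrative, not the originals.
import openai

openai.api_key = "sk-..."  # your API key here

PROMPT = (
    "I am teaching my son a simple logic problem. "
    '"All blargs are schminky," I explain. '
    '"But not all schminkies are blarg."'
)

# n=5 requests five independent completions in one call; a nonzero
# temperature is what makes each "re-roll" come out different.
response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 base model
    prompt=PROMPT,
    max_tokens=80,
    temperature=0.8,
    n=5,
)

for i, choice in enumerate(response.choices):
    print(f"--- re-roll {i + 1} ---")
    print(choice.text.strip())
```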
The easiest way is probably to build a modestly sized software company and then find a way to destabilize the government and cause hyperinflation.
I think the rule of thumb should be: if your AI could be intentionally deployed to take over the world, it’s highly likely to do so unintentionally.