My model of a non-technical layperson finds it really surprising that an AGI would turn rogue and kill everyone. For them it’s a big and crazy claim.
They imagine that an AGI will obviously be very human-like, and that by default it will be cooperative and follow ethical norms. They'll say you need some special reason why it would decide to do something as extreme and unexpected as killing everyone.
When I've talked to family members and non-EA friends, that's almost always the first reaction I get.
If you don't address that early in the introduction, I think you might lose a lot of people.
I don't think you need to fully counter that argument in the introduction (the full counter-argument is complex), but my marketing instincts say you need to at least acknowledge that you understand your audience's skepticism and disbelief.
You need to say something like this early in the introduction: "Yes, I know how crazy this sounds. Why would an AGI want to kill us? There are some weird and counter-intuitive reasons why, which I promise we're going to get to."
Some of the later levels of this?
https://en.wikipedia.org/wiki/Notpron
“Notpron is an online puzzle game and internet riddle created in 2004 by German game developer David Münnich. It has been named as ‘the hardest riddle available on the internet.’”
“Notpron follows a standard puzzle game layout, where the player is presented with a webpage containing a riddle and must find the answer to the riddle in order to proceed to the next webpage”
“Each level answer or solution is unique, often requiring specific skills such as decoding ciphers, image editing, musical knowledge”
“As of October 2020, only 100 people have completed the game, out of 20 million visitors since August 2004”