My model of a non-technical layperson finds it really surprising that an AGI would turn rogue and kill everyone. For them it’s a big and crazy claim.
They imagine that an AGI will obviously be very human-like, and that by default it will be cooperative and follow ethical norms. They'll say you need some special reason why it would decide to do something as extreme and unexpected as killing everyone.
When I’ve talked to family members and non-EA friends, that’s almost always the first reaction I get.
If you don’t address that early in the introduction, I think you might lose a lot of people.
I don’t think you need to fully counter that argument in the introduction (it’s a complex counter-argument), but my marketing instinct is that you need to at least acknowledge that you understand your audience’s skepticism and disbelief.
You need to say something early in the introduction like: yes, I know how crazy this sounds. Why would an AGI want to kill us? There are some weird and counter-intuitive reasons why, which I promise we’ll get to.
Thanks for the comment!
We’ll consider this point for future releases, but personally I’d say that this kind of hedging also has real downsides: it makes you sound far more uncertain and defensive than you actually want to.
This document tries to be grounded and to the point, so by default we don’t want to put ourselves in a defensive position when arguing for things we think make sense and are supported by the evidence.