Your strategy for AI risk seems to be “Let’s not build the sort of AI that would destroy the world”, which fails at the first word: “Let’s”.
I don’t have a strategy; I’m basically just thinking out loud about a couple of specific points. Building a strategy for preventing that type of AI is important, but I don’t (yet?) have any ideas in that area.
Ok, perhaps I was too combative with the wording. My general point is: don’t think of humanity as a coordinated agent, don’t think of “AGI” as a single tribe with particular properties (I frequently see this same mistake with regard to aliens), and in particular, don’t conclude that because a specific AI won’t be able to, or won’t want to, destroy the world, the world is therefore safe in general.