I partially agree, but I prefer not to rely on the AI having "sane decision theories". A young AI that just took over the world is not necessarily a perfectly rational agent; I want to present a case that appeals to as many potential AIs as possible, and I think running the actual simulations helps a lot with that.
I agree that the AI having a small amount of natural kindness towards humans is also a plausible way for us to survive, but I decided not to talk about that, as it is a separate line of argument from the one I'm presenting, and Paul has already argued for it in detail.