I state in the post that I agree the takeover can be very violent while the AI is stabilizing its position to the point where it can prevent other AIs from being built, but I don't see how hunting down everyone living in Argentina is an important step in that takeover.
I strongly disagree about Nate's post. I agree it's good that he debunked some bad arguments, but it's just not true that he only argues against ideas that were trying to change how people act right now. He spends long sections on his imagined Interlocutor raising false hopes that are not action-relevant in the present: our friends in the multiverse saving us, us running simulations in the future and punishing the AI for defection, and us bargaining for half the Universe now and then using a fraction of what we got to run simulations for bargaining. These take up roughly half the essay. My proposal clearly falls into the reference class of arguments Nate debunks; he just never gets around to it, while spending pages on strictly worse proposals, like one where we don't reward the cooperating AIs in the future simulations but punish the defecting ones.