I gave an argument that open-sourcing AI would increase the risk of the world being destroyed by accident. You said:
I note that Anders Sandberg recently included “Otherwise the terrorists will win!” in his list of signs that you might be looking at a weak moral argument.
I presented the mismatch between this statement and my argument as evidence that you had misunderstood what I was saying. In your reply,
I never compared destroying the world by accident with terrorism—you appear to be projecting.
You are misunderstanding me again. I think I’ve already said all that needs to be said, but I can’t clear up confusion if you keep attacking straw men rather than asking questions.