I think the question is not well framed from the start. It is not whether AI could prove dangerous enough to cause human extinction, but how much risk artificial intelligence ADDS to the extinction risk humanity already faces without such an AI. Framed that way, the answers might be different. Of course it is a very difficult question to answer, and in any case it does not diminish the significance of the original question, since we are talking about a situation that is entirely human-made, and preventable as such.
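A minimal sketch of the distinction being drawn here, comparing absolute risk with marginal (added) risk. All the probabilities below are made-up placeholders for illustration, not estimates from the discussion:

```python
# Hypothetical illustration of "added risk" vs. "absolute risk".
# Both probabilities are invented placeholders, not estimates.
p_doom_baseline = 0.05  # assumed extinction risk without advanced AI
p_doom_with_ai = 0.12   # assumed extinction risk with advanced AI

added_risk = p_doom_with_ai - p_doom_baseline
print(f"Absolute risk with AI: {p_doom_with_ai:.0%}")
print(f"Risk AI adds on top of the baseline: {added_risk:.0%}")
```

The point of the reframing is that the second printed number, not the first, is the quantity the original question should be asking about.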
If you were dead in the future, you would be dead already, because time travel is not ruled out in principle.
Danger is a fact about fact density and your degree of certainty. Stop saying things with the full confidence of being afraid, and start simply counting the evidence.
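One hedged way to read "counting the evidence" is the standard Bayesian log-odds bookkeeping: each piece of evidence shifts your log-odds by its likelihood ratio, and fear contributes nothing to the sum. This is my gloss on the comment, not the commenter's own formalism, and the evidence weights below are invented placeholders:

```python
import math

# Bayesian log-odds update: a sketch of "counting the evidence".
def log_odds(p):
    return math.log(p / (1 - p))

def prob(lo):
    return 1 / (1 + math.exp(-lo))

prior = 0.05                         # assumed prior probability
evidence_weights = [0.7, -0.3, 0.4]  # assumed log-likelihood ratios, one per observation

posterior_lo = log_odds(prior) + sum(evidence_weights)
print(f"Posterior probability: {prob(posterior_lo):.1%}")
```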
It's the same question.
Go back a few years. Start there.