Ultimately, if some AI scientist is very concerned that an AI is going to kill us all, their opinion is more informative about the approaches to AI which they find viable than about AIs in general. If someone is convinced that any nuclear power plant can explode like a multi-megaton nuclear bomb, well, it's probably better to let someone else design the nuclear power plant.
How so? A person convinced that any nuclear power plant risks a multi-megaton explosion would have some very weird ideas of how nuclear power plants should be built: they would deem moderated reactors impractical, a negative thermal coefficient of reactivity infeasible, etc. (or be simply unaware of the mechanisms that allow stability to be achieved), and would build some fast-neutron reactor that relies on very rapid control rod movement for its stability. Meanwhile, normal engineering has produced nuclear power plants that, imperfect as they might be, do not make a crater when they blow up.
To the extent that you already know that nuclear power plants are basically safe, they clearly do not apply as an analogy here. Reasoning from them like this is an error.
Yes, but you can say that because you have the independent evidence that nuclear power plants are workable, beyond the mere say-so of a couple of scientists. You don’t have that kind of evidence for AI safety.
Also, this:
Non-Friendly AI is no Elder God. It kills you, at worst.
… is not a given. What makes you think that the worst it would do is kill you, when killing is not the worst thing humans do to each other?
I think you have the lesson entirely backward.