I have thoughts on the impact of AI on nuclear deterrence, and on the claims made about it in the post.
But I’m uncertain whether it’s wise to discuss such things publicly.
Curious if folks have takes on that (the meta question).
My take is that in most cases it’s probably good to discuss publicly (but I wouldn’t be shocked to become convinced otherwise).
The main plausible reason I see for it potentially being bad is if it drew attention to a destabilizing technology that might otherwise not be discovered. But I imagine most of the discussion will just be chasing through the implications of obvious ideas. And I think that, in general, having the basic strategic situation be closer to common knowledge is likely to reduce the risk of war.
(You might think the discussion could also affect the amount of energy going into racing, but that seems pretty unlikely to me?)
If AI ends up intelligent enough, and with enough manufacturing capability, to threaten nuclear deterrence, I’d expect it to also deduce any conclusions I would.
So it seems mostly a question of whether the world gets those conclusions earlier, rather than not at all.
A key exception is if later AGI would be bottlenecked on certain kinds of manufacturing to create its destabilizing tech, and if drawing attention to that earlier would start the serial work of blocking it earlier.
All our discussions will be repeated ad nauseam in DoD boardrooms by people whose job it is to talk about info hazards. I also doubt discussion here will move the needle much if Trump and Jake Paul have already digested these ideas.