I know it’s not the point of your article, but you lost me at saying you would have a 2% chance of killing millions of people, if you had that intention.
Without getting into tactics, I would venture to say there are quite a few groups across the world with that intention, which include various parties of high intelligence and significant resources, and zero of those have achieved it (if we exclude, say, heads of state).
Yes, unfortunately there are indeed quite a few groups interested in it.
There are reasons why they haven’t succeeded historically, and those reasons are getting much weaker over time. It should suffice to say that I’m not optimistic about our odds of avoiding this type of threat over the next 30 years (conditioned on no other gameboard flip).
I have an issue with it for a different reason. Not because I don’t think it’s possible, but because even just by stating it, it might cause some entities to pay attention to things they wouldn’t have otherwise.
I went back and forth on whether I should include that bit for exactly that reason. Knowing something is possible is half the battle and such. I ended up settling on a rough rule for whether I could include something:
1. It is trivial, or
2. it is already covered elsewhere, that coverage goes into more detail, and the audience of that coverage is vastly larger than my own post’s reach.
The more potentially dangerous an idea is, the stronger the requirements are.
Something like “single token prediction runs in constant time” falls into 1, while this fell into 2. There is technically nonzero added risk, but given the context and the lack of details, the risk seemed small enough that it was okay to allude to it as a discussion point.