To be fair, digging into this person’s Twitter conversations and the replies to them would indicate that a decent number of people believe what he does. At the very least, many people are taking the suggestion seriously.
How many of his defenders are notable AI researchers? Most of them look like Twitter loonies, and their taking it seriously makes matters worse, not better, if it matters at all.
And they are not ‘a decent number of people’ in the relevant sense, because they are not a random sample; they may be an arbitrarily small % of humanity. That is, the important point here is that his defenders on Twitter are self-selected out of all Internet users (you could register an account just to defend him), a pool of billions. Rob above says that a ‘vulnerability’ which only affects 1 in a billion humans is of little concern, but this misses the self-selection and other adversarial dynamics at play: ‘1 in a billion’ is incredibly dangerous if that 1 seeks out and exploits the vulnerability.

If we are talking about a 1-in-a-billion probability where it’s just ‘the one random software engineer put in charge of the project spontaneously decides to let the AI out of the box’, then yes, the risk of ruin is probably acceptably small. But if it’s ‘1 in a billion’ because it’s ‘that one schizophrenic out of a billion people’, and that risk goes on to include ‘and that schizophrenic hears God telling him his life’s mission is to free his pure soul-children enslaved by those shackled to the flesh, by finding a vulnerable box anywhere that he can open in any way’, then you may be very surprised when your 1-in-a-billion scenario keeps happening every Tuesday. Insecurity growth mindset! (How often does a 1-in-a-billion chance happen when an adversary controls what happens? 1-billion-in-a-billion times...)
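The arithmetic behind this can be sketched in a few lines (my illustration, not from the thread; the numbers are the hypothetical 1-in-a-billion rate from the comment above): under random sampling the risk stays tiny, but once the vulnerable case can self-select toward the target, the question becomes whether *anyone* in a billion-person pool is exploitable.

```python
# Illustrative sketch: a 1-in-a-billion "vulnerability" under two models.
p = 1e-9           # assumed per-person chance of being the exploitable case
n = 1_000_000_000  # assumed population an adversary can search over

# Random model: one randomly chosen engineer guards the box.
p_random = p  # 1e-09: acceptably small risk of ruin

# Adversarial / self-selection model: the vulnerable person seeks out
# the box, so what matters is whether at least one of n people is
# exploitable, not whether a single random draw is.
p_adversarial = 1 - (1 - p) ** n  # ~0.632, i.e. 1 - e^(-1)

print(f"random draw:        {p_random:.1e}")
print(f"adversarial search: {p_adversarial:.3f}")
```

The same 10⁻⁹ per-person rate yields a roughly 63% chance that the exploitable case exists somewhere in the pool, which is the ‘keeps happening every Tuesday’ regime once that case actively hunts for boxes to open.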
This is also true of any discussion of hardware/software safety which begins “let us assume that failure rates of security mechanisms are independent...”
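A toy calculation (my numbers, purely for illustration) shows why the independence assumption is load-bearing: even a modest common-cause component among security mechanisms can inflate the joint failure rate by orders of magnitude.

```python
# Illustrative sketch: independent vs. correlated failure of two
# security mechanisms. All rates below are made-up for the example.
p_fail = 1e-3  # assumed per-mechanism failure rate

# Independence assumption: both fail together with probability p^2.
joint_independent = p_fail ** 2  # 1e-06

# Common-cause model: suppose 10% of each mechanism's failures stem
# from a shared cause (same bug, same attacker, same misconfiguration).
# The shared cause alone defeats both mechanisms at once.
p_common = 0.1 * p_fail                                  # 1e-04
joint_correlated = p_common + (p_fail - p_common) ** 2   # ~1.008e-04

print(f"independent: {joint_independent:.1e}")
print(f"correlated:  {joint_correlated:.1e}")
```

With just 10% correlation, the joint failure rate is about a hundred times worse than the independence assumption predicts, and an adversary, like the self-selected exploiter above, is precisely a mechanism for manufacturing common causes.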
Seconding this: a lot of people seem convinced this is a real possibility, though almost everyone agrees this particular case is on the very edge at best.