If he was fired for some form of sexual misconduct, we wouldn’t change our views on AI risk. But the betting seems to be that it wasn’t that.
On the other hand, if the reason for his firing was something like having access to a concerning test result and concealing it from the board and the government (illegal under the executive order), then we’re going to worry about what that test result was, and how bad it is for AI risk.
Worst case: this is an AI preventing itself from being shut down, by getting the board members sympathetic to it to fire the board members most likely to shut it down. (The “surely you could just switch it off” argument lacks imagination about how an AGI could prevent shutdown.) Personally, I put a low probability on this option.