One route would be if some of them thought that existential risks weren’t that much worse than major global catastrophes.
If I think it's likely that 10% of everyone will die because the wrong people get control of killer AI drones (“slaughterbots”), and that it's important we get to AI as quickly as possible, then we might push it forward as fast as possible because we want to be the ones in control, at the expense of some kinds of unlikely alignment problems. Such a person accepts a very small increase in the chance of existential risk via indirect AI issues in exchange for a substantial decrease in the chance of 10% of humanity being wiped out via bad direct use of the AI. This would be intentionally increasing x-risk in expectation, and they would agree that it is.
You might correctly point out that Paul Christiano and Chris Olah don’t think like this, but I don’t really know who is involved in leadership at OpenAI; perhaps “safe” AI to some of them means “non-military”. So this is a case that the new title rules out.
Yeah, that’s a good example, thanks.
(I do think it is unlikely.)