Broadly agree except for this part:

> It’s in an area that some people (not the OpenAI management) think is unusually high-risk,

I really can’t imagine that someone who wrote “Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.” in 2015, and who occasionally references extinction as a possibility even when not directly asked about it, doesn’t think AGI development is high-risk.

I’m not sure how to square this circle. I almost hope Sam is being consciously dishonest and has a 4D chess plan, as opposed to deluding himself that, while it’s dangerous, the risks are low or somehow worth it. But the latter seems more likely given other things he has said, e.g. “What I lose the most sleep over is the hypothetical idea that we already have done something really bad by launching ChatGPT”.