Yeah, we’re in agreement on that point. And for me, this has been an update over the past couple of years. I used to think that slowing down the top labs was a great idea. Now, having thought through the likely side-effects of that and the implications of being able to control the training data, I have come around to a position which agrees with yours on this point.
To make this explicit:
I currently believe (Sept 2024) that the best possible route to safety for humanity runs through accelerating the current safety-leading lab, Anthropic, to highly capable tool-AI and/or AGI as fast as possible without degrading their safety culture and efforts.
I think we’re in a lot of danger, as a species, from bad actors deploying self-replicating weapons (bioweapons, computer worms exploiting zero-days, nanotech). I think this danger is going to be greatly increased by AI progress, biotech progress, and increased integration of computers into the world economy.
I think our best hope for taking preventative action to head off disaster is to start soon, by convincing important decision makers that the threats are real and their lives are on the line. I think the focus on a pivotal act is incorrect, and we should instead focus on gradual defensive-tech development and deployment.
I worry that we might not get a warning shot that we can survive with civilization intact. The first really bad incident could devastate us in a single blow. So I think that demonstrations of danger are really important.