It has been pretty clearly announced to the world by various tech leaders that they are explicitly spending billions of dollars to produce “new minds vastly smarter than any person, which pose double-digit risk of killing everyone on Earth”. This pronouncement has not yet incited riots. I feel like discussing whether Anthropic should be on the riot-target-list is a conversation that should happen after the OpenAI/Microsoft, DeepMind/Google, and Chinese datacenters have been burnt to the ground.
Once those datacenters have been reduced to rubble, and the chip fabs also, then you can ask things like, “Now, with the pressure to race gone, will Anthropic proceed in a sufficiently safe way? Should we allow them to continue to exist?” I think that, at this point, one might very well decide that the company should continue to exist with some minimal amount of compute, while the majority of the compute is destroyed. I’m not sure it makes sense to have this conversation while OpenAI and DeepMind remain operational.