I think it might be helpful to re-examine the analogy to climate change. Today’s proponents of increased focus on AI safety and alignment are not comparable to today’s climate scientists; they’re comparable to the climate scientists of the 1970s and 1980s. Since then, many trillions of dollars and millions of careers have been spent developing and scaling up the alternatives to business-as-usual that are finally starting to make global decarbonization feasible.
It’s only in the past handful of years that unsubsidized wind and solar power became cost-competitive with fossil fuels in most of the world. Getting here required massive advances in semiconductor engineering, power electronics, and many other fields. We still aren’t at a point where energy storage is cost-competitive in most places, but we’ve gotten better at grid management, which has been enough for now. Less than 15 years ago, many people in the relevant industries still seriously believed that every watt of wind and solar on the grid needed to be matched by a watt of dispatchable (usually natural gas) power plants to deal with intermittency. It’s only in the last decade that battery technology has improved enough for electric cars to start becoming competitive. And there are still a huge number of advances needed—ones we can now predict and plan for, because we can see what the likely solutions are and have examples starting to enter real-world use, but difficult problems nonetheless. Mining more critical minerals. Replacing coke in steelmaking. Zero-carbon cement. Synthetic hydrocarbon fuels and hydrogen production for industrial heat, aviation fuel, and shipping fuel. Carbon capture and utilization/storage. Better recycling methods. Plastic alternatives. Cleaner agricultural practices. It’s a long list, and every item on it is multifaceted.
The first climate scientists to worry about global warming weren’t experts in any of these other fields, and it wouldn’t make sense to expect them to be. They were working before the computer models, the ice cores, the measured increase in extreme weather events, and so on, so of course they weren’t able to convince most people that society should invest so heavily in solving a new, invisible, hard-to-conceptualize (for most) problem that might someday harm their great-grandchildren in some hard-to-anticipate ways.
So the fact that today’s experts talking about AI safety and alignment don’t have a ready-to-go solution, or even a readily actionable path to one, really shouldn’t be surprising. If they did, they’d either deploy it, or sell it for a huge sum to someone who could then deploy it and also promote regulation of everyone else in order to secure a huge economic advantage.