An “existential risk” is an extinction risk. Climate change is not an extinction risk—not for the human race, anyway. It would just mean suffering. Pick any horrible historical event you can think of—plague, war, whatever—as lethal as you like—and clearly it wasn’t enough to end the species, because the present generations are here. Things can get incredibly bad without actually killing us off. 6 billion people could die and the other 1 billion would continue the struggle amidst the ruins.
I am convinced that in any case, climate change is too slow to matter compared to the development of technology. We are going to have our AI/nanotech crisis long before we see 2 more degrees of warming, let alone 4. If we survive AI/nanotech, then we can solve the problem; if we don’t survive AI/nanotech, then the future is out of our hands anyway.
I’m aware of what an existential risk is. While I don’t think that climate change is likely to destroy mankind on its own, I consider the potential for runaway climate change to provoke massive instability to be truly worrisome on an existential level, especially in the long run.
I hope that you’re right about climate change being too slow to matter, but I also think that hope is not exactly a reliable strategy.
It is the way of extinction that what kills the last individual commonly has nothing to do with the underlying factors that doomed the species. Could climate change by itself kill everyone on earth? No. Could it be a significant contributing factor to a downward spiral that ends in extinction? I hope not, but I don’t really know. Leaving aside the purely fictional versions of AI and nanotech to which you refer, could real-life versions of those technologies help us develop sustainable energy sources? Yes. Will they do so fast enough? I hope so, but I don’t really know.
As for whether there’s anything we here can usefully do about it: I don’t think we should get bogged down in the sort of political bickering that all too often goes with this territory, but LW does have a good track record of avoiding that, and perhaps it would be worthwhile for us to explore potential solutions.
We don’t need help finding sustainable resources. We already have nuclear power. We just need to convince everyone that nuclear isn’t bad.
Nuclear’s hardly sustainable long-term though. It’s a temporary patch that might help for fifty years or so.
At $130/kg, there’s enough uranium for 80 years at current consumption (about 10 years if we used it for all our electricity), but if we’re willing to use ore with a tenth as much uranium, there’s 300 times as much. There are also ways of using uranium-238, which is about 140 times as abundant as uranium-235. It’s still a temporary patch, in the sense that we can’t just keep using it until the sun goes out, but it should last long enough for fusion power to become economically feasible.
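To make the arithmetic explicit, here’s a rough sketch in Python. All figures are the round numbers quoted above, not independent data, and treating them as independent multipliers is only a first approximation:

```python
# Back-of-envelope uranium supply, using the round numbers above.
years_at_current_use = 80     # years of $130/kg uranium at current consumption
years_if_all_electric = 10    # roughly, if nuclear supplied all electricity
low_grade_factor = 300        # extra supply from ore a tenth as rich
u238_factor = 140             # U-238 abundance relative to U-235

# Even in the all-electric case, lower-grade ore alone buys millennia:
print(years_if_all_electric * low_grade_factor)                # 3000 years
# Breeding U-238 on top of that stretches it much further still:
print(years_if_all_electric * low_grade_factor * u238_factor)  # 420000 years
```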
And thorium.
Any ideas about how capable computer programs would need to be to give significant help to researchers with hypothesis generation and with whether research programs make sense? With seeing whether abstracts match experimental results?
It seems to me that some of this could be done without even having full natural language.
As long as we’re talking, as you say, about significant help rather than solving the whole problem, and about what can be done without full natural language understanding, then I think this is one of the more promising areas of AI research for the next couple of decades.
I talked a few months ago to somebody who’s doing biomedical research—one of the smartest guys I know—and asked what AI might be able to do to make his job easier. His answer was that the one near-term feasible thing that would really help would be better text mining: something that could do better than plain keyword matching at, e.g., flagging papers likely to be relevant to a particular problem.
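For concreteness, here’s a toy sketch of one simple step past keyword matching: ranking abstracts by TF-IDF cosine similarity to a problem description, using scikit-learn. The abstracts are made-up placeholders, and a real system would need far more than this:

```python
# Rank paper abstracts by TF-IDF cosine similarity to a query,
# instead of requiring exact keyword matches. Toy example only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "We measure protein folding rates under varying temperatures.",
    "A survey of deep learning methods for image classification.",
    "Kinetics of enzyme-substrate binding in thermophilic bacteria.",
]
query = "thermal effects on protein binding kinetics"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(abstracts + [query])

# Similarity of each abstract to the query (the last row of the matrix).
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
for score, text in sorted(zip(scores, abstracts), reverse=True):
    print(f"{score:.2f}  {text}")
```

Even this crude bag-of-words similarity can surface a relevant paper that shares no single exact keyword with the query, which is roughly the gap he was pointing at.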
How much climate change are we talking? :)
Let’s say the amount realistically liable to occur in the next few centuries :) If we have to worry about it on megayear timescales, we’ll already have failed.