Many of my “disjunctive” arguments were written specifically with that scenario in mind.
Cool, makes sense. I retract my pointed questions.
I guess I have a high prior that making something smarter than human is dangerous unless we know exactly what we’re doing including the social/political aspects, and you don’t, so you think the burden of proof is on me?
This seems about right. In general when someone proposes a mechanism by which the world might end, I think the burden of proof is on them. You’re not just claiming “dangerous”, you’re claiming something like “more dangerous than anything else has ever been, even if it’s intent-aligned”. This is an incredibly bold claim and requires correspondingly thorough support.
Does the current COVID-19 disaster not make you more pessimistic about “whatever efforts people will make when the problem starts becoming more apparent”?
Actually, COVID makes me a little more optimistic. First, because quite a few countries are handling it well. Second, because I wasn’t even sure that lockdowns were a tool in the arsenal of democracies, and it seemed pretty wild to shut the economy down for so long. But they did. Also, essential services have proven much more robust than I’d expected (I thought there would be food shortages, etc.).
This seems about right. In general when someone proposes a mechanism by which the world might end, I think the burden of proof is on them. You’re not just claiming “dangerous”, you’re claiming something like “more dangerous than anything else has ever been, even if it’s intent-aligned”. This is an incredibly bold claim and requires correspondingly thorough support.
“More dangerous than anything else has ever been” does not seem incredibly bold to me, given that superhuman AI will be more powerful than anything else the world has seen. Historically, the risk of civilization doing damage to itself seems to grow with the power it has access to (e.g., the two world wars, the substantial risks of nuclear war and man-made pandemics that continue to accumulate each year, climate change), so I think I’m just extrapolating a clear trend. (Past risks like these could not have been eliminated by solving a single straightforward, self-contained, technical problem analogous to “intent alignment”, so why expect that now?)
To risk being uncharitable, your position seems analogous to someone saying, before the start of the nuclear era, “I think we should have a low prior that developing any particular kind of nuclear weapon will greatly increase the risk of global devastation in the future, because (1) that would be unprecedentedly dangerous and (2) nobody wants global devastation so everyone will work to prevent it. The only argument that has been developed well enough to overcome this low prior is that some types of nuclear weapons could potentially ignite the atmosphere, so to be safe we’ll just make sure to only build bombs that definitely can’t do that.” (What would be a charitable historical analogy to your position if this one is not?)
“The world might end” is not the only or even the main thing I’m worried about, especially because there are more people who can be expected to worry about “the world might end” and try to do something about it. My focus is more on the possibility that humanity survives but the values of people like me (or human values, or objective morality, depending on what the correct metaethics turn out to be) end up controlling only a small fraction of the universe, so we end up with astronomical waste or Beyond Astronomical Waste as a result. (Or our values become corrupted and the universe ends up being optimized for completely alien or wrong values.) There is plenty of precedent for the world becoming quite suboptimal according to some group’s values, and there is no apparent reason to think the universe has to evolve according to objective morality (if such a thing exists), so my claim also doesn’t seem very extraordinary from this perspective.
First because quite a few countries are handling it well. Secondly because I wasn’t even sure that lockdowns were a tool in the arsenal of democracies, and it seemed pretty wild to shut the economy down for so long.
If you think the societal response to a risk like a pandemic (and presumably AI) is substantially suboptimal by default (and it clearly is, given that large swaths of humanity are incurring a lot of needless deaths), doesn’t that imply significant residual risks, and plenty of room for people like us to try to improve the response? To a first approximation, the default suboptimal social response reduces all risks by some constant amount, so if some particular x-risk is important to work on without considering the default social response, it’s probably still important to work on after considering “whatever efforts people will make when the problem starts becoming more apparent”. Do you disagree with this argument? Or did you have some other reason for saying that, one that I’m not getting?