I think it’s a pretty good argument. Holden Karnofsky puts a 1/3rd chance that we don’t see transformative AI this century. In that world, people today know very little about what advanced AI will eventually look like, and how to solve the challenges it presents. Surely some people should be working on problems that won’t be realized for a century or more, but it would seem much more difficult to argue that AI safety today is more altruistically pressing than other longtermist causes like biosecurity, and even neartermist causes like animal welfare and global poverty.
Personally I do buy the arguments that we could reach superintelligent AI within the next few decades, which is a large part of why I think AI safety is an important cause area right now.
It's not enough that AI might appear in a few decades; you also need something useful you can do about it now, compared to simply investing your money so you have more to spend later, when concrete problems appear.
A 2/3 chance of a technology that might kill everyone (and would certainly change the world in any case) is still easily the most important thing going on right now. You'd have to demonstrate that AI has less than a 10% chance of appearing in my lifetime for me not to care about AI risk.