I’m curious why this comment has such low karma here and −1 karma on the Alignment Forum.
If you think doom is very likely once AI reaches a certain level, then efforts to buy us time before that point have the highest expected utility. The best way to buy time, arguably, is to study the different AI approaches that exist today and figure out which ones are most likely to lead to dangerous AI, then create regulations (either through government or at the corporation level) banning the types of AI systems that are proving very hard to align. (For example, we may want to ban expected reward/utility maximizers completely: satisficers should be able to do everything we want. We may also decide there’s really no need for AI to be able to self-modify, and ban that too.) Of course a ban can’t be applied universally, so existentially dangerous types of AI will get developed somewhere somehow, and there are likely to be existentially dangerous types of AI we won’t have thought of that will still get developed. But at least we’ll buy some time to do more alignment research, which will hopefully help when that existentially dangerous AI is unleashed.
(Addendum: what I’m basically saying is that prosaic research can help us slow down take-off speed, which is generally considered a good thing.)
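
To make the maximizer-vs-satisficer distinction above concrete, here’s a toy sketch of the two decision rules (my own illustration; the action names, payoffs, and threshold are made up, not anything from an actual system): a maximizer always picks the argmax of expected utility, however marginal the gain, while a satisficer stops at anything that clears a "good enough" bar.

```python
import random

def expected_utility(action, samples=1000):
    # Toy stochastic utility: each action has a mean payoff plus noise.
    # (Hypothetical numbers purely for illustration.)
    mean = {"safe_plan": 0.8, "aggressive_plan": 0.95, "reckless_plan": 1.0}[action]
    return sum(mean + random.gauss(0, 0.05) for _ in range(samples)) / samples

def maximizer(actions):
    # Picks the single action with the highest estimated expected utility,
    # however small the improvement over the runner-up.
    return max(actions, key=expected_utility)

def satisficer(actions, threshold=0.75):
    # Returns the first action whose estimated expected utility clears the
    # threshold, and stops optimizing once that bar is met.
    for action in actions:
        if expected_utility(action) >= threshold:
            return action
    return None

actions = ["safe_plan", "aggressive_plan", "reckless_plan"]
print("maximizer picks:", maximizer(actions))    # almost always "reckless_plan"
print("satisficer picks:", satisficer(actions))  # "safe_plan" already clears the bar
```

The sketch only shows the decision-rule difference, not the safety argument itself; the claim in the comment is that the "stop at good enough" behavior is easier to live with than relentless argmaxing.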