Doesn’t it seem to you that this topic is super neglected (even compared to AI alignment), given that the consequences of failing to solve it correctly seem comparable to the risk of AI takeover?
Yes, I’m sympathetic. Among all the issues that will come with AI, I think alignment is relatively tractable (at least it is now) and that it has an unusually clear story for why we shouldn’t count on being able to defer it to smarter AIs (though that might work). So I think it’s probably correct for it to get relatively more attention. But even taking that into account, the non-alignment singularity issues do seem too neglected.
I’m currently trying to figure out what non-alignment stuff seems high-priority and whether I should be tackling any of it.