I’m just surprised the topic holds your interest. Presumably you see LW and related people as low status, since having extreme ideas and being wrong are low status. I wouldn’t be very motivated to argue with Scientologists. (I’m not sure this is worth discussing much)
They picked this problem because it seems to them to offer the highest marginal utility. Rightly or wrongly, most other people don’t take AI risks very seriously. Also, since it’s a difficult problem, “gaining general competence” can and probably should be a step in attempting to work on big risks.