I understand why such sabotage would be bad if you don’t agree with DoomsdayCult, but in that case it also seems like a pretty minor world problem, so you seem surprisingly impassioned to me.
Interesting notion. The idea, I suppose, is that one should put one’s spare time into trying to influence major world events, without noticing that one’s chance of influencing them is proportionally lower? A somewhat parallel question: why do people fresh out of not having succeeded at anything relevant (or fresh out of theology, even) try to save everyone from getting killed by AI, even though it’s part of everyone’s problem space, including that of people who have proven new theorems, created new methods, etc.? A heuristic of “pick the largest problem”? I see a lot of programming newbies wanting to make an MMORPG with a zillion ultra-expensive features.
I’m just surprised the topic holds your interest. Presumably you see LW and related people as low status, since having extreme ideas and being wrong are low status. I wouldn’t be very motivated to argue with Scientologists. (I’m not sure this is worth discussing much)
They picked this problem because it seems to them to have the highest marginal utility. Rightly or wrongly, most other people don’t take AI risks very seriously. Also, since it’s a difficult problem, “gaining general competence” can and probably should be a step in attempting to work on big risks.