Stupid like attempted sabotage. Keep in mind we’re talking about folks who can’t keep their cool when someone thinks through a decision theory of their own and arrives at a creepy conclusion. (the link that you are not recommended to read) And before then, a lot of stupid in the form of going around associating safety concerns with crankery, which probably won’t matter but may matter if at some point someone sees some actual danger (as opposed to reading stuff off science fiction by Vinge) and measures have to be implemented for good reasons. (BTW, from Wikipedia: “Although a crank’s beliefs seem ridiculous to experts in the field, cranks are sometimes very successful in convincing non-experts of their views. A famous example is the Indiana Pi Bill where a state legislature nearly wrote into law a crank result in geometry.”)
I understand why, if you don’t agree with DoomsdayCult, such sabotage would be bad; but if you don’t agree with DoomsdayCult, it also seems like a pretty minor world problem, so you seem surprisingly impassioned to me.
Interesting notion. The idea is, I suppose, that one should put boredom time into trying to influence major world events without seeing that the chance of influencing those is proportionally lower? A somewhat parallel question: why are people fresh out of not having succeeded at anything relevant (or fresh out of theology, even) trying to save everyone from getting killed by AI, even though it’s part of everyone’s problem space, including that of people who have succeeded at proving new theorems, creating new methods, etc.? The heuristic of picking the largest problem? I see a lot of newbies to programming wanting to make an MMORPG with a zillion ultra-expensive features.
I’m just surprised the topic holds your interest. Presumably you see LW and related people as low status, since having extreme ideas and being wrong are low status. I wouldn’t be very motivated to argue with Scientologists. (I’m not sure this is worth discussing much)
They picked this problem because it seems to have the highest marginal utility to them. Rightly or wrongly, most other people don’t take AI risks very seriously. Also, since it’s a difficult problem, “gaining general competence” can and probably should be a step in attempting to work on big risks.