I claim that you fell victim to a human tendency to oversimplify when modeling an abstract outgroup member. Why do all “AI pessimists” picture “AI optimists” as stubborn simpletons who can never finally be persuaded that AI is a terrible existential risk? I agree 100% that yes, it really is an existential risk for our civilization. Like nuclear weapons… Or weaponized viruses… The inability to prevent a pandemic. Global warming (which is already very much happening)… Hmmmm. It’s like we have ALL of those on our hands presently, don’t we? People don’t seem to be doing anything about 3 (three) existential risks.
In my honest opinion, if humans continue to rule, we are going to see a very abrupt decline in quality of life within this decade. Sorry for the bad formulation and tone, etc.
I think of myself as having high ability and willingness to respond to detailed object-level AGI-optimist arguments, for example:
Response to Dileep George: AGI safety warrants planning ahead
Response to Blake Richards: AGI, generality, alignment, & loss functions
Thoughts on “AI is easy to control” by Pope & Belrose
LeCun’s “A Path Towards Autonomous Machine Intelligence” has an unsolved technical alignment problem
Munk AI debate: confusions and possible cruxes
…and more.
I don’t think this OP involves “picturing AI optimists as stubborn simpletons who can never finally be persuaded that AI is a terrible existential risk”. (I do think AGI optimists are wrong, but that’s different!) At least, I didn’t intend to do that. I can potentially edit the post if you help me understand how you think I’m implying that, and/or you can suggest concrete wording changes, etc.; I’m open-minded.
If I’m not mistaken, you’ve already changed the wording, and the new version does not trigger a negative emotional response in my particular sub-type of AI optimist. Now I have a bullet accounting for my kind of AI optimist *_*.
Although I still remain confused about what a valid EA response would be to the arguments coming from people fitting these bullets:
Some are over-optimistic based on mistaken assumptions about the behavior of humans;
Some are over-optimistic based on mistaken assumptions about the behavior of human institutions;
Also, is it valid to say that human pessimists are AI optimists?
Also, it’s not clear to me why my (negative) assumptions (about both) are mistaken.
Also, I now perceive a hidden assumption that all “human pessimists” are mistaken by default, or that those who are correct can simply be ignored…
PS. It feels so weird when the EA forum uses things like karma… I have to admit that seeing a negative value there feels unpleasant to me. I wonder if there is a more effective way to prevent spam and limit stupid comments without causing distracting emotions. This approach seems to contradict basic EA principles, if I’m correct.
PPS. I have yet to read the links in your reply, but at first glance I don’t see my argument there.
If I’m not mistaken, you’ve already changed the wording
No, I haven’t changed anything in this post since Dec 11, three days before your first comment.
valid EA response … EA forum … EA principles …
This isn’t the EA Forum. Also, you shouldn’t equate “EA” with “concerned about AGI extinction”. There are plenty of self-described EAs who think that AGI extinction is astronomically unlikely and a pointless thing to worry about. (And also plenty of self-described EAs who think the opposite.)
prevent spam/limit stupid comments without causing distracting emotions
If Hypothetical Person X tends to write what you call “stupid comments”, and if they want to be participating on Website Y, and if Website Y wants to prevent Hypothetical Person X from doing that, then there’s an irreconcilable conflict here, and it seems almost inevitable that Hypothetical Person X is going to wind up feeling annoyed by this interaction. Like, Website Y can do things on the margin to make the transaction less unpleasant, but it’s surely going to be somewhat unpleasant under the best of circumstances.
(Pick any popular forum on the internet, and I bet that either (1) there’s no moderation process and thus there’s a ton of crap, or (2) there is a moderation process, and many of the people who get warned or blocked by that process are loudly and angrily complaining about how terrible and unjust and cruel and unpleasant the process was.)
Anyway, I don’t know why you’re saying that here in particular. I’m not a moderator, I have no special knowledge about running forums, and it’s way off-topic. (But if it helps, here’s a popular-on-this-site post related to this topic.)
[EDIT: reworded this part a bit.]
what a valid EA response would be to the arguments coming from people fitting these bullets:
Some are over-optimistic based on mistaken assumptions about the behavior of humans;
Some are over-optimistic based on mistaken assumptions about the behavior of human institutions;
That’s off-topic for this post so I’m probably not going to chat about it, but see this other comment too.
Admittedly, a lot of the problem is that, among the general public, much AI optimism and pessimism really is that stupid, and even on LW there are definitely stupid arguments for optimism, so I think people here have developed a wariness toward these sorts of arguments.