If I’m not mistaken, you’ve already changed the wording, and the new version does not trigger a negative emotional response in my particular sub-type of AI optimists. Now I have a bullet accounting for my kind of AI optimists *_*.
Although I’m still confused about what a valid EA response would be to the arguments coming from people fitting these bullets:
- Some are over-optimistic based on mistaken assumptions about the behavior of humans;
- Some are over-optimistic based on mistaken assumptions about the behavior of human institutions;
Also, is it valid to say that human pessimists are AI optimists?
Also, it’s not clear to me why my (negative) assumptions (about both) are mistaken.
Also, I now perceive a hidden assumption that all “human pessimists” are mistaken by default, or that those who are correct can simply be ignored…
PS. It feels soooo weird when the EA forum uses things like karma… I have to admit that seeing a negative value there feels unpleasant to me. I wonder if there is a more effective way to prevent spam and limit stupid comments without causing distracting emotions. This approach kinda contradicts basic EA principles, if I’m correct.
PPS. I have yet to read the links in your reply, but I don’t see my argument there at first glance.
> If I’m not mistaken, you’ve already changed the wording
No, I haven’t changed anything in this post since Dec 11, three days before your first comment.
> valid EA response … EA forum … EA principles …
This isn’t the EA Forum. Also, you shouldn’t equate “EA” with “concerned about AGI extinction”. There are plenty of self-described EAs who think that AGI extinction is astronomically unlikely and a pointless thing to worry about. (And also plenty of self-described EAs who think the opposite.)
> prevent spam and limit stupid comments without causing distracting emotions
If Hypothetical Person X tends to write what you call “stupid comments”, and if they want to participate on Website Y, and if Website Y wants to prevent Hypothetical Person X from doing that, then there’s an irreconcilable conflict here, and it seems almost inevitable that Hypothetical Person X is going to wind up feeling annoyed by this interaction. Website Y can do things on the margin to make the experience less unpleasant, but it’s surely going to be somewhat unpleasant even under the best of circumstances.
(Pick any popular forum on the internet, and I bet that either (1) there’s no moderation process and thus there’s a ton of crap, or (2) there is a moderation process, and many of the people who get warned or blocked by that process are loudly and angrily complaining about how terrible and unjust and cruel and unpleasant the process was.)
Anyway, I don’t know why you’re saying that here in particular. I’m not a moderator, I have no special knowledge about running forums, and it’s way off-topic. (But if it helps, here’s a popular-on-this-site post related to this topic.)
[EDIT: reworded this part a bit.]
> what a valid EA response would be to the arguments coming from people fitting these bullets:
> - Some are over-optimistic based on mistaken assumptions about the behavior of humans;
> - Some are over-optimistic based on mistaken assumptions about the behavior of human institutions;
That’s off-topic for this post, so I’m probably not going to chat about it, but see this other comment too.