It would be somewhat less bad if it had been more clearly labeled speculative, but that’s not the fundamental issue. “Cartoon” implies to me something like Newton’s laws: not exactly correct, but a good enough model to go on with for the purposes of the conversation. I think your object-level evopsych statements are closer to, uh, I don’t actually know physics nearly well enough to complete the analogy. Some sort of theory of a phenomenon that is not entirely proven to even exist, with some evidence for and some against, which a small minority of scientists present as settled science and proceed to write further papers using it as an assumption.
I was not saying you had made the claim up, but presenting controversial claims with no hedging is not great. As for everything else, your post implies strongly, without stating outright, various narratives about human motivations/evolution that are not, in fact, obvious. For instance, that women want to secure the loyalty of one man, while men want to have sex with as many women as possible, and that this adversarial dynamic is present in the modern day and results in women, in particular, having unique insight into figuring out the motives of partially aligned intelligences due to practice on men.
It’s okay to describe features of a group of people. Which features you’re describing, how you present your claims, and whether you’re in fact right all matter. In this case, you are, implicitly, making the claim that the difference between men and women is large enough that it makes sense to try to draw an analogy to the difference between humans and AIs, even though you explicitly stated that of course the difference is not as large.
To put it another way, I don’t actually see what using women and men here adds to the analogy beyond “sometimes, humans have to suss out the true intentions of other humans who partially share goals with them when those other humans have motive to deceive them”. To the extent that you are claiming there is a meaningful difference, I think that is [not entirely sure I am phrasing the following correctly] privileging gender as a special axis of human difference in a way that I think is meaningfully wrong and also find unpleasant.
(Somewhat more incidentally, I and many other women I know dislike the use of “females”, “mate”, etc in this context, though that is somewhat trivial and not actually a big deal so much as often correlated with things that do actually bother me.)
A guess about what’s happening: you’re seeing that I said “X” and you’re inferring that I believe “Y” because a lot of people who go around saying “X” also say “Y”. And you’re worried about that, because people who say “Y” have a disturbing pattern of going around mysteriously not noticing all the counterevidence against Y, and also advocating for harming others on the basis of Y being true. That’s a reasonable thing to worry about if you have good reason to think there are such people. But I think responding by punishing people who say “X”, while understandable, is an escalatory sort of action, and is a bad long term solution, and adds to the big pile of people silencing each other. So my somewhat prickly olive branch is: if this is something like what’s really going on, let’s talk about that explicitly.
> As for everything else, your post implies strongly, without stating outright, various narratives about human motivations/evolution that are not, in fact, obvious. For instance, that women want to secure the loyalty of one man, while men want to have sex with as many women as possible, and that this adversarial dynamic is present in the modern day and results in women, in particular, having unique insight into figuring out the motives of partially aligned intelligences due to practice on men.
How does the post imply that? As you’ve stated them, I don’t agree with any of those things, and I didn’t say them, and I didn’t say anything that implied them, except that I said there is some (other) reason that might result in women in particular having unique insight.
> In this case, you are, implicitly, making the claim that the difference between men and women is large enough that it makes sense to try to draw an analogy to the difference between humans and AIs, even though you explicitly stated that of course the difference is not as large.
No I’m not! Men and women are the same on any “human to AI” dimension! The analogy doesn’t rest on differences between men and women, except that there’s a desire to align in that direction, as described, coming from different incentives. I’m not making this claim that you’re saying I’m making! It’s other people’s fault if they make up an interpretation that I didn’t say and then ding me for saying that thing I didn’t say. The only analogy is that it’s a general intelligence trying to align another general intelligence.
> I don’t actually see what using women and men here adds to the analogy
It’s an especially strong case of incentive to interpersonally suss out intentions. It’s the strongest one I could think of. What are some other very strong cases?
> in a way that I think is meaningfully wrong
Why do you think it’s meaningfully wrong? Do you mean incorrect, or morally wrong?
Thanks for engaging, though; I continue to be grateful that you’re making the effort to help me understand what’s happening, including harms.