It was good to be explicit that these are generalizations. Nonetheless, it was still a mistake to label these two views “female” and “male”, rather than the more neutral “positional” and “non-positional”. That you notice this correlation is interesting, but it is not the main point. Given the likely effects, it seems better not to over-emphasize it with your choice of nomenclature.
I thought about that a lot, and I tried writing “male” and “female” out of it. But I couldn’t write them out until after the point where the post stops using observations of men vs. women as evidence that each value exists. The post doesn’t talk about men or women any longer than it absolutely has to in order to introduce its supporting data.
You don’t have to point to women and men generally having these respective values to show that these values exist. Pointing to specific examples suffices. Pointing out that there is a general trend is interesting, and worth doing. But you still don’t need to name them that way.
Would it be possible to change the title?
Edit: “Positional and Non-Positional Friendly AI” would be an improvement, for example. You would have to add the definitions to the text, naturally.
Would you be as likely to read something called “Positional and Non-Positional Friendly AI”?
Actually, another problem is that the AI is, with either title, neither male nor female, and neither positional nor non-positional. Those labels apply to the values of the humans it is trying to optimize for.
Even if you didn’t make that the title, you could have at least introduced the term, which is shorter than “mutually-satisfiable”, and then used it for the bulk of the article.
More, honestly. “Male and Female Friendly AI” led me to suspect you would be engaging in unwarranted generalization.
It is a clanger, though.
I changed the title. Apart from the distraction of gender issues, I think people are focusing on the “you can’t please all the people all the time” part and missing the “most of the variance of possible values is present within human values” part. Any links made to this article before this comment will now be broken.
Actually, the title in the URL doesn’t matter.
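Many blog platforms route by post ID and ignore the title slug, so renaming a post leaves old links working. A minimal sketch of that pattern (the `POSTS` store and `resolve` function are hypothetical, not this site’s actual routing):

```python
from typing import Optional

# Hypothetical post store, keyed by numeric ID; the title can change freely.
POSTS = {123: "Positional and Non-Positional Friendly AI"}

def resolve(path: str) -> Optional[str]:
    # Expected path shape: /posts/<id>/<any-slug>
    parts = path.strip("/").split("/")
    if len(parts) >= 2 and parts[0] == "posts" and parts[1].isdigit():
        return POSTS.get(int(parts[1]))  # the slug (parts[2:]) is never consulted
    return None

# Links using the old and new titles resolve to the same post:
assert resolve("/posts/123/male-and-female-friendly-ai") == \
       resolve("/posts/123/positional-and-non-positional-friendly-ai")
```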
OK, that made me laugh.