Am I Really an X?

As I understand it, there is a phenomenon among transgender people where no matter what they do they can’t help but ask themselves the question, “Am I really an [insert self-reported gender category here]?” In the past, a few people have called for a LessWrong-style dissolution of this question. This is how I approach the problem.
There are two caveats which I must address in the beginning.
The first caveat has to do with hypotheses about the etiology of the transgender condition. There are many possible causes of gender identity self-reports, but I don’t think it’s too controversial to propose that at least some transgender self-reports result from the same mechanism as cisgender self-reports. The idea is that there is some ‘self-reporting algorithm’ that takes some input we don’t yet understand and outputs a gender category, and that both cisgender people and transgender people have this. It’s not hard to come up with just-so stories about why having such an algorithm and caring about its output might have been adaptive. This is, however, an assumption. In theory, the self-reports of transgender people could have a cause separate from the self-reports of cisgender people, but that’s not what I expect.
The second caveat has to do with essentialism. In the past calls for an article like this one, I saw people point out that we reason about gender as if it is an essence, and that any dissolution would have to avoid this mistake. But there’s a difference between describing an algorithm that produces a category which feels like an essence, and providing an essentialist explanation. My dissolution will talk about essences because the human mind reasons with them, but my dissolution itself will not be essentialist in nature.
Humans universally make inferences about their typicality with respect to their self-reported gender. Check Google Scholar for ‘self-perceived gender typicality’ for further reading. So when I refer to a transman, by my model, I mean, “A human whose self-reporting algorithm returns the gender category ‘male’, but whose self-perceived gender typicality checker returns ‘Highly atypical!’”
And the word ‘human’ at the beginning of that sentence is important. I do not mean “A human that is secretly, essentially a girl” or “A human that is secretly, essentially a boy”; I just mean a human. I postulate that there are not boy typicality checkers and girl typicality checkers; there are typicality checkers that take an arbitrary gender category as input and return a measure of that human’s self-perceived typicality with regard to the category.
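The postulate above can be sketched as a toy model: a single generic checker, parameterized by category, rather than one checker per gender. Everything here (the trait names, the reference values, the scoring rule) is a hypothetical illustration, not a claim about how the real algorithm works.

```python
# Toy model: one generic typicality checker, parameterized by category.
# The trait profiles and the scoring rule are hypothetical placeholders.

TYPICAL_PROFILES = {
    "male": {"voice_pitch_hz": 120, "hip_width_cm": 32},
    "female": {"voice_pitch_hz": 210, "hip_width_cm": 38},
}

def typicality(category, observed):
    """Return a 0..1 score: how typical the observed traits are for `category`.

    There is no male-checker and no female-checker; the same code runs on
    whatever category the self-report algorithm happens to output.
    """
    profile = TYPICAL_PROFILES[category]
    score = 1.0
    for trait, typical_value in profile.items():
        deviation = abs(observed[trait] - typical_value) / typical_value
        score *= max(0.0, 1.0 - deviation)
    return score

# The same checker runs on an arbitrary category input:
observed = {"voice_pitch_hz": 190, "hip_width_cm": 36}
print(typicality("female", observed))  # closer to the 'female' profile
print(typicality("male", observed))    # further from the 'male' profile
```

The design choice being illustrated is only the signature: the category is an argument, not baked into the function.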
So when a transwoman looks in the mirror and feels atypical because of a typicality inference from the width of her hips, I believe that this is not a fundamentally transgender experience, not different in kind, but only in degree, from a ciswoman who listens to herself speak and feels atypical because of a typicality inference from the pitch of her voice.
Fortunately, society’s treatment of transgender people has come around to something like this in recent decades; our therapy proceeds by helping transgender people become more typical instances of their self-report algorithm’s output.
Many of the typical traits are quite tangible: behavior, personality, appearance. It is easier to make tangible things more typical, because they’re right there for you to hold; you aren’t confused about them. But I often hear reports of transgender people left with a nagging doubt, a lingering question of “Am I really an X?”, which feels far more slippery and about which they confess themselves quite confused.
To get at this question, I sometimes see transgender people try to simulate the subjective experience of a typical instance of the self-report algorithm’s output. They ask questions like, “Does it feel the same to be me as it does to be a ‘real X’?” And I think this is the heart of the confusion.
For when they simulate the subjective experience of a ‘real X’, there is a striking dissimilarity between themselves and the simulation, because a ‘real X’ lacks a pervasive sense of distress originating from self-perceived atypicality.
But what I just described in the previous sentence is itself a typicality inference, which means that this simulation itself causes distress from atypicality, which is used to justify future inferences of self-perceived atypicality!
I expected this to take more than one go-around. Let’s review something Eliezer wrote in Fake Causality:

One of the primary inspirations for Bayesian networks was noticing the problem of double-counting evidence if inference resonates between an effect and a cause. For example, let’s say that I get a bit of unreliable information that the sidewalk is wet. This should make me think it’s more likely to be raining. But, if it’s more likely to be raining, doesn’t that make it more likely that the sidewalk is wet? And wouldn’t that make it more likely that the sidewalk is slippery? But if the sidewalk is slippery, it’s probably wet; and then I should again raise my probability that it’s raining...
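That resonance can be made concrete with a minimal sketch: one correct Bayesian update that counts the wet-sidewalk report exactly once, versus a loop that keeps feeding its own conclusion back in as fresh evidence. The probabilities are made up for illustration.

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """P(H | E) by Bayes' rule, for a single piece of evidence E."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

P_WET_GIVEN_RAIN = 0.9  # sidewalks are usually wet when it rains
P_WET_GIVEN_DRY = 0.2   # sprinklers etc. sometimes wet them anyway
prior_rain = 0.2

# Correct: the unreliable 'sidewalk is wet' report is counted once.
once = bayes_update(prior_rain, P_WET_GIVEN_RAIN, P_WET_GIVEN_DRY)

# Broken: inference resonates between effect and cause. Rain makes
# wetness likelier, which we treat as fresh evidence for rain, and so
# on. The same observation is double-counted on every pass, so belief
# drifts toward certainty no matter how weak the original report was.
resonating = prior_rain
for _ in range(10):
    resonating = bayes_update(resonating, P_WET_GIVEN_RAIN, P_WET_GIVEN_DRY)

print(round(once, 3))        # ~0.529
print(round(resonating, 3))  # ~1.0
```

The bug is not in `bayes_update` itself; it is in re-running the update on evidence that has already been absorbed into the prior.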
If you didn’t have an explicit awareness that you have a general human algorithm that checks the arbitrary self-report against the perceived typicality, but rather you believed that this was some kind of special, transgender-specific self-doubt, then your typicality checker would never be able to mark its own distress signal as ‘Typical!’, and it would oscillate between judging the subjective experience as atypical, outputting a distress signal in response, judging its own distress signal as atypical, sending a distress signal about that, etc.
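The oscillation described above can be caricatured as a toy feedback loop. The growth and decay factors are arbitrary; the only point is the qualitative difference between a checker that can mark its own distress signal as typical and one that cannot.

```python
def distress_loop(marks_own_distress_typical, steps=20):
    """Toy feedback loop for the typicality checker described above.

    Each step, the checker inspects its current distress signal. If it
    believes distress itself is atypical (a 'transgender-specific'
    doubt), the distress counts as fresh evidence of atypicality and
    grows; if it knows the signal is the ordinary output of a general
    human algorithm, the signal is marked 'Typical!' and decays.
    """
    distress = 1.0
    for _ in range(steps):
        if marks_own_distress_typical:
            distress *= 0.5   # signal judged typical: nothing new to flag
        else:
            distress *= 1.5   # signal judged atypical: flagged again, louder
    return distress

print(distress_loop(False))  # escalates without bound
print(distress_loop(True))   # settles toward zero
```

It is the same double-counting as the rain example: the checker's own output is fed back in as if it were new evidence.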
And this double-counting is not anything like hair length or voice pitch, or even more slippery stuff like ‘being empathetic’; it’s very slippery, and no matter how many other ways you would have made yourself more typical, even though those changes would have soothed you, there would have been this separate and additional lingering doubt, a doubt that can only be annihilated by understanding the deep reasons that the tangible interventions worked, and how your mind runs skew to reality.
And that’s it. For me at least, this adds up to normality. There is no unbridgeable gap between the point at which you are a non-X and the point at which you become an X. Now you can just go back to making yourself as typical as you want to be, or anything else that you want to be.