Upvoted and agreed, but I do wanna go a bit deeper and add some nuance to this. I read too much GEB and now you all have to deal with it.
That gender systems are social constructs is a very basic idea from sociology that, hopefully, basically no one finds that contentious at this point. What’s more contentious is whether you can “really” pull back the social fabric and get at anything other than yet another layer of social fabric. I think you can, but most attempts to do so ignore power structures, trauma, inequality, or even free will itself. It’s “what you will choose to eat for dinner is a product of your neurotype” sort of thinking, which ultimately restricts your behavior in ways that are unhelpful to the free exertion of agency. Blanchardian sexology is a fundamentally behaviorist model and leaves no room for an actual agent that makes choices. It’s epistemic masochism, and it leaves one highly exposed to invasive motive misattribution and drive-by conceptual gaslighting.
Like, as far as I’m concerned, I’m trans because I chose to be, because being the way I am seemed like a better and happier life to have than the alternative. Now sure, you could ask, “yeah but why did I think that? Why was I the kind of agent that would make that kind of choice? Why did I decide to believe that?”
Well, because I decided to be the kind of agent that could decide what kind of agent I was. “Alright octavia but come on this can’t just recurse forever, there has to be an actual cause in biology” does there really? What’s that thing Eliezer says about looking for morality from the universe written on a rock? If a brain scan said I “wasn’t really trans” I would just say it was wrong, because I choose what I am, not some external force. Morphological freedom without metaphysical freedom of will is pointless.
Broadly, I agree that it’s hard not to assume something like metaphysical free will when doing decision theory. This is awkward given current metaphysics of science, but maybe the constructor theory people will figure out how to make it work.
It seems to me that repressed drives obviously exist. Everyone exists under coercion of one form or another and has to hide things that they want, sometimes from consciousness. I’m sure you’ve already read False Faces.
The main problem with repressed drive theory is that, given that they’re repressed, you only get a low-resolution picture of the drives. It doesn’t make sense to say that what someone Really Wants is (x). Actually, what they really want is complex and obscure, and (x) is a high-level summary of a major component of what they want that is hidden.
The proper use of repressed drive theories including Blanchardianism is to arbitrage on people’s stories about what they want to find places where an outside observer’s story could better predict what they do than their own story can. But an arb opportunity isn’t a full world model! It isn’t some kind of Ultimate Truth, it’s a directional update. Of course there are motivations for mtf transition other than autogynephilia and androphilia.
As far as choosing to be trans goes: the relatively unconfusing phenomenon is people choosing to transition medically, go by a different name, etc. I mean, it’s still confusing, but scientifically, not metaphysically. People are getting something out of it, whether it’s satisfaction of repressed or non-repressed drives, being socially treated more compatibly with their life strategy, etc.
What’s more confusing (metaphysically) is why people talk about having a gender identity, transitioning because of their gender identity, feeling a gender identity, etc. I’ve been through all this, I went from thinking I didn’t have a gender identity to thinking I had a female gender identity to thinking it’s complicated and maybe I don’t have one and maybe it’s nonbinary or male or something, idk, what does this even mean?
I think there’s a problem with conceptualizing gender identity as inherent to a human. (Perhaps it can be inherent to an agent that can be instantiated on a human some of the time?). Which is the basic issue with false faces. ~Everyone is subject to gendered coercion, including, as Judith Butler emphasizes, coercion to form a gender identity, to present one’s actions as following from that identity.
This goes especially for people who want to transition. It’s easier to get HRT and get your family to accept you if you say you’ve been a Real Man/Woman all along. It’s related to gender binarism. Medical transition for nonbinaries (or the rare cisgender transsexual?) is getting better but is still less standard.
Now, the fact that people are under coercion to have either no gender identity or a cis gender identity prior to transition, and a trans (preferably binary) gender during transition, and a gender identity consistent with their appearance/behavior, and either no gender identity or a cis gender identity if trying to get along with gender-critical people, etc, doesn’t mean that any of these identities are wrong per se. It just means there are these huge social forces pushing people to have or simulate certain identities at different points in their life.
If gender identity is taken as a human trait that is constant over time, then coercion straightforwardly introduces bias in reporting of one’s true gender identity. If gender identity is taken as a non-constant trait, then people changing their gender identity in response to coercion is not a contradiction, and doesn’t imply these gender identities are wrong.
If gender identity is taken as a trait of an agent that is constant over time, then coercion might change which agent is instantiated on a given human, in a way that changes the gender identity of the agent the human is instantiating, but it’s wrong to say humans per se have gender identities.
In my case I got to the point in my life where I didn’t need to say I was a woman to get further medical treatments, and re-conceptualized my worldview to be less dependent on gender identity (and to explain past decisions I had made in terms of more pragmatic motives), and took some ketamine, and… maybe I’m not trans anymore (in the sense of not having a gender identity different from assigned gender at birth, not that I’m detransitioning)? If I can decide to be trans, maybe I can decide not to be? Maybe my previous agent decided to terminate and chose a different agent to replace it? It depends on your ontology, I guess. (And maybe this is also a coerced relation to gender identity, given transphobia? I feel less stressed out about it, though.)
I mean I think you sort of hit the nail on the head without realizing it: gender identity is performative. It’s made of words and language and left brain narrative and logical structures. Really, I think the whole point of identity is communicable legibility, both with yourself and with others. It’s the cluster of nodes in your mental neural network that most tightly correspond with your concept of yourself, based on how you see yourself reflected in the world around you.
But all of that is just words and language; it’s all describing what you feel. It’s not the actual felt senses, just the labels for them. When someone says “I feel like I’m really a woman”, that’s all felt-sense stuff, which is likely to be complicated and multidimensional, and the collapse of that high-dimensional feeling into a low-dimensional phrase makes it hard to know exactly what they’re feeling beyond that it roughly circles their concept of womanhood.
Similarly, I think the Blanchardian model does its own dimensional collapse, performing a second collapse over the claim that someone feels like they’re really a woman, reducing it to something purely sexual. I don’t think a sexology model that treats the desire to have reproductive sex as logically prior to everything else a human values is a particularly accurate, useful, or predictive model of the vast majority of human behavior.
But that still leaves the question: what is actually being conveyed by the phrase “I feel like I’m really a woman”? Like, what are the actual nodes on the graph of feelings and preverbal sensations that it connects to? What does it even mean to feel like a woman? Or a man, for that matter? Or anything else, really? If I say “I feel like an old tree”, what am I conveying about my phenomenal experience?
One potential place to look for the answer has to do with empathy and “mirror neurons”. If we assume that a mind builds a self-model (an identity) the same way it builds everything else (and via Occam’s razor, we have no reason to think it wouldn’t), then “things that feel like me” are just things that relate more closely in the network graph to the self node. Under this model, someone reporting that they feel more like a woman than like a man is reporting that their “empathic connectivity” (in the sense of producing more node activations) is higher for women than for men: their self-concept activates more strongly when they are around “other women” than when they are around “other men”. Similarly, we can model dysphoria as something like a contradictory cluster of nodes which, when activated (for example by someone calling you a man when that concept is weakly or negatively correlated with your self node), produces disharmony or destructive interference patterns within the contradictory portion of the graph.
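To make the “empathic connectivity” picture slightly more concrete, here is a minimal toy sketch. Everything in it (the node names, the edge weights, the one-step activation rule) is invented purely for illustration; it is not a claim about how brains actually store associations.

```python
# Toy sketch: "feeling like X" modeled as the strength of the
# association edge between a mind's self-concept node and the
# concept node for X. All weights are made-up numbers.

# Hypothetical association weights out of the "self" node.
self_edges = {"woman": 0.8, "man": 0.1, "tree": 0.3}

def feels_like(concept):
    # One step of activation spreading from the self node;
    # unconnected concepts produce zero activation.
    return self_edges.get(concept, 0.0)

# Under this toy model, this mind "feels more like a woman than a
# man", and a little like an old tree:
assert feels_like("woman") > feels_like("man")
assert feels_like("tree") > 0.0
```

A fuller version would spread activation over many steps through a large graph; the point here is only that “I feel more like X than Y” can be read as a comparative statement about connectivity rather than a claim about an essence.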
However, under this model, someone’s felt-sense concept of gender would likely start developing before they had words for it, and because of how everyone is taught to override and suppress their felt sense in places where it seems to contradict reality, this feeling ends up repressed beneath whatever socially constructed identity their parents enforced on them. By the time they begin to make sense of the feelings, the closest they can come to conveying how they feel under the binary paradigm of our culture is to just say they feel like the opposite sex. That’s partly what it seems like Zack is complaining about: if your model of yourself is non-normative in any way, you’re expected to collapse it into legible normativity at some defensible Schelling point. However, if your model of yourself just doesn’t neatly fit somewhere around that Schelling point, you’re left isolated and feeling attacked by all sides just for trying to accurately report your experiences.
I transitioned basically as soon as I could legally get hormones, and I’ve identified all sorts of ways over the years: as femboy, trans woman, nonbinary amab, mentally intersex, genderqueer, a spaceship, a glitch in the spacetime continuum, slime...and as I’ve gotten older and settled into my body and my sense of myself, a lot of that has just sort of...stopped mattering? I know who I am and what I am, even if I don’t have the words for it. I know what ways of being bring me joy, what styles and modes of interaction I like, and how I want to be treated by others. I have an identity, but it’s not exactly a gender identity. It includes things that could probably be traditionally called gender (like wearing dresses and makeup) but also things that really...just don’t fit into that category at all (like DJing, LSD, and rocket stage separations), and I don’t have a line in my head for where things start being specifically about gender, there’s just me and how I feel about myself. If I find a way of being I like better than one of my current ways of being, I change, if I try something and decide I don’t like it, I stop.
I think this is partly what Paul Graham gets at with advice to “keep your identity small”, the more locked into a particular way of being I am, the less awareness I’ll have of other ways of being I might like more. I’m not just a woman, or just a man, I’m not even a person. I am whatever I say I am, I’m whatever feels fun and interesting and comfortable, I contain multitudes.
Like, as far as I’m concerned, I’m trans because I chose to be, because being the way I am seemed like a better and happier life to have than the alternative. Now sure, you could ask, “yeah but why did I think that? Why was I the kind of agent that would make that kind of choice? Why did I decide to believe that?”
Yes, this is a non-confused question with a real answer.
Well, because I decided to be the kind of agent that could decide what kind of agent I was. “Alright octavia but come on this can’t just recurse forever, there has to be an actual cause in biology” does there really?
In a literal/trivial sense, all human actions have a direct cause in the biology of the human brain and body. But you are probably using “biology” in a way that refers to “coarse” biological causes like hormone levels in utero, rather than individual connections between neurons, as well as excluding social causes. In that case, it’s at least logically possible that the answer to this question is no. It seems extremely unlikely that coarse biological factors play no role in determining whether someone is trans (I expect coarse biological factors to be at least somewhat involved in determining the variance in every relevant high-level trait of a person), but it’s very plausible that there is not one discrete cause to point to, or that most of the variance in gender identity is explained by social factors.
If a brain scan said I “wasn’t really trans” I would just say it was wrong, because I choose what I am, not some external force.
This seems like a red herring to me. As far as I know, no transgender brain research is attempting to diagnose trans people by brain scan in a way that overrides their verbal reports and behavior, but rather to find correlates of those verbal reports and behavior in the brain. If we find a characteristic set of features in the brains of most trans people, but not all, it will then be a separate debate as to whether we should consider this newly discovered thing to be the true meaning of the word “transgender”, or whether we should just keep using the word the same way we used it before, to refer to a pattern of self-identity and behavior. The “keep using it the same way we did before” side seems quite reasonable. Even now, many people understand the word “transgender” as an “umbrella term” that encompasses people who may not have the same underlying motivations.
Morphological freedom without metaphysical freedom of will is pointless.
If by “metaphysical freedom of will” you are referring to libertarian free will, then I have to disagree. Even if libertarian free will doesn’t exist (it doesn’t), it is still beneficial to me for society to allow me the option of changing my body. If you are confused about how the concept of “options” can exist without libertarian free will, that problem has already been solved in Possibility and Could-ness.
I agree completely with the entirety of your comment, which makes some excellent points… with one exception:
If you are confused about how the concept of “options” can exist without libertarian free will, that problem has already been solved in Possibility and Could-ness.
It has never seemed to me that Eliezer successfully solved (and/or dissolved) the question of free will. As far as I can tell, the free will sequence skips over most of the actually difficult problems, and the post you link is one of the worst offenders in that regard.
The actually difficult problem that’s specific to the question of free will is “how is the state space generated” (i.e., where do all these graph nodes come from in the first place, that our algorithm is searching through?).
The other actually difficult problem, which is not specific to the question of free will but applies also (and first) to Eliezer’s “dissolving” of problems like “How An Algorithm Feels From Inside”, is “why exactly should this algorithm feel like anything from the inside? why, indeed, should anything feel like anything from the inside?” Without an answer to this question (which Eliezer never gives and, as far as I can recall, never even seriously acknowledges), all of these supposed “solutions”… aren’t.
I’m inclined to give Yudkowsky credit for solving the “in scope” problems, and to defer the difficult problems you identify as “out of scope”.
For free will, the question Yudkowsky is trying to address is, “What could it possibly mean to make decisions in a deterministic universe?”
I think the relevant philosophical question being posed here is addressed by contemplating a chess engine as a toy model. The program searches the game tree in order to output the best move. It can’t know which move is best in advance of performing the search, and the search algorithm treats all legal moves as “possible”, even though the program is deterministic and will only end up outputting one of them.
In the case of human free will, it’s true that we don’t have a “game tree” written out the way the rules of chess specify the game tree for a chess engine, but figuring that out seems like “merely” an enormously difficult empirical cognitive science problem, rather than the elementary philosophical confusion being addressed by the blog posts. I feel like I “could” lift my arm, because if my brain computed the intent to lift my arm, it could output the appropriate nerve signals to make it happen, but I can’t know whether I will lift my arm in advance of computing the decision to do so, and the decision treats both the lift and not-lift outcomes as “possible”, even though the universe is deterministic and I’m only going to end up doing one of them.
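The chess-engine toy model can be sketched in a few lines. Everything here (the state, the moves, the evaluation function) is invented for illustration; the point is only that a fully deterministic program can still meaningfully treat multiple options as “possible” during its search.

```python
# Toy model of "could-ness" in a deterministic program: a tiny
# one-ply search. Every legal move is treated as "possible" inside
# the search (in the map), even though the program deterministically
# outputs exactly one of them (in the territory).

def evaluate(state):
    # Hypothetical static evaluation: higher is better for us.
    return sum(state)

def best_move(state, moves):
    best, best_score = None, float("-inf")
    for move in moves:              # each move is "possible" to the search...
        score = evaluate(move(state))
        if score > best_score:
            best, best_score = move, score
    return best                     # ...but only one is ever output

# "Moves" are just functions from state to state.
inc_first = lambda s: [s[0] + 1] + s[1:]
dec_first = lambda s: [s[0] - 1] + s[1:]
chosen = best_move([0, 0], [inc_first, dec_first])
assert chosen is inc_first
```

The program can’t know which move it will choose before running `evaluate` on each candidate, which is exactly the sense in which both moves are “possible” to it, determinism notwithstanding.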
The “how the algorithm feels” methodology is doing work (identifying the role could-ness plays in the “map” of choosing a chess move or lifting my arm, without presupposing fundamental could-ness in the “territory”), even if it doesn’t itself solve the hard problem of why algorithms have feelings.
I don’t dispute that both the “search algorithm” idea and the “algorithm that implements this cognitive functionality” idea are valuable, and cut through some parts of the confusions related to free will and consciousness respectively. But the things I mention are hardly “out of scope” if, without them, the puzzles remain (as indeed they do, IMO).
In any case, claiming that the questions of either free will or consciousness have been “solved” by these explanations is simply false, and that’s what I was objecting to.
In the case of human free will, it’s true that we don’t have a “game tree” written out the way the rules of chess specify the game tree for a chess engine, but figuring that out seems like “merely” an enormously difficult empirical cognitive science problem, rather than the elementary philosophical confusion being addressed by the blog posts.
This is the sort of claim that it’s premature to make prior to having even a rough functional sketch of the solution. Something might look like ‘“merely” an enormously difficult empirical cognitive science problem’, until you try to solve it, and realize that you’re still confused.
“Alright octavia but come on this can’t just recurse forever, there has to be an actual cause in biology” does there really?
Yes! There does! The “you” that chooses is a structure within the physical universe. A purportedly scientific explanation that contradicts the facts should be discarded, of course—just because someone performed a measurement they call a “brain scan”, doesn’t mean that the alleged scan means what they say it means—but there’s no good reason to invent a generalized skepticism of there being a real answer. (Bad reasons include being afraid of the real answer and being afraid that legitimizing the idea of there being a real answer will empower the forces of oppression.)
I think this raises the question of what it even means to have a biological explanation (or explanation on any other specific level of abstraction), rather than a psychological one.
In a literal sense, it’s true that any human trait must be explainable biologically. Even something like preferring Star Wars to Star Trek: If you had a 100% accurate model of the biology of a human, you could load up that model with a scan, play a simulated version of both series, and look for simulated signs of approval.
But it feels a bit brute-forcey, doesn’t it? Like not a real explanation?
One idea I’ve had is that an explanation on a specified level of abstraction should be in terms of simple features of the abstraction, such as linear and low-order polynomial functions, rather than deeply nested, complex simulations. This has practical utility, in that very shallow functions are much easier to work with, and it also captures the notion that reductionism can bring you to an inappropriate level of abstraction if you are working with information that is nonlinearly encoded into an underlying substrate.
For an example of how to apply this, imagine that you were trying to explain a bug in some code as a program is running. Technically this is reducible to an electronic level of abstraction, but the memory locations the program uses will be unpredictable, depending on the allocators involved, so attempts at actually explaining it electronically would require strange nonlinear features whose main job is to extract the computational abstractions. It wouldn’t actually be an electronic rather than computational explanation. On the other hand, if e.g. a powerful cosmic ray entered the computer and broke it, then you would have a much more straightforward electronic explanation, and a more ad-hoc computational explanation.
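The bug-versus-allocator point can be illustrated with a trivial sketch. The bug and the example values are invented; the point is that the computational-level description of the bug is simple and stable, while any address-level description would vary with every allocation.

```python
# Toy illustration: the same computational-level bug (an off-by-one)
# shows up at whatever addresses the allocator happens to pick, so an
# "electronic-level" story in terms of addresses would need ad-hoc
# machinery, while the computational-level story is one line.

def buggy_last(xs):
    # Intended to return the last element; off-by-one bug.
    return xs[len(xs) - 2]

a = [10, 20, 30]
b = [10, 20, 30]

# Computationally identical inputs living at different addresses:
assert a == b and id(a) != id(b)
# The bug is a simple computational fact, independent of addresses:
assert buggy_last(a) == 20 and buggy_last(b) == 20
```

The correct explanation (“it indexes `len(xs) - 2` instead of `len(xs) - 1`”) never mentions memory locations at all, which is the sense in which it lives at the computational rather than electronic level of abstraction.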
In terms of transness, a simple biological MIGI explanation could be something like “this hormone interacting with this cell starts a developmental cascade for gender identity, and it can be interfered with through these mechanisms, which cause transness”. Meanwhile, a simple biological AGP explanation could be something like “this area in male brains recognizes that one is pursuing attractive women, and under ordinary circumstances this other brain region sends a suppressing signal to it when one is considering oneself, but for AGPs it doesn’t do that”. However, one could have more complex explanations that don’t fit a simple biological story. For instance, the meme that AGP is caused by a culture of women being presented as desirable and men not is presumably relying on complex, open-ended cognition that can vary in similar ways to how a memory allocator can vary.
I’m saying that the “cause in biology” is that I have been evolutionarily granted free will and generalized recursively aware intelligence: I’m capable of making choices after consciously considering my options. Consciousness is physical; it is an actual part of reality that has real push-pull causal power on the external universe. Believing otherwise would be epiphenomenalist. The experience of phenomenal consciousness that people have, and their ability to make choices within that experience, cannot be illusory or a byproduct of some deeper “real” computation; it is the computation, and via anthropics that’s a logical necessity. You can’t strip out someone’s phenomenal experience to get at the “real” computation; if they’re being honest and reporting their feelings accurately, that is the computation. And I don’t think there are going to be neat and tidy biological correlates to… well, most of the things sexology tries to put into biologically innate categories based on the interpretation of statistical data, because they’re doing everything from an extremely sex-essentialist frame of motivated reasoning, starting from poorly framed presuppositions as axioms.
Blanchardian sexology is a fundamentally behaviorist model, and leaves no room for an actual agent that makes choices. It’s epistemic masochism and it leaves one highly exposed to invasive motive misattribution and drive-by conceptual gaslighting.
Not sure how I feel about the rest of your comment, but this is a critically important and central point regardless.
Upvoted and agreed, but I do wanna go a bit deeper and add some nuance to this. I read too much GEB and now you all have to deal with it.
Gender systems as social constructs is a very basic idea from sociology that basically no one finds really that contentious at this point hopefully. What’s more contentious is whether or not you can “really” pull back the social fabric and get at anything other than yet another layer of social fabric, I think you can but most attempts to do so, do so in a way that ignores power structures, trauma, inequality, or even really free will. “What you will choose to eat for dinner is a product of your neurotype” sort of thinking, which ultimately restricts your behavior in ways that are unhelpful to the free exertion of agency. Blanchardian sexology is a fundamentally behaviorist model, and leaves no room for an actual agent that makes choices. It’s epistemic masochism and it leaves one highly exposed to invasive motive misattribution and drive-by conceptual gaslighting.
Like, as far as I’m concerned, I’m trans because I chose to be, because being the way I am seemed like a better and happier life to have than the alternative. Now sure, you could ask, “yeah but why did I think that? Why was I the kind of agent that would make that kind of choice? Why did I decide to believe that?”
Well, because I decided to be the kind of agent that could decide what kind of agent I was. “Alright octavia but come on this can’t just recurse forever, there has to be an actual cause in biology” does there really? What’s that thing Eliezer says about looking for morality from the universe written on a rock? If a brain scan said I “wasn’t really trans” I would just say it was wrong, because I choose what I am, not some external force. Morphological freedom without metaphysical freedom of will is pointless.
Broadly, I agree that it’s hard not to assume something like metaphysical free will when doing decision theory. This is awkward given current metaphysics of science, but maybe the constructor theory people will figure out how to make it work.
It seems to me that repressed drives obviously exist. Everyone exists under coercion of one form or another and has to hide things that they want, sometimes from consciousness. I’m sure you’ve already read False Faces.
The main problem with repressed drive theory is that, given that they’re repressed, you only get a low-resolution picture of the drives. It doesn’t make sense to say that what someone Really Wants is (x). Actually, what they really want is complex and obscure, and (x) is a high-level summary of a major component of what they want that is hidden.
The proper use of repressed drive theories including Blanchardianism is to arbitrage on people’s stories about what they want to find places where an outside observer’s story could better predict what they do than their own story can. But an arb opportunity isn’t a full world model! It isn’t some kind of Ultimate Truth, it’s a directional update. Of course there are motivations for mtf transition other than autogynephilia and androphilia.
As far as choosing to be trans goes: the relatively unconfusing phenomenon is people choosing to transition medically, go by a different name, etc. I mean, it’s still confusing, but scientifically, not metaphysically. People are getting something out of it, whether it’s satisfaction of repressed or non-repressed drives, being socially treated more compatibly with their life strategy, etc.
What’s more confusing (metaphysically) is why people talk about having a gender identity, transitioning because of their gender identity, feeling a gender identity, etc. I’ve been through all this, I went from thinking I didn’t have a gender identity to thinking I had a female gender identity to thinking it’s complicated and maybe I don’t have one and maybe it’s nonbinary or male or something, idk, what does this even mean?
I think there’s a problem with conceptualizing gender identity as inherent to a human. (Perhaps it can be inherent to an agent that can be instantiated on a human some of the time?). Which is the basic issue with false faces. ~Everyone is subject to gendered coercion, including, as Judith Butler emphasizes, coercion to form a gender identity, to present one’s actions as following from that identity.
This goes especially for people who want to transition. It’s easier to get HRT and get your family to accept you if you say you’ve been a Real Man/Woman all along. It’s related to gender binarism. Medical transition for nonbinaries (or the rare cisgender transsexual?) is getting better but is still less standard.
Now, the fact that people are under coercion to have either no gender identity or a cis gender identity prior to transition, and a trans (preferably binary) gender during transition, and a gender identity consistent with their appearance/behavior, and either no gender identity or a cis gender identity if trying to get along with gender-critical people, etc, doesn’t mean that any of these identities are wrong per se. It just means there are these huge social forces pushing people to have or simulate certain identities at different points in their life.
If gender identity is taken as a human trait that is constant over time, then coercion straightforwardly introduces bias in reporting of one’s true gender identity. If gender identity is taken as a non-constant trait, then people changing their gender identity in response to coercion is not a contradiction, and doesn’t imply these gender identities are wrong.
If gender identity is taken as a trait of an agent that is constant over time, then coercion might change which agent is instantiated on a given human, in a way that changes the gender identity of the agent the human is instantiating, but it’s wrong to say humans per se have gender identities.
In my case I got to the point in my life where I didn’t need to say I was a woman to get further medical treatments, and re-conceptualized my worldview to be less dependent on gender identity (and explain past decisions I had made in terms of more pragmatic motives), and took some ketamine, and… maybe I’m not trans anymore (in the sense of, not having a gender identify different from assigned gender at birth, not that I’m detransitioning)? If I can decide to be trans, maybe I can decide not to be? Maybe my previous agent decided to terminate and choose a different agent to replace it? It depends on your ontology, I guess. (And maybe this is also a coerced relation to gender identity, given transphobia? I feel less stressed out about it, though.)
I mean I think you sort of hit the nail on the head without realizing it: gender identity is performative. It’s made of words and language and left brain narrative and logical structures. Really, I think the whole point of identity is communicable legibility, both with yourself and with others. It’s the cluster of nodes in your mental neural network that most tightly correspond with your concept of yourself, based on how you see yourself reflected in the world around you.
But all of that is just words and language, it’s all describing what you feel, it’s not the actual felt senses, just the labels for them. When someone says “I feel like I’m really a woman” that’s all felt sense stuff which is likely to be complicated and multidimensional, and the collapse of that high dimensional feeling into a low dimension phrase makes it hard to know exactly what they’re feeling beyond that it roughly circles their concept of womanhood.
Similarly I think, the Blanchardian model also does a similar dimensional collapse, but it’s doing on a second dimensional collapse over the the claim that they feel like they’re really a woman, into something purely sexual. I don’t think the sexology model that treats the desire to have reproductive sex as logically prior to everything else a human values, is a particularly accurate, useful, or predictive model of the vast majority of human behavior.
But that still leaves the question: what is actually being conveyed the the phrase “I feel like I’m really a woman”? Like, what are the actual nodes on the graph of feelings and preverbal sensations connected to? What does it even mean to feel like a woman? Or a man for that matter? Or anything else, really? If I say “I feel like an old tree” what am I conveying about my phenomenal experience?
One potential place to look for the answer has to do with empathy and “mirror neurons”. If we assume that a mind builds a self model (an identity) the same way it builds everything else (and via Occam’s razor, we have no reason to think it wouldn’t), then “things that feel like me” are just things whose nodes sit closer, in the network graph, to the self node. Under this model, someone reporting that they feel more like a woman than like a man is reporting that their “empathic connectivity” (in the sense of producing more node activations) is higher for women than for men: their self concept activates more strongly when they are around “other women” than when they are around “other men”. Similarly, we can model dysphoria as something like a contradictory cluster of nodes which, when activated (for example, by someone calling you a man when that concept is weakly or negatively correlated with your self node), produces disharmony or destructive interference within the contradictory portion of the graph.
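To make the graph picture concrete, here is a minimal toy sketch. This is entirely my own construction, not a claim from the comment: the node names, edge weights, and functions are invented for illustration of how a “felt gender” report could be read off of self-node connectivity.

```python
# Toy model: identity as a weighted graph, where the verbal report
# collapses the whole graph into whichever concept cluster
# co-activates most strongly with the "self" node.

# Hypothetical edge weights from the self node to concept nodes.
EDGES = {
    ("self", "woman"): 0.8,
    ("self", "man"): 0.2,
    ("self", "dresses"): 0.7,
    ("self", "DJing"): 0.9,
}

def activation(node, concept):
    """Strength of co-activation between a node and a concept."""
    return EDGES.get((node, concept), 0.0)

def felt_gender(node):
    """Collapse the high-dimensional graph into a one-word report."""
    w = activation(node, "woman")
    m = activation(node, "man")
    if w > m:
        return "woman"
    if m > w:
        return "man"
    return "unclear"

print(felt_gender("self"))  # higher "woman" connectivity wins here
```

The point of the toy is the lossiness: the report discards everything the graph knows about dresses, DJing, and the rest, which is exactly the dimensional collapse described above.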
However, under this model, someone’s felt sense concept of gender would likely start developing before they had words for it, and because of how everyone is taught to override and suppress their felt sense in places it seems to contradict reality, this feeling ends up repressed beneath whatever socially constructed identity their parents enforced on them. By the time they begin to make sense of the feelings, the closest they can come to conveying how they feel under the binary paradigm of our culture is to just say they feel like the opposite sex. That’s partly what it seems like Zack is complaining about: if your model of yourself is non-normative in any way, you’re expected to collapse it into legible normativity at some defensible Schelling point. However, if your model of yourself just doesn’t neatly fit somewhere around that Schelling point, you’re left isolated and feeling attacked by all sides just for trying to accurately report your experiences.
I transitioned basically as soon as I could legally get hormones, and I’ve identified all sorts of ways over the years: as femboy, trans woman, nonbinary amab, mentally intersex, genderqueer, a spaceship, a glitch in the spacetime continuum, slime...and as I’ve gotten older and settled into my body and my sense of myself, a lot of that has just sort of...stopped mattering? I know who I am and what I am, even if I don’t have the words for it. I know what ways of being bring me joy, what styles and modes of interaction I like, and how I want to be treated by others. I have an identity, but it’s not exactly a gender identity. It includes things that could probably be traditionally called gender (like wearing dresses and makeup) but also things that really...just don’t fit into that category at all (like DJing, LSD, and rocket stage separations), and I don’t have a line in my head for where things start being specifically about gender, there’s just me and how I feel about myself. If I find a way of being I like better than one of my current ways of being, I change, if I try something and decide I don’t like it, I stop.
I think this is partly what Paul Graham gets at with advice to “keep your identity small”, the more locked into a particular way of being I am, the less awareness I’ll have of other ways of being I might like more. I’m not just a woman, or just a man, I’m not even a person. I am whatever I say I am, I’m whatever feels fun and interesting and comfortable, I contain multitudes.
Yes, this is a non-confused question with a real answer.
In a literal/trivial sense, all human actions have a direct cause in the biology of the human brain and body. But you are probably using “biology” in a way that refers to “coarse” biological causes like hormone levels in utero, rather than individual connections between neurons, as well as excluding social causes. In that case, it’s at least logically possible that the answer to this question is no. It seems extremely unlikely that coarse biological factors play no role in determining whether someone is trans (I expect coarse biological factors to be at least somewhat involved in determining the variance in every relevant high-level trait of a person), but it’s very plausible that there is not one discrete cause to point to, or that most of the variance in gender identity is explained by social factors.
This seems like a red herring to me—as far as I know no transgender brain research is attempting to diagnose trans people by brain scan in a way that overrides their verbal reports and behavior, but rather to find correlates of those verbal reports and behavior in the brain. If we find a characteristic set of features in the brains of most trans people, but not all, it will then be a separate debate as to whether we should consider this newly discovered thing to be the true meaning of the word “transgender”, or whether we should just keep using the word the same way we used it before, to refer to a pattern of self-identity and behavior, and the “keep using it the same way we did before” side seems quite reasonable. Even now, many people understand the word “transgender” as an “umbrella term” that encompasses people who may not have the same underlying motivations.
If by “metaphysical freedom of will” you are referring to libertarian free will, then I have to disagree. Even if libertarian free will doesn’t exist (it doesn’t), it is still beneficial to me for society to allow me the option of changing my body. If you are confused about how the concept of “options” can exist without libertarian free will, that problem has already been solved in Possibility and Could-ness.
I agree completely with the entirety of your comment, which makes some excellent points… with one exception:
It has never seemed to me that Eliezer successfully solved (and/or dissolved) the question of free will. As far as I can tell, the free will sequence skips over most of the actually difficult problems, and the post you link is one of the worst offenders in that regard.
What do you see as the actually difficult problems?
The actually difficult problem that’s specific to the question of free will is “how is the state space generated” (i.e., where do all these graph nodes come from in the first place, that our algorithm is searching through?).
The other actually difficult problem, which is not specific to the question of free will but applies also (and first) to Eliezer’s “dissolving” of problems like “How An Algorithm Feels From Inside”, is “why exactly should this algorithm feel like anything from the inside? why, indeed, should anything feel like anything from the inside?” Without an answer to this question (which Eliezer never gives and, as far as I can recall, never even seriously acknowledges), all of these supposed “solutions”… aren’t.
I’m inclined to give Yudkowsky credit for solving the “in scope” problems, and to defer the difficult problems you identify as “out of scope”.
For free will, the question Yudkowsky is trying to address is, “What could it possibly mean to make decisions in a deterministic universe?”
I think the relevant philosophical question being posed here is addressed by contemplating a chess engine as a toy model. The program searches the game tree in order to output the best move. It can’t know which move is best in advance of performing the search, and the search algorithm treats all legal moves as “possible”, even though the program is deterministic and will only end up outputting one of them.
In the case of human free will, it’s true that we don’t have a “game tree” written out the way the rules of chess specify the game tree for a chess engine, but figuring that out seems like “merely” an enormously difficult empirical cognitive science problem, rather than the elementary philosophical confusion being addressed by the blog posts. I feel like I “could” lift my arm, because if my brain computed the intent to lift my arm, it could output the appropriate nerve signals to make it happen, but I can’t know whether I will lift my arm in advance of computing the decision to do so, and the decision treats both the lift and not-lift outcomes as “possible”, even though the universe is deterministic and I’m only going to end up doing one of them.
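The chess-engine analogy can be sketched in a few lines of code. This is a toy of my own invention, not any real engine: the “game”, the evaluation function, and all names here are made up for illustration. The search loop treats every legal move as “possible”, yet the deterministic program only ever outputs one of them.

```python
# Toy "game": from integer state n, the legal moves are to add 1, 2, or 3.
def legal_moves(state):
    return [1, 2, 3]

def evaluate(state):
    """Positions closer to 10 score higher (invented scoring rule)."""
    return -abs(10 - state)

def best_move(state):
    best, best_score = None, float("-inf")
    # The search treats every legal move as "possible"...
    for move in legal_moves(state):
        score = evaluate(state + move)
        if score > best_score:
            best, best_score = move, score
    # ...but the deterministic output is exactly one of them.
    return best

print(best_move(6))  # from 6, adding 3 lands closest to 10, so: 3
```

Could-ness lives in the search (the loop really does consider all three moves); determinism lives in the output (the same input always yields the same move). That’s the compatibilist picture the toy is meant to make tangible.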
The “how the algorithm feels” methodology is doing work (identifying the role could-ness plays in the “map” of choosing a chess move or lifting my arm, without presupposing fundamental could-ness in the “territory”), even if it doesn’t itself solve the hard problem of why algorithms have feelings.
I don’t dispute that both the “search algorithm” idea and the “algorithm that implements this cognitive functionality” idea are valuable, and cut through some parts of the confusions related to free will and consciousness respectively. But the things I mention are hardly “out of scope”, if without them, the puzzles remain (as indeed they do, IMO).
In any case, claiming that the questions of either free will or consciousness have been “solved” by these explanations is simply false, and that’s what I was objecting to.
This is the sort of claim that it’s premature to make prior to having even a rough functional sketch of the solution. Something might look like ‘“merely” an enormously difficult empirical cognitive science problem’, until you try to solve it, and realize that you’re still confused.
Yes! There does! The “you” that chooses is a structure within the physical universe. A purportedly scientific explanation that contradicts the facts should be discarded, of course—just because someone performed a measurement they call a “brain scan”, doesn’t mean that the alleged scan means what they say it means—but there’s no good reason to invent a generalized skepticism of there being a real answer. (Bad reasons include being afraid of the real answer and being afraid that legitimizing the idea of there being a real answer will empower the forces of oppression.)
I think this raises the question of what it even means to have a biological explanation (or explanation on any other specific level of abstraction), rather than a psychological one.
In a literal sense, it’s true that any human trait must be explainable biologically. Even something like preferring Star Wars to Star Trek: If you had a 100% accurate model of the biology of a human, you could load up that model with a scan, play a simulated version of both series, and look for simulated signs of approval.
But it feels a bit brute-forcey, doesn’t it? Like not a real explanation?
One idea I’ve had is that an explanation on a specified level of abstraction should be in terms of simple features of the abstraction, such as linear and low-order polynomial functions, rather than deeply run, complex simulations. This has practical utility, in that very shallow functions are much easier to work with, and it also captures the notion that reductionism can bring you to an inappropriate level of abstraction if the information you are working with is nonlinearly encoded into an underlying substrate.
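As a toy sketch of this idea (my own construction; the “substrate” and the feature are invented stand-ins): the same trait that seems to require simulating the whole substrate becomes, at the right level of abstraction, a simple linear threshold on one coarse feature.

```python
import random

def substrate_sim(neurons):
    """Brute-force 'substrate-level' computation over individual units."""
    return sum(n * n for n in neurons) > len(neurons) / 4

def mean_sq_activation(neurons):
    """The right abstraction: one coarse summary feature of the units."""
    return sum(n * n for n in neurons) / len(neurons)

def shallow_explanation(neurons):
    """At this level, the trait is just a linear threshold on the feature."""
    return mean_sq_activation(neurons) > 0.25

# The two descriptions agree on every input, but only the second one
# is a "real explanation" in the sense above: it is a simple function
# of a feature at the appropriate level, not a replayed simulation.
random.seed(0)
sample = [random.random() for _ in range(10)]
print(substrate_sim(sample), shallow_explanation(sample))
```

In this toy the equivalence is exact by construction; the point is only to illustrate what “simple features of the abstraction” versus “brute-force simulation” could mean formally.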
For an example of how to apply this, imagine that you were trying to explain a bug in some code as a program is running. Technically this is reducible to an electronic level of abstraction, but the memory locations the program uses will be unpredictable, depending on the allocators involved, so attempts at actually explaining it electronically would require strange nonlinear features whose main job is to extract the computational abstractions. It wouldn’t actually be an electronic rather than a computational explanation. On the other hand, if e.g. a powerful cosmic ray entered the computer and broke it, then you would have a much more straightforward electronic explanation, and a more ad hoc computational one.
In terms of transness, a simple biological MIGI explanation could be something like “this hormone interacting with this cell starts a developmental cascade for gender identity, and it can be interfered with through these mechanisms, which cause transness”. Meanwhile, a simple biological AGP explanation could be something like “this area in male brains recognizes that one is pursuing attractive women, and under ordinary circumstances this other brain region sends a suppressing signal to it when one is considering oneself, but for AGPs it doesn’t do that”. However, one could have more complex explanations that don’t fit a simple biological story. For instance, the meme that AGP is caused by a culture in which women are presented as desirable and men are not presumably relies on complex, open-ended cognition that can vary in ways similar to how a memory allocator can vary.
I’m saying that the “cause in biology” is that evolution granted me free will and generalized recursively aware intelligence: I’m capable of making choices after consciously considering my options. Consciousness is physical; it is an actual part of reality that has real push-pull causal power on the external universe. Believing otherwise would be epiphenomenalist. The phenomenal experience people have, and their ability to make choices within that experience, cannot be illusory or a byproduct of some deeper “real” computation; it is the computation, and via anthropics that’s a logical necessity. You can’t strip out someone’s phenomenal experience to get at the “real” computation; if they’re being honest and reporting their feelings accurately, that is the computation. And I don’t think there are going to be neat and tidy biological correlates to… well, most of the things sexology tries to put into biologically innate categories based on the interpretation of statistical data, because they’re doing everything from an extremely sex-essentialist frame of motivated reasoning, starting from poorly framed presuppositions as axioms.
Not sure how I feel about the rest of your comment, but this is a critically important and central point regardless.