Like, as far as I’m concerned, I’m trans because I chose to be, because being the way I am seemed like a better and happier life to have than the alternative. Now sure, you could ask, “yeah but why did I think that? Why was I the kind of agent that would make that kind of choice? Why did I decide to believe that?”
Yes, this is a non-confused question with a real answer.
Well, because I decided to be the kind of agent that could decide what kind of agent I was. “Alright, Octavia, but come on, this can’t just recurse forever; there has to be an actual cause in biology.” Does there really?
In a literal/trivial sense, all human actions have a direct cause in the biology of the human brain and body. But you are probably using “biology” in a way that refers to “coarse” biological causes like hormone levels in utero, rather than individual connections between neurons, as well as excluding social causes. In that case, it’s at least logically possible that the answer to this question is no. It seems extremely unlikely that coarse biological factors play no role in determining whether someone is trans (I expect coarse biological factors to be at least somewhat involved in determining the variance in every relevant high-level trait of a person), but it’s very plausible that there is not one discrete cause to point to, or that most of the variance in gender identity is explained by social factors.
If a brain scan said I “wasn’t really trans” I would just say it was wrong, because I choose what I am, not some external force.
This seems like a red herring to me. As far as I know, no transgender brain research is attempting to diagnose trans people by brain scan in a way that overrides their verbal reports and behavior; the aim is rather to find correlates of those verbal reports and behavior in the brain. If we find a characteristic set of features in the brains of most trans people, but not all, it will then be a separate debate whether we should consider this newly discovered thing to be the true meaning of the word “transgender”, or whether we should keep using the word the way we used it before, to refer to a pattern of self-identity and behavior. The “keep using it the way we did before” side seems quite reasonable to me. Even now, many people understand “transgender” as an “umbrella term” that encompasses people who may not share the same underlying motivations.
Morphological freedom without metaphysical freedom of will is pointless.
If by “metaphysical freedom of will” you mean libertarian free will, then I have to disagree. Even if libertarian free will doesn’t exist (it doesn’t), it is still beneficial to me for society to allow me the option of changing my body. If you are confused about how the concept of “options” can exist without libertarian free will, that problem has already been solved in Possibility and Could-ness.
I agree completely with the entirety of your comment, which makes some excellent points… with one exception:
If you are confused about how the concept of “options” can exist without libertarian free will, that problem has already been solved in Possibility and Could-ness.
It has never seemed to me that Eliezer successfully solved (and/or dissolved) the question of free will. As far as I can tell, the free will sequence skips over most of the actually difficult problems, and the post you link is one of the worst offenders in that regard.
What do you see as the actually difficult problems?

The actually difficult problem that’s specific to the question of free will is “how is the state space generated” (i.e., where do all these graph nodes come from in the first place, that our algorithm is searching through?).
The other actually difficult problem, which is not specific to the question of free will but applies also (and first) to Eliezer’s “dissolving” of problems like “How An Algorithm Feels From Inside”, is “why exactly should this algorithm feel like anything from the inside? why, indeed, should anything feel like anything from the inside?” Without an answer to this question (which Eliezer never gives and, as far as I can recall, never even seriously acknowledges), all of these supposed “solutions”… aren’t.
I’m inclined to give Yudkowsky credit for solving the “in scope” problems, and to defer the difficult problems you identify as “out of scope”.
For free will, the question Yudkowsky is trying to address is, “What could it possibly mean to make decisions in a deterministic universe?”
I think the relevant philosophical question being posed here is addressed by contemplating a chess engine as a toy model. The program searches the game tree in order to output the best move. It can’t know which move is best in advance of performing the search, and the search algorithm treats all legal moves as “possible”, even though the program is deterministic and will only end up outputting one of them.
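To make that concrete, here is a minimal sketch of the same idea in Python, using a simple subtraction game instead of chess (the game, the function names, and the scoring are purely illustrative stand-ins, not any actual engine’s code). The search enumerates every legal move as “possible”, and yet the whole computation is deterministic and outputs exactly one move.

```python
# Illustrative toy model: a deterministic negamax search over a subtraction
# game (take 1, 2, or 3 stones; whoever takes the last stone wins).
# The game and all names here are stand-ins for a chess engine's game tree.

def legal_moves(stones):
    """The moves the search treats as 'possible' from this position."""
    return [n for n in (1, 2, 3) if n <= stones]

def value(stones):
    """Negamax value for the player to move: +1 = forced win, -1 = forced loss."""
    if stones == 0:
        # The previous player took the last stone, so the player to move has lost.
        return -1
    return max(-value(stones - m) for m in legal_moves(stones))

def best_move(stones):
    """Evaluates every 'possible' move, then deterministically outputs one."""
    return max(legal_moves(stones), key=lambda m: -value(stones - m))

# The program cannot know which move is best before running the search, yet the
# same position always yields the same move.
print(best_move(10))  # -> 2 (leaving the opponent a multiple of four)
```

Nothing indeterministic is hiding anywhere in this program; the “could-ness” of the unchosen moves exists only inside the search, which is the distinction the toy model is meant to illustrate.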
In the case of human free will, it’s true that we don’t have a “game tree” written out the way the rules of chess specify the game tree for a chess engine, but figuring that out seems like “merely” an enormously difficult empirical cognitive science problem, rather than the elementary philosophical confusion being addressed by the blog posts. I feel like I “could” lift my arm, because if my brain computed the intent to lift my arm, it could output the appropriate nerve signals to make it happen, but I can’t know whether I will lift my arm in advance of computing the decision to do so, and the decision treats both the lift and not-lift outcomes as “possible”, even though the universe is deterministic and I’m only going to end up doing one of them.
The “how the algorithm feels” methodology is doing work (identifying the role could-ness plays in the “map” of choosing a chess move or lifting my arm, without presupposing fundamental could-ness in the “territory”), even if it doesn’t itself solve the hard problem of why algorithms have feelings.
I don’t dispute that both the “search algorithm” idea and the “algorithm that implements this cognitive functionality” idea are valuable, and cut through some parts of the confusions related to free will and consciousness respectively. But the things I mention are hardly “out of scope” if, without them, the puzzles remain (as indeed they do, IMO).
In any case, claiming that the questions of either free will or consciousness have been “solved” by these explanations is simply false, and that’s what I was objecting to.
In the case of human free will, it’s true that we don’t have a “game tree” written out the way the rules of chess specify the game tree for a chess engine, but figuring that out seems like “merely” an enormously difficult empirical cognitive science problem, rather than the elementary philosophical confusion being addressed by the blog posts.
This is the sort of claim that it’s premature to make prior to having even a rough functional sketch of the solution. Something might look like ‘“merely” an enormously difficult empirical cognitive science problem’, until you try to solve it, and realize that you’re still confused.