I agree that the phrase “taking seriously the idea that you are a computation” does not directly point at the cluster, but I still think it is a natural cluster. I think that computational neuroscience is in fact high up on the list of things I expect LessWrongers to be interested in. To the extent that they are not as interested in it as in other things, I think it is because it is too hard to actually get much that feels like algorithmic structure out of neuroscience.
I think that the interest in anthropics is related to the fact that computations are the kind of thing that can be multiply instantiated. I think logic is a computation-like model of epistemics. I think that Haskell is not really that much about this philosophy; it is more about mathematical elegance. (I think that liking elegance/simplicity is mostly distinct from the “I am a computation” philosophy, and is also selected for at MIRI.)
I think that a lot of the sequences (including the first, third, and fourth posts in your list) are about thinking about the computation that you are running, in contrast and relation to an ideal (AIXI-like) computation.
I think that “That Alien Message” is directly about getting the reader to imagine being a subprocess inside an AI, and to think about what they would do in that situation.
I think that the politics post is not that representative of the sequences; it bubbled to the top by karma because politics gets lots of votes.
(It does feel a little like I am justifying the connection in a way that could be used to justify false connections. I still believe that there is a cluster, very roughly described as “taking seriously the idea that you are a computation,” that is a natural class of ideas and the heart of the sequences.)
I think the vast majority of people who bounce off the sequences do so either because they are too long-winded or because they don’t like Eliezer’s writing style. I predict that if you ask someone involved in trying to popularize the sequences, they will agree.
I agree, but I think that the majority of people who love the sequences do so because they deeply share this philosophical stance and don’t find it much elsewhere, more than because they find, e.g., a bunch of advice that actually works for them.
I think the effect you describe is also part of why people like the sequences, but I think a stronger effect is that there are a bunch of people who had a certain class of thoughts prior to reading the sequences, didn’t see thoughts of this type anywhere before finding LessWrong, and then saw those thoughts in the sequences. (I especially believe this about the kind of people who get hired at MIRI.) Prior to the sequences, they were intellectually lonely, with no one to talk to who shared this philosophical stance, which is a large part of their worldview.
I view the sequences as a collection of thoughts similar to things I was already thinking, which then served as a flag to connect me with people who were also already thinking those things, more than as something that taught me a bunch of stuff. I predict a large portion of karma-weighted LessWrongers will say the same thing. (This isn’t inconsistent with your theory, but I think it would be evidence for mine.)
My theory about why people like the sequences is very entangled with the philosophical stance actually being a natural cluster, and thus something that many different people would arrive at independently.
I think that MIRI selects for the kind of person who likes the sequences, which under my theory means selecting for a philosophical stance related to being a computation, and under your theory seems entangled with having little mental resistance to (some kinds of) narratives.