The most natural shared interest for a group united by “taking seriously the idea that you are a computation” seems like computational neuroscience, but that’s not on your list, nor do I recall it being covered in the sequences. If we were to tell 5 random philosophically inclined STEM PhD students to write a lit review on “taking seriously the idea that you are a computation” (giving them that phrase and nothing else), I’m quite doubtful we would see any sort of convergence towards the set of topics you allude to (Haskell, anthropics, mathematical logic).
As a way to quickly sample the sequences, I went to Eliezer’s userpage, sorted by score, and checked the first 5 sequence posts:

https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences
https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message
https://www.lesswrong.com/posts/RcZCwxFiZzE6X7nsv/what-do-we-mean-by-rationality-1
https://www.lesswrong.com/posts/HLqWn5LASfhhArZ7w/expecting-short-inferential-distances
https://www.lesswrong.com/posts/6hfGNLf4Hg5DXqJCF/a-fable-of-science-and-politics
IMO very little of the content of these 5 posts fits strongly into the theme of “taking seriously the idea that you are a computation”. I think this might be another one of these rarity narrative things (computers have been a popular metaphor for the brain for decades, but we’re the only ones who take this seriously, same way we’re the only ones who are actually trying).
the sequences pipeline is largely creating a selection for this philosophical stance
I think the vast majority of people who bounce off the sequences do so either because it’s too longwinded or they don’t like Eliezer’s writing style. I predict that if you ask someone involved in trying to popularize the sequences, they will agree.
In this post Eliezer wrote:

I’ve written about how “science” is inherently public...
But that’s only one vision of the future. In another vision, the knowledge we now call “science” is taken out of the public domain—the books and journals hidden away, guarded by mystic cults of gurus wearing robes, requiring fearsome initiation rituals for access—so that more people will actually study it.
I assume this has motivated a lot of the stylistic choices in the sequences and Eliezer’s other writing: the 12 virtues of rationality, the litany of Gendlin/Tarski/Hodgell, parables and fables, Jeffreyssai and his robes/masks/rituals.
I find the sequences to be longwinded and repetitive. I think Eliezer is a smart guy with interesting ideas, but if I wanted to learn quantum mechanics (or any other academic topic the sequences cover), I would learn it from someone who has devoted their life to understanding the subject and is widely recognized as a subject matter expert.
From my perspective, the question is how anyone gets through all 1800+ pages of the sequences. My answer is that the post I linked is right. The mystical presentation, where Eliezer plays the role of your sensei who throws you to the mat out of nowhere if you forget to keep your center of gravity low, really resonates with some people (and really doesn’t resonate with others). By the time someone gets through all 1800+ pages, they’ve invested a significant chunk of their ego in Eliezer and his ideas.
I agree that the phrase “taking seriously the idea that you are a computation” does not directly point at the cluster, but I still think it is a natural cluster. I think that computational neuroscience is in fact high up on the list of things I expect LessWrongers to be interested in. To the extent that they are not as interested in it as in other things, I think it is because it is too hard to actually get much that feels like algorithmic structure from neuroscience.
I think that the interest in anthropics is related to the fact that computations are the kind of thing that can be multiply instantiated. I think logic is a computational-like model of epistemics. I think that Haskell is not really that much about this philosophy, and is more about mathematical elegance. (I think that liking elegance/simplicity is mostly different from the “I am a computation” philosophy, and is also selected for at MIRI.)
I think that a lot of the sequences (including the first and third and fourth posts in your list) are about thinking about the computation that you are running in contrast and relation to an ideal (AIXI-like) computation.
I think that “That Alien Message” is directly about getting the reader to imagine being a subprocess inside an AI, and thinking about what they would do in that situation.
I think that the politics post is not that representative of the sequences, and it bubbled to the top by karma because politics gets lots of votes.
(It does feel a little like I am justifying the connection in a way that could be used to justify false connections. I still believe that there is a cluster, very roughly described as “taking seriously the idea that you are a computation”, that is a natural class of ideas and the heart of the sequences.)
I think the vast majority of people who bounce off the sequences do so either because it’s too longwinded or they don’t like Eliezer’s writing style. I predict that if you ask someone involved in trying to popularize the sequences, they will agree.
I agree, but I think that the majority of people who love the sequences do so because they deeply share this philosophical stance, and don’t find it much elsewhere, more so than because they e.g. find a bunch of advice in it that actually works for them.
I think the effect you describe is also part of why people like the sequences, but I think that a stronger effect is that there are a bunch of people who had a certain class of thoughts prior to reading the sequences, didn’t see thoughts of this type before finding LessWrong, and then saw these thoughts in the sequences. (I especially believe this about the kind of people who get hired at MIRI.) Prior to the sequences, they were intellectually lonely in not having people to talk to who shared this philosophical stance, which is a large part of their worldview.
I view the sequences as a collection of thoughts similar to things that I was already thinking, which then served as a flag to connect me with people who were also already thinking the same things, more so than as something that taught me a bunch of stuff. I predict a large portion of karma-weighted LessWrongers will say the same thing. (This isn’t inconsistent with your theory, but I think it would be evidence for mine.)
My theory about why people like the sequences is very entangled with the philosophical stance actually being a natural cluster, and thus something that many different people would have independently.
I think that MIRI selects for the kind of person who likes the sequences, which under my theory is a philosophical stance related to being a computation, and under your theory seems entangled with little mental resistance to (some kinds of) narratives.
I notice I like “you are an algorithm” better than “you are a computation”, since “computation” feels like it could point to a specific instantiation of an algorithm, and I think that the algorithm, as opposed to an instantiation of the algorithm, is an important part of it.
This sounds right to me. FDT feels more natural when I think of myself as an algorithm than when I think of myself as a computation, for example.
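To make the algorithm/instantiation distinction concrete, here is a minimal sketch in Python (the payoffs and function names are hypothetical, not anything from this thread): an FDT-style chooser picks the output of the decision algorithm that both twins share, which fixes both instantiations’ moves at once, while a CDT-style chooser best-responds to the other instantiation’s move as if it were a fixed fact.

```python
# A minimal, hypothetical sketch (payoffs and names are illustrative, not from
# the thread) of why "algorithm" vs. "instantiation of an algorithm" matters.
# In a twin prisoner's dilemma, both players are instantiations of the same
# decision algorithm, so choosing that algorithm's output fixes both moves.

# Payoff to "me" given (my_move, twin_move); standard PD ordering.
PAYOFF = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def fdt_style_choice() -> str:
    """Choose the output of the shared algorithm; the twin's move is
    whatever that same algorithm outputs, i.e. the same move."""
    return max(["C", "D"], key=lambda move: PAYOFF[(move, move)])

def cdt_style_choice(twin_move: str) -> str:
    """Treat the twin's move as a fixed fact about a separate
    instantiation and best-respond to it."""
    return max(["C", "D"], key=lambda move: PAYOFF[(move, twin_move)])

print(fdt_style_choice())      # "C": cooperation wins when it fixes both copies
print(cdt_style_choice("C"))   # "D": defection dominates against any fixed move
```

The point is just that “what does this algorithm output?” and “what does this particular copy do, holding the other copy fixed?” can come apart.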
To be slightly more precise, I think I historically felt like I identified with like 60% of framings in the general MIRI cluster (at least the way it appears in public outputs) and now I’m like 80%+, and part of the difference here was that I already was pretty into stuff like empiricism, materialism, Bayesianism, etc., but I previously (not very reflectively) had opinions and intuitions in the direction of thinking of myself as a computational instance, and these days I can understand the algorithmic framing much better (even though it’s still not very intuitive/natural to me).
(Numbers made up and not well thought out)