Can someone please fill me in, what are some of Michael Vassar’s best ideas, that made him someone who people “ruled in” and encouraged others to listen to?
Some examples of valuable true things I’ve learned from Michael:
Being tied to your childhood narrative of what a good upper-middle-class person does is not necessary for making intellectual progress, making money, or contributing to the world.
Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you’re not any smarter.
Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it “comes out right”.) Sometimes the best work of this kind doesn’t look grandiose or prestigious at the time you’re doing it.
The mind and the body are connected. Really. Your mind affects your body and your body affects your mind. The better kinds of yoga, meditation, massage, acupuncture, etc, actually do real things to the body and mind.
Science had higher efficiency in the past (late 19th-to-mid-20th centuries).
Examples of potentially valuable medical innovation that never see wide application are abundant.
A major problem in the world is a ‘hope deficit’ or ‘trust deficit’; otherwise feasible good projects are left undone because people are so mistrustful that it doesn’t occur to them that they might not be scams.
A good deal of human behavior is explained by evolutionary game theory; coalitional strategies, not just individual strategies.
Evil exists; in less freighted, more game-theoretic terms, there exist strategies which rapidly expand, wipe out other strategies, and then wipe themselves out. Not *all* conflicts are merely misunderstandings.
How intersubjectivity works; “objective” reality refers to the conserved *patterns* or *relationships* between different perspectives.
People who have coherent philosophies—even opposing ones—have more in common in the *way* they think, and are more likely to get meaningful stuff done together, than they do with “moderates” who take unprincipled but middle-of-the-road positions. Two “bullet-swallowers” can disagree on some things and agree on others; a “bullet-dodger” and a “bullet-swallower” will not even be able to disagree, they’ll just not be saying commensurate things.
Thanks! Here are my reactions/questions:

> Being tied to your childhood narrative of what a good upper-middle-class person does is not necessary for making intellectual progress, making money, or contributing to the world.
Seems right to me, as I was never tied to such a narrative in the first place.
> Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you’re not any smarter.

What kind of risks is he talking about here? Also, does he mean that people value their social positions too much, or that they’re not taking enough risks even given their current values?
> Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it “comes out right”.) Sometimes the best work of this kind doesn’t look grandiose or prestigious at the time you’re doing it.

Hmm, I used to spend quite a bit of time fiddling with assembly language implementations of encryption code to try to squeeze out a few more percent of speed. Pretty sure that is not as productive as more “grandiose” or “prestigious” activities like thinking about philosophy or AI safety, at least for me… I think overall I’m more afraid that someone who could be doing productive “grandiose” work chooses not to in favor of “fiddly puttering”, than the reverse.
> The mind and the body are connected. Really. Your mind affects your body and your body affects your mind. The better kinds of yoga, meditation, massage, acupuncture, etc, actually do real things to the body and mind.

That seems almost certain to be true, but I don’t see evidence that there’s a big enough effect for me to bother spending the time to investigate further. (I seem to be doing fine without doing any of these things, and I’m not sure who is deriving large benefits from them.) Do you want to try to change my mind about this?
> Science had higher efficiency in the past (late 19th-to-mid-20th centuries).

Couldn’t this just be that we’ve picked most of the low-hanging fruit, plus the fact that picking the higher fruit requires more coordination among larger groups of humans and that is very costly? Or am I just agreeing with Michael here?
> Examples of potentially valuable medical innovation that never see wide application are abundant.

This seems quite plausible to me, as I used to lament that a lot of innovations in cryptography never got deployed.
> A major problem in the world is a ‘hope deficit’ or ‘trust deficit’; otherwise feasible good projects are left undone because people are so mistrustful that it doesn’t occur to them that they might not be scams.

“Doesn’t occur to them” seems too strong, but I think I know what you mean. Can you give some examples of what these projects are?
> A good deal of human behavior is explained by evolutionary game theory; coalitional strategies, not just individual strategies.

Agreed, and I think this is a big problem as far as advancing human rationality, because we currently have a very poor theoretical understanding of coalitional strategies.
> Evil exists; in less freighted, more game-theoretic terms, there exist strategies which rapidly expand, wipe out other strategies, and then wipe themselves out. Not all conflicts are merely misunderstandings.

This seems plausible, but what are some examples of such “evil”? What happened to Enron, perhaps?
> How intersubjectivity works; “objective” reality refers to the conserved patterns or relationships between different perspectives.

It would make more sense to me to say that objective reality refers to whatever explains the conserved patterns or relationships between different perspectives, rather than the patterns/relationships themselves. I’m not sure if I’m just missing the point here.
> People who have coherent philosophies—even opposing ones—have more in common in the way they think, and are more likely to get meaningful stuff done together, than they do with “moderates” who take unprincipled but middle-of-the-road positions. Two “bullet-swallowers” can disagree on some things and agree on others; a “bullet-dodger” and a “bullet-swallower” will not even be able to disagree, they’ll just not be saying commensurate things.

I think I prefer to hold a probability distribution over coherent philosophies, plus a lot of weight on “something we’ll figure out in the future”.
Also a meta question: Why haven’t these been written up or discussed online more? In any case, please don’t feel obligated to answer my comments/questions in this thread. You (or others who are familiar with these ideas) can just keep them in mind for when you do want to discuss them online.
I think in part these could be “lessons relevant to Sarah”, a sort of philosophical therapy that can’t be completely taken out of context. Which is why some of these might seem low-relevance or obvious.
>> Fiddly puttering with something that fascinates you is the source of most genuine productivity. (Anything from hardware tinkering, to messing about with cost spreadsheets until you find an efficiency, to writing poetry until it “comes out right”.) Sometimes the best work of this kind doesn’t look grandiose or prestigious at the time you’re doing it.

> Hmm, I used to spend quite a bit of time fiddling with assembly language implementations of encryption code to try to squeeze out a few more percent of speed. Pretty sure that is not as productive as more “grandiose” or “prestigious” activities like thinking about philosophy or AI safety, at least for me… I think overall I’m more afraid that someone who could be doing productive “grandiose” work chooses not to in favor of “fiddly puttering”, than the reverse.

I suspect this might be a subtler point? http://paulgraham.com/genius.html suggests that really valuable contributions are bottlenecked more on obsession than on being good at directing attention in a “valuable” direction:

> For example, for the very ambitious, the bus ticket theory suggests that the way to do great work is to relax a little. Instead of gritting your teeth and diligently pursuing what all your peers agree is the most promising line of research, maybe you should try doing something just for fun. And if you’re stuck, that may be the vector along which to break out.
> This seems plausible but what are some examples of such “evil”? What happened to Enron, perhaps?

According to the official narrative, the Enron scandal is mostly about people engaging in actions that benefit themselves. I don’t know whether that’s true, as I don’t have much insight into it. If it is true, that’s not what is meant. It’s not about actions that are actually self-beneficial.

Let’s say I’m at lunch with a friend. I draw the most benefit from the lunch when we have a conversation as intellectual equals. At the same time, there’s sometimes an impulse to say something to put my friend down and to demonstrate that I’m higher than him in the social pecking order. If I follow that impulse and say something to put my friend down, I’m engaging in evil in the sense Vassar talks about.

The instinct has some value in a tribal context where it’s important to fight over the social pecking order, but I draw no value from it at lunch with my friend.

I’m a person who has some self-awareness, and I try not to go down such roads when those evolutionary instincts come up. On the other hand, you have people in the middle management of immoral mazes who spend a lot of their time following such instincts and being evil.
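The “strategies which rapidly expand, wipe out other strategies, and then wipe themselves out” framing can be made concrete with a toy simulation. This is my own sketch, not anything from the thread, and all parameter values are arbitrary illustrations: exploiters grow quickly by preying on cooperators, exhaust them, and then collapse because nothing is left to exploit.

```python
# Toy boom-and-bust model of a "self-terminating" strategy.
# Exploiters (e) gain by consuming cooperators (c); once the cooperators
# are gone, the exploiters' death rate dominates and they collapse too.

def simulate(steps=120, c=1000.0, e=10.0,
             predation=0.0004, conversion=1.0, e_death=0.2):
    """Discrete-time predator-prey-style dynamics; returns (c, e) per step."""
    history = []
    for _ in range(steps):
        eaten = predation * c * e           # exploiter-cooperator encounters
        c = max(0.0, c - eaten)             # cooperators are consumed
        e = max(0.0, e + conversion * eaten - e_death * e)
        history.append((c, e))
    return history

hist = simulate()
peak_e = max(e for _, e in hist)
final_c, final_e = hist[-1]
# The exploiter population booms far above its starting size, crashes the
# cooperator population, and then dwindles toward zero itself.
```

With these (made-up) numbers the exploiters peak at well over ten times their starting population around a quarter of the way through the run, and by the end they have nearly vanished, having destroyed most of the cooperators. The point of the sketch is only that “expands, wipes out others, wipes itself out” is a perfectly ordinary dynamical pattern, not a mystical one.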
> Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you’re not any smarter.

Oh hey, I’ve accidentally tried this just by virtue of my personality!

Results: high-variance ideas are high-variance. YMMV, but so far I haven’t had a “hit”. (My friend politely calls my ideas “hits-based ideas”, which is a great term.)
My sense is that his worldview was ‘very sane’ in the cynical HPMOR!Quirrell sense (and he was one of the major inspirations for Quirrell, so that’s not surprising), and that he was extremely open about it in person in a way that was surprising and exciting.
I think his standout feature was breadth more than depth. I am not sure I could distinguish which of his ideas were ‘original’ and which weren’t. He rarely if ever wrote things, which makes the genealogy of ideas hard to track. (Especially if many people who do write things were discussing ideas with him and getting feedback on them.)
Good points (similar to Raemon). I would find it useful if someone created some guidance for safe ingestion (or an alternative source) of MV-type ideas/outlook; I find the “subtle skill of seeing the world with fresh eyes” potentially extremely valuable, which is why I suppose Anna kept on encouraging people.
I think I have this skill, but I don’t know that I could write this guide. Partly this is because there are lots of features about me that make this easier, which are hard (or too expensive) to copy. For example, Michael once suggested part of my emotional relationship to lots of this came from being gay, and thus not having to participate in a particular variety of competition and signalling that was constraining others; that seemed like it wasn’t the primary factor, but was probably a significant one.
Another thing that’s quite difficult here is that many of the claims are about values, or things upstream of values; how can Draco Malfoy learn the truth about blood purism in a ‘safe’ way?
Thanks (& Yoav for the clarification). So, in your opinion, is MV dangerous to a class of people with certain kinds of beliefs the way Harry was to Draco (where the risk was a pure necessity of breaking out of wrong ideas), or is he dangerous because of an idea package or bad motivations of his own?
When someone has an incomplete moral worldview (or one based on easily disprovable assertions), there’s a way in which the truth isn’t “safe” if safety is measured by something like ‘reversibility’ or ‘ability to continue being the way they were.’ It is also often the case that one can’t make a single small change, and then move on; if, say, you manage to convince a Christian that God isn’t real (or some other thing that will predictably cause the whole edifice of their worldview to come crashing down eventually), then the default thing to happen is for them to be lost and alone.
Where to go from there is genuinely unclear to me. Like, one can imagine caring mostly about helping other people grow, in which case a ‘reversibility’ criterion is sort of ludicrous; it’s not like people can undo puberty, or so on. If you present them with an alternative system, they don’t need to end up lost and alone, because you can directly introduce them to humanism, or whatever. But here you’re in something of a double bind; it’s somewhat irresponsible to break people’s functioning systems without giving them a replacement, and it’s somewhat creepy if you break people’s functioning systems to pitch your replacement. (And since ‘functioning’ is value-laden, it’s easy for you to think their system needs replacing.)
Alas, I spent this year juuust coming to the conclusion that it was all more dangerous than I thought, and I am still wrapping my brain around it.
I suppose it was noteworthy that I don’t think I got very damaged, and most of that was via… just not having prolonged contact with the four Vassar-type people that I encountered (the two people with whom I did have more extended contact, I think, may have damaged me somewhat).
So, I guess the short answer is “if you hang out with weird iconoclasts with interesting takes on agency and seeing the world, and you don’t spend more than an evening every 6 months with them, you will probably get a slight benefit with little to no risk. If you hang out more than that you take on proportionately more risk/reward. The risks/rewards are very person specific.”
My current take is something like “the social standing of this class of person should be that of the mysterious old witch who lives at the end of the road, whom everyone respects but, like, you’re kinda careful about when you go ask for their advice.”

FWIW, I’ve never had a clear sense that Vassar’s ideas were especially good (but, also, no clear sense that they weren’t). More that Vassar generally operates in a mode of heavy brainstorm-style thinking that involves seeing the world in a particular way. And this has high-variance-but-often-useful side effects.
Exposure to that way of thinking has a decent chance of causing people to become more agenty, or dislodged from a subpar local optimum, or gain some subtle skills about seeing the world with fresh eyes. The point is less IMO about the ideas and more about having that effect on people.
(With the further caveat that this is all a high variance strategy, and the tail risks do not fail gracefully, sometimes causing damage, in ways that Anna hints at and which I agree would be a much larger discussion)
> Most people (esp. affluent ones) are way too afraid of risking their social position through social disapproval. You can succeed where others fail just by being braver even if you’re not any smarter.

One thing I’ve wondered about is: how true is this for someone who’s dumber than others?
(Asking for, uh, a friend.)
I would say it’s possible, just at a lower probability, proportional to the difference in intelligence. More intelligence will still correspond to better ideas on average.
That said, it was not acclaimed scientists or Ivy League research teams that invented the airplane. It was two random high-school dropouts in Ohio. This is not to say that education or prestige are the same thing as intelligence[1], simply that brilliant innovations can sometimes be made by the little guy who’s not afraid to dream big.
[1] By all accounts, the Wright Brothers were intelligent.
Ah sorry, would you mind elaborating the Draco point in normie-speak, if you have the bandwidth?
He is referring to HPMOR, where the following happens (major spoiler for the first 25 chapters):
Harry tries to show Draco the truth about blood purism, and Draco goes through a really bad crisis of faith. Harry tries to do it effectively and gracefully, but nonetheless it is hard, and could even be somewhat dangerous.