I was always stuck in the awkward position of feeling like I was surrounded by people who took him more seriously than I felt he ought to be taken.
Heh, the same feeling here. I didn’t have much opportunity to interact with him in person. I remember repeatedly hearing praise about how incredibly smart he is (from people whom I admired), then trying to find something smart written by him, and feeling unimpressed and confused, like maybe I wasn’t reading the right texts or I failed to discover the hidden meaning that people smarter than me have noticed.
Hypothesis 1: I am simply not smart enough to recognize his greatness. I can recognize people one level above me, and they can recognize people one level above them, but when I try to understand someone two levels above me, it’s all gibberish to me.
Hypothesis 2: He is more persuasive in person than in writing. (But once he has impressed you in person, you will see greatness in his writing, too. Maybe because of the halo effect. Maybe because now you understand the hidden layers of what he actually meant.) Maybe he is more persuasive in person because he can optimize his message for the receiver; which might be a good thing, or a bad thing.
Hypothesis 3: He gives high-variance advice. Some of it amazingly good, some of it horribly wrong. When people take him seriously, some of them benefit greatly, others suffer. Those who benefitted will tell the story. (Those who suffered will leave the community.)
My probability distribution was gradually shifting from 1 to 3.
Not a direct response to you, but if anyone who hasn’t talked to Vassar wants an example of a Vassar conversation that may be easier to understand or get some sense from than most examples (though it will probably still contain a fair bit that seems false or confusing), you might try Spencer Greenberg’s podcast with Vassar.
As a datapoint: I listened to that podcast 4 times, and took notes 3 of those 4 times, to try to clearly parse what he’s saying. I certainly did not fully succeed.
My notes.
It seems like he said some straightforwardly contradictory things? For instance, that strong conflict theorists trust their own senses and feelings more, but also trust them less?
I would really like to understand what he’s getting at, by the way, so if it is clearer for you than it is for me, I’d actively appreciate clarification.
I tried reading / skimming some of that summary.
It made me want to scream.
What a horrible way to view the world / people / institutions / justice.
I should maybe try listening to the podcast to see if I have a similar reaction to that.
Seeing as how you posted this 9 days ago, I hope you did not bite off more than you could chew, and I hope you do not want to scream anymore.
In Harry Potter the standard practice seems to be to “eat chocolate” and perhaps “play with puppies” after exposure to ideas that are both (1) possibly true, and (2) very saddening to think about.
Then there is Gendlin’s Litany (and please note that I am linking to a critique, not to unadulterated “yay for the litany” ideas), which I believe is part of LessWrong’s canon somewhat on purpose. In the critique there are second and third thoughts along these lines, which I admire for their clarity, and also for their hopefulness.
Ideally [a better version of the Litany] would communicate: “Lying to yourself will eventually screw you up worse than getting hurt by a truth,” instead of “learning new truths has no negative consequences.”
This distinction is particularly important when the truth at hand is “the world is a fundamentally unfair place that will kill you without a second thought if you mess up, and possibly even if you don’t.”
EDIT TO CLARIFY: The person who goes about their life ignoring the universe’s Absolute Neutrality is very fundamentally NOT already enduring this truth. They’re enduring part of it (arguably most of it), but not all. Thinking about that truth is depressing for many people. That is not a meaningless cost. Telling people they should get over that depression and make good changes to fix the world is important. But saying that they are already enduring everything there was to endure, seems to me a patently false statement, and makes your argument weaker, not stronger.
The reason to include the Litany (flaws and all?) in a canon would be specifically to try to build a system of social interactions that can at least sometimes talk about understanding the world as it really is.
Then, atop this shared understanding of a potentially sad world, the social group with this litany as common knowledge might actually engage in purposive (and “ethical”?) planning processes that will work because the plans are built on an accurate perception of the barriers and risks of any given plan. In theory, actions based on such plans would mostly tend to “reliably and safely accomplish the goals” (maybe not always, but at least such practices might give one an edge) and this would work even despite the real barriers and real risks that stand between “the status quo” and “a world where the goal has been accomplished”… thus, the litany itself:
What is true is already so. Owning up to it doesn’t make it worse. Not being open about it doesn’t make it go away.
And because it’s true, it is what is there to be interacted with. Anything untrue isn’t there to be lived. People can stand what is true, for they are already enduring it.
My personal experience, as a person with feelings, is that I can work on “the hot stuff” only in small motions, mostly/usually as a hobby, because otherwise the totalizing implications of some ideas threaten to cause an internal information cascade that is probably abstractly undesirable, and if the cascade happens it might require the injection of additional cognitive and/or emotional labor of a very unusual sort in order to escape from the metaphorical “gravity well” of perspectives like this, which have an internal logic that “makes as if to demand” that the perspective not be dropped, except maybe “at one’s personal peril”.
Running away from the first hint of a non-trivial infohazard, especially an infohazard being handled without thoughtful safety measures, is a completely valid response in my book.
Another great option is “talk about it with your wisest and most caring grandparent (or parent)”.
Another option is to look up the oldest versions of the idea, and examine their sociological outcomes (good and bad, in a distribution), and consider if you want to be exposed to that outcome distribution.
Also, you don’t have to jump in. You can take baby steps (one per week or one per month or one per year) and re-apply your safety checklist after each step?
Personally, I try not to put “ideas that seem particularly hot” on the Internet, or in conversations, by default, without verifying things about the audience, but I could understand someone who was willing to do so.
However, I also don’t consider a given forum to be “the really real forum, where the grownups actually talk”… unless infohazards like this cause people to have some reaction OTHER than traumatic suffering displays (and upvotes of the traumatic suffering display from exposure to sad ideas).
This leads me to be curious about any second thoughts or second feelings you’ve had, but only if you feel ok sharing them in this forum. Could you perhaps reply with:
<silence> (a completely valid response, in my book)
“Mu.” (that is, being still in the space, but not wanting to pose or commit)
“The ideas still make me want to scream, but I can afford emitting these ~2 bits of information.” or
“I calmed down a bit, and I can think about this without screaming now, and I wrote down several ideas and deleted a bunch of them and here’s what’s left after applying some filters for safety: <a few sentences with brief short authentic abstractly-impersonal partial thoughts>”.
There are also these two podcasts, which cover quite a variety of topics, for anyone who’s interested:
You’ve Got Mel—With Michael Vassar
Jim Rutt Show—Michael Vassar on Passive-Aggressive Revolution
I haven’t seen/heard anything particularly impressive from him either, but perhaps his ‘best work’ just isn’t written down anywhere?