David Chapman, once you talk to him and actually double-crux, is much more sensible than your reading of him. He was an AI researcher himself.
His popular writing has a vibe of “nerds and I F*ing Love Science fans: you aren’t so great!” that can be off-putting. If you ignore the implicit insult, assume it can’t hurt you, and try to figure out what he literally believes is true, it’s not actually irrationalist.
This sounds true, and is useful clarification for David Chapman in particular, but I don’t think it’s bad for PDV to pump against tone and signaling that will, if unchecked, reliably undermine norms of epistemic hygiene and reliably incentivize/cause people to feel proud of [behavior sets] that are roughly equally likely to emerge from “serious rationalist investigating means and modes that are outside of rationalist culture norms” and “sloppy thinker going off the rails.”
In other words, I could believe you entirely about David Chapman (and basically do) and nevertheless want to upvote a policy that disincentivizes people who talk like that. If eight out of ten people who talk like that are Doing It Wrong, then in my opinion the responses are, ranked from least to most awesome:
Let them have the microphone and lean into the fallacy of the gray (deontological allow)
Hold a low bar of suspicion and reject particular people or claims (“for” with exceptions)
Shut them down and shut them out (deontological against)
Hold a high bar of suspicion and vet particular people or claims (“against” with exceptions)
I think we know enough about how things play out (see the history of General Semantics’ slow unraveling and decline) to favor both the third and fourth bullets over the second. I think the second is a siren song that overestimates our ability to expunge bad habits and bad norms after they’ve already taken root. In my experience, those bad habits and bad norms just never go away, so it’s worth being proactive in preventing them from forming in the first place.
I acknowledge that Good and Smart and Perceptive people could disagree. Wouldn’t mind pointing more clearly at cruxes if that seems productive, but would ask that the other group go first.
Whoa whoa whoa.
This essay, which you recommend “holding to a high bar of suspicion”, is basically “guilty” of contrasting “heart” and “head”, and of claiming that personal experience or maturity can be valuable in ways that explicit reasoning is not.
Gordon’s general corpus of work is about developing psychological maturity, which is pretty much Kegan’s topic as well. It’s about the squishy stuff that we call “wisdom.”
I can see some possible reasons why this topic and outlook is off-putting.
First of all, it’s squishy and hard to prove, by its nature. I don’t think that’s inherently bad—poetry and literature are also squishy insights into human nature, and I think they’re valuable.
Second of all, people like Gordon and the developmental psychologists have certain implicit presumptions about the Good Life that I’m not sure I share. They tend to favor adapting to the status quo more than changing it—in writers like Erikson and Kohlberg, there’s a lot of pro-death sentiment, and a lot of talk about cooperating with one’s dominant society. They tend to talk about empathy in ways that sometimes (though not always) conflict with my beliefs about autonomy.
At its worst, the ideal of humanely accepting the “complexity” of life leads people to commit some actual harms. Siddhartha Mukherjee, the cancer researcher and physician who wrote the Pulitzer-winning The Emperor of All Maladies, is famous for his humane, compassionate, “mature” outlook, and would probably get classified as having a high “developmental stage”; he is also a major popular promoter of the view that cancer is intrinsically incurable. In my opinion, passive acceptance that cancer cannot be cured is one of the reasons we haven’t made more progress in cancer treatment. Medical progress is a real thing in the real world, and sounding wise by accepting death is not actually good for humankind.
But. I definitely don’t believe in jumping down the throat of everybody who sounds vaguely developmental-psych-ish or Continental-philosophy-ish and calling on the community to shun them! That’s not what people seeking truth would do.
I know the feeling of “this is scary voodoo designed to demoralize me, get it away!” I’ve learned that you have to take time on your own to face your fears, read the original sources behind the “voodoo”, and pull apart the ideas until they’re either trivially wrong (and hence not scary) or have a grain of truth (which is also not scary).
I can see the voodoo too. I can see collectivist and mystical vibes a long way off. Let’s be gentlemen anyway.
To give an example, my first reaction to reading Heidegger was “this is voodoo!” I still think he was a bad person with bad values. But I can clarify somewhat what I think is true in his philosophy and what is false in it, and I think my understanding of psychology (and potentially AI) is sounder for that struggle.
There’s stuff I can’t read without getting “triggered.” I know my limits and proceed slowly. But ultimately, if there’s something I have reason to believe has substance on topics I care about, I expect to eventually hit the books and wrestle with the ideas. I think I’ll eventually get there with developmental psychology. Which is why I think people like Gordon are good to have in my noosphere.
D’awwww, thanks! :-)
I disagree with very little in the above, and think that disagreement is mainly summed up as me being slightly more conservative/wary and you being slightly less so. =)
I do, however, object to the implication that I was, in any way, advocating “jumping down the throat of” or “calling on the community to shun” ideas that pattern-match to things that are bad-in-expectation. I think that people reading this thread will reasonably assume that that was your summary of me (since you were making that objection in response to my comment) and that this is an uncharitable strawman that clearly doesn’t match the actual words in my actual comment.
Perhaps that impression would not have been left if I had more strenuously objected to the specific over-the-top stuff in PDV’s post (“cancer” and so forth)? I left those out of my endorsement, but maybe that would’ve been clearer if I’d included specific lines against them.
Do you have a specific document in mind when you reference the history of General Semantics’ slow unraveling and decline? What do you think they should have done differently?
Nothing wrong with talking “like this”[1] if you’re having good epistemics while you do it. I do agree that it’s harder to evaluate and should get held to the same high standard that easily verified things are held to.
[1] (for values of “like this” that aren’t “have bad epistemics”)
Ehhhhhhhhr mooooostly agree? But there are social effects, like shifts in the Overton window. I claim that I have seen high-status people with rock-solid epistemics spout frothing madness as they explored the edges of what is known. They were taking everything they said with heavy helpings of salt, but they weren’t transparent to onlookers about the fact that they were spitballing. Some of those onlookers then went on to spout frothing madness themselves (with less rock-solid epistemics), and by the time you got three or four steps removed there were people who just thought that sort of reasoning was a central example of rationality, because lots of people were doing it sans context or caveat.
annoying but fair.
I agree that it is annoying. We’ll patch this in the update to Humans 2.0.
I’m not sure I believe that isn’t a contradiction in terms.
I don’t know what he believes. I know only what he says. If he doesn’t believe what he says, that isn’t exactly a ringing endorsement, but would complicate things.
What I do know is that his entire notion of meaningness, and everything I’ve ever read from that blog, is anti-truth and anti-rationality. It’s grounded in assertions that rationality has problems which I do not accept are problems, and it makes bald-faced claims that are just not true (see: eternalism, aka “Truth exists”, which is asserted to be wrong because it contains divine command theory as a subset) while laying the foundation for his other arguments. Ex falso sequitur quodlibet, and his writing style has all the bad qualities of Eliezer’s, so I can’t bring myself to read it in enough depth to write a point-by-point rebuttal.