Although I don’t usually write LW comments, I’m writing a post right now and this is helping me clarify my thoughts on a range of historical incidents.
In hindsight, I’m worried that you wrote this apology. I think it’s an unhealthy obeisance.
I suspect you noticed how Eliezer often works to degrade the status of people who disagree with him and otherwise treats them poorly. As I will argue in an upcoming essay, his writing is often optimized to exploit intellectual insecurity (e.g. by frequently praising his own expertise, or appealing to a fictional utopia of fictional geniuses who agree that you’re an idiot or wrong[1]) and to demean others’ contributions (e.g. by claiming to have invented them already, or calling them fake, or emphasizing how far behind everyone else is). It’s not that these claims can’t have factual merit; rather, their presentation and usage seem optimized to push others down, which has the effect of raising his own status.
Anger and frustration are rational reactions in that situation (though it’s important to express those emotions in healthy ways; I think your original comment wasn’t perfect there). And yet you ended up the one humbled for focusing on status too much!
[1] See https://www.lesswrong.com/posts/tcCxPLBrEXdxN5HCQ/shah-and-yudkowsky-on-alignment-failures and search for “even if he looks odd to you because you’re not seeing the population of other dath ilani.”
by frequently praising his own expertise, or appealing to a fictional utopia of fictional geniuses who agree that you’re an idiot or wrong[1]

This part in particular is easily one of the most problematic things I see Yudkowsky do. A fictional world can be almost arbitrarily different from our world, so lessons from a fictional world often fail to generalize (and that’s conditional on the fiction even being logically coherent). There is very little reason to argue this way unless you are very careful, and at that point you could just focus on the lessons of our own world’s history.
Even when I mention things like halting oracles, which are almost certainly not possible in the world we live in, I don’t make the mistake of thinking they can give us insights into our own world: our world and a world where the halting problem is computable are so different that many lessons don’t transfer (I’m referring to a recent discussion on Discord here).
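To make the halting-oracle point concrete, here is a minimal sketch, in Python, of the standard diagonalization argument for why no halting oracle can exist in a world like ours. The `halts` and `paradox` functions are hypothetical names introduced only for this illustration; `halts` is assumed purely for the sake of contradiction.

```python
# Sketch of the classic diagonalization argument showing why a halting
# oracle cannot exist. `halts` is hypothetical, assumed only for the sake
# of contradiction; it is not implementable.

def halts(program, argument):
    """Hypothetical oracle: True iff program(argument) eventually halts."""
    raise NotImplementedError("assumed for contradiction; no such oracle exists")

def paradox(program):
    """Do the opposite of whatever the oracle predicts about program(program)."""
    if halts(program, program):
        while True:   # oracle says it halts, so loop forever
            pass
    else:
        return        # oracle says it loops, so halt immediately

# Does paradox(paradox) halt?
#   If halts(paradox, paradox) returned True, paradox(paradox) would loop forever.
#   If it returned False, paradox(paradox) would halt.
# Either way the oracle is wrong about at least one program, so it cannot exist.
```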
There are good reasons why we mostly shouldn’t use fiction to inform real-world beliefs, courtesy of Eliezer Yudkowsky himself:
https://www.lesswrong.com/posts/rHBdcHGLJ7KvLJQPk/the-logical-fallacy-of-generalization-from-fictional