Assuming “lizardman” here is referring to this post, the usage of terminology seems wrong. In that post, “lizardman” is used specifically to mean rare outliers, so under that definition, it’s quite impossible for 99.9%+ of the world to be that. It also portrays a particular archetype of unreasonable person, which I think is what you’re intending to refer to; but as far as I can tell that archetype is in fact rare.
That post made me write this post, but I’m not sure that I’m referring to the same thing. Basically I mean something like “people whose beliefs or actions are so unreasonable, even on things that they should have thought long and hard about, that they seem to belong to a different species from myself.” Like Robin Hanson in this tweet, or Eliezer Yudkowsky when he thought he would singlehandedly solve all the philosophical problems associated with building a Friendly AI (looks like I can’t avoid giving examples after all). I’m pretty sure these two belong in the top 0.1 percentile of all humans as far as being reasonable, hence the title.
For the record, my update here has been of the sort “I should expect from the outside that I would also see this sort of behavior from myself without a very targeted effort to avoid it, as it seems to me like a human universal,” rather than “This is an easy mistake for me to avoid.”