Weighing back in here: I'll clarify the comment that the one you quote was based on. My OH had this precise thought ("self-important pompous fools") when he first came across this site. He found the content of the sequences trivial. He generally finds it easy to be rational, and didn't see the point of getting a community together to learn how to be more rational. In fact, there's a large (reverse) inferential distance for him in even understanding that some people find figuring these ideas out genuinely non-trivial (and yet still care about them). He doesn't understand how people can compartmentalise their minds at all.
Very few people sort themselves in bands according to “rationality”, and my OH takes part in just regular discussions with regular smart people, except he’s better at correcting wrong arguments than most. “Some people being unfixably wrong about things” is just a part of life for him, and without ideas like transhumanism to motivate you, it’s quite hard to bring yourself to care about how wrong the rest of the world is—just being right yourself is sufficient.
Thanks for this explanation. Does your OH participate in discussions here? If so, does he enjoy them (more than discussions with "regular smart people")? Do you or he have any suggestions for how we might better attract people like him (i.e., people who are "naturally" rational and find it hard to understand at first why Eliezer is making a big deal out of "rationality")?
He doesn’t participate in discussions here, because he doesn’t think he has anything new to add. (This discussion is starting to prove otherwise though!)
I asked him what more could be done to attract people like him. He said: "I think what you need to do to encourage people like me is essentially to pose some interesting problems (e.g. free will) prominently, along with a hint that there is an /interesting/ solution (e.g. suggesting that the free will question is similar to the tree-in-a-forest question in how it can be answered). That would give a stronger incentive to read on."
So basically, what would help with that is an intro page that says: "here's where to start, but if you already know the basics, here are some interesting problems to draw you in."
The other problem for him is that a lot of the content reads like what he calls "pulp-philosophy": to philosophy what pulp fiction is to literature. "If you find an average philosophy blog, it is either uninteresting or wrong, but has a really inflated view of itself. There is a lot of philosophy/rationality stuff on the internet, which had primed me to just ignore that kind of website."
If there is some way, then, to distinguish LW from lower-quality websites without worsening other problems with other audiences, that might be good, though I personally can't think of anything that would help on this front.
"what you need to do to encourage people like me is essentially to pose some interesting problems (e.g. free will) prominently, along with a hint there being an /interesting/ solution (e.g. suggesting that the free will question is similar to the tree in a forest question in how it can be answered)"
He did, that’s what prompted the statement. He found them really interesting, and almost got to the right answer before one of our friends spoilered it, but he enjoyed the challenge and does enjoy thinking about things that way.
Weighing back in here, I will clarify my comment which the comment you quote was based on, that my OH had this precise thought (“self-important pompous fools”) when he came across this site initially. The content of the sequences he found trivial. He generally finds it easy to be rational, and didn’t see the point of getting a community together to learn how to be more rational. In fact, it’s a large (reverse) inferential distance for him just to understand that some people find figuring these ideas out actually non-trivial (and yet still care about them). He doesn’t understand how people can compartmentalise their minds at all.
Very few people sort themselves in bands according to “rationality”, and my OH takes part in just regular discussions with regular smart people, except he’s better at correcting wrong arguments than most. “Some people being unfixably wrong about things” is just a part of life for him, and without ideas like transhumanism to motivate you, it’s quite hard to bring yourself to care about how wrong the rest of the world is—just being right yourself is sufficient.
Thanks for this explanation. Does your OH participate in discussion here? If so does he enjoy them (more than discussions with “regular smart people”)? Do you or he have any suggestions how we might better attract people like him (i.e., who are “naturally” rational and find it hard to understand at first why Eliezer is making a big deal out of “rationality”)?
He doesn’t participate in discussions here, because he doesn’t think he has anything new to add. (This discussion is starting to prove otherwise though!)
I asked him about what more could be done to attract people like him. He said: “I think what you need to do to encourage people like me is essentially to pose some interesting problems (e.g. free will) prominently, along with a hint there being an /interesting/ solution (e.g. suggesting that the free will question is similar to the tree in a forest question in how it can be answered). That would give a stronger incentive to read on.”
So basically, what would help that is having an intro page which says “here’s where to start, but if you know the basics already, here’s some interesting problems to draw you in”
The other problem for him is a lot of the content reading like what he calls ‘pulp-philosophy’ - being to philosophy what pulp fiction is to literature. “If you find an average philosophy blog, it is either uninteresting or wrong, but has a really inflated view of itself. There is a lot of philosophy/rationality stuff on the internet, which had primed me to just ignore that kind of website.”
If there is a way, then, to distinguish LW from less good websites, without worsening other problems with other audiences, that might be good, though I personally can’t think of anything that would help on this front.
Did your OH read Yudkowsky’s posts on free will? If so, what does he think of them?
He did, that’s what prompted the statement. He found them really interesting, and almost got to the right answer before one of our friends spoilered it, but he enjoyed the challenge and does enjoy thinking about things that way.