Ok, this is the point where I decide to be mildly obnoxious and use Sam’s work as an indication that humans have many more cognitive biases and fallacies than even people at LW realize. In particular, the above post displays a large amount of artificial classification, trying to claim that mere differences of scale somehow become differences in kind. This seems very similar to (for example) creationists who claim to accept microevolution but not macroevolution. Moreover, in some cases these problems do not go away even after prolonged exposure to careful rational and critical thinking.
The fourth point provides a good example of this:
Cryonics has many single points of failure and many unresolved questions. A couple of examples: a.) Is there anything fundamental about death that precludes the possibility of restoring life? b.) Is it possible to maintain self/person-hood by simply maintaining physical state integrity? c.) Is the general concept of organ preservation able to be scaled to entire bodies or is there a limit to what can be restarted? d.) Is a living organism required to overcome the effects of the cryopreservation? etc.
To be sure, there are other problems here as well (such as the heavy overlap between a, b, and c, and the fact that a seems to take “death” as a potentially ontologically fundamental phenomenon), but the issue I want to focus on is c: “is the general concept of organ preservation able to be scaled to entire bodies or is there a limit to what can be restarted?” This is an assertion that even if any single part of a system can be restarted, it somehow takes a much higher burden of evidence to assert that the entire system can be restarted. This seems similar to the micro/macro fallacy, but I’m not sure precisely what the fallacy is. I’d almost be tempted to coin a new one, something like “failure to reduce.”
I have to wonder if this sort of thing is an indication that LW is not substantially succeeding in improving rationality. Sam’s first comment on LW was about a year ago, and his posting quality has either remained the same or declined during that time (although, to be fair, it is difficult to distinguish between rationality and civility issues in his case). Based on comments Sam has made, it seems probable that he hasn’t read the Sequences. Sam’s emphasis on wanting to read only “authorities” may play a role, but that may simply be a specific defense against reading posts which challenge his worldview (the strongest evidence for this is that people have summarized the “demand for particular proof” argument and he’s still ignoring or misinterpreting it). Is Sam a representative sample? If there are a substantial number of people here who have interacted with the community over time and yet have not improved their rationality, does that suggest we have a problem that requires a change in tactics?
Sam certainly isn’t the only example of this sort of problem, and even the general community here sometimes demonstrates strong biases that impact its evaluation of claims (I’ve noticed this most strongly where evaluations of historical claims are concerned). So, are we succeeding? Should the presence of people like Sam mean we should be concerned that we are not?
Yes. I think LW’s problems as an introduction to rationality go far beyond this. The Sequences are a great introduction to rationality if you were into them from early on and could take part in the discussions they generated, but as a sequence of cold blog posts they’re a large, disconnected, and forbidding introduction, and in any case there’s no easy way to read them in order. LW in general doesn’t come across as a website with a mission of improving rationality so much as a community with curious shared interests like deciding how many boxes to take and getting our heads frozen.
I’m not sure that’s the case. I read most of the Sequences before posting here, and I personally know at least two people who’ve started reading them fairly recently. I also know a third person who refuses to read anything linked from LW because she’s heard that “LW’s archives are addictive like TVTropes on crack,” which suggests that at least some people find the Sequences interesting enough to read. I’m more inclined to wonder: a) are they having an impact, and b) how do you get people like Sam, who are clearly intelligent and educated, to read them or to improve their rationality by some other means?
That’s encouraging! Doubtless there’s much more we can do to make it easier for people to get into this sort of thing, but I’ll adjust my estimate of how well we’re doing right now upwards—thanks!
Failure to reach Sam in a year of text-only contact is not a strong indictment of our rationality, any more than failure to transcend mass-energy conservation and speed-of-light limits is an indictment of intelligence. Some people just can’t be persuaded, and others can’t be persuaded within the resources and ethical principles we’re willing to apply.
Sam has claimed repeatedly that he wishes to disengage from this discussion. This isn’t a particularly credible claim, considering his willingness to violate it, but let’s honor it anyway. Move on to someone who might be a better investment.
One solution would be to “Do Science” to the problem, i.e., fund cognitive psychology experiments to ascertain which methods verifiably work best at making people more rational.
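To make that concrete, here is a minimal sketch of the kind of analysis such an experiment might bottom out in: randomly assign people to an intervention or a control condition, measure something like calibration-test improvement afterward, and check whether the difference is more than noise. The group names, scores, and effect below are all invented for illustration, not real data.

```python
# Hypothetical sketch of analyzing a randomized rationality experiment.
# All data below is invented purely for illustration.
from scipy import stats

# Imagined score improvements on some calibration test after the study period
intervention_group = [4.1, 2.3, 5.0, 3.2, 4.8, 1.9, 3.7]
control_group = [1.2, 2.8, 0.5, 3.1, 1.7, 2.2, 0.9]

# Welch's t-test: is the intervention group's improvement distinguishable
# from the control group's, without assuming equal variances?
t_stat, p_value = stats.ttest_ind(intervention_group, control_group, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

The point of the sketch is only that “Do Science” is cheap to operationalize once you commit to a measurable outcome; the hard part is choosing an outcome that actually tracks rationality.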
Maybe we need to do more with “This is a part of my life I’d like to improve, how do I apply rationality to it?”.
Sometimes it helps to have good examples of a skill in action rather than being told about solved problems.
Rotten wood cannot be carved.
I think that the problem is that humans innately, viscerally prefer motivated cognition to Bayesian-style belief updating.
Humans engaging in rationality rather than sophistry is like (to borrow a metaphor from Ciphergoth) dogs walking on their hind legs.
I mean sure, you can train an able and willing human to do it, under favorable circumstances. But don’t expect it to always work or be easy.
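For concreteness, here is a minimal sketch of the Bayesian-style updating meant above; the prior, the likelihoods, and the “pet theory” scenario are all invented for illustration. The honest move is to revise the belief downward when the evidence favors the alternative, which is exactly the move motivated cognition resists.

```python
# Minimal illustration of Bayesian belief updating; all numbers are invented.
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    total = numerator + p_evidence_if_false * (1 - prior)
    return numerator / total

# Start 80% confident in a pet theory, then observe evidence that is four
# times more likely if the theory is false than if it is true.
belief = update(0.8, p_evidence_if_true=0.1, p_evidence_if_false=0.4)
print(f"Posterior: {belief:.2f}")  # 0.50 -- the honest update is downward
```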