Actually, I think Michael Vassar’s complaint is pretty similar to Cousin It’s, so not only have you not overcompensated, you haven’t compensated much at all. To overcompensate would be to produce Wei Dai’s list, which would still be better than this post.
Wei Dai’s list mostly focuses on physiological evidence, which my next post will try to nail down; this post was an attempt to cover the relevant anthropological evidence.
I read cousin_it’s comment as something along the lines of “your ratio of facts to interesting claims is too low,” and tried to adjust accordingly. And now MichaelVassar’s complaint seems to be along the lines of “your ratio of facts to interesting claims is too high.” If I’ve read both of them correctly, that sounds like overcompensation to me. If I’ve totally failed to get the point that either or both of them were making, then I either compensated for the wrong thing or, as you suggest, compensated much less than I thought I had.
Clarification on whether my readings were accurate would be appreciated; I may need to seriously re-calibrate.
I saw my initial complaint as identical to cousin_it’s except for being admittedly lazy and vague (though I later elaborated somewhat). I don’t think that you understood my complaint or cousin_it’s. Neither complaint is about the ratio of facts to claims; both are about the relationship between facts and claims. It seems to me that, simply put, you don’t know how facts are supposed to relate to claims.
cousin_it’s original comment said, “This post doesn’t make a convincing argument for any of its points.”
That’s the key point. It’s not enough that facts suggest claims to you. It’s necessary to actually make an argument that leads from the facts to the claims. It’s not that you are trying to do that and making mistakes along the way, which is why I find it difficult to criticize your mistakes; rather, you don’t seem to be making arguments at all, just giving facts and saying what they suggest to you. This is a valid way of reasoning, communicating, or figuring things out. It’s a useful way of arguing, and it’s all humans did for thousands of years. Combined with swordsmanship it can even work for resolving arguments, but by itself it doesn’t serve that purpose. I honestly think that you don’t know that something else is possible, and I think that in this respect you are like almost everyone else in the world.
It seems to me to be a serious deficiency in the LW population that we aren’t able to recognize this problem and that when we do see it we don’t know how to do anything about it. Until we do, we will mostly be an echo chamber, talking only to the tiny fringe of the population who shares our implicit beliefs about how arguments should be made.
The good news is, this is WrongBot’s chance to be a hero. If he can keep paying attention to what other people are doing on this forum, patterns of address other than sharing facts and claims are likely to click into place for him. He’ll find that he can share information in a manner conforming to those patterns, and that when he does it will be much better received and much more likely to win concessions and resolve apparent differences of opinion, especially with the top posters. If he is then able to correctly figure out what sort of explanation would have enabled other people to quickly explain this difference in reasoning technique, he will be better positioned than any of us are to expand Less Wrong’s target audience to the huge majority of people, at all levels of intelligence, who are simply not ready for the Sequences. If such a technique is fast enough and reliable enough I would literally expect its development to solve all of the world’s problems within a half century in the absence of a Singularity before then.
It won’t be, since the interpretation of a message is dependent on the internal state and structure of the message’s interpreter—i.e., the receiving human being.
This is a fundamental flaw in all forms of attempted other- and self-optimization, not just the ones involving the development of rationality.
Teaching literacy and arithmetic is pretty fast. So is teaching a lot of Feldenkrais and Alexander Technique, first aid, or basic swimming. I don’t see any strong generalizations to make here.
I didn’t say you can’t teach things quickly; I’m saying that teaching them depends on the state of the learner. That includes a lot of things, like whether the person is motivated to do it and whether they have existing bad habits or interfering beliefs.
Also, ISTM most of the things you just mentioned require external feedback for most people to learn quickly; simply giving someone a static “explanation” of the skill is not sufficient for them to actually learn to do it, or at least not very efficiently.
(Which is also part of my point about learner state-dependence. Learning skills in general requires interaction and feedback of some kind; explanations are not sufficient.)
I’m currently disentangling myself from the ill effects of combining Alexander Technique with perfectionism and desperation. I hope I’m not adding another layer of bad habits.
I agree with pjeby—the state of the person receiving the information makes a big difference.
Maybe it would help to suggest a few especially good examples of going from facts to claims.
“If such a technique is fast enough and reliable enough I would literally expect its development to solve all of the world’s problems within a half century in the absence of a Singularity before then.”
This seems like an incredibly strong claim, especially given the divisions and arguments even among Less Wrong posters. Perhaps WrongBot is merely low-level and misguided, and should listen to more advanced users and mend his ways—but what about Roko, for instance?
Replying to katydee’s response to Mike Vassar:
Well, what Vassar is saying here is tricky! If I say, for example, that if I could do X, I could make a billion dollars and get Eliezer to admit that I am smarter than he is, then I am in effect saying that X is really hard to do. As to why Vassar would say such a thing, well, a few days ago, WrongBot was complaining (and getting upvoted for it) that in his criticism of WrongBot, Vassar had not said anything that WrongBot could use to improve WrongBot’s rationality. So, my guess is that this is Vassar’s somewhat roundabout way of saying to WrongBot that doing that is really hard, and if WrongBot ever comes to have any good suggestions on how to do it, he should share them with everyone here.
I hasten to add that today WrongBot was careful in his wording to avoid implying that anyone here had any obligation to improve WrongBot’s writing or thinking (but of course this careful wording came after Vassar’s comment).
I might be interpreting Michael Vassar’s post incorrectly, but it seemed like an authentic, if radically optimistic, suggestion and not a hyperbolic or sarcastic one.
It wasn’t sarcastic. I really think that it’s fairly likely to be possible, but extremely difficult. OTOH, I think that many extremely difficult things are worth attempting. That’s why SIAI exists after all. LW posters may disagree fairly frequently, but that’s probably significantly because there are so few of us that we don’t really have time to collectively build an official correct world-view which is far better than any of us could do on our own.
I really do think my claim about the implications of developing such a technique is correct, and in fact understated, and that this follows trivially from the world having resources far beyond what is needed to solve its problems, if those resources were allocated halfway sanely. A large number of Rokos would definitely be enough to do the job.
I’ll redouble my efforts, then. This topic also probably deserves a thread of its own.
If I say that “if you could travel backward in time by arranging four flux capacitors into a Wheatstone bridge, someone probably would have travelled back in time already, and consequently, you probably cannot travel back in time by arranging flux capacitors into a Wheatstone bridge,” I am being neither hyperbolic nor sarcastic (nor am I being optimistic).
Thank you for continuing to engage after my rather silly reply; while in the process of writing a more detailed response to your latest post, I figured out what you meant originally. I now agree with your earlier interpretation of Michael Vassar’s post, though I am still skeptical of the jump between “dramatically expanding LW” and “solving all the world’s problems without a singularity.”
Your skepticism of the jump is reasonable and understandable. Note, however, that having served as President of the Singularity Institute for the last two years or so, Vassar has a great deal of experience in thinking about the global situation.
My pleasure.
Agreed.