And that claim is what I have been inquiring about. How is an outsider going to tell if the people here are the best rationalists around? Your post just claimed this.
My post didn’t claim Less Wrong contains the best rationalists anywhere. It claimed that for many readers, Less Wrong is the best community of aspiring rationalists that they have easy access to. I wish you would be careful to be clear about exactly what is at issue and to avoid straw man attacks.
As to how to evaluate Less Wrongers’, or others’, rationality skills: It is hard to assess others’ rationality by evaluating their opinions on a small number of controversial issues. This difficulty stems partly from the difficulty of oneself determining the right answers (so as to know whether to raise or lower one’s estimate of others with those views). And it stems in part from the fact that a small number of yes/no or multiple-choice-style opinions will provide only limited evidence, especially given communities’ tendency to copy the opinions of others within the community.
One can more easily notice what processes LWers and others follow, and one can ask whether these processes are likely to promote true beliefs. For example, LWers tend to say they’re aiming for true beliefs, rather than priding themselves on their faith, optimism, etc. Also, folks here do an above-average job of actually appearing curious, of updating their claims in response to evidence, of actively seeking counter-evidence, of separating claims into separately testable/evaluable components, etc.
At the risk of repeating myself: it is these processes that I, and at least some others, have primarily learned from the sequences/OB/LW. This material has helped me learn to actually aim for accurate beliefs, and it has given me tools for doing so more effectively. (Yes, much of the material in the sequences is obvious in some sense; but reading the sequences moved it from “somewhat clear when I bothered to think about it” to actually a part of my habits for thinking.) I’m a bit frustrated here, but my feeling is that you are not yet using these habits consistently in your writing—you don’t appear curious, and you are not carefully factoring issues into separable claims that can be individually evaluated. If you do, we might make more progress talking together!
My impression is that XiXiDu is curious and that what you’re frustrated by has more to do with his difficulty expressing himself than with closed-mindedness on his part. Note that he compiled a highly upvoted list of references and resources for Less Wrong—I read this as evidence that he’s interested in Less Wrong’s mission and think that his comments should be read more charitably.
I’ll try to recast what I think he’s trying to say in clearer terms sometime over the next few days.
I agree with you, actually. He does seem curious; I shouldn’t have said otherwise. He just also seems drawn to the more primate-politics-prone topics within Less Wrong, and he seems further to often express himself in spaghetti-at-the-wall mixtures of true and untrue, and relevant and irrelevant statements that confuse the conversation.
Less Wrong is a community that many of us care about; and it is kind, when one is new to a community and is still learning to express oneself, to tread a little more softly than XiXiDu has been.
Arguably the primate-politics-prone topics are the most important ones; the tendency that you describe can be read as seriousness of purpose.
Agreed.
Not to mention more pragmatic socially in the general case. Unless you believe you have the capacity to be particularly dominant in a context and wish to introduce yourself near the top of a hierarchy. Some people try that here from time to time, particularly those who think they are impressive elsewhere. It is a higher risk move and best used when you know you will be able to go and open a new set, I mean community, if your dominant entry fails.
Confession: Having a few muddled ideas of signalling in mind when I joined LessWrong, I knew of this pattern (works really well at parties!) and decided that people here were too savvy, so I specifically focused on entering as low as possible in the hierarchy. I’m curious whether that was well-received because of various status reasons (made others feel higher-status) or because it was simply more polite and agreeable.
Quick Summary (Because I wanted to ask you about Baez anyway / ~off-topic regarding the OP):
Why does someone like me, who has no formal education, understand the importance of research on friendly AI and the risks posed by AGI research, while someone like John Baez (a top mathematician) tries to save the planet from risks that I believe can be neglected? That is what I’m very curious about. It might appear differently from what I’ve been saying here in the past, but I’m only taking a different position to get some feedback. I really do not disagree with anything on Less Wrong. I’m unable to talk to those people and ask them, but I can challenge you people in their name and see what feedback I get.
What’s interesting here is that the responses I’ve gotten so far made me doubt whether my agreement with Less Wrong and the SIAI is as sane as I liked to believe. I also started to doubt that Eliezer Yudkowsky is as smart as I thought, and I thought he was the smartest person alive. It’s just that the best the people here can come up with is telling you to read the sequences, complaining about how you say something rather than what you are saying, telling you that people who disagree are intellectually impotent, or just stating that they don’t have to convince you (no shit, Sherlock!).
So why have I commented on this post? I’m trying to improve LW through my own perception and through what I’ve noticed about outsiders I’ve chatted with about LW (which is probably a rationalization, the real reason being that the attitude here pisses me off). What I’m most curious about is the strong contrast between LW and academia. It just seems wrong that the people who really need to know what LW has to say are not educated enough, while those who are either don’t care or doubt what is being said. I’m somewhere in between and wonder about my own sanity. Yet I don’t care enough, and am too lazy, to put in the effort to express myself better. But nobody else seems to be doing it. Even to me, who agrees (yes, I do), this place often seems like an echo chamber that responds to critics with cryptic messages or the rationality equivalent of the grammar police.
I’ll try my best to leave you alone now. I was just too tired today to do much else and so commented here again (I only wanted to check for new posts, and just saw two that could both have been tracts from Jehovah’s Witnesses, minus some image of Yudkowsky riding a donkey ;-). Argh, why am I writing this? Grrr, I have to shut up now. Sorry, I can’t resist, here goes...
Though there are many brilliant people within academia, there is also shortsightedness and group-think within academia which could have led the academic establishment to ignore important issues concerning safety of advanced future technologies.
I’ve seen very little (if anything) in the way of careful rebuttals of SIAI’s views from the academic establishment. As such, I don’t think that there’s strong evidence against SIAI’s claims. At the same time, I have the impression that SIAI has not done enough to solicit feedback from the academic establishment.
John Baez will be posting an interview with Eliezer sometime soon. It should be informative to see the back and forth between the two of them.
Concerning the apparent groupthink on Less Wrong: something relevant that I’ve learned over the past few months is that some of the vocal SIAI supporters on LW express views that are quite unrepresentative of those of the SIAI staff. I initially misjudged SIAI because I was unaware of this.
I believe that if you’re going to express doubts and/or criticism about LW and/or SIAI you should take the time and energy to express these carefully and diplomatically. Expressing unclear or inflammatory doubts and/or criticism is conducive to being rejected out of hand. I agree with Anna’s comment here.
Wow, that’s cool! They read my mind :-)
Even Eliezer Yudkowsky doesn’t believe he’s the smartest person alive. He’s the founder of the site and set its tone early, but that’s not the same thing.
Finding people smarter than oneself is essential to making oneself more effective and stretching one’s abilities and goals.
For an example I’m closely familiar with: I think one of Jimmy Wales’ great personal achievements with Wikipedia (and he is an impressively smart fellow himself) is that he discovered an extremely efficient mechanism for gathering around him people who made him feel really dumb by comparison. He’d be the first to admit that a lot of those he’s gathered around him outshine him.
Getting smarter people than yourself to sign up for your goals is, I suspect, one marker of success in selecting a good goal.
Please judge the above comment as a temporary lapse of sanity. I’m really sorry I failed again. But it’s getting better. After turning off my PC I told myself dozens of times what an idiot I am. I always forget who I am and who you people are. When I read who multifoliaterose is, I wanted to sink into the ground for even daring to bother you people with my gibberish.
I guess you overestimate my education and intelligence and truly try to read something into my comments that isn’t there. Well, never mind.
I agree; the average quality of your comments and posts has been increasing with time and I commend you for this.
This statement carries the connotation that I’m very important. At present I don’t think there’s solid evidence in this direction. In any case, no need to feel self-conscious about taking my time; I’m happy to make your acquaintance and engage with you.
http://johncarlosbaez.wordpress.com/
...seems to be all about global warming. I rate that as a top dud cause, but there is a lot of noise (and thus money, fame, etc.) associated with it, so obviously it will attract those interested in such things.
If someone tells you they are trying to save the planet, you should normally treat that with considerable scepticism. People like to associate themselves with grand causes for reasons that apparently have a lot to do with social signalling and status—and very little to do with the world actually being at risk.
Some take it too far: http://en.wikipedia.org/wiki/Messiah_complex
Surely the skepticism should be directed toward the question of whether their recipe actually does save the world, rather than against their motivation. I don’t think that an analysis of motivations for something like this even begins to pay any rent.
For me, this is a standard technique. Whenever someone tells me how altruistic they are or have been, I try and figure out which replicators are likely to be involved in the display. It often makes a difference whether someone’s brain has been hijacked by memes—whether they are signalling their status to prospective business partners, their wealth to prospective mates—or whatever.
For example, if they are attempting to infect me with the same memes that have hijacked their own brain, my memetic immune system is activated—whereas if they are trying to convince people what a fine individual they are, my reaction is different.
What you said seems fine; what doesn’t is the reason you chose to say it in this context, the implied argument. The form of expression makes it hard to argue with. Say it out loud.
There is more from me on the topic in my “DOOM!” video. Spoken out loud, nonetheless ;-)
This doesn’t address the problem with that particular comment. What you implied is well known; the problem I pointed out was not that it’s hard to figure out, but that you protected your argument with a weaselly form of expression.
It sounds as though you would like to criticise an argument that you think I am implicitly making—but since I never actually made the argument, that gives you an amorphous surface to attack. I don’t plan to do anything to assist with that matter just now—other priorities seem more pressing.
Yes, that’s exactly the problem. We all should strive to make our arguments easy to attack, and errors easy to notice and address. Not having that priority hurts the epistemic commons.
My argument was general—I think you want something specific.
However, preparing specific statements tailored to each of the DOOM-promoters involved is a non-trivial task, which would hurt me—by occupying my time with matters of relatively minor significance.
It would be nice if I had time available to devote to such tasks, but in the meantime I am pretty sure the epistemic commons can get along without my additional input.
Since the significance of the matter is one of the topics under discussion, it can’t be used as an argument.
Edit: But it works as an element of a description of why certain actions take place.
What I mean is that I assign the matter relatively minor significance—so I get on with other things.
I am not out to persuade others whether my analysis is correct—again, I have other things to do than publicly parade an analysis of my priorities.
Maybe my priority analysis is correct. Maybe my priority analysis is wrong. In either case, it is my main reason for not doing such tasks.
Yes, I indeed made a mistake by missing this aspect (factual description of how a belief caused actions as opposed to normative discussion of actions given the question of correctness of the belief).
As a separate matter, I don’t believe the premise is correct (that any additional effort is required to phrase things non-weaselly), and thus I don’t believe the belief in question even plays the explanatory role. But this is also under discussion, so I can’t use that as an argument.
Well, yes, but if someone tells you they are the tallest person in the world, you also should treat that with considerable scepticism. After all, there can only be one person who actually is the tallest person in the world, and it’s unlikely in the extreme that one random guy would be that person. A one-in-six-billion chance is small enough to reject out-of-hand, surely!
The guy looks pretty tall though. How about you get out a tape-measure and then consult the records on height?
“Considerable scepticism” is not an argument against a claim. It is an argument for more evidence. What evidence makes John Baez’s claims that he is trying to save the world more likely to be signalling than a genuine attempt?
If someone I met told me they were the tallest person in the world, I would indeed treat that with considerable scepticism. I would count my knowledge about the 7 billion people in the world as evidence weighing heavily against the claim.
Your 7 billion people are just your prior probability for him being the tallest before you actually examine his size. Once you have seen that he is somewhat tall, you can start developing a better prior:
If he’s taller than any of the people you know that puts him in at least the top three hundredth—so less than 24 million people remain as contenders. If he’s taller than anyone you’ve ever seen, that puts him in at least the top two thousandth—so less than 3.5 million of that 7 billion are actually potential evidence he’s wrong.
So now our prior is 1 in 3.5 million. Now it’s time to look for evidence. At this point, the number of people in the world is irrelevant: it’s already been factored into the equation. What evidence can we use to find our posterior probability?
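For concreteness, here is a minimal sketch of the arithmetic above in Python, using the same round numbers; the 1-in-300 and 1-in-2000 cutoffs are the rough guesses from the comment, not measured figures:

```python
# Prior narrowing for the "tallest person in the world" claim,
# using the round numbers from the comment above.
population = 7_000_000_000

taller_than_anyone_you_know = population / 300     # ~23 million contenders left
taller_than_anyone_youve_seen = population / 2000  # ~3.5 million contenders left

print(f"Prior before looking at him:          1 in {population:,}")
print(f"Taller than anyone you know:          1 in {taller_than_anyone_you_know:,.0f}")
print(f"Taller than anyone you've ever seen:  1 in {taller_than_anyone_youve_seen:,.0f}")
```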
A cool thing about Bayesian reasoning is that you can cut extreme numbers down to reasonable sizes with some very cheap and very quick tests. In the case of possible ulterior motives for claiming to be saving the world, you can with some small effort distinguish between the “signalling” and “genuine” hypotheses. What tests—what evidence—should we be looking for here, to spot which one is the case?
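As a hedged illustration of how cheap tests can cut an extreme prior down, continuing the sketch above: posterior odds are prior odds times the product of likelihood ratios. The two likelihood ratios below (how much more likely each observation is if he really is the tallest than if he is merely very tall) are made-up placeholder values chosen only to show the mechanics, not estimates anyone in the thread has offered.

```python
# Odds-form Bayes: posterior odds = prior odds * product of likelihood ratios.
# The likelihood ratios are illustrative placeholders, not real estimates.
prior_odds = 1 / 3_500_000   # the narrowed prior from the comment above

cheap_tests = {
    "tape measure reads well over 8 feet": 1_000,  # hypothetical likelihood ratio
    "height records list him by name": 5_000,      # hypothetical likelihood ratio
}

odds = prior_odds
for test, likelihood_ratio in cheap_tests.items():
    odds *= likelihood_ratio
    probability = odds / (1 + odds)
    print(f"After '{test}': probability ~{probability:.2%}")
```

With these placeholder numbers, two quick checks move the claim from roughly one in 3.5 million to better than even odds, which is the point of the comment: the extreme prior is not a reason to stop looking, just a reason to ask for evidence.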