-sigh-
This crap goes on year after year, decade after bloody decade. Did you know the Singularity was supposed to happen in 2000? Then in 2005. Then in 2010. Guess how many Singularitarians went “oh hey, our predictions keep failing, maybe that’s evidence our theory isn’t actually right after all”? If you guessed none at all, give yourself a brownie point for an inspired guess. It’s like the people who congregate on top of a hill waiting for the angels or the flying saucers to take them up to heaven. They just go “well our date was wrong, but that doesn’t mean it’s not going to happen, of course it is, Real Soon Now.” Every time we actually try to do any recursive self-improvement, it fails to do anything like what the AI foom crowd says it should do, but of course, it’s never “well, maybe recursive self-improvement isn’t all it’s cracked up to be,” it’s always “your faith wasn’t strong enough,” oops, “you weren’t using enough of it,” or “that’s not the right kind” or some other excuse.
That’s what I have to deal with, and when I asked you for a prediction, you gave me the usual crap about oh well you’ll see when the Apocalypse comes and we all die, ha ha. And those are the most polite terms I’m willing to put it in.
I’ve made it clear how my theory can be falsified: demonstrate recursive self-improvement doing something beyond the curve of capability. Doesn’t have to be taking over the world, just sustained improvement beyond what my theory says should be possible.
If you’re willing to make an actual, sensible prediction of RSI doing something, or some other event (besides the Apocalypse) coming to pass, such that if it fails to do that, you’ll agree your theory has been falsified, great. If not, fine, I’ll assume your faith is absolute and drop this debate.
That the Singularity concept pattern-matches doomsday cults is nothing new to anyone here. You looked further into it and declared it false; wedrifid and others looked into it and declared it possible. The discussion is now about evidence between those two points of view. Repeating that it looks like a doomsday cult is taking a step backwards, back to where we came to this discussion from.
rwallace’s argument isn’t centered on the standard argument that makes it look like a doomsday cult. He’s focusing on an apparent repetition of predictions while failing to update when those predictions have failed. That’s different from the standard claim about why Singularitarianism pattern-matches with doomsday cults, and should, to a Bayesian, be fairly disturbing if he is correct about such a history.
Fair enough. I guess his rant pattern-matched the usual anti-doomsday-cult stuff I see involving the Singularity. Keep in mind that, as a Bayesian, it is possible to adjust the value you place on the people making the predictions instead of the likelihood of the event. Certainly, that is what I have done; I care less for predictions, even from people I trust to reason well, because a history of failing predictions has taught me not that predicted events don’t happen, but rather that predictions are full of crap. This has the converse effect of greatly reducing the value of (in hindsight) correct predictions, which seems to be a pretty common failure mode for a lot of belief mechanisms: treating a correct prediction alone as enough evidence. I would require the process by which the prediction was produced to consistently predict correctly.
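To make the adjust-the-predictor-rather-than-the-event point concrete, here is a minimal toy Bayes calculation; the accuracy figures and prior are illustrative assumptions of mine, not numbers from the thread:

```python
# Toy sketch: update on how much to trust a prediction *process*,
# separately from how likely the predicted event itself is.

def p_reliable(hits, misses, prior=0.5, acc_reliable=0.8, acc_noise=0.1):
    """Posterior probability that a forecaster is reliable, given their track record."""
    like_reliable = acc_reliable**hits * (1 - acc_reliable)**misses
    like_noise = acc_noise**hits * (1 - acc_noise)**misses
    return like_reliable * prior / (like_reliable * prior + like_noise * (1 - prior))

print(round(p_reliable(0, 3), 3))  # ~0.011: three missed dates and trust in the process is gone
print(round(p_reliable(1, 3), 3))  # ~0.081: a later "hit" barely rehabilitates it
```

The event’s own probability can be updated separately; the point is that a string of missed dates mostly tells you the dating process is noise.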
The pattern you are completing here has very little relevance to the actual content of the conversation. There is no prediction here about the date of a possible singularity and, for that matter, no mention of how probable it is. When, or if, someone such as yourself creates a human-level general intelligent agent and releases it, then that will go a long way towards demonstrating that one of the theories is false.
You have iterated through a series of argument attempts here, abandoning each only to move to another equally flawed one. The current one would appear to be ‘straw man’… and not a particularly credible straw man at that. (EDIT: Actually, no, you have kept the ‘unfalsifiable’ thing here, somehow.)
Your debating methods are not up to the standards that are found to be effective and well received on lesswrong.
The way that this thread played out bothered me.
I feel like I am in agreement that computer hardware plus human algorithm equals FOOM. Just as hominids improved very steeply once a few pieces were put in place (pieces which may or may not correspond to symbolic processing, but probably included it), I think that putting an intelligent algorithm in place on current computers is likely to create extremely rapid advancement.
On the other hand, it’s possible that this isn’t the case. We could sit around all day and play reference-class tennis, but we should be able to agree that there EXIST reference classes which provide SOME evidence against the thesis. The fact that fields like CAD have significant bottlenecks due to compiling time, for example, indicates that some progress currently driven by innovation still has a machine bottleneck and will not experience a recursive speedup when done by ems. The fact that in fields like applied math, new algorithms which are human insights often create serious jumps is evidence that these fields will experience recursive speedups when done by ems.
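One way to make the bottleneck point concrete is an Amdahl-style split (my framing, not anything argued in the thread): if a fraction f of each design iteration is insight that ems could speed up by a factor k, while the remaining 1 - f is compile or simulation time that stays machine-bound, the whole loop speeds up by only

```latex
S(k) = \frac{1}{(1 - f) + f/k}, \qquad \lim_{k \to \infty} S(k) = \frac{1}{1 - f}.
```

A CAD-like field where half the cycle is compiling therefore caps out at a factor of two no matter how fast the insights come, while a field that is nearly all insight (f close to 1) is the kind where a recursive speedup could compound.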
The beginning of this thread was Eliezer making a comment to the effect that symbolic logic is something computers can do, so it must not be what makes humans more special than chimps. It was a pretty mundane comment, and when I saw that it had over ten upvotes I was disappointed and reminded of RationalWiki’s claims that the site is a personality cult. rwallace responded by asking Eliezer to live up to the “standards that are found to be effective and well received on lesswrong,” albeit in a fairly snarky way. You not only responded with more snark, but (a) represented a significant “downgrade” from a real response from Eliezer, giving the impression that he has better things to do than respond to serious engagements with his arguments, and (b) did not reply with a serious engagement of the arguments, such as an acknowledgement of a level of evidence.
You could have responded by saying that “fields of knowledge relevant to taking over the world seem much more likely to me to be social areas where big insights are valuable and less like CAD where compiling processes take time. Therefore while your thesis that many areas of an em’s speedup will be curve-constrained may be true, it still seems unlikely to affect the probability of a FOOM.”
In which case you would have presented what rwallace requested—a possibility of falsification—without any need to accept his arguments. If Eliezer had replied in this way in the first place, perhaps no one involved in this conversation would have gotten annoyed and wasted the possibility of a valuable discussion.
I agree that this thread of comments has been generally lacking in the standards of argument usually present on LessWrong. But from my perspective you have not been bringing the conversation up to a higher level as much as stoking the fire of your initial disagreement.
I am disappointed in you, and by the fact that you were upvoted while rwallace was downvoted; this seems like a serious failure on the part of the community to maintain its standards.
To be clear: I do not agree with rwallace’s position here, and I do not think that he was engaging at the level that is common and desirable here. But you did not make it easier for him to do that, you made it harder, and that is far more deserving of downvotes.
This would seem to suggest that you expected something different from me, something better according to your preferences. This surprises me—I think my comments here are entirely in character, whether that character is one that appeals to you or not. The kind of objections I raise here are also in character. I consistently object to arguments of this kind, used in the way they are used here. Perhaps ongoing dislike or disrespect would be more appropriate than disappointment?
You are one of the most prolific posters on Less Wrong. You have over 6000 karma, which means that for anyone who has some portion of their identity wrapped up in the quality of the community, you serve as at least a partial marker of how well that community is doing.
I am disappointed that such a well-established member of our community would behave in the way you did; your 6000 karma gives me the expectations that have not been met.
I realize that you may represent a slightly different slice of the LessWrong personality spectrum than I do, and this probably accounts for some amount of the difference, but this appeared to me to be a breakdown of civility which is not, or at least should not be, dependent on your personality.
I don’t know you well enough to dislike you. I’ve seen enough of your posts to know that you contribute to the community in a positive way most of the time. Right now it just feels like you had a bad day and got upset about the thread and didn’t give yourself time to cool off before posting again. If this is a habit for you, then it is my opinion that it is a bad habit and I think you can do better.
Ahh. That does make sense. I fundamentally disagree with everything else of significance in your judgement here but from your premises I can see how disappointment is consistent.
I will not respond to those judgments except in as much as to say that I don’t agree with you on any of the significant points. My responses here are considered, necessary, and if anything erred on the side of restraint. Bullshit, in the technical sense, is the enemy here. This post and particularly the techniques used to defend it are bullshit in that sense. That it somehow got voted above −5 is troubling to me.
I agree that the arguments made in the original post tend to brush relevant details under the rug. But there is a difference between saying that an argument is flawed and trying to help fix it, and saying that it is irrelevant and the person is making a pure appeal to their own authority.
I was interested to see a more technical discussion of what sorts of things might be from the same reference class as recursive self-improvement. I was happy to see a viewpoint being represented on Less Wrong that was more diverse than the standard “party line.” Even if the argument is flawed I was glad to see it.
I would have been much happier to see the argument deconstructed than I am now having seen it turned into a flame war.
I believe I observed that it was far worse than an appeal to authority.
You do not understand the mechanisms of reasoning as employed here well enough to see why the comments here received the reception that they did.
In this comment rwallace asks you to make a falsifiable prediction. In this comment you state:
Build a flipping AGI of approximately human level and see whether the world as we know it ends within a year.
rwallace responded by saying:
My theory makes the prediction that even when recursive self-improvement is used, the results will be within the curve of capability, and will not produce more than a steady exponential rate of improvement. … Are you saying your theory makes no other predictions than [AI will cause the world to end]?
Then in your reply you say he is accusing you of making that claim.
The way he asked his question was impolite. However in the whole of this thread, you have not attempted to provide a single falsifiable point, despite the fact that this is what he was explicitly asking for.
It is true that I do not understand the mechanisms. I thought that I understood that the policy of LessWrong is not to dismiss arguments but to fight the strongest argument that can be built out of that argument’s corpse.
At no point did the thread become, in my mind, about your belief that his argument was despicable. If I understand correctly, you believe that by drawing attention to technical details, he is drawing attention away from the strongest arguments on the topic and therefore moving people towards less correct beliefs in a dangerous way. This is a reasonable objection, but again at no point did I see this thread become about your objection in a positive light rather than being about his post in a negative light.
If you are interested in making your case explicitly, or demonstrating where you have attempted to make it, I would be very interested to see it. If you are interested in providing other explicit falsifiable claims or demonstrating where they have been made I would be interested to see that as well. If you are interested only in discussing who knows the community better and using extremely vague terms like “mechanisms of reasoning as employed here” then I think we both have better ways to spend our time.
However in the whole of this thread, you have not attempted to provide a single falsifiable point, despite the fact that this is what he was explicitly asking for.
You are simply wrong.
‘Falsifiable’ isn’t a rally call… it actually refers to a distinct concept—and was supplied multiple times in a completely unambiguous fashion.
I did not initiate this conversation and at no time did I desire it. I did choose to reply to some of your comments.
Wedrifid pointed out flaws in a flawed post, and pointed out flaws in a series of flawed arguments. You could debate the degree of politeness required but pointing out flaws is in some fundamental ways an impolite act. It is also a foundation of improving rationality. To the extent that these comment sections are about improving rationality, wedrifid behaved exactly as they should have.
Karma on LessWrong isn’t about politeness, as far as I have seen. For what it’s worth, in my kibitzer’d neutral observations, the unanimous downvoting is because readers spotted flaws; unanimous upvoting is for posts that point out flaws in posts.
I’m starting to think we may need to bring up Eliezer’s ‘tending to the garden before it becomes overgrown’ and ‘raising the sanity waterline’ posts from early on. There has been a recent trend of new users picking an agenda to support and then employing the same kinds of fallacies and debating tactics in their advocacy. Then, when they are inevitably downvoted, there is the same sense of outrage that mere LW participants dare evaluate their comments negatively.
It must be that all the lesswrong objectors are true believers in an echo chamber. Or maybe those that make the effort to reply are personally flawed. It couldn’t be that people here are able to evaluate the reasoning and consider the reasoning used to be more important than which side the author is on.
This isn’t a problem if it happens now and again. Either the new user is too arrogant to adapt to lesswrong standards and leaves, or they learn what is expected here and integrate into the culture. The real problem comes when arational debaters are able to lend support to each other, preventing natural social pressures from having the full effect. That’s when the sanity waterline can really start to fall.
When we see this, we should point them to the correspondence bias and the evil enemies posts and caution them not to assume that a critical reply is an attack from someone who is subverting the community—or worse, defending the community from the truth.
As an aside, top level posts are scary. Twice I have written up something, and both times I deleted it because I thought I wouldn’t be able to accept criticism. There is this weird feeling you get when you look at your pet theories and novel ideas you have come up with: they feel like truth, and you know how good LessWrong is with the truth. They are going to love this idea, know that it is true immediately and with the same conviction that you have, and celebrate you as a good poster and community member. After deleting the posts (and maybe this is rationalization) it occurred to me that had anyone disagreed, that would have been evidence not that I was wrong, but that they hated truth.
I didn’t mean just that he was impolite, or just that pointing out flaws in a flawed argument is bad or impolite. Of course when a post is flawed it should be criticized.
I am disappointed that the criticism was destructive, claiming that the post was a pure appeal to authority, rather than constructive, discussing how we might best update on this evidence, even if our update is very small or even in the opposite direction.
I guess what I’m saying is that we should hold our upvotes to a higher standard than just “pointing out flaws in an argument.”
It’s called less wrong for a reason. Encouraging the use of fallacious reasoning and dark arts rhetoric even by leaving it with a neutral reception would be fundamentally opposed to the purpose of this site. Most of the sequences, in fact, have been about how not to think stupid thoughts. One of the ways to do that is to prevent your habitat from overwhelming you with them, by limiting your discussions to those that are up to at least a crudely acceptable level.
If you want a debate about AI subjects where the environment isn’t primarily focussed on rewarding sound reasoning then I am almost certain that there are other places that are more welcoming.
This particular thread has been about attacking poor reasoning via insult. I do not believe that this is necessarily the best way to promote sound reasoning. The argument could be made, and if you had started or if you continue by making that argument I would be satisfied with that.
I am happy to see that elsewhere there are responses which acknowledge that interesting information has been presented before completely demolishing the original article.
This makes me think that pursuing this argument between the two of us is not worthwhile, as it draws attention to both of us making posts that are not satisfying to each other and away from other posts which may seem productive to both of us.
This particular thread has been about attacking poor reasoning via insult. I do not believe that this is necessarily the best way to promote sound reasoning.
Agreed. It takes an effort of willpower not to get defensive when you are criticised, so an attack (especially with insults) is likely to cause the target to become defensive and try to fight back rather than learn where they went wrong. As we know from the politics sequence, an attack might even make their conviction stronger!
However,
I do not believe that this is necessarily the best way to promote sound reasoning.
I actually can’t find a post on LessWrong specifically about this, but it has been said many times that the best is the enemy of the good. Be very wary of shooting down an idea because it is not the best idea. In the overwhelming majority of cases, the idea is better than doing nothing, and (again I don’t have the cite, but it has been discussed here before) if you spend too much time looking for the best, you don’t have any time left to do any of the ideas, so you end up doing nothing—which is worse than the mediocre idea you argued against.
If I were to order the ways of dealing with poor reasoning, it would look like this: Point out poor reasoning > Attack poor reasoning with insult > Leave poor reasoning alone.
Again, I disagree substantially with your observations on the critical premises.
I guess what I’m saying is that we should hold our upvotes to a higher standard than just “pointing out flaws in an argument.”
I tend to agree, but what are those higher standards? One I would suggest is that the act of pointing out a flaw ought to be considered unsuccessful if the author of the flaw is not enlightened by the criticism. Sometimes communicating the existence of a flaw requires some handholding.
To those who object “It is not my job to educate a bias-laden idiot”, I respond, “And it is not my job to upvote your comment, either.”
Pointing out a flaw and suggesting how it might be amended would be an excellent post. Asking politely if the author has a different amendment in mind would be terrific.
And I could be incorrect here, but isn’t this site about nurturing rationalists? As I understand it, all of us humans (and clippy) are bias-laden idiots and the point of LessWrong is for us to educate ourselves and each other.
You keep switching back and forth between “is” and “ought” and I think this leads you into error.
The simplest prediction from wedrifid’s high karma is that his comments will be voted up. On the whole, his comments on this thread were voted up. The community normally agrees with him and today it agrees with him. This suggests that he is not behaving differently.
You have been around this community a while and should already have assessed its judgement and the meaning of karma. If you think that the community expresses bad judgement through its karma, then you should not be disappointed in bad behavior by high karma users. (So it would seem rather strange to write the above comment!) If you normally think that the community expresses good judgement through karma, then it is probably expressing similarly good judgement today.
Most likely, the difference is you, that you do not have the distance to adequately judge your interactions. Yes, there are other possibilities; it is also possible that “foom” is a special topic that the community and wedrifid cannot deal with rationally. But is it so likely that they cannot deal with it civilly?
I did not say that. I said that symbolic logic probably wasn’t It. You made up your own reason why, and a poor one.
Out of morbid curiosity, what is your reason for symbolic logic not being it?
I second the question out of healthy curiosity.
That’s fair. I apologize, I shouldn’t have put words in your mouth. That was the impression I got, but it was unfounded to say it came from you.
So, I’m vaguely aware of Singularity claims for 2010. Do you have citations for people making such claims that it would happen in 2000 or 2005?
I agree that pushing something farther and farther into the future is a potential warning sign.
In “The Maes-Garreau Point”, Kevin Kelly lists poorly-referenced predictions of “when they think the Singularity will appear” of 2001, 2004 and 2005 - by Nick Hogard, Nick Bostrom and Eliezer Yudkowsky respectively.
But only a potential warning sign—fusion power is always 25 years away, but so is the decay of a Promethium-145 atom.
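The comparison works because exponential decay is memoryless: however long you have already waited, the expected remaining wait is unchanged. With Pm-145’s half-life of roughly 17.7 years,

```latex
\mathbb{E}[\,T - t \mid T > t\,] = \frac{1}{\lambda} = \frac{t_{1/2}}{\ln 2} \approx \frac{17.7\ \text{yr}}{0.693} \approx 25.5\ \text{yr},
```

so the atom really is always about 25 years away from decaying. The warning sign is a forecast that was supposed to track accumulating progress yet behaves as if it were memoryless too.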
Right, but we expect that for the promethium atom. If physicists had predicted that a certain radioactive sample would decay in a fixed time, and they kept pushing up the time for when it would happen, and didn’t alter their hypotheses at all, I’d be very worried about the state of physics.
Not off the top of my head, which is one reason I didn’t bring it up until I got pissed off :) I remember a number of people predicting 2000, over the last decades of the 20th century, I think Turing himself was one of the earliest.
Turing never discussed anything much like a Singularity, to my knowledge. What you may be thinking of is that in his original article proposing the Turing Test he said he expected it would take around fifty years for machines to pass the Turing Test. He wrote the essay in 1950. But Turing’s remark is not the same claim as a Singularity occurring in 2000; Turing was off about when we’d have AI. As far as I know, he didn’t comment on anything like a Singularity.
Ah, that’s the one I’m thinking of—he didn’t comment on a Singularity, but did predict human level AI by 2000. Some later people did, but I didn’t save any citations at the time and a quick Google search didn’t find any, which is one of the reasons I’m not writing a post on failed Singularity predictions.
Another reason, hopefully, is that there would always have been a wide range of predictions, and there’s a lot of room for proving points by being selective about which ones to highlight. Even if you looked at all predictions, there are selection effects: the ones that were repeated, or even stated in the first place, tend to be the more extreme ones.
If you think that most Singularities will be Unfriendly, the Anthropic Shadow means that their absence from our time-line isn’t very strong evidence against their being likely in the future: no matter what proportion of the multiverse sees the light cone paperclipped in 2005, all the observers in 2010 will be in universes that weren’t ravaged.
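Writing that claim out, and taking its anthropic assumption at face value: let H be ‘unfriendly singularities are likely’ and E be ‘observers in 2010 find their light cone un-paperclipped’. If surviving observers see E whether or not H holds, then

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} \approx P(H)
\quad \text{when } P(E \mid H) \approx P(E \mid \neg H) \approx 1,
```

i.e. the likelihood ratio is close to 1 and survival to date barely moves the posterior. Whether that is the right way to count observers is, of course, a separate argument.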
This is true if you think the maximum practical speed of interstellar colonization will be extremely close to (or faster than) the speed of light. (In which case, it doesn’t matter whether we are talking Singularity or not, friendly or not, only that colonization suppresses subsequent evolution of intelligent life, which seems like a reasonable hypothesis.)
If the maximum practical speed of interstellar colonization is significantly slower than the speed of light (and assuming mass/energy as we know them remain scarce resources, e.g. advanced civilizations don’t Sublime into hyperspace or whatever), then we would be able to observe advanced civilizations in our past light cone whose colonization wave hasn’t yet reached us.
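The observability claim can be put as a simple light-cone window (my formalization, not the comment’s wording): a civilization at distance d that began expanding t years ago at colonization speed v is already visible to us, but has not yet reached us, exactly when

```latex
v\,t < d \le c\,t,
```

a window that vanishes as v approaches c (or if expansion outruns its own light) and widens with elapsed time when v is well below c; an empty sky therefore discriminates between the hypotheses mainly in the slow-colonization case.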
Of course there is as yet no proof of either hypothesis, but such reasonable estimates as we currently have suggest the latter.
Nitpick: If the civilization is spreading by SETI attack, observing them could be the first stage of being colonized by them. But I think the discussion may be drifting off-point here. (Edited for spelling.)