It seems like you’re entirely ignoring feedback effects from more and better intelligence being better at creating more and better intelligence, as argued in Yudkowsky’s side of the FOOM debate.
And hardware overhang (faster computers developed before general cognitive algorithms, first AGI taking over all the supercomputers on the Internet) and fast infrastructure (molecular nanotechnology) and many other inconvenient ideas.
Also if you strip away the talk about “imbalance” what it works out to is that there’s a self-contained functioning creature, the chimpanzee, and natural selection burps into it a percentage more complexity and quadruple the computing power, and it makes a huge jump in capability. Nothing is offered to support the assertion that this is the only such jump which exists, except the bare assertion itself. Chimpanzees were not “lopsided”, they were complete packages designed for an environment; it turned out there were things that could be done which created a huge increase in optimization power (calling this “symbolic processing” assumes a particular theory of mind, and I think it is mistaken) and perhaps there are yet more things like that, such as, oh, say, self-modification of code.
I’m not Eliezer, but will try to guess what he’d have answered. The awesome powers of your mind only feel like they’re about “symbols”, because symbols are available to the surface layer of your mind, while most of the real (difficult) processing is hidden. Relevant posts: Detached Lever Fallacy, Words as Mental Paintbrush Handles.
The posts (at least the second one) seem to suggest that the role of symbolic reasoning is overstated and that at least some reasoning is clearly non-symbolic (e.g. visual).
In this context the question is whether the symbolic processing (there is definitely some—math, for example) gave pre-humans the boost that allowed the huge increase in computing power, so I am not seeing the contradiction.
Speech is a kind of symbolic processing, and is probably an important capability in mankind’s intellectual evolution, even if symbolic processing for the purpose of reasoning (as in syllogisms and such) is an ineffectual modern invention.
calling this “symbolic processing” assumes a particular theory of mind, and I think it is mistaken
Interesting. Can you elaborate or link to something?
Susan Blackmore argues that what originally caused the “huge increase in optimization power” was memes—not symbolic processing—which probably started up a bit later than the human cranium’s expansion did.
What’s clearly fundamental about the human/chimpanzee advantage, the thing that made us go FOOM and take over the world, is that we can, extremely efficiently, share knowledge. This is not as good as fusing all our brains into a giant brain, but it’s much much better than just having a brain.
This analysis possibly suggests that “taking over the world’s computing resources” is the most likely FOOM, because it is similar to the past FOOM, but that is weak evidence.
...the chimpanzee, and natural selection burps into it a percentage more complexity and quadruple the computing power, and it makes a huge jump in capability.
The genetic difference between a chimp and a human amounts to roughly 40–45 million bases that are present in humans and missing from chimps. And that number doesn’t even account for the differences in gene expression between humans and chimps. So it’s not as if you add a tiny bit of code and get a super-apish intelligence.
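As a rough back-of-the-envelope, taking the 40–45 million base figure above at face value, the usual two bits per base, and the commonly cited ~3.2 billion base pair human genome (a minimal sketch; all figures approximate and illustrative only):

    # Rough scale of the human-chimp sequence difference cited above.
    # Assumes 40-45 million human-specific bases, 2 bits per base, and a
    # ~3.2 billion base pair human genome (approximate, illustrative figures).
    GENOME_BASES = 3.2e9
    for bases in (40e6, 45e6):
        megabytes = bases * 2 / 8 / 1e6
        share = 100 * bases / GENOME_BASES
        print(f"{bases:.0e} bases ~ {megabytes:.0f} MB, about {share:.1f}% of the genome")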
Nothing is offered to support the assertion that this is the only such jump which exists, except the bare assertion itself.
Nothing is offered to support the assertion that there is another such jump. If you were to assert this, then another premise of yours, that a universal computing device can simulate every physical process, could be questioned on the same principle. So here is an antiprediction: humans are on an equal footing with any other intelligence that can master abstract reasoning (which does not necessarily include speed or overcoming bias).
Nothing is offered to support the assertion that this is the only such jump which exists, except the bare assertion itself.
Nothing is offered to support the assertion that there is another such jump.
In a public debate, it makes sense to defend both sides of an argument, because each of the debaters actually tries to convince a passive third party whose beliefs are not clearly known. But any given person that you are trying to convince doesn’t have a duty to convince you that the argument you offer is incorrect. It’s just not an efficient thing to do. A person should always be allowed to refute an argument on the grounds that they don’t currently believe it to be true. They can be called on contradicting assertions of not believing or believing certain things, but never required to prove a belief. The latter would open a separate argument, maybe one worth engaging in, but often a distraction from the original one, especially when the new argument being separate is not explicitly acknowledged.
I agree with some of what you wrote, although I’m not sure why you wrote it. Anyway, I was giving an argumentative inverse of what Yudkowsky asserted, thereby echoing his own rhetoric. Someone claimed A, and in return Yudkowsky claimed that A is a bare assertion, therefore ¬A; whereupon I claimed that ¬A is a bare assertion, therefore the truth-value of A is again ~unknown. This could of course have been inferred from Yudkowsky’s statement alone, if interpreted as a predictive inverse (antiprediction), were it not for the last sentence, which states, “[...] and perhaps there are yet more things like that, such as, oh, say, self-modification of code.” [1] Perhaps yes, perhaps not. Given that his comment had already scored 16 when I replied, I believed that highlighting that it offered no convincing evidence for or against A was justified by that one sentence alone. Here we may disagree, but note that my comment included more information than that particular sentence alone.
Self-modification of code does not necessarily amount to a superhuman level of abstract reasoning comparable to the gap between humans and chimps, and it might very well be infeasible, since it demands self-knowledge requiring resources exceeding those of any given intelligence. This would agree with the line of argument in the original post, namely that the next step (e.g. an improved AGI created by the existing AGI) will require a doubling of resources. Thus we are on par again, with two different predictions canceling each other out.
You should keep track of whose beliefs you are talking about, as it’s not always useful or possible to work with the actual truth of informal statements when you are analyzing the correctness of a debate. A person holding a wrong belief for wrong reasons can still be correct in rejecting an incorrect argument for the incorrectness of those wrong reasons.
If A believes X, then (NOT X) is a “bare assertion”, not enough to justify A changing their belief. For B, who believes (NOT X), stating “X” is also a bare assertion, not enough to justify changing the belief. There is no inferential link between refuted assertions and beliefs that were held all along. A believes X not because “(NOT X) is a bare assertion”, even though A believes both that “(NOT X) is a bare assertion” (correctly) and X (of unknown truth).
There is no inferential link between refuted assertions and beliefs that were held all along.
That is true. Yet for a third party, one that is unaware of any additional substantiation not featured in the debate itself, a prediction and its antiprediction cancel each other out. As a result no conclusion can be drawn by an uninformed bystander. This I tried to highlight without having to side with either party.
Yet for a third party, one that is unaware of any additional substantiation not featured in the debate itself, a prediction and its antiprediction cancel each other out. As a result no conclusion can be drawn by an uninformed bystander.
They don’t cancel each other out; they both lack convincing power and are equally irrelevant. It’s an error to state as arguments what you know your audience won’t agree with (change their mind in response to). At the same time, explicitly rejecting an argument that failed to convince is entirely correct.
They don’t cancel each other out; they both lack convincing power and are equally irrelevant.
Let’s assume that you contemplate the possibility of an outcome Z. Now you come across a discussion between agent A and agent B discussing the prediction that Z is true. If agent B proclaims argument X in favor of Z being true and you believe that X is not convincing, this still gives you new information about agent B and the likelihood of Z being true. You might now conclude that Z is slightly more likely to be true because of the additional information in favor of Z and the confidence agent B needed to proclaim that Z is true. Agent A, however, proclaims argument Y in favor of Z being false, and you believe that Y is just as unconvincing as argument X in favor of Z being true. You might now conclude again that the truth-value of Z is ~unknown, as each argument and the confidence of its proponent ~outweigh each other.
Therefore no information is irrelevant if it is the only information about the outcome in question. Your judgement might weigh less than the confidence of an agent who may have unknown additional substantiation in favor of its argument. If you are unable to judge the truth-value of an exclusive disjunction, then the fact that any given argument about it is not compelling tells you more about yourself than about the agent proclaiming it.
Any argument has to be taken into account, if only for its logical consequences. Every argument should be incorporated into your probability estimates, because it signals a certain confidence on the part of the agent uttering it (by virtue of being proclaimed at all). Yet if there exists a counterargument that is the inverse of the original argument, you’ll have to take that into account as well. This counterargument might very well outweigh the original argument. Therefore there are no arguments that entirely lack the power to convince, however small that power may be, yet arguments can outweigh and trump each other.
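One way to make the “canceling out” intuition concrete is as a pair of symmetric Bayesian updates; a minimal sketch, with the likelihood ratios invented purely for illustration:

    # Toy model: an uninformed bystander updates on two opposing arguments.
    # The likelihood ratios are arbitrary illustrative values, not measurements.
    prior_odds = 1.0          # bystander starts at 50/50 on Z
    odds = prior_odds * 2.0   # B's weak argument X in favor of Z (likelihood ratio 2)
    odds = odds * 0.5         # A's equally weak counterargument Y against Z (ratio 1/2)
    print(odds)               # 1.0 again: equally strong arguments cancel exactly

If the two arguments are not equally strong, whatever is left over after the two updates is the bystander’s net change of mind.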
I think it became more confused now. With C and D unrelated, what do you care for (C XOR D)? For the same reason, you can’t now expect evidence for C to always be counter-evidence for D.
Whoops, I’m just learning the basics (some practice here). I took NOT Z as an independent proposition. I guess there is no simple way to express this unless you assign the negation of Z its own variable, in case you want it to be an independent proposition?
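For what it’s worth, the distinction at issue can be checked with a two-line truth table (purely illustrative):

    from itertools import product

    # Z XOR (not Z) is a tautology, so it carries no information about Z itself.
    print(all(z != (not z) for z in (True, False)))                # True
    # For two independent propositions C and D, C XOR D is contingent,
    # so evidence for C need not be evidence against D.
    print({c != d for c, d in product((True, False), repeat=2)})   # {False, True}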
If agent B proclaims argument X in favor of Z being true and you believe that X is not convincing, this still gives you new information about agent B and the likelihood of Z being true. You might now conclude that Z is slightly more likely to be true because of the additional information in favor of Z
B believes that X argues for Z, but you might well believe that X argues against Z. (You are considering a model of a public debate, while this comment was more about principles for an argument between two people.)
Also, it’s strange that you are contemplating levels of belief in Z, while A and B assert it being purely true or false. How overconfident of them.
(Haven’t yet got around to a complete reply rectifying the model, but will do eventually.)
See my reply to saturn on recursive self-improvement. Potential hardware overhang, I already addressed. Nanotechnology is thus far following the curve of capability, and there is every reason to expect it will continue to do so in the future. I already explained the sense in which chimpanzees were lopsided. Self modification of code has been around for decades.
Nanotechnology is thus far following the curve of capability, and there is every reason to expect it will continue to do so in the future.
This may be slightly off-topic: nothing Drexler hypothesised has, as far as I know, even been started. As I understand it, the state of things is that we still have literally no idea how to get there from here, and what’s called “nanotechnology” is materials science or synthetic biology. Do you have details of what you’re describing as following the curve?
A good source of such details is Drexler’s blog, where he has written some good articles about—and seems to consider highly relevant—topics like protein design and DNA origami.
(cough) I’m sure Drexler has much detail on Drexler’s ideas. Assume I’m familiar with the advocates. I’m speaking of third-party sources, such as the working worlds of physics, chemistry, physical chemistry and materials science.
As far as I know—and I have looked—there’s little or nothing. No progress to nanobots, no progress to nanofactories. The curve in this case is a flat line at zero. Hence asking you specifically for detail on what you are plotting on your graph.
Well, that’s a bit like saying figuring out how to smelt iron constituted no progress to the Industrial Revolution. These things have to go a step at a time, and my point in referring to Drexler’s blog was that he seems to think e.g. protein design and DNA origami do constitute real progress.
As for things you could plot on a graph, consider the exponentially increasing amount of computing power put into molecular modeling simulations, not just by nanotechnology advocates, but people who actually do e.g. protein design for living today.
Also, I’m not sure what you mean by “symbolic processing” assuming a particular theory of mind—theories of mind differ on the importance thereof, but I’m not aware of any that dispute its existence. I’ll second the request for elaboration on this.
I’ll also ask, assuming I’m right, is there any weight of evidence whatsoever that would convince you of this? Or is AI go foom for you a matter of absolute, unshakable faith?
I’ll also ask, assuming I’m right, is there any weight of evidence whatsoever that would convince you of this? Or is AI go foom for you a matter of absolute, unshakable faith?
It would be better if you waited until you had made something of a solid argument before you resorted to that appeal. Even Robin’s “Trust me, I’m an Economist!” is more persuasive.
The Bottom Line is one of the earliest posts in Eliezer’s own rationality sequences and describes approximately this objection. You’ll note that he added an Addendum:
This is intended as a caution for your own thinking, not a Fully General Counterargument against conclusions you don’t like.
I’m resisting the temptation to say “trust me, I’m an AGI researcher” :-) Bear in mind that my bottom line was actually the pro “AI go foom” side; it’s still what I would like to believe.
But my theory is clearly falsifiable. I stand by my position that it’s fair to ask you and Eliezer whether your theory is falsifiable, and if so, what evidence you would agree to have falsified it.
I’m resisting the temptation to say “trust me, I’m an AGI researcher” :-)
But barely. ;)
You would not believe how little that would impress me. Well, I suppose you would—I’ve been talking with XiXi about Ben, after all. I wouldn’t exactly say that your status incentives promote neutral reasoning on this position—or Robin on the same. It is also slightly outside of the core of your expertise, which is exactly where the judgement of experts is notoriously demonstrated to be poor.
Bear in mind that my bottom line was actually the pro “AI go foom” side; it’s still what I would like to believe.
You are trying to create AGI without friendliness and you would like to believe it will go foom? And this is supposed to make us trust your judgement with respect to AI risks?
Incidentally, ‘the bottom line’ accusation here was yours, not the other way around. The reference was to question its premature use as a fully general counterargument.
But my theory is clearly falsifiable. I stand by my position that it’s fair to ask you and Eliezer whether your theory is falsifiable, and if so, what evidence you would agree to have falsified it.
We are talking here about predictions of the future. Predictions. That’s an important keyword that is related to falsifiability. Build a flipping AGI of approximately human level and see whether the world as we know it ends within a year.
You just tag-teamed one general counterargument out to replace it with a new one. Unfalsifiability has a clear meaning when it comes to creating and discussing theories and it is inapplicable here to the point of utter absurdity. Predictions, for crying out loud.
I wouldn’t exactly say that your status incentives promote neutral reasoning on this position
No indeed, they very strongly promote belief in AI foom—that’s why I bought into that belief system for a while, because if true, it would make me a potential superhero.
It is also slightly outside of the core of your expertise, which is exactly where the judgement of experts is notoriously demonstrated to be poor.
Nope, it’s exactly in the core of my expertise. Not that I’m expecting you to believe my conclusions for that reason.
You are trying to create AGI without friendliness and you would like to believe it will go foom?
When I believed in foom, I was working on Friendly AI. Now that I no longer believe that, I’ve reluctantly accepted that human-level AI in the near future is not possible, and I’m working on smarter tool AI instead—well short of human equivalence, but hopefully, with enough persistence and luck, better than what we have today.
We are talking here about predictions of the future. Predictions. That’s an important keyword that is related to falsifiability.
That is what falsifiability refers to, yes.
My theory makes the prediction that even when recursive self-improvement is used, the results will be within the curve of capability, and will not produce more than a steady exponential rate of improvement.
Build a flipping AGI of approximately human level and see whether the world as we know it ends within a year.
Are you saying your theory makes no other predictions than this?
Are you saying your theory makes no other predictions than this?
RWallace, you made a suggestion of unfalsifiability, a ridiculous claim. I humored you by giving the most significant, obvious and overwhelmingly critical way to falsify (or confirm) the theory. You now presume to suggest that such a reply amounts to a claim that this is the only prediction that could be made. This is, to put it in the most polite terms I am willing, disingenuous.
This crap goes on year after year, decade after bloody decade. Did you know the Singularity was supposed to happen in 2000? Then in 2005. Then in 2010. Guess how many Singularitarians went “oh hey, our predictions keep failing, maybe that’s evidence our theory isn’t actually right after all”? If you guessed none at all, give yourself a brownie point for an inspired guess. It’s like the people who congregate on top of a hill waiting for the angels or the flying saucers to take them up to heaven. They just go “well our date was wrong, but that doesn’t mean it’s not going to happen, of course it is, Real Soon Now.” Every time we actually try to do any recursive self-improvement, it fails to do anything like what the AI foom crowd says it should do, but of course, it’s never “well, maybe recursive self-improvement isn’t all it’s cracked up to be,” it’s always “your faith wasn’t strong enough,” oops, “you weren’t using enough of it,” or “that’s not the right kind” or some other excuse.
That’s what I have to deal with, and when I asked you for a prediction, you gave me the usual crap about oh well you’ll see when the Apocalypse comes and we all die, ha ha. And that’s the most polite terms I’m willing to put it in.
I’ve made it clear how my theory can be falsified: demonstrate recursive self-improvement doing something beyond the curve of capability. Doesn’t have to be taking over the world, just sustained improvement beyond what my theory says should be possible.
If you’re willing to make an actual, sensible prediction of RSI doing something, or some other event (besides the Apocalypse) coming to pass, such that if it fails to do that, you’ll agree your theory has been falsified, great. If not, fine, I’ll assume your faith is absolute and drop this debate.
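To make the two competing predictions concrete, here is a deliberately crude toy model (all growth rates invented for illustration): one trajectory where returns per improvement stay constant, as the “curve of capability” view predicts, and one where each improvement also speeds up further improvement, as the foom view predicts.

    # Crude toy model contrasting the two predictions; numbers are illustrative only.
    STEPS = 50

    # "Curve of capability": constant returns per step, i.e. steady exponential growth.
    capability = 1.0
    for _ in range(STEPS):
        capability *= 1.1
    print(round(capability))          # ~117 after 50 steps

    # "Foom": each improvement also improves the improver, so the rate compounds.
    capability, rate = 1.0, 1.1
    for _ in range(STEPS):
        rate *= 1.05                  # recursive self-improvement feeds back into the rate
        capability *= rate
    print(f"{capability:.1e}")        # ~1e+29 after the same 50 steps

Which regime actual recursive self-improvement would fall into is exactly what the disputed falsification criterion above is meant to test.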
It’s like the people who congregate on top of a hill waiting for the angels or the flying saucers to take them up to heaven. They just go “well our date was wrong, but that doesn’t mean it’s not going to happen, of course it is, Real Soon Now.”
That the Singularity concept pattern-matches doomsday cults is nothing new to anyone here. You looked further into it and declared it false, wedrifid and others looked into it and declared it possible. The discussion is now about evidence between those two points of view. Repeating that it looks like a doomsday cult is taking a step backwards, back to where we came to this discussion from.
rwallace’s argument isn’t centering on the standard argument that makes it look like a doomsday cult. He’s focusing on an apparent repetition of predictions while failing to update when those predictions have failed. That’s different than the standard claim about why Singularitarianism pattern matches with doomsday cults, and should, to a Bayesian, be fairly disturbing if he is correct about such a history.
Fair enough. I guess his rant pattern-matched the usual anti-doomsday-cult stuff I see involving the singularity. Keep in mind that, as a Bayesian, it is possible to adjust the value of those people making the predictions instead of the likelihood of the event. Certainly, that is what I have done; I care less for predictions, even from people I trust to reason well, because a history of failing predictions has taught me not that predicted events don’t happen, but rather that predictions are full of crap. This has the converse effect of greatly reducing the value of (in hindsight) correct predictions, which seems to be a pretty common failure mode for a lot of belief mechanisms: treating a correct prediction alone as sufficient evidence. I would require the process by which the prediction was produced to consistently predict correctly.
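A minimal sketch of that discounting move, with the counts and priors invented for illustration: track the prediction source’s record and let that, rather than any single call, carry the weight.

    # Toy calibration estimate for a prediction source; all figures illustrative.
    def reliability(hits, total, prior_hits=1, prior_misses=1):
        # Laplace-style smoothing: one lucky hit barely moves the estimate,
        # while a long record of misses dominates it.
        return (hits + prior_hits) / (total + prior_hits + prior_misses)

    print(reliability(hits=1, total=1))    # ~0.67: a single correct call is weak evidence
    print(reliability(hits=1, total=10))   # ~0.17: the track record of misses wins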
The pattern you are completing here has very little relevance to the actual content of the conversation. There is no prediction here about the date of a possible singularity and, for that matter, no mention of how probable it is. When, or if, someone such as yourself creates a human-level general intelligence and releases it, that will go a long way towards demonstrating that one of the theories is false.
You have iterated through a series of argument attempts here, abandoning each only to move to another equally flawed one. The current one would appear to be ‘straw man’… and not a particularly credible straw man at that. (EDIT: Actually, no, you have kept the ‘unfalsifiable’ thing here too, somehow.)
Your debating methods are not up to the standards that are found to be effective and well received on lesswrong.
I feel like I am in agreement that computer hardware plus human algorithm equals FOOM. Just as hominids improved very steeply as a few bits were put in place which may or may not correspond to but probably included symbolic processing, I think that putting an intelligent algorithm in place on current computers is likely to create extremely rapid advancement.
On the other hand, it’s possible that this isn’t the case. We could sit around all day and play reference-class tennis, but we should be able to agree that there EXIST reference classes which provide SOME evidence against the thesis. The fact that fields like CAD have significant bottlenecks due to compiling time, for example, indicates that some progress currently driven by innovation still has a machine bottleneck and will not experience a recursive speedup when done by ems. The fact that in fields like applied math, new algorithms which are human insights often create serious jumps is evidence that these fields will experience recursive speedups when done by ems.
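The CAD point is essentially an Amdahl’s-law argument; a quick sketch with made-up fractions shows how a machine-bound component caps the recursive speedup available to ems:

    # Amdahl-style bound on em speedup; the fractions are invented for illustration.
    def overall_speedup(insight_fraction, thinking_speedup):
        # insight_fraction: share of progress limited by human-style thinking, which
        # ems accelerate; the remainder (compiling, simulation) does not speed up.
        machine_fraction = 1 - insight_fraction
        return 1 / (machine_fraction + insight_fraction / thinking_speedup)

    print(overall_speedup(0.99, 1000))   # ~91x: nearly everything is insight-driven
    print(overall_speedup(0.50, 1000))   # ~2x: compile-style bottlenecks dominate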
The beginning of this thread was Eliezer making a comment to the effect that symbolic logic is something computers can do, so it must not be what makes humans more special than chimps. It was a pretty mundane comment, and when I saw that it had over ten upvotes I was disappointed and reminded of RationalWiki’s claims that the site is a personality cult. rwallace responded by asking Eliezer to live up to the “standards that are found to be effective and well received on lesswrong,” though he asked in a fairly snarky way. You not only responded with more snark, but (a) represented a significant “downgrade” from a real response from Eliezer, giving the impression that he has better things to do than respond to serious engagements with his arguments, and (b) did not reply with a serious engagement of the arguments, such as an acknowledgement of a level of evidence.
You could have responded by saying that “fields of knowledge relevant to taking over the world seem much more likely to me to be social areas where big insights are valuable and less like CAD where compiling processes take time. Therefore while your thesis that many areas of an em’s speedup will be curve-constrained may be true, it still seems unlikely to affect the probability of a FOOM.”
In which case you would have presented what rwallace requested—a possibility of falsification—without any need to accept his arguments. If Eliezer had replied in this way in the first place, perhaps no one involved in this conversation would have gotten annoyed and wasted the possibility of a valuable discussion.
I agree that this thread of comments has been generally lacking in the standards of argument usually present on LessWrong. But from my perspective you have not been bringing the conversation up to a higher level as much as stoking the fire of your initial disagreement.
I am disappointed in you, and by the fact that you were upvoted while rwallace was downvoted; this seems like a serious failure on the part of the community to maintain its standards.
To be clear: I do not agree with rwallace’s position here, I do not think that he was engaging at the level that is common and desirable here. But you did not make it easier for him to do that, you made it harder, and that is far more deserving of downvotes.
This would seem to suggest that you expected something different from me, that is better according to your preferences. This surprises me—I think my comments here are entirely in character, whether that character is one that appeals to you or not. The kind of objections I raise here are also in character. I consistently object to arguments of this kind and used in the way they are here. Perhaps ongoing dislike or disrespect would be more appropriate than disappointment?
You are one of the most prolific posters on Less Wrong. You have over 6000 karma, which means that for anyone who has some portion of their identity wrapped up in the quality of the community, you serve as at least a partial marker of how well that community is doing.
I am disappointed that such a well-established member of our community would behave in the way you did; your 6000 karma gives me the expectations that have not been met.
I realize that you may represent a slightly different slice of the LessWrong personality spectrum than I do, and this probably accounts for some amount of the difference, but this appeared to me to be a breakdown of civility, which is not, or at least should not be, dependent on your personality.
I don’t know you well enough to dislike you. I’ve seen enough of your posts to know that you contribute to the community in a positive way most of the time. Right now it just feels like you had a bad day and got upset about the thread and didn’t give yourself time to cool off before posting again. If this is a habit for you, then it is my opinion that it is a bad habit and I think you can do better.
You are one of the most prolific posters on Less Wrong. You have over 6000 karma, which means that for anyone who has some portion of their identity wrapped up in the quality of the community, you serve as at least a partial marker of how well that community is doing.
Ahh. That does make sense. I fundamentally disagree with everything else of significance in your judgement here, but from your premises I can see how disappointment is consistent.
I will not respond to those judgments except in as much as to say that I don’t agree with you on any of the significant points. My responses here are considered, necessary and if anything erred on the side of restraint. Bullshit, in the technical sense is the enemy here. This post and particularly the techniques used to defend it are bullshit in that sense. That it somehow got voted above −5 is troubling to me.
I agree that the arguments made in the original post tend to brush relevant details under the rug. But there is a difference between saying that an argument is flawed and trying to help fix it, and saying that it is irrelevant and the person is making a pure appeal to their own authority.
I was interested to see a more technical discussion of what sorts of things might be from the same reference class as recursive self-improvement. I was happy to see a viewpoint being represented on Less Wrong that was more diverse than the standard “party line.” Even if the argument is flawed I was glad to see it.
I would have been much happier to see the argument deconstructed than I am now having seen it turned into a flame war.
Build a flipping AGI of approximately human level and see whether the world as we know it ends within a year.
rwallace responded by saying:
My theory makes the prediction that even when recursive self-improvement is used, the results will be within the curve of capability, and will not produce more than a steady exponential rate of improvement. … Are you saying your theory makes no other predictions than [AI will cause the world to end]?
Then in your reply you say he is accusing you of making that claim.
The way he asked his question was impolite. However in the whole of this thread, you have not attempted to provide a single falsifiable point, despite the fact that this is what he was explicitly asking for.
At no point did the thread become, in my mind, about your belief that his argument was despicable. If I understand correctly, you believe that by drawing attention to technical details, he is drawing attention away from the strongest arguments on the topic and therefore moving people towards less correct beliefs in a dangerous way. This is a reasonable objection, but again at no point did I see this thread become about your objection in a positive light rather than being about his post in a negative light.
If you are interested in making your case explicitly, or demonstrating where you have attempted to make it, I would be very interested to see it. If you are interested in providing other explicit falsifiable claims or demonstrating where they have been made I would be interested to see that as well.
If you are interested only in discussing who knows the community better and using extremely vague terms like “mechanisms of reasoning as employed here” then I think we both have better ways to spend our time.
However in the whole of this thread, you have not attempted to provide a single falsifiable point, despite the fact that this is what he was explicitly asking for.
You are simply wrong.
‘Falsifiable’ isn’t a rallying cry… it actually refers to a distinct concept—and was supplied multiple times in a completely unambiguous fashion.
I think we both have better ways to spend our time.
I did not initiate this conversation and at no time did I desire it. I did choose to reply to some of your comments.
I am disappointed that such a well-established member of our community would behave in the way you did
Wedrifid pointed out flaws in a flawed post, and pointed out flaws in a series of flawed arguments. You could debate the degree of politeness required but pointing out flaws is in some fundamental ways an impolite act. It is also a foundation of improving rationality. To the extent that these comment sections are about improving rationality, wedrifid behaved exactly as they should have.
Karma on LessWrong isn’t about politeness, as far as I have seen. For what it’s worth, in my kibitzer’d neutral observations, the unanimous downvoting is because readers spotted flaws; unanimous upvoting is for posts that point out flaws in posts.
I’m starting to think we may need to bring up Eliezer’s ‘tending to the garden before it becomes overgrown’ and ‘raising the sanity waterline’ posts from early on. There has been a recent trend of new users picking an agenda to support, then employing the same kinds of fallacies and debating tactics in their advocacy. Then, when they are inevitably downvoted, there is the same sense of outrage that mere LW participants dare evaluate their comments negatively.
It must be that all the lesswrong objectors are true believers in an echo chamber. Or maybe those that make the effort to reply are personally flawed. It couldn’t be that people here are able to evaluate the reasoning and consider the reasoning used to be more important than which side the author is on.
This isn’t a problem if it happens now and again. Either the new user has too much arrogance to learn to adapt to lesswrong standards and leaves, or they learn what is expected here and integrate into the culture. The real problem comes when arational debaters are able to lend support to each other, preventing natural social pressures from having their full effect. That’s when the sanity waterline can really start to fall.
It must be that all the lesswrong objectors are true believers in an echo chamber. Or maybe those that make the effort to reply are personally flawed. It couldn’t be that people here are able to evaluate the reasoning and consider the reasoning used to be more important than which side the author is on.
When we see this, we should point them to the correspondence bias and the evil enemies posts and caution them not to assume that a critical reply is an attack from someone who is subverting the community—or worse, defending the community from the truth.
As an aside, top level posts are scary. Twice I have written up something, and both times I deleted it because I thought I wouldn’t be able to accept criticism. There is this weird feeling you get when you look at your pet theories and novel ideas you have come up with: they feel like truth, and you know how good LessWrong is with the truth. They are going to love this idea, know that it is true immediately and with the same conviction that you have, and celebrate you as a good poster and community member. After deleting the posts (and maybe this is rationalization) it occurred to me that had anyone disagreed, that would have been evidence not that I was wrong, but that they hated truth.
I didn’t mean just that he was impolite, or just that pointing out flaws in a flawed argument is bad or impolite. Of course when a post is flawed it should be criticized.
I am disappointed that the criticism was destructive, claiming that the post was a pure appeal to authority, rather than constructive, discussing how we might best update on this evidence, even if our update is very small or even in the opposite direction.
I guess what I’m saying is that we should hold our upvotes to a higher standard than just “pointing out flaws in an argument.”
I guess what I’m saying is that we should hold our upvotes to a higher standard than just “pointing out flaws in an argument.”
It’s called less wrong for a reason. Encouraging the use of fallacious reasoning and dark arts rhetoric even by leaving it with a neutral reception would be fundamentally opposed to the purpose of this site. Most of the sequences, in fact, have been about how not to think stupid thoughts. One of the ways to do that is to prevent your habitat from overwhelming you with them and limiting your discussions to those that are up to at least a crudely acceptable level.
If you want a debate about AI subjects where the environment isn’t primarily focussed on rewarding sound reasoning then I am almost certain that there are other places that are more welcoming.
This particular thread has been about attacking poor reasoning via insult. I do not believe that this is necessarily the best way to promote sound reasoning. The argument could be made, and if you had started or if you continue by making that argument I would be satisfied with that.
I am happy to see that elsewhere there are responses which acknowledge that interesting information has been presented before completely demolishing the original article.
This makes me think that pursuing this argument between the two of us is not worthwhile, as it draws attention to both of us making posts that are not satisfying to each other and away from other posts which may seem productive to both of us.
This particular thread has been about attacking poor reasoning via insult. I do not believe that this is necessarily the best way to promote sound reasoning.
Agreed. It takes an effort of willpower not to get defensive when you are criticised, so an attack (especially with insults) is likely to cause the target to become defensive and try to fight back rather than learn where they went wrong. As we know from the politics sequence, an attack might even make their conviction stronger!
However,
I do not believe that this is necessarily the best way to promote sound reasoning.
I actually can’t find a post on LessWrong specifically about this, but it has been said many times that the best is the enemy of the good. Be very wary of shooting down an idea because it is not the best idea. In the overwhelming majority of cases, the idea is better than doing nothing, and (again I don’t have the cite, but it has been discussed here before) if you spend too much time looking for the best, you don’t have any time left to do any of the ideas, so you end up doing nothing—which is worse than the mediocre idea you argued against.
If I was to order the ways of dealing with poor reasoning, it would look like this: Point out poor reasoning > Attack poor reasoning with insult > Leave poor reasoning alone.
I guess what I’m saying is that we should hold our upvotes to a higher standard than just “pointing out flaws in an argument.”
I tend to agree, but what are those higher standards? One I would suggest is that the act of pointing out a flaw ought to be considered unsuccessful if the author of the flaw is not enlightened by the criticism. Sometimes communicating the existence of a flaw requires some handholding.
To those who object “It is not my job to educate a bias-laden idiot”, I respond, “And it is not my job to upvote your comment, either.”
Pointing out a flaw and suggesting how it might be amended would be an excellent post. Asking politely if the author has a different amendment in mind would be terrific.
And I could be incorrect here, but isn’t this site about nurturing rationalists? As I understand it, all of us humans (and clippy) are bias-laden idiots and the point of LessWrong is for us to educate ourselves and each other.
You keep switching back and forth between “is” and “ought” and I think this leads you into error.
The simplest prediction from wedrifid’s high karma is that his comments will be voted up. On the whole, his comments on this thread were voted up. The community normally agrees with him and today it agrees with him. This suggests that he is not behaving differently.
You have been around this community a while and should already have assessed its judgement and the meaning of karma. If you think that the community expresses bad judgement through its karma, then you should not be disappointed in bad behavior by high karma users. (So it would seem rather strange to write the above comment!) If you normally think that the community expresses good judgement through karma, then it is probably expressing similarly good judgement today.
Most likely, the difference is you, that you do not have the distance to adequately judge your interactions. Yes, there are other possibilities; it is also possible that “foom” is a special topic that the community and wedrifid cannot deal with rationally. But is it so likely that they cannot deal with it civilly?
In “The Maes-Garreau Point”, Kevin Kelly lists poorly referenced predictions of “when they think the Singularity will appear” of 2001, 2004 and 2005 - by Nick Hogard, Nick Bostrom and Eliezer Yudkowsky respectively.
But only a potential warning sign—fusion power is always 25 years away, but so is the decay of a Promethium-145 atom.
Right, but we expect that for the promethium atom. If physicists had predicted that a certain radioactive sample would decay within a fixed time, and they kept pushing back the date for when it would happen without altering their hypotheses at all, I’d be very worried about the state of physics.
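For what it’s worth, the Promethium-145 analogy is quantitatively apt: exponential decay is memoryless, and with Pm-145’s published half-life of about 17.7 years the expected remaining wait for a surviving atom is always about 25 years. A quick check:

    import math

    half_life = 17.7                        # years, Pm-145
    mean_lifetime = half_life / math.log(2)
    print(round(mean_lifetime, 1))          # ~25.5 years of expected wait, forever

    # Memorylessness: P(survives another t years | survived s years) = P(survives t years)
    lam = math.log(2) / half_life
    s, t = 10.0, 25.0
    print(math.exp(-lam * (s + t)) / math.exp(-lam * s), math.exp(-lam * t))   # equal, up to float rounding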
Not off the top of my head, which is one reason I didn’t bring it up until I got pissed off :) I remember a number of people predicting 2000 over the last decades of the 20th century; I think Turing himself was one of the earliest.
Turing never discussed anything much like a Singularity, to my knowledge. What you may be thinking of is how, in his original article proposing the Turing Test, he said he expected it would take around fifty years for machines to pass the Turing Test. He wrote the essay in 1950. But Turing’s remark is not the same as a claim that a Singularity would occur in 2000. Turing was off about when we’d have AI. As far as I know, he didn’t comment on anything like a Singularity.
Ah, that’s the one I’m thinking of—he didn’t comment on a Singularity, but did predict human level AI by 2000. Some later people did, but I didn’t save any citations at the time and a quick Google search didn’t find any, which is one of the reasons I’m not writing a post on failed Singularity predictions.
Another reason, hopefully, is that there would always have been a wide range of predictions, and there’s a lot of room for proving points by being selective about which ones to highlight. And even if you looked at all predictions, there are selection effects: the ones that were repeated, or even stated in the first place, tend to be the more extreme ones.
If you think that most Singularities will be Unfriendly, the Anthropic Shadow means that their absence from our timeline isn’t very strong evidence against their being likely in the future: no matter what proportion of the multiverse sees the light cone paperclipped in 2005, all the observers in 2010 will be in universes that weren’t ravaged.
This is true if you think the maximum practical speed of interstellar colonization will be extremely close to (or faster than) the speed of light. (In which case, it doesn’t matter whether we are talking Singularity or not, friendly or not, only that colonization suppresses subsequent evolution of intelligent life, which seems like a reasonable hypothesis.)
If the maximum practical speed of interstellar colonization is significantly slower than the speed of light (and assuming mass/energy as we know them remain scarce resources, e.g. advanced civilizations don’t Sublime into hyperspace or whatever), then we would be able to observe advanced civilizations in our past light cone whose colonization wave hasn’t yet reached us.
Of course there is as yet no proof of either hypothesis, but such reasonable estimates as we currently have, suggest the latter.
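A toy formalization of that argument (the distances and speeds are placeholders): a civilization at distance d that began expanding a time t ago at speed v is visible to us if its light has had time to arrive (t > d/c), but has not yet reached us if its colonization wave is slower (t < d/v). The slower v is relative to c, the wider that window, and so the more such civilizations we should expect to observe.

    # Toy check of the "visible but not yet arrived" window; inputs are placeholders.
    C = 1.0   # work in units where lightspeed = 1 (distances in ly, times in years)

    def observation_window(distance_ly, expansion_speed):
        # Range of start times t (years ago) for which we would see the civilization
        # without having been reached by its colonization wave.
        return (distance_ly / C, distance_ly / expansion_speed)

    print(observation_window(1000, 0.99))   # (1000, ~1010): a ~10-year window, easy to miss
    print(observation_window(1000, 0.01))   # (1000, 100000): a 99,000-year window, hard to miss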
If the maximum practical speed of interstellar colonization is significantly slower than the speed of light (and assuming mass/energy as we know them remain scarce resources, e.g. advanced civilizations don’t Sublime into hyperspace or whatever), then we would be able to observe advanced civilizations in our past light cone whose colonization wave hasn’t yet reached us.
Nitpick: If the civilization is spreading by SETI attack, observing them could be the first stage of being colonized by them. But I think the discussion may be drifting off-point here. (Edited for spelling.)
In fairness, I’m not sure anyone is really an expert on this (although this doesn’t detract from your point at all.)
You are right, and I would certainly not require anyone to have such expertise before taking their thoughts seriously. I am simply wary of Economists (Robin) or AGI creator hopefuls claiming that their expertise should be deferred to (only relevant here as a hypothetical pseudo-claim). Professions will naturally try to claim more territory than would be objectively appropriate. This isn’t because the professionals are actively deceptive but rather because it is the natural outcome of tribal instincts. Let’s face it—intellectual disciplines and fields of expertise are mostly about pissing on trees, but with better hygiene.
Yes, but why would the antipredictions of an AGI researcher not outweigh yours, given that they are directly inverse? Further, if your predictions are not falsifiable then they are by definition true and cannot be refuted. Therefore it is not unreasonable to ask what would disqualify your predictions in advance, so as to be able to argue based on the diverging opinions here. Otherwise, as I said above, we’ll have two inverse predictions outweighing each other, and not the discussion about risk estimates we should be having.
rwallace said it all in his comment that has been downvoted. Since I’m unable to find anything wrong with his comment and don’t understand yours at all, which has for unknown reasons been upvoted, there’s no way for me to counter what you say beyond what I’ve already said.
Here’s a wild guess at what I believe the positions to be. rwallace asks you what information would make you update or abandon your predictions. You in turn seem to believe that predictions are just that, utterances of what might be possible, unquestionable and not subject to any empirical criticism.
I believe I’m at least smarter than the general public, although I haven’t read a lot of Less Wrong yet. Further I’m always willing to announce that I have been wrong and to change my mind. This should at least make you question your communication skills regarding outsiders, a little bit.
Unfalsifiability has a clear meaning when it comes to creating and discussing theories and it is inapplicable here to the point of utter absurdity.
Theories are collections of proofs, and a hypothesis is a prediction or a collection of predictions that must be falsifiable, or proven, in order to become the collection of proofs that is a theory. It is not absurd at all to challenge predictions based on their refutability, as any prediction that isn’t falsifiable will be eternal and therefore useless.
The wikipedia article on falsifiability would be a good place to start if you wish to understand what is wrong with the way falsification has been used (or misused) here. With falsifiability understood, seeing the problem should be straightforward.
I’ll just back out and withdraw my previous statements here. I was already reading that Wiki entry when you replied. It would certainly take too long to figure out where I might be wrong here. I thought falsifiability was sufficiently clear to me that I could ask what would change someone’s mind, if I believed that a given prediction was insufficiently specific.
I have to immerse myself in the shallows that are the foundations of falsifiability (philosophy). I have done so in the past and will continue to do so, but that will take time. Nothing so far has really convinced me that an unfalsifiable idea can provide more than hints of what might be possible, and therefore something new to try. Yet empirical criticism, in the form of the eventual realization of one’s ideas, or a proof of contradiction (or of inconsistency), seems to be the best grounding of any truth-value (at least in retrospect, for a prediction). That is why I like to ask what information would change one’s mind about an idea, prediction or hypothesis. I call this falsifiability. If one replied, “nothing, falsifiability is misused here”, I would conclude that his idea is unfalsifiable. Maybe wrongly so!
I’d like to know if you disagree with this comment. It would help me to figure out where we disagree or what exactly I’m missing or misunderstand with regard to falsifiability and the value of predictions.
It seems like you’re entirely ignoring feedback effects from more and better intelligence being better at creating more and better intelligence, as argued in Yudkowsky’s side of the FOOM debate.
And hardware overhang (faster computers developed before general cognitive algorithms, first AGI taking over all the supercomputers on the Internet) and fast infrastructure (molecular nanotechnology) and many other inconvenient ideas.
Also if you strip away the talk about “imbalance” what it works out to is that there’s a self-contained functioning creature, the chimpanzee, and natural selection burps into it a percentage more complexity and quadruple the computing power, and it makes a huge jump in capability. Nothing is offered to support the assertion that this is the only such jump which exists, except the bare assertion itself. Chimpanzees were not “lopsided”, they were complete packages designed for an environment; it turned out there were things that could be done which created a huge increase in optimization power (calling this “symbolic processing” assumes a particular theory of mind, and I think it is mistaken) and perhaps there are yet more things like that, such as, oh, say, self-modification of code.
Interesting. Can you elaborate or link to something?
I’m not Eliezer, but will try to guess what he’d have answered. The awesome powers of your mind only feel like they’re about “symbols”, because symbols are available to the surface layer of your mind, while most of the real (difficult) processing is hidden. Relevant posts: Detached Lever Fallacy, Words as Mental Paintbrush Handles.
Thanks.
The posts (at least the second one) seem to point that symbolic reasoning is overstated and at least some reasoning is clearly non-symbolic (e.g. visual).
In this context the question is whether the symbolic processing (there is definitely some—math, for example) gave pre-humans the boost that allowed the huge increase in computing power, so I am not seeing the contradiction.
Speech is a kind of symbolic processing, and is probably an important capability in mankind’s intellectual evolution, even if symbolic processing for the purpose of reasoning (as in syllogisms and such) is an ineffectual modern invention.
Susan Blackmore argues that what originally caused the “huge increase in optimization power” was memes—not symbolic processing—which probably started up a bit later than the human cranium’s expansion did.
What’s clearly fundamental about the human/chimpanzee advantage, the thing that made us go FOOM and take over the world, is that we can, extremely efficiently, share knowledge. This is not as good as fusing all our brains into a giant brain, but it’s much much better than just having a brain.
This analysis possibly suggests that “taking over the world’s computing resources” is the most likely FOOM, because it is similar to the past FOOM, but that is weak evidence.
The genetic difference between a chimp and a human amounts to about ~40–45 million bases that are present in humans and missing from chimps. And that number is irrespective of the difference in gene expression between humans and chimps. So it’s not like you’re adding a tiny bit of code and get a superapish intelligence.
Nothing is offered to support the assertion that there is another such jump. If you were to assert this then another premise of yours, that an universal computing device can simulate every physical process, could be questioned based on the same principle. So here is an antiprediction, humans are on equal footing with any other intelligence who can master abstract reasoning (that does not necessarily include speed or overcoming bias).
In a public debate, it makes sense to defend both sides of an argument, because each of the debaters actually tries to convince a passive third party whose beliefs are not clearly known. But any given person that you are trying to convince doesn’t have a duty to convince you that the argument you offer is incorrect. It’s just not an efficient thing to do. A person should always be allowed to refute an argument on the grounds that they don’t currently believe it to be true. They can be called on contradicting assertions of not believing or believing certain things, but never required to prove a belief. The latter would open a separate argument, maybe one worth engaging in, but often a distraction from the original one, especially when the new argument being separate is not explicitly acknowledged.
I agree with some of what you wrote although I’m not sure why you wrote it. Anyway, I was giving an argumentative inverse of what Yudkowsky asserted and hereby echoed his own rhetoric. Someone claimed A and in return Yudkowsky claimed that A is a bare assertion, therefore ¬A, whereupon I claimed that ¬A is a bare assertion therefore the truth-value of A is again ~unknown. This of course could have been inferred from Yudkowskys statement alone, if interpreted as a predictive inverse (antiprediction), if not for the last sentence which states, “[...] and perhaps there are yet more things like that, such as, oh, say, self-modification of code.” [1] Perhaps yes, perhaps not. Given that his comment already scored 16 when I replied, I believed that highlighting that it offered no convincing evidence for or against A would be justified by one sentence alone. Here we may disagree, but note that my comment included more information than that particular sentence alone.
Self-modification of code does not necessarily amount to a superhuman level of abstract reasoning similar to that between humans and chimps but might very well be unfeasible as it demands self-knowledge requiring resources exceeding that of any given intelligence. This would agree with the line of argumentation in the original post, namely that the next step (e.g. an improved AGI created by the existing AGI) will require a doubling of resources. Hereby we are on par again, two different predictions canceling out each other.
You should keep track of whose beliefs you are talking about, as it’s not always useful or possible to work with the actual truth of informal statements where you analyze correctness of debate. A person holding a wrong belief for wrong reasons can still be correct in rejecting an incorrect argument for incorrectness of those wrong reasons.
If A believes X, then (NOT X) is a “bare assertion”, not enough to justify A changing their belief. For B, who believes (NOT X), stating “X” is also a bare assertion, not enough to justify changing the belief. There is no inferential link between refuted assertions and beliefs that were held all along. A believes X not because “(NOT X) is a bare assertion”, even though A believes both that “(NOT X) is a bare assertion” (correctly) and X (of unknown truth).
That is true. Yet for a third party, one that is unaware of any additional substantiation not featured in the debate itself, a prediction and its antipredication cancel out each other. As a result no conclusion can be drawn by an uninformed bystander. This I tried to highlight without having to side with one party.
They don’t cancel out each other, as they both lack convincing power, equally irrelevant. It’s an error to state as arguments what you know your audience won’t agree with (change their mind in response to). At the same time, explicitly rejecting an argument that failed to convince is entirely correct.
Let’s assume that you contemplate the possibility of an outcome Z. Now you come across a discussion between agent A and agent B discussing the prediction that Z is true. If agent B does proclaim the argument X in favor of Z being true and you believe that X is not convincing then this still gives you new information about agent B and the likelihood of Z being true. You might now conclude that Z is slightly more likely to be true because of additional information in favor of Z and the confidence of agent B necessary to proclaim that Z is true. Agent A does however proclaim argument Y in favor of Z being false and you believe that Y is equally unconvincing than argument X in favor of Z being true. You might now conclude again that the truth-value of Z is ~unknown as each argument and the confidence of its facilitator ~outweigh each other.
Therefore no information is irrelevant if it is the only information about any given outcome in question. Your judgement might weigh less than the confidence of an agent with possible unknown additional substantiation in favor of its argument. If you are unable to judge the truth-value of an exclusive disjunction then that any given argument about it is not compelling does tell more about you than the agent that does proclaim it.
Any argument has to be taken into account, if only for its logical consequences. Every argument should be incorporated into your probability estimates, because merely by being proclaimed it signals a certain confidence on the part of the agent uttering it. Yet if there exists a counterargument that is the inverse of the original argument, you have to take that into account as well, and it may well outweigh the original. So there are no arguments that entirely lack the power to convince, however small that power may be; arguments can, however, outweigh and trump one another. (A worked log-odds example follows below.)
ETA: Fixed the logic, thanks Vladimir_Nesov.
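A small worked example of the disagreement here, stated in standard Bayesian log-odds (my own framing, not either commenter’s, and assuming the two arguments X and Y are conditionally independent given Z and given ¬Z):

$$
\log\frac{P(Z\mid X,Y)}{P(\lnot Z\mid X,Y)}
= \log\frac{P(Z)}{P(\lnot Z)}
+ \log\frac{P(X\mid Z)}{P(X\mid \lnot Z)}
+ \log\frac{P(Y\mid Z)}{P(Y\mid \lnot Z)}
$$

If the two likelihood ratios are reciprocals of each other, the last two terms cancel and you are back at your prior, which is the “outweigh each other” picture above. If both ratios are close to 1, each argument individually moves you almost nowhere, which is the “equally irrelevant” picture. The two descriptions coincide only when the arguments happen to be exactly as (un)informative as each other.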
Z XOR ¬Z is always TRUE.
(I know what you mean, but it looks funny.)
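In case the tautology isn’t obvious at a glance, here is a two-line check (purely illustrative, my addition): XOR-ing a proposition with its own negation is true on every row of the truth table, so the expression carries no information.

```python
# Z XOR (NOT Z) evaluates to True for both possible values of Z.
for z in (True, False):
    print(z, not z, z ^ (not z))  # the last column is True in both rows
```

To express “exactly one of two genuinely different possibilities holds”, the second possibility needs its own variable, as the later comments in this exchange work out.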
Fixed it now (I hope), thanks.
I think it has become more confused now. With C and D unrelated, why do you care about (C XOR D)? For the same reason, you can’t now expect evidence for C to always be counter-evidence for D.
Thanks for your patience and feedback; I updated it again. I hope it is now somewhat clearer what I’m trying to say.
Whoops, I’m just learning the basics (getting some practice here). I took NOT Z as an independent proposition. I guess there is no simple way to express this if you do not assign the negation of Z its own variable, in case you want it to be an independent proposition?
B believes that X argues for Z, but you might well believe that X argues against Z. (You are considering a model of a public debate, while this comment was more about principles for an argument between two people.)
Also, it’s strange that you are contemplating levels of belief in Z, while A and B assert it being purely true or false. How overconfident of them.
(Haven’t yet got around to a complete reply rectifying the model, but will do eventually.)
See my reply to saturn on recursive self-improvement. I already addressed potential hardware overhang. Nanotechnology is thus far following the curve of capability, and there is every reason to expect it will continue to do so. I already explained the sense in which chimpanzees were lopsided. Self-modification of code has been around for decades.
Maybe slightly off-topic: nothing Drexler hypothesised has, as far as I know, even been started. As I understand it, the state of things is still that we have literally no idea how to get there from here, and what’s called “nanotechnology” is materials science or synthetic biology. Do you have details of what you’re describing as following the curve?
Perhaps start here, with his early work on the potential of hypertext ;-)
A good source of such details is Drexler’s blog, where he has written some good articles about—and seems to consider highly relevant—topics like protein design and DNA origami.
(cough) I’m sure Drexler has plenty of detail on Drexler’s ideas. Assume I’m familiar with the advocates. I’m speaking of third-party sources, from the working worlds of physics, chemistry, physical chemistry and materials science, for example.
As far as I know—and I have looked—there’s little or nothing. No progress to nanobots, no progress to nanofactories. The curve in this case is a flat line at zero. Hence asking you specifically for detail on what you are plotting on your graph.
There has been some impressive-sounding research done on simulated diamondoid tooltips for this kind of thing. (Admittedly, done by advocates.)
I suspect when these things do arrive, they will tend to have hard vacuum, cryogenic temperatures, and flat surfaces as design constraints.
Well, that’s a bit like saying figuring out how to smelt iron constituted no progress to the Industrial Revolution. These things have to go a step at a time, and my point in referring to Drexler’s blog was that he seems to think e.g. protein design and DNA origami do constitute real progress.
As for things you could plot on a graph, consider the exponentially increasing amount of computing power put into molecular modeling simulations, not just by nanotechnology advocates, but people who actually do e.g. protein design for living today.
Also, I’m not sure what you mean by “symbolic processing” assuming a particular theory of mind—theories of mind differ on the importance thereof, but I’m not aware of any that dispute its existence. I’ll second the request for elaboration on this.
I’ll also ask, assuming I’m right, is there any weight of evidence whatsoever that would convince you of this? Or is AI go foom for you a matter of absolute, unshakable faith?
It would be better if you waited until you had made somewhat of a solid argument before you resorted to that appeal. Even Robin’s “Trust me, I’m an Economist!” is more persuasive.
The Bottom Line is one of the earliest posts in Eliezer’s own rationality sequences and describes approximately this objection. You’ll note that he added an Addendum:
I’m resisting the temptation to say “trust me, I’m an AGI researcher” :-) Bear in mind that my bottom line was actually the pro “AI go foom” side; it’s still what I would like to believe.
But my theory is clearly falsifiable. I stand by my position that it’s fair to ask you and Eliezer whether your theory is falsifiable, and if so, what evidence you would agree would falsify it.
But barely. ;)
You would not believe how little that would impress me. Well, I suppose you would—I’ve been talking with XiXi about Ben, after all. I wouldn’t exactly say that your status incentives promote neutral reasoning on this position—nor Robin’s on his. It is also slightly outside the core of your expertise, which is exactly where the judgement of experts is notoriously demonstrated to be poor.
You are trying to create AGI without friendliness and you would like to believe it will go foom? And this is supposed to make us trust your judgement with respect to AI risks?
Incidentally, ‘the bottom line’ accusation here was yours, not the other way around. The reference was made to question its premature use as a fully general counterargument.
We are talking here about predictions of the future. Predictions. That’s an important keyword, and one that is related to falsifiability. Build a flipping AGI of approximately human level and see whether the world as we know it ends within a year.
You just tag-teamed one general counterargument out to replace it with a new one. Unfalsifiability has a clear meaning when it comes to creating and discussing theories, and it is inapplicable here to the point of utter absurdity. Predictions, for crying out loud.
No indeed, they very strongly promote belief in AI foom—that’s why I bought into that belief system for a while, because if true, it would make me a potential superhero.
Nope, it’s exactly in the core of my expertise. Not that I’m expecting you to believe my conclusions for that reason.
When I believed in foom, I was working on Friendly AI. Now that I no longer believe that, I’ve reluctantly accepted that human-level AI in the near future is not possible, and I’m working on smarter tool AI instead—well short of human equivalence, but hopefully, with enough persistence and luck, better than what we have today.
That is what falsifiability refers to, yes.
My theory makes the prediction that even when recursive self-improvement is used, the results will be within the curve of capability, and will not produce more than a steady exponential rate of improvement.
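One way to state the difference between the two predictions (my own gloss on “steady exponential” versus “foom”, not rwallace’s formalism): if capability C feeds back into its own growth rate only linearly, you get an exponential; if the feedback is even slightly super-linear, capability diverges in finite time.

$$
\frac{dC}{dt} = kC \;\Rightarrow\; C(t) = C_0 e^{kt},
\qquad
\frac{dC}{dt} = kC^{1+\varepsilon} \;\Rightarrow\; C(t) = \bigl(C_0^{-\varepsilon} - \varepsilon k t\bigr)^{-1/\varepsilon}.
$$

The second solution blows up at $t^* = 1/(\varepsilon k C_0^{\varepsilon})$, however small $\varepsilon$ is. So the prediction above amounts to claiming that the curve of capability keeps the effective feedback exponent at (or below) 1 even under recursive self-improvement.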
Are you saying your theory makes no other predictions than this?
RWallace, you made a suggestion of unfalsifiability, a ridiculous claim. I humored you by giving the most significant, obvious and overwhelmingly critical way to falsify (or confirm) the theory. You now presume to suggest that such a reply amounts to a claim that this is the only prediction that could be made. This is, to put it in the most polite terms I am willing to use, disingenuous.
-sigh-
This crap goes on year after year, decade after bloody decade. Did you know the Singularity was supposed to happen in 2000? Then in 2005. Then in 2010. Guess how many Singularitarians went “oh hey, our predictions keep failing, maybe that’s evidence our theory isn’t actually right after all”? If you guessed none at all, give yourself a brownie point for an inspired guess. It’s like the people who congregate on top of a hill waiting for the angels or the flying saucers to take them up to heaven. They just go “well our date was wrong, but that doesn’t mean it’s not going to happen, of course it is, Real Soon Now.” Every time we actually try to do any recursive self-improvement, it fails to do anything like what the AI foom crowd says it should do, but of course, it’s never “well, maybe recursive self-improvement isn’t all it’s cracked up to be,” it’s always “your faith wasn’t strong enough,” oops, “you weren’t using enough of it,” or “that’s not the right kind” or some other excuse.
That’s what I have to deal with, and when I asked you for a prediction, you gave me the usual crap about oh well you’ll see when the Apocalypse comes and we all die, ha ha. And that’s the most polite terms I’m willing to put it in.
I’ve made it clear how my theory can be falsified: demonstrate recursive self-improvement doing something beyond the curve of capability. Doesn’t have to be taking over the world, just sustained improvement beyond what my theory says should be possible.
If you’re willing to make an actual, sensible prediction of RSI doing something, or some other event (besides the Apocalypse) coming to pass, such that if it fails to do that, you’ll agree your theory has been falsified, great. If not, fine, I’ll assume your faith is absolute and drop this debate.
That the Singularity concept pattern-matches doomsday cults is nothing new to anyone here. You looked further into it and declared it false, wedrifid and others looked into it and declared it possible. The discussion is now about evidence between those two points of view. Repeating that it looks like a doomsday cult is taking a step backwards, back to where we came to this discussion from.
rwallace’s argument isn’t centered on the standard argument that makes it look like a doomsday cult. He’s focusing on an apparent repetition of predictions while failing to update when those predictions have failed. That’s different from the standard claim about why Singularitarianism pattern-matches with doomsday cults, and it should, to a Bayesian, be fairly disturbing if he is correct about such a history.
Fair enough. I guess his rant pattern-matched the usual anti-doomsday-cult stuff I see involving the Singularity. Keep in mind that, as a Bayesian, you can adjust your estimate of the people making the predictions instead of the likelihood of the event. Certainly, that is what I have done; I care less for predictions, even from people I trust to reason well, because a history of failing predictions has taught me not that predicted events don’t happen, but rather that predictions are full of crap. This has the converse effect of greatly reducing the value of (in hindsight) correct predictions, which seems to be a pretty common failure mode for a lot of belief mechanisms: treating a correct prediction alone as sufficient evidence. I would require the process by which the prediction was produced to consistently predict correctly.
The pattern you are completing here has very little relevance to the actual content of the conversation. There is no prediction here about the date of a possible singularity and, for that matter, no mention of how probable it is. When, or if, someone such as yourself creates a human-level generally intelligent agent and releases it, that will go a long way towards demonstrating that one of the theories is false.
You have iterated through a series of argument attempts here, abandoning each only to move to another equally flawed one. The current one would appear to be ‘straw man’… and not a particularly credible straw man at that. (EDIT: Actually, no, you have kept the ‘unfalsifiable’ thing here too, somehow.)
Your debating methods are not up to the standards that are found to be effective and well received on lesswrong.
The way that this thread played out bothered me.
I feel like I am in agreement that computer hardware plus human algorithm equals FOOM. Just as hominids improved very steeply once a few pieces were put in place (which may or may not correspond to, but probably included, symbolic processing), I think that putting an intelligent algorithm in place on current computers is likely to create extremely rapid advancement.
On the other hand, it’s possible that this isn’t the case. We could sit around all day and play reference-class tennis, but we should be able to agree that there EXIST reference classes which provide SOME evidence against the thesis. The fact that fields like CAD have significant bottlenecks due to compiling time, for example, indicates that some progress currently driven by innovation still has a machine bottleneck and will not experience a recursive speedup when done by ems. The fact that in fields like applied math, new algorithms which are human insights often create serious jumps is evidence that these fields will experience recursive speedups when done by ems.
The beginning of this thread was Eliezer making a comment to the effect that symbolic logic is something computers can do, so it must not be what makes humans more special than chimps. It was a pretty mundane comment, and when I saw that it had over ten upvotes I was disappointed and reminded of RationalWiki’s claims that the site is a personality cult. rwallace responded by asking Eliezer to live up to the “standards that are found to be effective and well received on lesswrong,” albeit in a fairly snarky way. You not only responded with more snark, but (a) represented a significant “downgrade” from a real response from Eliezer, giving the impression that he has better things to do than respond to serious engagements with his arguments, and (b) did not reply with a serious engagement of the arguments, such as an acknowledgement of a level of evidence.
You could have responded by saying that “fields of knowledge relevant to taking over the world seem much more likely to me to be social areas where big insights are valuable and less like CAD where compiling processes take time. Therefore while your thesis that many areas of an em’s speedup will be curve-constrained may be true, it still seems unlikely to effect the probability of a FOOM.”
In which case you would have presented what rwallace requested—a possibility of falsification—without any need to accept his arguments. If Eliezer had replied in this way in the first place, perhaps no one involved in this conversation would have gotten annoyed and wasted the possibility of a valuable discussion.
I agree that this thread of comments has been generally lacking in the standards of argument usually present on LessWrong. But from my perspective you have not been bringing the conversation up to a higher level as much as stoking the fire of your initial disagreement.
I am disappointed in you, and by the fact that you were upvoted while rwallace was downvoted; this seems like a serious failure on the part of the community to maintain its standards.
To be clear: I do not agree with rwallace’s position here, I do not think that he was engaging at the level that is common and desirable here. But you did not make it easier for him to do that, you made it harder, and that is far more deserving of downvotes.
This would seem to suggest that you expected something different from me, that is better according to your preferences. This surprises me—I think my comments here are entirely in character, whether that character is one that appeals to you or not. The kind of objections I raise here are also in character. I consistently object to arguments of this kind and used in the way they are here. Perhaps ongoing dislike or disrespect would be more appropriate than disappointment?
You are one of the most prolific posters on Less Wrong. You have over 6000 karma, which means that for anyone who has some portion of their identity wrapped up in the quality of the community, you serve as at least a partial marker of how well that community is doing.
I am disappointed that such a well-established member of our community would behave in the way you did; your 6000 karma gives me the expectations that have not been met.
I realize that you may represent a slightly different slice of the LessWrong personality spectrum than I do, and this probably accounts for some amount of the difference, but this appeared to me to be a breakdown of civility, which is not, or at least should not be, dependent on your personality.
I don’t know you well enough to dislike you. I’ve seen enough of your posts to know that you contribute to the community in a positive way most of the time. Right now it just feels like you had a bad day and got upset about the thread and didn’t give yourself time to cool off before posting again. If this is a habit for you, then it is my opinion that it is a bad habit and I think you can do better.
Ahh. That does make sense. I fundamentally disagree with everything else of significance in your judgement here, but from your premises I can see how disappointment is consistent.
I will not respond to those judgments except to say that I don’t agree with you on any of the significant points. My responses here are considered, necessary and, if anything, erred on the side of restraint. Bullshit, in the technical sense, is the enemy here. This post, and particularly the techniques used to defend it, are bullshit in that sense. That it somehow got voted above −5 is troubling to me.
I agree that the arguments made in the original post tend to brush relevant details under the rug. But there is a difference between saying that an argument is flawed and trying to help fix it, and saying that it is irrelevant and the person is making a pure appeal to their own authority.
I was interested to see a more technical discussion of what sorts of things might be from the same reference class as recursive self-improvement. I was happy to see a viewpoint being represented on Less Wrong that was more diverse than the standard “party line.” Even if the argument is flawed I was glad to see it.
I would have been much happier to see the argument deconstructed than I am now having seen it turned into a flame war.
I believe I observed that it was far worse than an appeal to authority.
You do not understand the mechanisms of reasoning as employed here well enough to see why the comments here received the reception that they did.
In this comment rwallace asks you to make a falsifiable prediction. In this comment you state:
rwallace responded by saying:
Then in your reply you say he is accusing you of making that claim.
The way he asked his question was impolite. However in the whole of this thread, you have not attempted to provide a single falsifiable point, despite the fact that this is what he was explicitly asking for.
It is true that I do not understand the mechanisms. I thought that I understood that the policy of LessWrong is not to dismiss arguments but to fight the strongest argument that can be built out of that argument’s corpse.
At no point did the thread become, in my mind, about your belief that his argument was despicable. If I understand correctly, you believe that by drawing attention to technical details, he is drawing attention away from the strongest arguments on the topic and therefore moving people towards less correct beliefs in a dangerous way. This is a reasonable objection, but again at no point did I see this thread become about your objection in a positive light rather than being about his post in a negative light.
If you are interested in making your case explicitly, or demonstrating where you have attempted to make it, I would be very interested to see it. If you are interested in providing other explicit falsifiable claims or demonstrating where they have been made I would be interested to see that as well. If you are interested only in discussing who knows the community better and using extremely vague terms like “mechanisms of reasoning as employed here” then I think we both have better ways to spend our time.
You are simply wrong.
‘Falsifiable’ isn’t a rallying cry… it actually refers to a distinct concept—and it was supplied multiple times in a completely unambiguous fashion.
I did not initiate this conversation and at no time did I desire it. I did choose to reply to some of your comments.
Wedrifid pointed out flaws in a flawed post, and pointed out flaws in a series of flawed arguments. You could debate the degree of politeness required but pointing out flaws is in some fundamental ways an impolite act. It is also a foundation of improving rationality. To the extent that these comment sections are about improving rationality, wedrifid behaved exactly as they should have.
Karma on LessWrong isn’t about politeness, as far as I have seen. For what it’s worth, in my kibitzer’d neutral observations, the unanimous downvoting is because readers spotted flaws; unanimous upvoting is for posts that point out flaws in posts.
I’m starting to think we may need to bring up Eliezer’s ‘tending to the garden before it becomes overgrown’ and ‘raising the sanity waterline’ posts from early on. There has been a recent trend of new users picking an agenda to support, then employing the same kinds of fallacies and debating tactics in their advocacy. Then, when they are inevitably downvoted, there is the same sense of outrage that mere LW participants dare to evaluate their comments negatively.
It must be that all the lesswrong objectors are true believers in an echo chamber. Or maybe those who make the effort to reply are personally flawed. It couldn’t be that people here are able to evaluate the reasoning and consider the reasoning used to be more important than which side the author is on.
This isn’t a problem if it happens now and again. Either the new user has too much arrogance to learn to adapt to lesswrong standards and leaves, or they learn what is expected here and integrate into the culture. The real problem comes when arational debaters are able to lend support to each other, preventing natural social pressures from having their full effect. That’s when the sanity waterline can really start to fall.
When we see this, we should point them to the correspondence bias and the evil enemies posts and caution them not to assume that a critical reply is an attack from someone who is subverting the community—or worse, defending the community from the truth.
As an aside, top level posts are scary. Twice I have written up something, and both times I deleted it because I thought I wouldn’t be able to accept criticism. There is this weird feeling you get when you look at your pet theories and novel ideas you have come up with: they feel like truth, and you know how good LessWrong is with the truth. They are going to love this idea, know that it is true immediately and with the same conviction that you have, and celebrate you as a good poster and community member. After deleting the posts (and maybe this is rationalization) it occurred to me that had anyone disagreed, that would have been evidence not that I was wrong, but that they hated truth.
I didn’t mean just that he was impolite, or just that pointing out flaws in a flawed argument is bad or impolite. Of course when a post is flawed it should be criticized.
I am disappointed that the criticism was destructive, claiming that the post was a pure appeal to authority, rather than constructive, discussing how we might best update on this evidence, even if our update is very small or even in the opposite direction.
I guess what I’m saying is that we should hold our upvotes to a higher standard than just “pointing out flaws in an argument.”
It’s called less wrong for a reason. Encouraging the use of fallacious reasoning and dark arts rhetoric even by leaving it with a neutral reception would be fundamentally opposed to the purpose of this site. Most of the sequences, in fact, have been about how not to think stupid thoughts. One of the ways to do that is to prevent your habitat from overwhelming you with them and limiting your discussions to those that are up to at least a crudely acceptable level.
If you want a debate about AI subjects where the environment isn’t primarily focussed on rewarding sound reasoning then I am almost certain that there are other places that are more welcoming.
This particular thread has been about attacking poor reasoning via insult. I do not believe that this is necessarily the best way to promote sound reasoning. The argument could be made, and if you had started or if you continue by making that argument I would be satisfied with that.
I am happy to see that elsewhere there are responses which acknowledge that interesting information has been presented before completely demolishing the original article.
This makes me think that pursuing this argument between the two of us is not worthwhile, as it draws attention to both of us making posts that are not satisfying to each other and away from other posts which may seem productive to both of us.
Agreed. It takes an effort of willpower not to get defensive when you are criticised, so an attack (especially with insults) is likely to cause the target to become defensive and try to fight back rather than learn where they went wrong. As we know from the politics sequence, an attack might even make their conviction stronger!
However,
I actually can’t find a post on LessWrong specifically about this, but it has been said many times that the best is the enemy of the good. Be very wary of shooting down an idea because it is not the best idea. In the overwhelming majority of cases, the idea is better than doing nothing, and (again I don’t have the cite, but it has been discussed here before) if you spend too much time looking for the best, you don’t have any time left to do any of the ideas, so you end up doing nothing—which is worse than the mediocre idea you argued against.
If I was to order the ways of dealing with poor reasoning, it would look like this: Point out poor reasoning > Attack poor reasoning with insult > Leave poor reasoning alone.
Again, I disagree substantially with your observations on the critical premises.
I tend to agree, but what are those higher standards? One I would suggest is that the act of pointing out a flaw ought to be considered unsuccessful if the author of the flaw is not enlightened by the criticism. Sometimes communicating the existence of a flaw requires some handholding.
To those who object “It is not my job to educate a bias-laden idiot”, I respond, “And it is not my job to upvote your comment, either.”
Pointing out a flaw and suggesting how it might be amended would be an excellent post. Asking politely if the author has a different amendment in mind would be terrific.
And I could be incorrect here, but isn’t this site about nurturing rationalists? As I understand it, all of us humans (and clippy) are bias-laden idiots and the point of LessWrong is for us to educate ourselves and each other.
You keep switching back and forth between “is” and “ought” and I think this leads you into error.
The simplest prediction from wedrifid’s high karma is that his comments will be voted up. On the whole, his comments on this thread were voted up. The community normally agrees with him and today it agrees with him. This suggests that he is not behaving differently.
You have been around this community a while and should already have assessed its judgement and the meaning of karma. If you think that the community expresses bad judgement through its karma, then you should not be disappointed in bad behavior by high karma users. (So it would seem rather strange to write the above comment!) If you normally think that the community expresses good judgement through karma, then it is probably expressing similarly good judgement today.
Most likely, the difference is you, that you do not have the distance to adequately judge your interactions. Yes, there are other possibilities; it is also possible that “foom” is a special topic that the community and wedrifid cannot deal with rationally. But is it so likely that they cannot deal with it civilly?
I did not say that. I said that symbolic logic probably wasn’t It. You made up your own reason why, and a poor one.
Out of morbid curiosity, what is your reason for symbolic logic not being it?
I second the question out of healthy curiosity.
That’s fair. I apologize, I shouldn’t have put words in your mouth. That was the impression I got, but it was unfounded to say it came from you.
So, I’m vaguely aware of Singularity claims for 2010. Do you have citations for people making such claims that it would happen in 2000 or 2005?
I agree that pushing something farther and farther into the future is a potential warning sign.
In “The Maes-Garreau Point”, Kevin Kelly lists poorly referenced predictions of “when they think the Singularity will appear” of 2001, 2004 and 2005 - by Nick Hogard, Nick Bostrom and Eliezer Yudkowsky respectively.
But only a potential warning sign—fusion power is always 25 years away, but so is the decay of a Promethium-145 atom.
Right, but we expect that for the promethium atom. If physicists had predicted that a certain radioactive sample would decay within a fixed time, and they kept pushing back the date for when it would happen without altering their hypotheses at all, I’d be very worried about the state of physics.
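To spell out why the promethium comparison cuts the way it does (my addition, not either commenter’s): an exponentially distributed decay time is memoryless, so the expected remaining wait never shrinks, no matter how long you have already waited:

$$
P(T > s + t \mid T > s) = \frac{e^{-\lambda(s+t)}}{e^{-\lambda s}} = e^{-\lambda t} = P(T > t),
\qquad
E[\,T - s \mid T > s\,] = \frac{1}{\lambda}.
$$

For Pm-145, a half-life of roughly 17.7 years gives a mean lifetime of $t_{1/2}/\ln 2 \approx 25$ years, so the atom really is “always about 25 years from decaying”, and that is exactly what the physics predicts. A technology forecast has no such memoryless excuse, which is the point of the reply above.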
Not off the top of my head, which is one reason I didn’t bring it up until I got pissed off :) I remember a number of people predicting 2000 over the last decades of the 20th century; I think Turing himself was one of the earliest.
Turing never discussed anything much like a Singularity, to my knowledge. What you may be thinking of is how, in his original article proposing the Turing Test, he said that he expected it would take around fifty years for machines to pass the test. He wrote the essay in 1950. But Turing’s remark is not the same as a claim that a Singularity would occur in 2000; Turing was off about when we’d have AI. As far as I know, he didn’t comment on anything like a Singularity.
Ah, that’s the one I’m thinking of—he didn’t comment on a Singularity, but did predict human level AI by 2000. Some later people did, but I didn’t save any citations at the time and a quick Google search didn’t find any, which is one of the reasons I’m not writing a post on failed Singularity predictions.
Another reason, hopefully, is that there would always have been a wide range of predictions, and there’s a lot of room for proving points by being selective about which ones to highlight. Even if you looked at all predictions, there are selection effects: the ones that were repeated, or even stated in the first place, tend to be the more extreme ones.
If you think that most Singularities will be Unfriendly, the Anthropic Shadow means that their absence from our timeline isn’t very strong evidence against their being likely in the future: no matter what proportion of the multiverse sees the light cone paperclipped in 2005, all the observers in 2010 will be in universes that weren’t ravaged.
This is true if you think the maximum practical speed of interstellar colonization will be extremely close to (or faster than) the speed of light. (In which case, it doesn’t matter whether we are talking Singularity or not, friendly or not, only that colonization suppresses subsequent evolution of intelligent life, which seems like a reasonable hypothesis.)
If the maximum practical speed of interstellar colonization is significantly slower than the speed of light (and assuming mass/energy as we know them remain scarce resources, e.g. advanced civilizations don’t Sublime into hyperspace or whatever), then we would be able to observe advanced civilizations in our past light cone whose colonization wave hasn’t yet reached us.
Of course there is as yet no proof of either hypothesis, but such reasonable estimates as we currently have, suggest the latter.
Nitpick: If the civilization is spreading by SETI attack, observing them could be the first stage of being colonized by them. But I think the discussion may be drifting off-point here. (Edited for spelling.)
You are not an expert on recursive self improvement, as it relates to AGI or the phenomenon in general.
In fairness, I’m not sure anyone is really an expert on this (although this doesn’t detract from your point at all.)
You are right, and I would certainly not require such expertise of anyone before taking their thoughts seriously. I am simply wary of economists (Robin) or AGI-creator hopefuls claiming that their expertise should be deferred to (only relevant here as a hypothetical pseudo-claim). Professions will naturally try to claim more territory than would be objectively appropriate. This isn’t because the professionals are actively deceptive, but rather because it is the natural outcome of tribal instincts. Let’s face it—intellectual disciplines and fields of expertise are mostly about pissing on trees, but with better hygiene.
Yes, but why would the antipredictions of an AGI researcher not outweigh yours, since they are directly inverse? Further, if your predictions are not falsifiable then they can, by definition, never be refuted. Therefore it is not unreasonable to ask what would disqualify your predictions ahead of time, so as to be able to argue based on the diverging opinions here. Otherwise, as I said above, we’ll have two inverse predictions outweighing each other, and not the discussion about risk estimates we should be having.
The claim being countered was falsifiability. Your reply here is beyond irrelevant to the comment you quote.
rwallace said it all in his comment, the one that has been downvoted. Since I’m unable to find anything wrong with his comment and don’t understand yours at all, which has for unknown reasons been upvoted, there’s no way for me to counter what you say beyond what I’ve already said.
Here’s a wild guess at what I believe the positions to be. rwallace asks you what information would make you update or abandon your predictions. You in turn seem to believe that predictions are just that: utterances about what might be possible, unquestionable and not subject to any empirical criticism.
I believe I’m at least smarter than the general public, although I haven’t read a lot of Less Wrong yet. Further I’m always willing to announce that I have been wrong and to change my mind. This should at least make you question your communication skills regarding outsiders, a little bit.
Theories are collections of proofs, and a hypothesis is a prediction, or a collection of predictions, that must be falsifiable or proven before it can become the collection of proofs that is a theory. It is not absurd at all to challenge predictions based on their refutability, since any prediction that isn’t falsifiable will stand forever and is therefore useless.
The Wikipedia article on falsifiability would be a good place to start if you wish to understand what is wrong with the way falsification has been used (or misused) here. With falsifiability understood, seeing the problem should be straightforward.
I’ll just back out and withdraw my previous statements here. I was already reading that Wikipedia entry when you replied. It would certainly take too long to figure out where I might be wrong here. I thought falsifiability was sufficiently clear to me that I could ask what would change someone’s mind when I believe a given prediction is insufficiently specific.
I have to immerse myself in the shallows that are the foundations of falsifiability (philosophy). I have done so in the past and will continue to do so, but that will take time. Nothing so far has really convinced me that an unfalsifiable idea can provide more than hints of what might be possible, and therefore something new to try. Yet empirical criticism, in the form of the eventual realization of one’s ideas, or a proof of contradiction (or inconsistency), seems to be the best grounding for any truth-value (at least in retrospect, for a prediction). That is why I like to ask what information would change one’s mind about an idea, prediction or hypothesis. I call this falsifiability. If one replied, “nothing; falsifiability is being misused here”, I would conclude that the idea is unfalsifiable. Maybe wrongly so!
Thou art wise.
I’d like to know if you disagree with this comment. It would help me to figure out where we disagree, or what exactly I’m missing or misunderstanding with regard to falsifiability and the value of predictions.