Haven’t there been a lot more than a million people in history that claimed saving the world, with 0 successes? Without further information, reasonable estimates are from 0 to 1/million.
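One way to make “reasonable estimates are from 0 to 1/million” concrete is Laplace’s rule of succession; the sketch below is illustrative only, and both the rule and the round figure of a million failed claimants are assumptions rather than anything the comment commits to.

```python
# Illustrative sketch (not the commenter's own calculation): Laplace's rule of
# succession applied to "0 successes in roughly 1,000,000 claims".
def rule_of_succession(successes: int, trials: int) -> float:
    """Posterior mean of the success probability under a uniform prior."""
    return (successes + 1) / (trials + 2)

print(rule_of_succession(0, 1_000_000))  # ~1e-6, i.e. roughly "1 in a million"
```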
Can you name ten who claimed to do so via non-supernatural/extraterrestrial means? Even counting claims of the supernatural I would be surprised to learn there had been a million.
And FAI counts as not “supernatural” how?
In any case, nuclear war, peak oil, global warming, overpopulation attracted a huge number of people who claimed that civilization will end unless this or that will be done.
In the ordinary sense that Richard Dawkins and James Randi use.
“If we don’t continue to practice agriculture or hunting and gathering, civilization will end.”
There are plenty of true statements like that. Your argument needs people who said that such and such things needed to be done, and that they were the ones who were going to cause the things in question. If you list some specific people, you can then identify relevant features that are or are not shared.
Disclaimer: I think near-term efforts to reduce AI risk will probably not be determinative in preventing an existential risk, but have a non-negligible probability of doing so that gives high expected value at the current margin via a number of versions of cost-benefit analysis. Moreso for individual near-term AI risk efforts.
It’s true that Mr. Yudkowsky’s claims violate relatively few of the generally accepted laws of physics compared to the average messiah-claimant, but not every false claim is trivially false. Indeed, some of the most successful cults are built by starting with something that actually does work and taking it too far.
“relatively few”? Name two.
I can name only one explicit point of departure and it’s defensible.
Other saviors have claimed, e.g., the ability to resurrect a person long since reduced to moldy bones, which spits in the face of thermodynamics. Relative to that, a quibble with QM involves very few physical laws.
A—his beliefs on MWI have no bearing on his relative importance wrt the future of the world.
B—when you say “defensible”, you mean “accepted by the clear majority of scientists working in the field”.
MWI has been empirically falsified.
http://www.analogsf.com/0410/altview2.shtml
What now?
This is a tiny minority opinion, based on math that is judged incorrect by the overwhelming majority of experts.
Can someone link to a good explanation of all this? Or write one?
So why was this post voted down so far? It appears to be a relevant and informative link to a non-crank source, with no incivility that I could see.
Overconfidence in the assertion. Presumption of a foregone conclusion.
It was a relevant link, and I enjoyed doing the background reading to find out just how seriously the relevant authorities take this fellow’s stance. He is not a crank, but he is someone with a large personal stake. The claim in the article seems to have an element of spin, in the interpretations of interpretations as it were.
I did lower my confidence in how well I grasp QM, but much of that confidence was restored once I traced down some more expert positions and scanned some Wikipedia articles. I focussed in particular on whether MW is a ‘pure’ interpretation, that is, whether it does actually deviate from the formal math.
With an introduction like that, the link should go to a recent announcement in a major scientific journal by a lot of respected people based on overwhelming evidence, not to one guy writing a non-peer-reviewed argument about a ten-year-old experiment that, AFAICT, most physicists see as perfectly consistent with our existing understanding of QM.
It is a source targeted at the general public, which unfortunately does not know enough to hire a competent columnist. John Cramer has used the wrong equations to arrive at an incorrect description of the Afshar experiment, which he uses to justify his own interpretation of QM, which he wants to be correct. The experiment is not in conflict with the known laws of physics.
In general, I advise you to mistrust reports of recent developments in physics if you have no physics training. I check a number of popular sources occasionally, and about half of the articles are either wrong or misleading. For example, you may have recently heard about Erik Verlinde’s theories of entropic gravity. If gravity were an entropic force, the gravitational field would cause extremely rapid decoherence, preventing, for example, the standard double-slit experiment. This is obviously not observed, yet this theory is one of the better-known ones among physics fans.
Incivility gets most of the big downvotes, and genuine insight gets the big upvotes, but I’ve noticed that the +1s and −1s tend to reflect compliance with site norms more than skill.
This is worrying, of course, but I’m not equipped to fix it.
If the stated rule for voting is “upvote what you want more of; downvote what you want less of,” and the things that are getting upvoted are site norms while the things that are getting downvoted aren’t, one interpretation is that the system is working properly: they are site norms precisely because they are the things people want more of, which are therefore getting upvoted.
:P Skill? What is this skill of which you speak?
Ignore it and write comments worth +5. :)
It’s easier to write five yes-man quotes for +1 each than one +5 comment, which seems like a flawed incentive system.
That isn’t my experience. When in the mood to gain popularity, +5 comments are easy to spin, while bulk +1s take rather a lot of typing. I actually expect that even while trying to get +1s I would accidentally get at least a fifth as many +5s as +1s.
Edit: I just scanned back through the last few pages of my comments. I definitely haven’t been in a ‘try to appear deep and insightful’ kind of mood, and even so more karma came from +5s than from +1s. I was surprised, because I had actually thought my recent comments might be an exception.
This is what I find, scanning back over my last 20 comments. My last 30 include a +19 so I didn’t even bother.
And of course karma is a flawed incentive system. It’s not meant as an incentive system.
I actually ignored everything that wasn’t exactly a +5 to make the world that much less convenient. :P
Interesting, but I don’t think that’s the right characterization of the content of the link. It’s John Cramer (proponent of the transactional interpretation) claiming that the Afshar experiment’s results falsify both Copenhagen and MWI. I think you’re better off reading about the experiment directly.
That experiment is ten years old and its implications are rather controversial.
Identify the element of MWI that, according to Cramer’s blog, is not consistent with the mathematical formalism of quantum mechanics, and if you happen to be thinking that, then stop.
CarlShulman is correct, but for reference, Richard Carrier’s definition of “supernatural”:
In short, I argue “naturalism” means, in the simplest terms, that every mental thing is entirely caused by fundamentally nonmental things, and is entirely dependent on nonmental things for its existence. Therefore, “supernaturalism” means that at least some mental things cannot be reduced to nonmental things.
By this definition, isn’t “AGI” borderline supernatural and “friendliness” entirely supernatural?
This doesn’t feel like the right definition.
I don’t quite understand your confusion. An AGI is a computer program, and friendliness is a property of a computer program. Yes, these concepts allude to mental concepts on our maps, but these mental concepts are reducible to properties of the nonmental substrates that are our brains. In fact, the goal of FAI research is to find the reduction of friendliness to nonmental things.
Concepts of sin, or prayer, or karma, or intelligent design, and most other things considered “supernatural” can be reduced to properties of the physical world with enough hand-waving.
Karma scores in this thread suggest it falls into the reference class of “arguing against groupthink”, which ironically increases estimates that Eliezer is a crackpot and that Less Wrong is turning into a cult, possibly via evaporative cooling.
No, that’s really not borne out by the evidence. Multifolaterose’s posts have been strongly upvoted, it seems to me, by a significant group of readers who see themselves as defenders against groupthink. It’s just that you have been voted down for refusing to see a distinction that’s clearly there, between “here is a complicated thing which nevertheless must be reducible to simpler things, which is what we’re in the process of rigorously doing” and “here is a magical thing which we won’t ever have a mathematical understanding of, but it will work if we play by the right rules”.
Is there a way to find a random sample of threads with heavy downvoting? My experience on reddit suggests it’s usually groupthink.
Set your preferences to only hide comments below −5. Go to an old Open Thread or a particularly large discussion, and search for “comment score below threshold”.
“Karma” is the only definitionally supernatural item on that list—it is defined to be not reducible to nonmental mechanism. The others are merely elements of belief systems which contain elements that are supernatural (e.g. God).
Yes, the concept of “karma” can be reduced to naturalistic roots if you accept metaphysical naturalism, but the actual thing cannot be. It’s the quotation which you can reduce, not the referent.
You can reduce an AGI to the behavior of computer chips (or whatever fancy-schmancy substrate they end up running on), which are themselves just channels for the flow of electrons. Nothing mental there. Friendliness is a description of the utility function and decision theory of an AGI, both of which can be reduced to patterns of electrons on a computer chip.
It’s all electrons floating around. We just talk about ridiculous abstract things like AGI and Friendliness because it makes the math tractable.
But there is further information. We must expect Eliezer to make use of all of the information available to him when making such an implied estimate, and similarly to use everything we have available when evaluating the credibility of any expressed claims.
Nitpick: Do you mean credulity or credibility?
The one that makes sense. Thanks. :)
Unfortunately the Internet doesn’t let me guess whether you meant this humorously or not.
Entirely seriously. I also don’t see anything particularly funny about it.
Why does it apply to Eliezer and not to every other person claiming to be a messiah?
It does apply to every person. The other information you have about the claimed messiahs may allow you to conclude that they are not worthy of further consideration (or any consideration); the low prior makes that easy. But if you do consider other arguments for some reason, you have to take them into account. And some surface signals can be enough to give you grounds for actually seeking or considering more data. Also, you are never justified in actively arguing from ignorance: if you expend some effort on the arguing, you must consider the question sufficiently important, which must cause you to learn more about it if you believe yourself to be ignorant about potentially conclusion-changing details.
See also: How Much Thought, Readiness Heuristics.
I am confused. Something has gone terribly wrong with my inferential distance prediction model.
And I have no idea what you refer to.
Approximately this entire comment branch.
I don’t know to what you refer either, but I can guess. The thing is, my guesses haven’t been doing very well lately, so I would appreciate some feedback. Were you suggesting that you had expected taw to understand your point more easily, but he didn’t (because the inferential distance between you was greater than expected)?
I admit I was being obscure, so I’m rather impressed that you followed my reasoning, especially since it included a reference that you may not be familiar with. I kept it obscure because I wanted the focus to be on my confusion while minimising the slight to taw.
Actually this whole post-thread has been eye-opening and/or confusing and/or surprising to me. I’ve been blinking and double-taking all over the place: “people think?”, “that works?”, etc. What surprised me most (and in a good way) was the degree to which all the comments have been a net positive. Political and personal topics so often become negative-sum, but this one didn’t seem to.
As a newcomer to LW, it has certainly impressed me. I’ve never before seen a discussion where the topic was “Is this guy we all respect a loon?” and the whole discussion is filled with clever arguments and surprising connections being drawn by all concerned—most particularly by the purported/disputed loon himself.
I wouldn’t have believed it possible if I hadn’t participated in it myself.
Reference class of “people who claimed to be saving the world and X” has exactly the same number of successes as reference class of “people who claimed to be saving the world and not X”, for every X.
It will be smaller, so you could argue that evidence against Eliezer will be weaker (0 successes in 1000 tries vs 0 successes in 1000000 tries), but every such X needs evidence by Occam’s razor (or your favourite equivalent). Otherwise you can take X = “wrote Harry Potter fanfiction” to ignore pretty much all past failures.
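A rough sketch of why the smaller reference class gives weaker evidence: hold a candidate per-claimant success rate fixed and compare how compatible “zero successes” is with it at the two class sizes. The 1,000 and 1,000,000 counts are the commenter’s round figures; the candidate rate is an arbitrary assumption for illustration.

```python
# Illustrative sketch: how strongly "zero successes" counts against a candidate
# success rate p, for the two reference-class sizes mentioned above.
p = 1e-3  # arbitrary hypothetical per-claimant success rate (an assumption)

for n in (1_000, 1_000_000):
    prob_no_successes = (1 - p) ** n  # chance of seeing 0 successes in n tries
    print(n, prob_no_successes)

# 0 successes in 1,000 tries is quite compatible with p = 1e-3 (about 0.37),
# while 0 in 1,000,000 makes it untenable (the value underflows to 0.0 here),
# which is the sense in which the smaller class provides weaker evidence.
```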
A million? The only source of that quantity of would-be saviours I can think of is One True Way proselytising religions, but those millions are not independent—Christianity and Islam are it.
There has been at least one technological success, so that’s a success rate of 1 out of 3, not 0 out of a million.
But the whole argument is wrong. Many claimed to fly and none succeeded—until someone did. Many claimed transmutation and none succeeded—until someone did. Many failed to resolve the problem of Euclid’s 5th postulate—until someone did. That no-one has succeeded at a thing is a poor argument for saying the next person to try will also fail (and an even worse one for saying the thing will never be done). You say “without further information”, but presumably you think this case falls within that limitation, or you would not have made the argument.
So there is no short-cut to judging the claims of a messianic zealot. You have to do the leg-work of getting that “further information”: studying his reasons for his claims.
Just for a starter:
http://en.wikipedia.org/wiki/List_of_messiah_claimants
http://en.wikipedia.org/wiki/List_of_people_considered_to_be_deities
http://en.wikipedia.org/wiki/Category:Deified_people
http://en.wikipedia.org/wiki/Jewish_Messiah_claimants
And for every notable prophet or peace activist or whatever there are thousands forgotten by history.
And if you count Petrov (it’s not obvious why, as he didn’t save the world), he wasn’t claiming beforehand that he was going to save the world, so P(saved the world | claimed to be a world-savior) is less than P(saved the world | didn’t claim to be a world-savior).
You seem to be horribly confused here. I’m not arguing that nobody will ever save the world, just that a particular person claiming to is extremely unlikely.
Given how low the chance is, I’ll pass.
You should count Bacon, who believed himself, accurately, to be taking the first essential steps toward understanding and mastery of nature for the good of mankind. If you don’t count him on the grounds that he wasn’t concerned with existential risk, then you’d have to throw out all prophets who didn’t claim that their failure would increase existential risk.
Accurately? Bacon doesn’t seem to have had any special impact on anything, or on existential risks in particular.
Man, I hope you don’t mean that.
He believed that the scientific method he developed and popularized would improve the world in ways that were previously unimaginable. He was correct, and his life accelerated the progress of the scientific revolution.
The claim may be weaker than a claim to help with existential risk, but it still falls into your reference class more easily than a lot of messiahs do.
This looks like a drastic overinterpretation. He seems like just another random philosopher; he didn’t “develop the scientific method”, empiricism is far older and modern science far more recent than Bacon, and there’s little basis for even claiming a radically discontinuous “scientific revolution” around Bacon’s time.
I’ll give you more than two, but that still doesn’t amount to millions, and not all of those claimed to be saving the world. But now we’re into reference class tennis. Is lumping Eliezer in with people claiming to be god more useful than lumping him in with people who foresee a specific technological existential threat and are working to avoid it?
Of course, but the price of the Spectator’s Argument is that you will be wrong every time someone does save the world. That may be the trade you want to make, but it isn’t an argument for anyone else to do the same.
Unlike Eliezer, I refuse to see this as a bad thing. Reference classes are the best tool we have for thinking about rare events.
You mean like people protesting nuclear power, GMOs, and LHC? Their track record isn’t great either.
How so? I’m not saying it’s entirely impossible that Eliezer or someone else who looks like a crackpot will actually save the world, just that it’s extremely unlikely.
This is ambiguous.
The most likely parse means: It’s nearly certain that not one person in the class [*] will turn out to actually save the world.
This is extremely shaky.
Or, you could mean: take any one person from that class. That one person is extremely unlikely to actually save the world.
This is uncontroversial.
[*] the class of all the people who would seem like crackpots if you knew them when (according to them) they’re working to save the world, but before they actually get to do it (or fail, or die first without the climax ever coming).
I agree, but Eliezer strongly rejects this claim. Probably by making a reference class for just himself.
Because you are making a binary decision based on that estimate:
Given how low the chance is, I’ll pass.
With that rule, you will always make that decision, always predict that the unlikely will not happen, until the bucket goes to the well once too often.
Let me put this the other way round: on what evidence would you take seriously someone’s claim to be doing effective work against an existential threat? Of course, first there would have to be an existential threat, and I recall from the London meetup I was at that you don’t think there are any, although that hasn’t come up in this thread. I also recall you and ciphergoth going hammer-and-tongs over that for ages, but not whether you eventually updated from that position.
Eliezer’s claims are not that he’s doing effective work; his claims are pretty much that he is a messiah saving humanity from super-intelligent paperclip optimizers. That requires far more evidence. Ridiculously more, because you not only have to show that his work reduces some existential threat, but also that it doesn’t increase some other threat to a larger degree (pro-technology vs anti-technology crowds suffer from this: it’s not obvious who’s increasing and who’s decreasing existential threats). You can as well ask me what evidence I would need to take seriously someone’s claim that he’s a second coming of Jesus; in both cases it would need to be truly extraordinary evidence.
Anyway, the best understood kind of existential threat is asteroid impacts, and there are people who try to do something about them, some even in the US Congress. I see a distinct lack of messiah complexes and personality cults there, very much unlike the AI crowd, which seems to consist mostly of people with delusions of grandeur.
Is there any other uncontroversial case like that?
I also recall you and ciphergoth going hammer-and-tongs over that for ages, but not what the outcome was.
The outcome showed that Aumann was wrong, mostly.
Yes, if you accept religious lunatics as your reference class.
Try peak oil/anti-nuclear/global warming/etc. activists then? They tend to claim that their movement saves the world, not that they do personally, but I’m sure I could find a sufficient number of them who also had some personality cult thrown in.
Sure, but that would 1) reduce your 1/100000 figure, especially if you take only the leaders of said movements. And I would not find claims of saving the world by anti-nuke scientists in, say, the 1960s preposterous.
I think that if you accept that AGI is “near”, that FAI is important to try in order to prevent it, and that EY was at the very least the person who brought the spotlight to the problem (which is a fact), you can end up thinking that he might actually make a difference.
Yeah, I’m tickled by the estimate that so far 0 people have saved the world. How do we know that? The world is still here, after all.
Eliezer has already placed a Go stone on that intersection, it turns out.
As the comments discuss, that was not an extinction event, barring further burdensome assumptions about nuclear winter or positive feedbacks of social collapse.
In any case Wikipedia disagrees with this story.
No, the Permanent Mission of the Russian Federation to the United Nations disagrees with this story, and Wikipedia quotes that disagreement. The very next section explains why that disagreement may be incorrect.
Do you have any candidates in mind, or some plausible scenario of how the world might have been saved by a single person without that person achieving due prominence?
I already did; there was a huge number of such movements, most of them highly obscure (not unlike Eliezer). I’d expect some power-law distribution in prominence, so for every one we’ve heard about there’d be far more we haven’t.
I don’t, and the link from AGI to FAI is as weak as the link from oil production statistics to the civilizational collapse the peak-oilers promised.
OK, how close we are to AGI is a prior I do not care to argue about, but don’t you think AGI is a concern? What do you mean by a weak link?
The part where the development of AGI fooms immediately into superintelligence and destroys the world. The evidence for it is not even circumstantial; it is fictional.
Ok, of course it’s fictional—hasn’t happened yet!
Still, when I imagine something that is smarter than the man who created it, it seems it would be able to improve itself. I would bet on that; I do not see a strong reason why this would not happen. What about you? Are you with Hanson on this one?