A million? The only source of that quantity of would-be saviours I can think of is One True Way proselytising religions, but those millions are not independent—Christianity and Islam are it.
But the whole argument is wrong. Many claimed to fly and none succeeded—until someone did. Many claimed transmutation and none succeeded—until someone did. Many failed to resolve the problem of Euclid’s 5th postulate—until someone did. That no-one has succeeded at a thing is a poor argument for saying the next person to try will also fail (and an even worse one for saying the thing will never be done). You say “without further information”, but presumably you think this case falls within that limitation, or you would not have made the argument.
So there is no short-cut to judging the claims of a messianic zealot. You have to do the leg-work of getting that “further information”: studying his reasons for his claims.
And for every notable prophet or peace activist or whatever there are thousands forgotten by history.
And if you count Petrov (it’s not obvious why you would, as he didn’t save the world), in any case he never claimed beforehand that he was going to save the world, so P(saved the world|claimed to be world-savior) is less than P(saved the world|didn’t claim to be world-savior).
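The claimed inequality can be made concrete with a toy calculation. Every number below is invented purely to illustrate the structure of the argument (a large class of claimants, none of whom turns out to be the actual saver); none of it estimates anything real:

```python
# Toy population: all numbers are hypothetical, chosen only to show the
# direction of the claimed inequality, not to estimate it.
population = 10**9
claimants = 10**6        # people who claim they will save the world
savers = 1               # people who actually save it (a Petrov)
savers_who_claimed = 0   # by hypothesis, the actual saver never claimed

p_save_given_claim = savers_who_claimed / claimants
p_save_given_no_claim = (savers - savers_who_claimed) / (population - claimants)

print(p_save_given_claim < p_save_given_no_claim)  # True
```

Of course, the inequality only holds under the stated assumption that the actual savers are not among the claimants; with even one claiming saver, the direction can flip.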
But the whole argument is wrong. Many claimed to fly and none succeeded—until someone did.
You seem to be horribly confused here. I’m not arguing that nobody will ever save the world, just that a particular person claiming to is extremely unlikely.
So there is no short-cut to judging the claims of a messianic zealot. You have to do the leg-work of getting that “further information”: studying his reasons for his claims.
You should count Bacon, who believed himself (accurately) to be taking the first essential steps toward understanding and mastery of nature for the good of mankind. If you don’t count him on the grounds that he wasn’t concerned with existential risk, then you’d have to throw out all prophets who didn’t claim that their failure would increase existential risk.
He believed that the scientific method he developed and popularized would improve the world in ways that were previously unimaginable. He was correct, and his life accelerated the progress of the scientific revolution.
The claim may be weaker than a claim to help with existential risk, but it still falls into your reference class more easily than a lot of messiahs do.
Just for a starter: [lists of self-styled gods and divine emissaries]
I’ll give you more than two, but that still doesn’t amount to millions, and not all of those claimed to be saving the world. But now we’re into reference class tennis. Is lumping Eliezer in with people claiming to be god more useful than lumping him in with people who foresee a specific technological existential threat and are working to avoid it?
You seem to be horribly confused here. I’m not arguing that nobody will ever save the world, just that a particular person claiming to is extremely unlikely.
Of course, but the price of the Spectator’s Argument is that you will be wrong every time someone does save the world. That may be the trade you want to make, but it isn’t an argument for anyone else to do the same.
Unlike Eliezer, I refuse to see this as a bad thing. Reference classes are the best tool we have for thinking about rare events.
Is lumping Eliezer in with people claiming to be god more useful than lumping him in with people who foresee a specific technological existential threat and are working to avoid it?
You mean like people protesting nuclear power, GMOs, and LHC? Their track record isn’t great either.
Of course, but the price of the Spectator’s Argument is that you will be wrong every time someone does save the world.
How so? I’m not saying it’s entirely impossible that Eliezer or someone else who looks like a crackpot will actually save the world, just that it’s extremely unlikely.
[It’s extremely unlikely that] Eliezer or someone else who looks like a crackpot will actually save the world
This is ambiguous.
The most likely parse means: It’s nearly certain that not one person in the class [*] will turn out to actually save the world.
This is extremely shaky.
Or, you could mean: take any one person from that class. That one person is extremely unlikely to actually save the world.
This is uncontroversial.
[*] the class of all the people who would seem like crackpots if you knew them when (according to them) they’re working to save the world, but before they actually get to do it (or fail, or die first without the climax ever coming).
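The gap between the two parses can be seen in a toy calculation; the per-person probability p and the class size N below are invented for illustration only:

```python
# Hypothetical numbers: N crackpot-seeming would-be world-savers, each
# with the same small independent chance p of actually succeeding.
N = 10_000
p = 1e-4

p_any_given_person = p              # second parse: tiny, uncontroversial
p_at_least_one = 1 - (1 - p)**N     # first parse: not small at all

print(p_any_given_person)  # 0.0001
print(p_at_least_one)      # ≈ 0.63
```

So "each member of the class is extremely unlikely to succeed" and "the class as a whole is extremely unlikely to produce a success" come apart as soon as the class is large, which is exactly why the ambiguity matters.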
Of course, but the price of the Spectator’s Argument is that you will be wrong every time someone does save the world.
How so? I’m not saying it’s entirely impossible that Eliezer or someone else who looks like a crackpot will actually save the world, just that it’s extremely unlikely.
Because you are making a binary decision based on that estimate:
Given how low the chance is, I’ll pass.
With that rule, you will always make that decision, always predict that the unlikely will not happen, until the bucket goes to the well once too often.
Let me put this the other way round: on what evidence would you take seriously someone’s claim to be doing effective work against an existential threat? Of course, first there would have to be an existential threat, and I recall from the London meetup I was at that you don’t think there are any, although that hasn’t come up in this thread. I also recall you and ciphergoth going hammer-and-tongs over that for ages, but not whether you eventually updated from that position.
on what evidence would you take seriously someone’s claim to be doing effective work against an existential threat?
Eliezer’s claims are not that he’s doing effective work; his claims amount to being a messiah saving humanity from super-intelligent paperclip optimizers. That requires far more evidence. Ridiculously more, because you not only have to show that his work reduces some existential threat, but also that it doesn’t increase some other threat to a larger degree (pro-technology vs anti-technology crowds suffer from this—it’s not obvious who’s increasing and who’s decreasing existential threats). You might as well ask me what evidence I would need to take seriously someone’s claim to be the second coming of Jesus—in both cases it would need to be truly extraordinary evidence.
Anyway, the best-understood kind of existential threat is asteroid impacts, and there are people who try to do something about them, some even in the US Congress. I see a distinct lack of messiah complexes and personality cults there, very much unlike the AI crowd, which seems to consist mostly of people with delusions of grandeur.
Is there any other uncontroversial case like that?
I also recall you and ciphergoth going hammer-and-tongs over that for ages, but not what the outcome was.
A million? The only source of that quantity of would-be saviours I can think of is One True Way proselytising religions, but those millions are not independent—Christianity and Islam are it.
There has been at least one technological success, so that’s a success rate of 1 out of 3, not 0 out of a million.
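One standard way to turn "s successes in n attempts" into a forward-looking probability is Laplace's rule of succession, (s + 1) / (n + 2). Nobody in the thread invokes it; it is used here only to show how much the choice of reference class changes the answer:

```python
# Laplace's rule of succession: a standard smoothing estimate, applied
# here only to compare the two proposed reference classes.
def laplace(successes, trials):
    return (successes + 1) / (trials + 2)

print(laplace(0, 10**6))  # ≈ 1e-06  ("0 out of a million" class)
print(laplace(1, 3))      # 0.4      ("1 out of 3" class)
```

Six orders of magnitude of disagreement from the same person, depending only on which class he is counted in: that is what "reference class tennis" is about.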
But the whole argument is wrong. Many claimed to fly and none succeeded—until someone did. Many claimed transmutation and none succeeded—until someone did. Many failed to resolve the problem of Euclid’s 5th postulate—until someone did. That no-one has succeeded at a thing is a poor argument for saying the next person to try will also fail (and an even worse one for saying the thing will never be done). You say “without further information”, but presumably you think this case falls within that limitation, or you would not have made the argument.
So there is no short-cut to judging the claims of a messianic zealot. You have to do the leg-work of getting that “further information”: studying his reasons for his claims.
Just for a starter:
http://en.wikipedia.org/wiki/List_of_messiah_claimants
http://en.wikipedia.org/wiki/List_of_people_considered_to_be_deities
http://en.wikipedia.org/wiki/Category:Deified_people
http://en.wikipedia.org/wiki/Jewish_Messiah_claimants
And for every notable prophet or peace activist or whatever there are thousands forgotten by history.
And if you count Petrov (it’s not obvious why you would, as he didn’t save the world), in any case he never claimed beforehand that he was going to save the world, so P(saved the world|claimed to be world-savior) is less than P(saved the world|didn’t claim to be world-savior).
You seem to be horribly confused here. I’m not arguing that nobody will ever save the world, just that a particular person claiming to is extremely unlikely.
Given how low the chance is, I’ll pass.
You should count Bacon, who believed himself (accurately) to be taking the first essential steps toward understanding and mastery of nature for the good of mankind. If you don’t count him on the grounds that he wasn’t concerned with existential risk, then you’d have to throw out all prophets who didn’t claim that their failure would increase existential risk.
Accurately? Bacon doesn’t seem to have had any special impact on anything, or on existential risks in particular.
Man, I hope you don’t mean that.
He believed that the scientific method he developed and popularized would improve the world in ways that were previously unimaginable. He was correct, and his life accelerated the progress of the scientific revolution.
The claim may be weaker than a claim to help with existential risk, but it still falls into your reference class more easily than a lot of messiahs do.
This looks like a drastic overinterpretation. He seems like just another random philosopher: he didn’t “develop the scientific method” (empiricism is far older than Bacon, and modern science came considerably later), and there’s little basis for even claiming a radically discontinuous “scientific revolution” around Bacon’s time.
I’ll give you more than two, but that still doesn’t amount to millions, and not all of those claimed to be saving the world. But now we’re into reference class tennis. Is lumping Eliezer in with people claiming to be god more useful than lumping him in with people who foresee a specific technological existential threat and are working to avoid it?
Of course, but the price of the Spectator’s Argument is that you will be wrong every time someone does save the world. That may be the trade you want to make, but it isn’t an argument for anyone else to do the same.
Unlike Eliezer, I refuse to see this as a bad thing. Reference classes are the best tool we have for thinking about rare events.
You mean like people protesting nuclear power, GMOs, and LHC? Their track record isn’t great either.
How so? I’m not saying it’s entirely impossible that Eliezer or someone else who looks like a crackpot will actually save the world, just that it’s extremely unlikely.
This is ambiguous.
The most likely parse means: It’s nearly certain that not one person in the class [*] will turn out to actually save the world.
This is extremely shaky.
Or, you could mean: take any one person from that class. That one person is extremely unlikely to actually save the world.
This is uncontroversial.
[*] the class of all the people who would seem like crackpots if you knew them when (according to them) they’re working to save the world, but before they actually get to do it (or fail, or die first without the climax ever coming).
I agree, but Eliezer strongly rejects this claim. Probably by making a reference class for just himself.
Because you are making a binary decision based on that estimate:
Given how low the chance is, I’ll pass.
With that rule, you will always make that decision, always predict that the unlikely will not happen, until the bucket goes to the well once too often.
Let me put this the other way round: on what evidence would you take seriously someone’s claim to be doing effective work against an existential threat? Of course, first there would have to be an existential threat, and I recall from the London meetup I was at that you don’t think there are any, although that hasn’t come up in this thread. I also recall you and ciphergoth going hammer-and-tongs over that for ages, but not whether you eventually updated from that position.
Eliezer’s claims are not that he’s doing effective work; his claims amount to being a messiah saving humanity from super-intelligent paperclip optimizers. That requires far more evidence. Ridiculously more, because you not only have to show that his work reduces some existential threat, but also that it doesn’t increase some other threat to a larger degree (pro-technology vs anti-technology crowds suffer from this—it’s not obvious who’s increasing and who’s decreasing existential threats). You might as well ask me what evidence I would need to take seriously someone’s claim to be the second coming of Jesus—in both cases it would need to be truly extraordinary evidence.
Anyway, the best-understood kind of existential threat is asteroid impacts, and there are people who try to do something about them, some even in the US Congress. I see a distinct lack of messiah complexes and personality cults there, very much unlike the AI crowd, which seems to consist mostly of people with delusions of grandeur.
Is there any other uncontroversial case like that?
The outcome showed that Aumann was wrong, mostly.