Just for a starter: [lists of self-styled gods and divine emissaries]
I’ll give you more than two, but that still doesn’t amount to millions, and not all of those claimed to be saving the world. But now we’re into reference class tennis. Is lumping Eliezer in with people claiming to be god more useful than lumping him in with people who foresee a specific technological existential threat and are working to avoid it?
You seem to be horribly confused here. I’m not arguing that nobody will ever save the world, just that a particular person claiming to is extremely unlikely.
Of course, but the price of the Spectator’s Argument is that you will be wrong every time someone does save the world. That may be the trade you want to make, but it isn’t an argument for anyone else to do the same.
Unlike Eliezer, I refuse to see this as a bad thing. Reference classes are the best tool we have for thinking about rare events.
Is lumping Eliezer in with people claiming to be god more useful than lumping him in with people who foresee a specific technological existential threat and are working to avoid it?
You mean like people protesting nuclear power, GMOs, and the LHC? Their track record isn’t great either.
Of course, but the price of the Spectator’s Argument is that you will be wrong every time someone does save the world.
How so? I’m not saying it’s entirely impossible that Eliezer or someone else who looks like a crackpot will actually save the world, just that it’s extremely unlikely.
[It’s extremely unlikely that] Eliezer or someone else who looks like a crackpot will actually save the world
This is ambiguous.
The most likely parse means: It’s nearly certain that not one person in the class [*] will turn out to actually save the world.
This is extremely shaky.
Or, you could mean: take any one person from that class. That one person is extremely unlikely to actually save the world.
This is uncontroversial.
[*] the class of all the people who would seem like crackpots if you knew them when (according to them) they’re working to save the world, but before they actually get to do it (or fail, or die first without the climax ever coming).
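To make the gap between those two readings concrete, here is a toy calculation. Every number in it is an assumption picked purely for illustration, and treating the attempts as independent is itself generous:

```python
# All numbers here are illustrative assumptions, not anyone's actual estimates.
p_each = 1e-6          # assumed chance that one particular crackpot-looking person saves the world
n_people = 1_000_000   # assumed size of the class [*]

p_specific = p_each                    # second reading: this one particular person succeeds
p_none = (1 - p_each) ** n_people      # first reading: nobody in the class succeeds

print(p_specific)    # 1e-06, "extremely unlikely": the uncontroversial reading
print(p_none)        # about 0.37, far from "nearly certain": the shaky reading
print(1 - p_none)    # about 0.63: someone in the class succeeds
```

A tiny per-person probability is compatible with a sizeable probability that someone in a large enough class succeeds, which is why the two readings come apart.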
I agree, but Eliezer strongly rejects this claim. Probably by making a reference class for just himself.

Of course, but the price of the Spectator’s Argument is that you will be wrong every time someone does save the world.
How so? I’m not saying it’s entirely impossible that Eliezer or someone else who looks like a crackpot will actually save the world, just that it’s extremely unlikely.
Because you are making a binary decision based on that estimate:
Given how low the chance is, I’ll pass.
With that rule, you will always make that decision, always predict that the unlikely will not happen, until the bucket goes to the well once too often.
Let me put this the other way round: on what evidence would you take seriously someone’s claim to be doing effective work against an existential threat? Of course, first there would have to be an existential threat, and I recall from the London meetup I was at that you don’t think there are any, although that hasn’t come up in this thread. I also recall you and ciphergoth going hammer-and-tongs over that for ages, but not whether you eventually updated from that position.
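A minimal sketch of what the "Given how low the chance is, I’ll pass" rule costs, with both numbers below picked purely for illustration:

```python
# Both numbers are assumptions chosen only to make the asymmetry visible.
p_genuine = 1e-3     # assumed chance that any given world-saving claim is genuine
n_claims = 1000      # assumed number of such claims you get to judge

# The rule: always predict "this person will not save the world."
per_claim_accuracy = 1 - p_genuine       # right on 99.9% of claims
expected_misses = n_claims * p_genuine   # genuine claims you expect to dismiss

print(per_claim_accuracy)   # 0.999
print(expected_misses)      # 1.0, and that miss is the case that matters most
```

Being right on almost every claim is compatible with being wrong on every genuine one; whether that trade is worth it is exactly the question the Spectator’s Argument raises.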
on what evidence would you take seriously someone’s claim to be doing effective work against an existential threat?
Eliezer’s claim is not merely that he’s doing effective work; his claim is pretty much that he’s a messiah saving humanity from super-intelligent paperclip optimizers. That requires far more evidence. Ridiculously more, because you not only have to show that his work reduces some existential threat, but also that it doesn’t increase some other threat to a larger degree (pro-technology and anti-technology crowds both suffer from this: it’s not obvious who’s increasing and who’s decreasing existential threats). You might as well ask me what evidence I would need to take seriously someone’s claim to be the second coming of Jesus; in both cases it would have to be truly extraordinary evidence.
Anyway, the best-understood kind of existential threat is asteroid impacts, and there are people who try to do something about them, some even in the US Congress. I see a distinct lack of messiah complexes and personality cults there, very much unlike the AI crowd, which seems to consist mostly of people with delusions of grandeur.
Is there any other uncontroversial case like that?
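To put "truly extraordinary evidence" in numbers, a minimal sketch; the prior below is an assumption picked only to make the arithmetic visible:

```python
# Assumed prior that a given messiah-style claim is true; purely illustrative.
prior = 1e-6
prior_odds = prior / (1 - prior)

target_posterior = 0.5                                    # merely "as likely as not"
target_odds = target_posterior / (1 - target_posterior)   # equals 1.0

# Posterior odds = prior odds * likelihood ratio, so the evidence would have to be
# about a million times more probable if the claim were true than if it were false.
required_likelihood_ratio = target_odds / prior_odds
print(required_likelihood_ratio)   # roughly 1e6
```

Ordinary evidence with likelihood ratios of two or ten barely moves a prior that small, which is the sense in which the required evidence is extraordinary.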
I also recall you and ciphergoth going hammer-and-tongs over that for ages, but not what the outcome was.
The outcome showed that Aumann was wrong, mostly.