Of course, but the price of the Spectator’s Argument is that you will be wrong every time someone does save the world.
How so? I’m not saying it’s entirely impossible that Eliezer or someone else who looks like a crackpot will actually save the world, just that it’s extremely unlikely.
Because you are making a binary decision based on that estimate:
Given how low the chance is, I’ll pass.
With that rule, you will always make that decision, always predict that the unlikely will not happen, until the bucket goes to the well once too often.
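To make the arithmetic behind that concrete, here is a toy sketch (my illustration, not from the thread): even if each individual long-shot claim has only a tiny probability of being right, a rule that always bets against such claims is eventually wrong with probability approaching 1 as independent claims accumulate. The p = 0.001 figure is an assumption chosen purely for illustration.

```python
# Toy sketch (illustration only): probability that a rule of always
# betting against low-probability claims is wrong at least once,
# across n independent claims each with assumed probability p of
# being right.
p = 0.001  # assumed per-claim probability; chosen only for illustration

for n in (10, 100, 1000, 10000):
    at_least_one = 1 - (1 - p) ** n
    print(f"n = {n:>5}: P(wrong at least once) = {at_least_one:.3f}")
```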
Let me put this the other way round: on what evidence would you take seriously someone’s claim to be doing effective work against an existential threat? Of course, first there would have to be an existential threat, and I recall from the London meetup I was at that you don’t think there are any, although that hasn’t come up in this thread. I also recall you and ciphergoth going hammer-and-tongs over that for ages, but not whether you eventually updated from that position.
on what evidence would you take seriously someone’s claim to be doing effective work against an existential threat?
Eliezer’s claims are not that he’s doing effective work; his claims amount to being a messiah saving humanity from super-intelligent paperclip optimizers. That requires far more evidence. Ridiculously more, because you not only have to show that his work reduces some existential threat, but also that it doesn’t increase some other threat to a larger degree (pro-technology and anti-technology crowds both suffer from this; it’s not obvious who’s increasing and who’s decreasing existential threats). You might as well ask me what evidence I would need to take seriously someone’s claim to be the second coming of Jesus; in both cases it would have to be truly extraordinary evidence.
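As a rough Bayesian sketch of why "ridiculously more" evidence is needed (again my illustration, with an assumed prior, not anything stated in the thread): the lower the prior on a claim, the larger the likelihood ratio the evidence must supply before the posterior stops being negligible.

```python
# Toy Bayesian sketch (illustration only): with a very low prior,
# only a very large likelihood ratio ("extraordinary evidence")
# moves the posterior away from ~0.
def posterior(prior, likelihood_ratio):
    # Convert probability to odds, apply the likelihood ratio,
    # then convert back to a probability.
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

prior = 1e-6  # assumed prior for a messiah-grade claim; illustration only
for lr in (10, 1_000, 1_000_000):
    print(f"likelihood ratio {lr:>9,}: posterior = {posterior(prior, lr):.4f}")
```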
Anyway, the best understood kind of existential threat is asteroid impacts, and there are people who try to do something about them, some even in the US Congress. I see a distinct lack of messiah complexes and personality cults there, very much unlike the AI crowd, which seems to consist mostly of people with delusions of grandeur.
Is there any other uncontroversial case like that?
I also recall you and ciphergoth going hammer-and-tongs over that for ages, but not what the outcome was.
The outcome showed that Aumann was wrong, mostly.