I’m not private_messaging, but I think he has a marginally valid point, even though I disagree with his sensational style.
I personally would estimate FinalState’s chances of building a working AGI at approximately epsilon, given the total absence of evidence. My opinion doesn’t really matter, though, because I’m just some guy with a LessWrong account.
The SIAI folks, on the other hand, have made it their mission in life to prevent the rise of un-Friendly AGI. Thus, they could make FinalState’s life difficult in some way, in order to fulfill their core mission. In effect, FinalState’s post could be seen as a Pascal’s Mugging attempt vs. SIAI.
The social and opportunity costs of trying to suppress a “UFAI attempt” as implausible as FinalState’s are far higher than the risk of failing to do so. There are also decision-theoretic reasons never to give in to Pascal-Mugging-type offers. SIAI knows all this and therefore will ignore FinalState completely, as well they should.
The social and opportunity costs of trying to suppress a “UFAI attempt” as implausible as FinalState’s are far higher than the risk of failing to do so.
I think that depends on what level of suppression one is willing to employ, though in general I agree with you. FinalState has admitted to being a troll, but even if he were an earnest crank, the magnitude of the expected value of his work would still be quite small, even when you do account for SIAI’s bias.
There are also decision-theoretic reasons never to give in to Pascal-Mugging-type offers
What are they, out of curiosity? I think I missed that part of the Sequences...
It’s not in the main Sequences, it’s in the various posts on decision theory and Pascal’s Mugging. I hope our resident decision theory experts will correct me if I’m wrong, but my understanding is this. If an agent is of the type that gives in to Pascal’s Mugging, then other agents who know that have an incentive to mug them. If all potential muggers know that they’ll get no concessions from an agent, they have no incentive to mug them. I don’t think this covers “Pascal’s Gift” scenarios where an agent is offered a tiny probability of a large positive utility, but it covers scenarios involving a small chance of a large disutility.
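To make the incentive structure concrete, here’s a toy sketch of the argument above (my own illustration, not anything from the decision theory posts; the policy names and payoff numbers are made up):

```python
# Toy model: muggers only bother mugging agents whose known policy is to pay out,
# so a precommitted "never pay" policy removes the incentive to mug at all.
# All numbers are hypothetical and chosen purely for illustration.

MUGGING_COST = 1   # hypothetical cost to the mugger of making the threat
PAYOUT = 10        # hypothetical amount a capitulating victim hands over

def mugger_attempts(victim_policy: str) -> bool:
    """The mugger mugs only if the victim's known policy makes it profitable."""
    expected_gain = PAYOUT if victim_policy == "always_pay" else 0
    return expected_gain > MUGGING_COST

def victim_loss(victim_policy: str) -> int:
    """Loss to the victim, given that muggers respond to the victim's known policy."""
    if mugger_attempts(victim_policy):
        return PAYOUT if victim_policy == "always_pay" else 0
    return 0

if __name__ == "__main__":
    for policy in ("always_pay", "never_pay"):
        print(policy, "-> mugged:", mugger_attempts(policy), "loss:", victim_loss(policy))
    # always_pay -> mugged: True  loss: 10
    # never_pay  -> mugged: False loss: 0
```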
FinalState has admitted to being a troll
I’m not sure that that is in fact an admission of being a troll… it reads as fairly ambiguous to me. Do other people have readings on this?