The social and opportunity costs of trying to suppress a “UFAI attempt” as implausible as FinalState’s are far higher than the risk of failing to do so. There are also decision-theoretic reasons never to give in to Pascal-Mugging-type offers. SIAI knows all this and therefore will ignore FinalState completely, as well they should.
The social and opportunity costs of trying to suppress a “UFAI attempt” as implausible as FinalState’s are far higher than the risk of failing to do so.
I think that depends on what level of suppression one is willing to employ, though in general I agree with you. FinalState has admitted to being a troll, but even if he were an earnest crank, the magnitude of the expected value of his work would still be quite small, even after accounting for SIAI’s bias.
There are also decision-theoretic reasons never to give in to Pascal-Mugging-type offers.
What are they, out of curiosity? I think I missed that part of the Sequences...
What are they, out of curiosity? I think I missed that part of the Sequences...
It’s not in the main Sequences; it’s in the various posts on decision theory and Pascal’s Mugging. I hope our resident decision theory experts will correct me if I’m wrong, but my understanding is this: if an agent is of the type that gives in to Pascal’s Mugging, then other agents who know that have an incentive to mug them. If all potential muggers know that they’ll get no concessions from an agent, they have no incentive to mug them. I don’t think this covers “Pascal’s Gift” scenarios, where an agent is offered a tiny probability of a large positive utility, but it does cover scenarios involving a small chance of a large disutility.
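To make the incentive argument concrete, here is a deliberately simple toy sketch (my own illustration with made-up numbers, not something taken from those posts): it assumes muggers only bother approaching agents whose policy is known to be “pay up”, so the exploitable policy loses a little on every encounter while the committed refuser is never approached at all.

    # Toy model: muggers are assumed to approach only agents known to concede.
    PAYOUT_DEMANDED = 5    # made-up cost of conceding to one mugging
    OPPORTUNITIES = 100    # made-up number of chances for would-be muggers

    def total_loss(policy_concedes: bool) -> int:
        loss = 0
        for _ in range(OPPORTUNITIES):
            mugger_shows_up = policy_concedes  # no concessions -> no incentive to show up
            if mugger_shows_up and policy_concedes:
                loss += PAYOUT_DEMANDED
        return loss

    print(total_loss(True))   # 500: the exploitable agent is bled repeatedly
    print(total_loss(False))  # 0: refusal as a known policy removes the incentive

Obviously the interesting work is in how muggers come to know the agent’s policy, which is what the actual decision-theory posts discuss; the numbers above are just placeholders.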
FinalState has admitted to being a troll, but even if he were an earnest crank, the magnitude of the expected value of his work would still be quite small, even after accounting for SIAI’s bias.
I’m not sure that is in fact an admission of being a troll; it reads as fairly ambiguous to me. Do other people have readings on this?