I guess I’m just not currently seeing the arguments for those things (though I may just be confused somehow). It seems more like you’re trying to lob the burden-of-proof tennis ball into Pogge’s court: AI “might” turn out to be as good as the scenario he assents to (a 50% chance of permanently ending world poverty if we’re uncharitable for 30 years), so it’s Pogge’s job to show that AI is probably not like that scenario.
Right, I hear you. I deliberately avoid engaging with arguments about the likelihood of the Singularity itself, instead passing the reader off to treatments written specifically for that purpose, like Chalmers’ paper and lukeprog’s site.
If I can do one thing with the paper, I’d just like Pogge to feel that he needs to address the possibility of the Singularity somehow, even if only by browsing singinst.org.
Thanks.