If I’m reading correctly, the argument you appear to present in your paper is:
We (Thomas Pogge) want to end poverty.
An AI could end poverty.
Therefore, we should build an AI.
This isn’t a strong argument. Probably Pogge thinks that ending poverty is perfectly feasible without building AI, so if you want to change his mind, you need to show that an AI solution can likely be implemented faster than a non-AI one in addition to being sufficiently safe.
It seems like your paper just sets out to establish that there might be some strong arguments in the vicinity for Singularity activism as a response to global poverty, without trying very hard to spell them out.
Thanks for the feedback—I appreciate it.
I was actually trying for a stronger claim—that AI (as a permanent solution that takes some time to develop) is better than institutional work or humanitarian aid (which has a lot of downsides) for ending poverty. More generally, I want to show that AI dominates other strategies of moral action because of its tremendous scope, despite (a) its uncertainty, (b) its focus on future people, and (c) its risks of bad consequences.
Your charge of vagueness is worth considering as well, though perhaps I’ll just need to apply it to future writing. I’ll get back to work. Thanks again.
I guess I’m just not currently seeing the arguments for those things (though I may just be confused somehow). It seems more like you’re trying to lob the burden-of-proof tennis ball into Pogge’s court: AI “might” turn out to be as good as the scenario he assents to (a 50% chance of permanently ending world poverty if we’re uncharitable for 30 years), so it’s Pogge’s job to show that AI is probably not like that scenario.
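For concreteness, that scenario can be put through a minimal expected-value sketch. Only the 50% chance and the 30-year delay come from the thread; the time horizon and the fraction of the poverty burden that ongoing aid relieves are hypothetical placeholders, not figures from the paper.

```python
# Toy expected-value comparison, measured in "burden-years of poverty
# averted" (one fully poverty-free year for the world = 1.0).
# From the thread: 50% chance of permanently ending poverty, at the
# cost of 30 "uncharitable" years. Everything else is a placeholder.

P_SUCCESS = 0.5        # chance the AI route permanently ends poverty
DELAY_YEARS = 30       # years of forgone aid while pursuing it
HORIZON_YEARS = 1000   # hypothetical: how far ahead benefits are counted
AID_RELIEF = 0.1       # hypothetical: fraction of the yearly poverty
                       # burden that ongoing aid relieves

aid_route = AID_RELIEF * HORIZON_YEARS
ai_route = P_SUCCESS * (HORIZON_YEARS - DELAY_YEARS)

print(f"aid route: {aid_route:.0f} burden-years averted")
print(f"AI route:  {ai_route:.0f} burden-years averted")
```

With these placeholders the AI route wins (485 vs. 100 burden-years), but shrinking the horizon or the success probability flips the comparison, which is exactly where the burden-of-proof dispute lives.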
Right, I hear you. I definitely try to avoid dealing directly with arguments about the likelihood of the Singularity—hopefully passing the reader off to treatments created specifically for that purpose, like Chalmers’ paper and lukeprog’s site.
If I can do one thing with the paper, I’d just like for Pogge to feel that he needs to address the possibility of the Singularity somehow, even if it’s just by browsing singinst.org.
Thanks.
I was actually trying for a stronger claim—that AI (as a permanent solution that takes some time to develop) is better than institutional work or humanitarian aid

Have you considered diminishing returns? We have more resources available to us than are currently useful for pursuing AGI. Would you argue that we should let those resources go fallow, rather than work to mitigate ongoing problems during the period before our AGI efforts succeed, merely because it’s not as worthy a goal as AGI?
“Would” seems to be the word that is necessary there!
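For what it’s worth, the diminishing-returns point can be pictured with a toy allocation model. This is a hypothetical sketch assuming concave (logarithmic) returns to funding in both causes; the budget and weights are invented for illustration, not taken from the thread.

```python
import math

# Hypothetical model: logarithmic returns to funding in both causes,
# with AGI work weighted 3x per log-unit. Under diminishing returns,
# the value-maximizing split funds BOTH causes rather than letting
# the surplus go fallow.

BUDGET = 100.0            # arbitrary units of resources
W_AGI, W_AID = 3.0, 1.0   # hypothetical weights, not from the thread

def total_value(x_agi: float) -> float:
    """Value of putting x_agi into AGI and the remainder into aid."""
    return W_AGI * math.log1p(x_agi) + W_AID * math.log1p(BUDGET - x_agi)

# Brute-force the best split over a 0.1-unit grid.
value, x_agi = max((total_value(i / 10), i / 10)
                   for i in range(int(BUDGET * 10) + 1))
print(f"best split: {x_agi:.1f} to AGI, {BUDGET - x_agi:.1f} to aid")
```

Even with a 3x weight on AGI, the optimum here sends roughly a quarter of the budget to mitigation: past some point, the last unit given to AGI does less good than the first unit given to aid.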