Either charities like the Gates Foundation and Good Ventures are hoarding money at the price of millions of preventable deaths
Before reading this post, I assumed this was the case. Given that, is there still a reason to update away from the position that the GiveWell claim is basically correct?
For the rest of this comment, let's suppose the true cost of saving a life through GiveWell's top charities is $50,000. I don't think anything about Singer's main point changes.
For one, it's my understanding that decreasing animal suffering is at least an order of magnitude more cost-effective than decreasing human suffering. If the arguments you make here apply equally to that (which I don't think they do), and we take the above number, that's $5,000 for a benefit as large as one life saved, which is still cheap enough for Singer's argument to go through (see the sketch below).
Second, I don't think your arguments apply to existential-risk prevention; and even if they did, and we discounted effectiveness there by an order of magnitude, Singer's argument would still go through given my priors.
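To make the arithmetic in the first point explicit, here is a minimal sketch. The $50,000 figure and the 10x animal-welfare multiplier are this comment's assumptions, not established numbers.

```python
# Minimal sketch of the cost-effectiveness arithmetic above.
# Both inputs are assumptions from this comment, not established figures.
cost_per_life_usd = 50_000   # supposed cost to save a life via GiveWell top charities
animal_multiplier = 10       # supposed: animal-welfare work ~10x more effective

cost_per_life_equivalent = cost_per_life_usd / animal_multiplier
print(f"${cost_per_life_equivalent:,.0f} per benefit as large as one life saved")
# -> $5,000 per benefit as large as one life saved
```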
I notice that I'm very annoyed at your offhand link to the article about OpenAI, with the claim that they're doing the opposite of what the argument justifying the intervention recommends. It's my understanding that the article, though plausible at the time, was very speculative and has been falsified since it was written. In particular, OpenAI has pledged not to take part in an arms race under reasonable conditions, which directly contradicts one of the article's points. Quote:
Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be “a better-than-even chance of success in the next two years.”
That, and they seem to have an ethics board with significant power (based on their decision not to release the full version of GPT-2). I believe they have also said that they won't publish capability results in the future, which likewise contradicts one of the article's main concerns (which, again, was reasonable at the time). Please either reply or amend your post.