I don’t think their numbering between those asserts a ranking.
I think defining outputs in technically precise ways isn’t very useful in this case (I think it obfuscates the thrust of the argument). I’m saying something more like ‘these charity selection procedures look roughly like ranking charities on weighted criteria and getting the top ones funded, even though GiveWell staff don’t literally state on their webpage that the top charity is their no. 1 charity.’
Most of the AI safety discourse comes from the hacking perspective, since it grows out of general security thinking, where you have to think through all the layers of a problem and optimizing any single point of failure doesn’t bring you safety.
GiveWell doesn’t have a metric by which it decides whether one charity is ranked above or below another.
Its charity recommendations are instead based on the assumption that different people value different things. Some people might want the more uncertain deworming intervention, which potentially has a very large effect, while others want bednets, where the effects are much clearer.
That’s exactly the alternative to weighting all the criteria explicitly and ranking the charities: leaving the weighting of the different criteria implicit and allowing the reader to do their own weighting.
I do think whether you have an explicit weighting or a more intuitive, implicit one makes for a significantly different approach.
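To make that difference concrete, here’s a toy sketch in Python (the charities, criteria, and weights are all made up, not GiveWell’s actual numbers). The first function is what explicit weighted ranking would look like; the second just publishes the per-criterion evaluations and leaves the weighting to the reader:

```python
# Toy illustration only: charities, criteria and scores are invented.
criteria_scores = {
    # charity -> score per criterion on some 0-10 scale
    "Charity A": {"evidence": 9, "cost_effectiveness": 6, "room_for_funding": 7},
    "Charity B": {"evidence": 5, "cost_effectiveness": 9, "room_for_funding": 8},
    "Charity C": {"evidence": 7, "cost_effectiveness": 7, "room_for_funding": 4},
}

def explicit_ranking(scores, weights):
    """Explicit weighting: collapse all criteria into one number and rank by it."""
    totals = {
        charity: sum(weights[criterion] * value
                     for criterion, value in per_criterion.items())
        for charity, per_criterion in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

def reader_weighted_view(scores):
    """Implicit weighting: publish the per-criterion evaluations and let each
    reader apply their own weights before deciding where to give."""
    return scores  # no single ranking is asserted here

# Different donors' weights produce different rankings:
print(explicit_ranking(criteria_scores,
                       {"evidence": 0.5, "cost_effectiveness": 0.3, "room_for_funding": 0.2}))
# -> ['Charity A', 'Charity B', 'Charity C']
print(explicit_ranking(criteria_scores,
                       {"evidence": 0.1, "cost_effectiveness": 0.7, "room_for_funding": 0.2}))
# -> ['Charity B', 'Charity A', 'Charity C']
```

The point of the second function is that no single ranking is baked in; which charity comes out on top depends entirely on the weights the reader brings.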
Sounds like a fair distinction. At the very least, GiveWell has been leaving room for their analysts and readers to plug in how they personally value trade-offs between different desiderata (e.g. increases in yearly income vs. DALYs), as well as their own ways of judging the effectiveness of a charity and its intervention. Just scanning through their sheet again: https://docs.google.com/spreadsheets/d/16XOOB1oWse1ICbF0OVXUYtwWwpvG3mxAAQ6LYAAndQU/edit
Having said that, there are hundreds of charities that haven’t passed through their vetting, much of which I assume involved checking whether those charities measured up in cost-effectiveness against some set of metrics.
Even acknowledging the nuances you point out, it’s hard for me not to see the shared analytical processes that GiveWell staff use as softened versions of ranked optimisation.
‘Ranked’ is a word with a clear meaning, and it’s not what they are doing.
GiveWell writes long charity evaluations and then, after having made them, assesses the charities by taking all the quantitative and qualitative information from those reports together and making a judgement about which charities perform better on their criteria.
If you look at the scores in the linked document, you see that GiveDirectly scores an order of magnitude worse in the “against cash” results tab. Yet GiveDirectly is still one of the recommended charities. It wouldn’t be if charities were simply ranked by the metric in the results tab.
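To spell out that counterfactual (with invented numbers; only the rough order-of-magnitude gap reflects the linked sheet), a literal rank-and-take-the-top procedure on that single metric would never surface the cash baseline:

```python
# Invented numbers for illustration; only the rough order-of-magnitude gap
# between GiveDirectly (the cash baseline) and the rest reflects the sheet.
cost_effectiveness_vs_cash = {
    "GiveDirectly": 1.0,   # cash transfers are the baseline
    "Charity X": 8.0,
    "Charity Y": 10.0,
    "Charity Z": 12.0,
}

# Pure ranked optimisation: sort by the one metric and fund the top 3.
top_3 = sorted(cost_effectiveness_vs_cash,
               key=cost_effectiveness_vs_cash.get,
               reverse=True)[:3]

print(top_3)                    # ['Charity Z', 'Charity Y', 'Charity X']
print("GiveDirectly" in top_3)  # False -- yet GiveWell does recommend it
```

That GiveDirectly stays on the recommendation list anyway is exactly the sign that the list isn’t produced by sorting on that metric.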
Most of the time when you make a strawman out of something, the real thing is a softened version of the strawman. I don’t think that excuses strawmanning in any way when you claim to want a constructive discussion.
The extreme transparency that GiveWell maintains in order to be able to notice when they make mistakes is also a part of their operating philosophy that matters when you are talking about enabling democratic dialog. It’s again quite different from this article, which eschews any transparency. Of course you or the authors can say “we care about things besides transparency more”, but that’s still a value judgement that puts transparency lower on the list of priorities. I also think that authors who allow you to publish their work without attribution share responsibility for their work being published without attribution.
Nice! This was clarifying, thanks.