The ‘hackers rather than engineers’ point is interesting—what are some strong examples of people in the community acting like the former rather than the latter?
I actually think GiveWell analysts’ approach of filling in their estimates for each charity in a spreadsheet (or Charity Entrepreneurship’s intervention selection processes) looks a lot like ranked optimisation. It’s not as robotic as the authors made it sound, though (e.g. the cluster thinking approach).
Agreed re: how we pursue rationality training.
The point on aligning between different stakeholders is a good one, though I actually think the community has neglected that kind of multi-agent analysis in the past (e.g. see Critch’s recent post), and the arguments I have seen seem premised on abstractions and metrics of success that make obvious sense to us (as far as I can tell, they involved few real-life human-to-human conversations aimed at interpreting how other stakeholder groups perceive the world differently and at building a broader consensus with them).
For the other points (e.g. LessWrong conversations being open, save some intellectual capacity needed to engage), it might be better to chat back and forth about them in a call some time! To me, most of them seem to capture a narrowly relevant perspective, but then jump to a ‘therefore this written sentence is clearly wrong’ conclusion from there.
One clarification – I decided myself not to mention the authors’ names, so that is on me. Two reasons: it might have provoked more instinctive negative reactions (and assumptions) about the authors, and I had also made edits that the authors didn’t have the spare time to look through properly.
GiveWell currently recommends nine charities. I don’t think their numbering of those asserts a ranking.
If you ranked the charities based on an optimization metric, the difference between the 9th-ranked and the 10th-ranked charity would likely be similar to the difference between the 4th-ranked and the 5th-ranked one.
Rather, GiveWell gives a list of charities that it considers worthy of donations, and it’s then up to the donors to pick from that list the ones that feel best to them. Of the available ways of going about recommending charities to people, that’s not the one that favors ranks. They easily could have chosen ranks but decided against it.
I don’t remember any significant post on LessWrong that ranks solutions to a problem, or one that recommends that you should rank solutions.
The ‘hackers rather than engineers’ point is interesting—what are some strong examples of people in the community acting like the former rather than the latter?
Most of the AI safety discourse comes from the hacking perspective, as it grows out of general security thinking, where you have to think through all the layers of a problem and optimizing any single point of failure doesn’t bring you safety.
A lot of the rest of the x-risk debate also comes from a safety perspective whose paradigms come out of the hacker movement.
GiveWell might sit more in the middle between the two poles, but as far as I understand, a GiveWell report tries to tell you everything that’s worthwhile to know about a given charity, not just the things that focus on the metrics.
I don’t think there’s anyone I would see as a core part of the rationalist community who’s more engineer than hacker.
I don’t think their numbering of those asserts a ranking.
I think defining outputs in technically precise ways isn’t very useful in this case (I think it obfuscates the thrust of the argument). I’m saying something more like ‘these charity selection procedures look roughly like ranking charities on weighted criteria and getting the top ones funded, even though GiveWell staff don’t literally mention that the top charity on their webpage is their no. 1 charity.’
Most of the AI safety discourse comes from the hacking perspective, as it grows out of general security thinking, where you have to think through all the layers of a problem and optimizing any single point of failure doesn’t bring you safety.
Nice! This was clarifying, thanks.
GiveWell doesn’t have a metric based on which it decides whether charity 1 is ranked above or below charity 2.
Its charity recommendations are instead based on the assumption that different people value different things. Some people might want the more uncertain intervention of deworming, which potentially has a very large effect, while others want bed nets, where the effects are much clearer.
That’s exactly the alternative to weighting all the criteria explicitly and ranking the charities. It leaves the weighting of the different criteria implicit and allows the reader to do their own weighting.
I do think that having an explicit weighting versus a more intuitive, implicit one is a significantly different approach.
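To make that contrast concrete, here is a minimal Python sketch of the two approaches, with made-up charities, criteria scores, and weights (none of this is GiveWell’s actual data or procedure):

```python
# Toy criteria scores for hypothetical charities (illustrative numbers only).
charities = {
    "Charity A": {"cost_effectiveness": 0.9, "evidence_strength": 0.4, "room_for_funding": 0.7},
    "Charity B": {"cost_effectiveness": 0.6, "evidence_strength": 0.9, "room_for_funding": 0.5},
    "Charity C": {"cost_effectiveness": 0.7, "evidence_strength": 0.7, "room_for_funding": 0.8},
}

def rank(weights):
    """Rank charities by a weighted sum of their criteria scores."""
    def score(name):
        return sum(weights[k] * charities[name][k] for k in weights)
    return sorted(charities, key=score, reverse=True)

# Explicit weighting ('ranked optimisation'): the evaluator fixes the weights
# and the top-ranked charities get funded.
print(rank({"cost_effectiveness": 0.5, "evidence_strength": 0.3, "room_for_funding": 0.2}))

# Implicit weighting: publish the per-criterion scores and let each donor
# apply their own weights, which can produce a different ordering.
print(rank({"cost_effectiveness": 0.2, "evidence_strength": 0.6, "room_for_funding": 0.2}))
```

The same scores support different orderings depending on who supplies the weights, which is the distinction being drawn here.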
Sounds like a fair distinction. At the very least, GiveWell has been leaving room for their analysts and readers to personally plug in how they value trade-offs between different desiderata (e.g. increases in yearly income vs. DALYs), as well as their own ways of judging the effectiveness of a charity and its intervention. Just scanning through their sheet again: https://docs.google.com/spreadsheets/d/16XOOB1oWse1ICbF0OVXUYtwWwpvG3mxAAQ6LYAAndQU/edit
Having said that, there are hundreds of charities that haven’t passed through their vetting, and for many of them I assume that vetting involved checking whether they measured up in cost-effectiveness against some set of metrics.
Even acknowledging the nuances you point out, it’s hard for me not to see the shared analytical processes that GiveWell staff use as softened versions of ranked optimisation.
‘Ranked’ is a word with a clear meaning, and it’s not what they are doing.
GiveWell writes long charity evaluations and then, after having made them, evaluates the charities by taking all the quantitative and qualitative information from their reports together and making a judgement about which charities perform better on their criteria.
If you look at the scores in the linked document, you see that GiveDirectly scores an order of magnitude worse in the “against cash” results tab. Yet GiveDirectly is still one of the recommended charities. It wouldn’t be if the charities were simply ranked by the metric in that results tab.
Most of the time when you make a strawman out of something, the real thing is a softened version of the strawman. I don’t think that excuses strawmanning in any way when you assert that you want to have a constructive discussion.
The extreme transparency that GiveWell maintains in order to notice when they make mistakes is also a part of their operating philosophy that matters when you are talking about enabling democratic dialog. It’s again quite different from this article, which eschews any transparency. Of course you or the authors can say “we care about things besides transparency more”, but that’s still a value judgement that puts transparency lower on the list of priorities. I also think that authors who allow you to publish their work without attribution do share responsibility for it being published without attribution.
Thanks for the points.