What is more important is what specifically links SV’s apparent rationalism to NRx attitudes. Given the way technologists’ dreams are increasingly shaping our future, we have a right to know what these dreams hold.
Having open discussions on LessWrong seems to me a very democratic way of going about discussing dreams of the future. There’s some hurdle of needing a certain amount of intellectual capacity to engage with it, but it’s very different from doing things in a closed way.
It’s quite ironic that the authors of this article don’t stand by it with their own names, which tells you a lot about what value they put on transparency in practice.
Tetlock found that conventional experts tied to a particular domain were overrated in their ability to predict the future. Instead, open-minded generalists could train to become “superforecasters” enabling them to forecast a future event relatively accurately by thinking logically about similar cases in the past.
Tetlock’s work seems to me to have a very democratizing conclusion. Tetlock found that it doesn’t take deep subject-matter experience or Mensa-level intelligence to make good predictions. GJOpen is in itself a very democratic endeavour.
Tetlock did find that thinking about similar cases in the past is part of being good at forecasting, but I don’t know what the word “logically” does in that sentence.
If you wanted to strawman Tetlock, I’m not sure there’s a way to do it better than what this article does.
I have the impression that the author thinks that “a normal person can learn to forecast better than domain experts” somehow implies being undemocratic, because they see “democracy” as being about listening to them and their domain-expert friends.
Rationalists weren’t simply more tolerant of contrarian arguments about, for example, racial or gendered differences in intelligence or capacity to learn and work. [...] These core disagreements reflected and helped shape how SV thought about politics.
To me that looks like either purposeful strawmanning or incompetence. Lawrence Summers focused on those claims over at Harvard. SV political discourse in recent years was mainly about differences in motivation, not gendered differences in intelligence or capacity to learn and work.
Engineers, then, tend to think of their work as a series of “optimization problems,” transforming apparently complex situations into an “objective function,” which ranks possible solutions, given the tradeoffs and constraints, to find the best one and implement it.
I don’t see that anywhere. Even GiveWell doesn’t have a charity ranking but uses other ways to present their charity evaluations.
As a whole, rationalists are much more hackers than they are engineers. Part of being a hacker is constantly switching between layers of abstraction, rather than staying with one layer and focusing only on it.
More in the Silicon Valley space, there’s a reason that YC focuses on telling people to talk to their users. Having metrics is important, but anyone who thinks that a good Silicon Valley startup is just about optimizing metrics is missing a big part of the picture.
An analysis that pretends that there are no hackers in Silicon Valley and completely ignores the hacking ideology is going to do a very poor job at capturing what Silicon Valley is about.
To come back to the rationality community, the idea that the rationalist community focuses on optimization completely misunderstands it. We don’t have clear metrics for rationality that we could optimize. I think it would be great if we had such a metric to test our rationality training, but so far we haven’t pursued rationality training as an optimization problem.
Inadequate Equilibria came to the conclusion that most of the bigger problems we have in the world aren’t optimization problems but are about finding ways to align incentives between different agents or to build coalitions.
Curtis Yarvin, who had played a significant role in the early days of LessWrong
That would be news to me and I have been around LessWrong for a long time.
However, sometimes Rationalism didn’t counter the biases of its practitioners or speed their convergence on learning wisdom available in other traditions…
This is wrong. You find plenty of posts about Circling or meditation on LessWrong. Rationalists are one of the communities that did manage to adopt hand signs for approval after it turned out that this is a very useful piece of cultural technology.
The ‘hackers rather than engineers’ point is interesting—what are some strong examples of people in the community acting like the former rather than the latter?
I actually think GiveWell analysts’ approach of filling in their estimates for each charity in a spreadsheet (or Charity Entrepreneurship’s intervention selection processes) looks a lot like ranked optimisation. It’s not as robotic as the authors made it sound, though (e.g. the cluster thinking approach).
Agreed re: how we pursue rationality training.
The point on aligning between different stakeholders is a good one, though I actually think the community has neglected that kind of multi-agent analysis in the past (e.g. see Critch’s recent post), and arguments I have seen seem premised on abstractions and metrics of success that make obvious sense to us (as far as I can tell, they involved few real-life human-to-human conversations trying to interpret how other stakeholder groups perceive the world differently and building a broader consensus with them).
For the other points (e.g. LessWrong conversations being open, save some intellectual capacity needed to engage), it might be better to chat back and forth about them in a call some time! To me, most of them seem to capture a narrowly relevant perspective, but then jump to a ‘therefore this written sentence is clearly wrong’ conclusion from there.
One clarification – I decided myself not to mention the authors’ names. So that is on me.
Two reasons: it might have provoked more instinctual negative reactions (and assumptions) about the authors, and also I had made edits that the authors didn’t have the spare time to look through well.
GiveWell currently recommends nine charities. I don’t think the numbering between those asserts a ranking.
If you ranked the charities based on an optimization metric, the difference between the 9th-ranked and the 10th-ranked charity would likely be similar to the difference between the 4th-ranked and the 5th-ranked one.
Rather, GiveWell gives a list of charities that it considers worthy of donations, and then it’s up to the donors to pick from that list the ones that feel best to them. Of the available ways of recommending charities to donors, that’s not the one that favors ranks. They easily could have chosen ranks but decided against it.
I don’t remember any significant post on LessWrong that ranks solutions to a problem, or one that recommends that you should rank solutions.
The ‘hackers rather than engineers’ point is interesting—what are some strong examples of people in the community acting like the former rather than the latter?
Most of the AI safety discourse comes from the hacking perspective, as it grows out of general security thinking, where you have to think through all the layers of a problem and where optimizing any single point of failure doesn’t bring you safety.
A lot of the rest of the x-risk debate is also more from a safety perspective, where the paradigms come out of the hacker movement.
GiveWell might be more in the middle between the two poles, but as far as I understand it, a GiveWell report tries to tell you everything that’s worth knowing about a given charity, not just the things that relate to the metrics.
I don’t think there’s anyone I would see as a core part of the rationalist community who is more engineer than hacker.
I don’t think the numbering between those asserts a ranking.
I think defining outputs in technically precise ways isn’t very useful in this case (I think it obfuscates the thrust of the argument). I’m saying something more like ‘these charity selection procedures look roughly like ranking charities on weighted criteria and getting the top ones funded, even though GiveWell staff don’t literally mention that the top charity on their webpage is their no. 1 charity.’
Most of the AI safety discourse comes from the hacking perspective, as it grows out of general security thinking, where you have to think through all the layers of a problem and where optimizing any single point of failure doesn’t bring you safety.
GiveWell doesn’t have a metric based on which it decides whether charity 1 is ranked above or below charity 2.
Its charity recommendations are instead based on the assumption that different people value different things. Some people might want the more uncertain intervention of deworming, which potentially has a very large effect, while others want bed nets, where the effects are much clearer.
That’s exactly the alternative to weighting all the criteria explicitly and ranking the charities. It’s leaving the weighting of the different criteria implicit and allowing the reader to do their own weighting.
I do think that having an explicit weighting versus a more intuitive, implicit one is a significantly different approach.
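To make the distinction concrete, here’s a minimal sketch in Python with made-up charities, criteria, scores, and weights – none of this is GiveWell’s actual model or data. Explicit weighting collapses everything into one score and publishes a single canonical ranking; implicit weighting publishes the per-criterion assessments plus a list of charities considered worth funding, and each reader applies their own weights or judgement.

```python
# Minimal sketch contrasting explicit and implicit weighting.
# Charities, criteria, scores, and weights are all made up for illustration.

charities = {
    "Charity A": {"cost_effectiveness": 8, "evidence_strength": 9, "room_for_funding": 5},
    "Charity B": {"cost_effectiveness": 9, "evidence_strength": 5, "room_for_funding": 8},
    "Charity C": {"cost_effectiveness": 6, "evidence_strength": 8, "room_for_funding": 9},
}

def weighted_score(criteria, weights):
    # Collapse the per-criterion scores into a single number.
    return sum(weights[name] * value for name, value in criteria.items())

# Explicit weighting: the evaluator fixes the weights and publishes one ranking.
evaluator_weights = {"cost_effectiveness": 0.5, "evidence_strength": 0.3, "room_for_funding": 0.2}
explicit_ranking = sorted(
    charities, key=lambda c: weighted_score(charities[c], evaluator_weights), reverse=True
)
print("Evaluator's ranking:", explicit_ranking)

# Implicit weighting: the evaluator only publishes the per-criterion assessments
# (and a list of charities it considers worth donating to); each reader applies
# their own weights, so different readers can reasonably land on different picks.
reader_weights = {"cost_effectiveness": 0.1, "evidence_strength": 0.2, "room_for_funding": 0.7}
readers_pick = max(charities, key=lambda c: weighted_score(charities[c], reader_weights))
print("This reader's pick:", readers_pick)
```

In this toy example the evaluator’s weights rank Charity A first, while a reader who cares mostly about room for funding ends up picking Charity C – which is the kind of outcome a single published ranking would hide.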
Sounds like a fair distinction. At the very least, GiveWell has been leaving room for their analysts and readers to personally plug in how they value trade-offs between different desiderata (e.g. increasing yearly income vs. DALYs), as well as ways for judging the effectiveness of a charity and its intervention. Just scanning through their sheet again: https://docs.google.com/spreadsheets/d/16XOOB1oWse1ICbF0OVXUYtwWwpvG3mxAAQ6LYAAndQU/edit
Having said that, there are hundreds of charities that haven’t passed through their vetting, which I assume in many cases involved checking whether those charities measured up in cost-effectiveness against some set of metrics.
Even in acknowledging the nuances you point out, it’s hard for me to not see the shared analytical processes that GiveWell staff use as softened versions of ranked optimisation.
“Ranked” is a word with a clear meaning, and it’s not what they are doing.
GiveWell writes long charity evaluations and then, after having made them, evaluates the charities by taking all the quantitative and qualitative information from their reports together and making a judgement about which charities perform better on their criteria.
If you look at the scores in the linked document, you see that GiveDirectly scores an order of magnitude worse in the “against cash” results tab. Yet GiveDirectly is still one of the recommended charities. It wouldn’t be if charities were just ranked by the metric in the results tab.
Most of the time when you make a strawman out of something, the real thing is a softened version of the strawman. I don’t think that excuses strawmanning in any way when you assert that you want to have a constructive discussion.
The extreme transparency that GiveWell maintains in order to notice when they make mistakes is also a part of their operating philosophy that matters when you are talking about enabling democratic dialog. That’s again quite different from this article, which eschews any transparency. Of course you or the authors can say “we care about things besides transparency more”, but that’s still a value judgement that puts transparency lower on the list of priorities. I also think that authors who allow you to publish their work without attribution share responsibility for their work being published without attribution.