There’s a bunch I agree with, like that democracy is just one system for aggregating preferences and ideas, that AI safety has become more accepted as a field by academia over time, that the article makes vaguely pejorative associations, and that Yarvin doesn’t appear to argue particularly from LW premises (nor has he played any key role in this community, as Christian pointed out).
Hope you don’t mind me playing devil’s advocate on your elaborate and thoughtful comment. I’ll keep it brief, but can expound later.
seem to assume that the case for democracy rests on a strong belief in a high and somewhat equally distributed level of rationality and competence on the part of the general public
So my interpretation of the article (I haven’t asked the original authors) is more that it’s saying rationalists judge the worth of an idea, or someone’s thinking competence, along too narrow a set of dimensions. And in doing so, fail to integrate outsiders’ perspectives in ways that could move the community out of local optima in its decision-making.
Using precise language doesn’t mean ignoring imprecisely-phrased concerns
IMO, some of the comments on this very article are strong examples of people failing to pick up on imprecisely-phrased concerns.
As far as I can see, there’s tonnes of rationalist discussion on these issues.
I have also seen rationalist discussions of issues mentioned in this article – e.g. social media recommendation algorithms and filter bubbles. But that is different from actually engaging with and interpreting outsiders’ concerns about those issues.
NRx isn’t rationalism, not even a bitty bit.
I’m trying to triangulate between people’s impressions here, since I’ve only briefly visited Berkeley (so not geographically SV, but I think the authors were really referring to technologists around the San Francisco Bay Area, broadly construed).
Two counterarguments:
1. NRx is a tiny community, so the fact that many rationalists even know about them, have apparently sometimes shared board seats with them, occasionally read their work, and have actively tried to keep them off the forum indicates that neo-reactionaries are much more connected to the rationalist community than to most other communities out there.
2. Some rationalist figures have a similarly undemocratic mindset in pushing for systemic changes premised on their own chains of reasoning – e.g. Hanson on implementing prediction markets / futarchy, while arguing that others don’t take his arguments seriously because doing so would go against signalling to their tribe.
I really don’t quite know what to do with a criticism of the movement that claims we’re the trigger-happy optimisers.
So the writing does seem to attribute naive ways of optimising stuff to rationality. But that wasn’t the core of the criticism. The core critique was that we tend to distill societal issues into optimisation problems, whether or not we see optimisation as good or bad (e.g. thinking about how to do the most good; worrying about Goodharting or an instrumentally converging AGI). And this view has its own implicit set of assumptions baked in.
Similarly, we talk a relatively large amount about what drives human (and, come to think of it, artificial) intelligence (including in this comment thread), and about genetic factors among those drivers, even though the discussions I’ve read seemed a lot more nuanced and socially considerate than the authors let on. Edit: I also agree that the article conflates the g factor with what we would call epistemic rationality, and that although the community prizes the former to some extent too, rationalists do clearly distinguish the two, encouraging diligent practice in refining judgement for the latter.