Hey Brendan, welcome to LessWrong. I have some disagreements with how you relate to the possibility of human extinction from AI in your earlier essay (which I regret missing at the time it was published). In general, I read the essay as treating each “side” approximately as an emotional stance one could adopt, and arguing that the “middle way” stance is better than being either an unbridled pessimist or optimist. But it doesn’t meaningfully engage with arguments for why we should expect AI to kill everyone, if we continue on the current path, or even really seem to acknowledge that there are any. There are a few things that seem like they are trying to argue against the case for AI x-risk, which I’ll address below, alongside some things that don’t seem like they’re intended to be arguments about this, but that I also disagree with.
But rationalism ends up being a commitment to a very myopic notion of rationality, centered on Bayesian updating with a value function over outcomes.
I’m a bit sad that you’ve managed to spend a non-trivial amount of time engaging with the broader rationalist blogosphere and related intellectual outputs, and decided to dismiss it as myopic without either explaining what you mean (what would be a less myopic version of rationality?) or supporting the claim (what is the evidence that led you to think that “rationalism”, as it currently exists in the world, is the myopic and presumably less useful version of the ideal you have in mind?). How is one supposed to argue against this? Of the many possible claims you could be making here, I think most of them are very clearly wrong, but I’m not going to spend my time rebutting imagined arguments, and instead suggest that you point to specific failures you’ve observed.
An excessive focus on the extreme case too often blinds the long-termist school from the banal and natural threats that lie before us: the feeling of isolation from hyper-stimulating entertainment at all hours, the proliferation of propaganda, the end of white-collar jobs, and so forth.
I am not a long-termist, but I have to point out that this is not an argument that the long-termist case for concern is wrong. Also, the claim itself is wrong, or at least deeply contrary to my experience: the average long-termist working on AI risk has probably spent more time thinking about those problems than 99% of the population.
EA does this by placing arguments about competing ends beyond rational inquiry.
I think you meant to make a very different claim here, as suggested by part of the next section:
However, the commonsensical, and seemingly compelling, focus on ‘effectiveness’ and ‘altruism’ distracts from a fundamental commitment to certain radical philosophical premises. For example, proximity or time should not govern other-regarding behavior.
Even granting this for the sake of argument (though in reality very few EAs are strict utilitarians in terms of impartiality), this would not put arguments about competing ends beyond rational inquiry. It’s possible you mean something different by “rational inquiry” than my understanding of it, of course, but I don’t see any further explanation or argument about this pretty surprising claim. “Arguments about competing ends by means of rational inquiry” is sort of… EA’s whole deal, at least as a philosophy. Certainly the “community” fails to live up to the ideal, but it at least tries a fair bit.
When EA meets AI, you end up with a problematic equation: even a tiny probability of doom x negative infinity utils equals negative infinity utils. Individual behavior in the face of this equation takes on cosmic significance. People like many of you readers–adept at subjugating the world with symbols–become the unlikely superheroes, the saviors of humanity.
It is true that there are many people on the internet making dumb arguments in support of basically every position imaginable. I have seen people make those arguments. But Pascalian multiplication by infinity is not the “core argument” for why extinction risk from AI is an overriding concern, not for rationalists, not for long-termists, not for EAs. I have not met anybody working on mitigating AI risk who thinks our unconditional risk of extinction from AI is under 1%, and most people’s estimates fall between 5% and ~99.5%. Importantly, those estimates are driven by specific object-level arguments based on their beliefs about the world and predictions about the future, e.g. how capable future AI systems will be relative to humans, what sorts of motivations they will have if we keep building them the way we’re building them, etc. I wish your post had spent time engaging with those arguments instead of knocking down a transparently silly reframing of Pascal’s Wager that no serious person working on AI risk would agree with.
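To make the contrast concrete, here is a minimal sketch with made-up illustrative numbers (not anyone’s actual estimates): the Pascalian caricature only goes through because an unbounded payoff rescues a negligible probability, whereas the arguments people actually make start from a non-negligible probability, so ordinary finite expected-value reasoning already carries the weight.

```python
# Toy illustration only; the numbers are invented for the sake of contrast.

def expected_cost(p_doom: float, cost_of_doom: float) -> float:
    """Expected cost under a simple probability-times-stakes model."""
    return p_doom * cost_of_doom

# The Pascalian caricature: a negligible probability "rescued" by an
# infinite payoff. Drop the infinity and the conclusion evaporates.
pascalian = expected_cost(1e-9, float("inf"))  # -> inf

# The shape of the actual disagreement: a non-negligible probability
# (10% here, comfortably inside the 5%-99.5% range mentioned above)
# times very large but finite stakes (roughly 8 billion lives).
object_level = expected_cost(0.10, 8e9)  # -> 8e8 expected deaths

print(pascalian, object_level)
```

No infinities are doing any work in the second calculation; the disagreement is entirely about whether the probability really is that high, which is an object-level empirical question.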
Unlike the pessimistic school, the proponents of a more techno-optimistic approach begin with gratitude for the marvelous achievements of the modern marriage of science, technology, and capitalism.
This is at odds with your very own description of rationalists just a thousand words prior:
The tendency of rationalism, then, is towards a so-called extropianism. In this transhumanist vision, humans transcend the natural limits of suffering and death.
Granted, you do not explicitly describe rationalists as grateful for the “marvelous achievements of the modern marriage of science, technology, and capitalism”. I am not sure if you have ever met a rationalist, but around these parts I hear “man, capitalism is awesome” (basically verbatim) and similar sentiments often enough that I’m not sure how we continue to survive living in Berkeley unscathed.
Though we sympathize with the existential risk school in the concern for catastrophe, we do not focus only on this narrow position. This partly stems from a humility about the limitations of human reason—to either imagine possible futures or wholly shape technology’s medium- and long-term effects.
I ask you to please at least try engaging with object-level arguments before declaring that reasoning about the future consequences of one’s actions is so difficult as to be pointless. After all, you don’t actually believe that: you think that your proposed path will have better consequences than the alternatives you describe. Why so?