Snippet from a discussion I was having with someone about whether current AI is net bad. Reproducing here because it’s something I’ve been meaning to articulate publicly for a while.
[Them] I’d worry that as it becomes cheaper, OpenAI, other enterprises, and consumers will just find new ways to use more of it. I think that ends up displacing more sustainable and healthier ways of interfacing with the world.
[Me] Sure, absolutely, Jevons paradox. I guess the question for me is whether that use is worth it, both to the users and in terms of negative externalities. As far as users go, I feel like people need to decide that for themselves. Certainly a lot of people spend money in ways that they find worth it but seem dumb to me, and I’m sure that some of the ways I spend money seem dumb to a lot of people. De gustibus non disputandum est.
As far as negative externalities go, I agree we should be very aware of the downsides, both environmental and societal. Personally I expect that AI at its current and near-future levels is net positive for both of those.
Environmentally, I expect that AI’s contributions to science and technology will do enough to help us solve climate problems to more than pay for their environmental cost. And even if that weren’t true, for me it ultimately falls in the same category as other things we choose to do that use energy and hence carry environmental cost. I think that as a society we should ensure that companies absorb those negative externalities, but it’s not like I think no one should ever use electricity. Energy use per se is morally neutral; it’s just that the environmental costs have to be compensated for.
Socially I also expect it to be net positive, though more tentatively. Some uses seem like they’ll be massive social upsides, in terms of both individual impact and scale. In addition to medical and scientific research, one that stands out for me is providing children—ideally all the children in the world—with lifelong tutors that can get to know them, their strengths and weak points, and tailor learning to their exact needs. When I think of how many children get poor schooling—or no schooling—the impact of that just seems massive. The biggest downside is the risk of long-term disempowerment from relying more and more heavily on AI, and it’s hard to know how to weigh that in the balance. But I don’t think that’s likely to be a big issue at current levels of AI.
I still think that going forward, AI presents great existential risk. But I don’t think that means we need to see AI as negative in every way. On the contrary, I think that as we work to slow or stop AI development, we need to stay exquisitely aware of the costs we’re imposing on the world: the children who won’t have those tutors, the lifesaving innovations that will happen later if at all. I think it’s worth it! But it’s a painful tradeoff to make, and I think we should try to live with the cognitive dissonance of that rather than falling into “All AI is bad.”