Thank you for writing this; it's a really useful and accurate view, I think. I deal with both of these mental bastards too, and you're right that it can be hard to see them separately. This is almost exactly how it is for me, and I'm glad you shared it.
Reading this also sparked a bunch more thoughts about monotropism, which I've been studying with great interest lately. The depression-and-physical-movement thing you describe (which, yup, hard same here) feels like it must be related to my high monotropism somehow, and I'm looking forward to exploring the link further. (You can look monotropism up if you're interested; I just wanted to share that your post gave me a good lead on a useful idea, which I appreciate!)
Two things stood out to me here: a steelman and a strawman.
You make the sweeping claim that "AI can bring—already brings—lots of value, and general improvements to human lives", but you don't substantiate that claim at all. (Maybe you think it's obvious, as a daily user; I think there's lots of room to challenge AI's utility to human beings.) Much of the "benefits of AI" talk boils down to advertising and hopeful hype from invested industries. I would understand a narrower claim, like "AI increases the productivity or speed of certain tasks, such as writing cover letters". That might be an improvement in human lives, though it depends on things like whether the quality also decreases, and on who or what is harmed as part of the cost of doing it.
But this should be explored and supported; it is not at all obvious. Claiming that there is "lots of value" isn't very persuasive by itself, especially since you include "improvements to human lives" in your statement. I'd be very curious to know which improvements to human lives AI has actually brought, and whether they stand up not just against the dangers, but against the actual, already-existing downsides of AI as well.
So much for the steelman. The strawman, I feel, is how this argument handles the downsides. (I apologize if I make any inadvertently heated word choices here; the tactic you're using is a theme I've been seeing that's getting under my skin.) The move is to weigh AI's "possible gains" against its "potential dangers" in a very sci-fi, what-might-happen-if-it-wakes-up way, while failing to weigh its actual, demonstrated harms and downsides as part of the equation at all. This irks me. It particularly irks me when an argument (such as this one) claims all the potential upsides of AI as benefits for humans in one sweeping sentence, but then passes over the real harms to humans that even the fledgling version of AI we have has already caused, or set in motion.
I understand that the "threat" of a sentient supercomputer is sexier to think about, and it serves as a great humblebrag for the industry, too. They get to say "Yes, yes, we understand that the little people are worried our computers are TOO smart, hahaha, yes, let's focus on that." But it's disingenuous to call the other problems "boring dangers", even if there's no interest in discussing them at AI tech conventions. Many of these issues aren't dangers at all; they're already-present, active problems that function as distinct downsides to allowing AI (with agency or not) unfettered access to our marketplaces.
Here are three of many possible examples of already-a-downside "dangers", open to argument but worthy of consideration:

1. Damage to the environment, and the waste of tons of resources in an era when we should definitely be improving on efficiency (and, you know, maybe feeding the poor rather than "giving" Altman 7 trillion dollars).

2. Mass-scale theft from artists and craftspeople, which could harm or even destroy entire industries or areas of high human value. (Yes, that's an example of "the tech industry being bad guys" and not inherent to AI as a concept, but it is also how the real AI is built and used by the people actually doing it, and currently no one seems able to stop them. So it's the same type of problem as having an AI that was designed poorly with regard to safety: some rich dudes could absolutely decide to release it in spite of technicalities like human welfare. I mention this to point out that the mechanism for greedy corporations to ignore safety and human lives is already active in this space, so maybe "we're stopping all sorts of bad guys already" isn't such an ironclad reason to ignore those dangers. Or any dangers, because, um, that's just a terrible argument for anything; sorry.)

3. The scrambling of fact and fiction[1], to the point where search engines are losing utility, and people who need to know the difference for critical reasons (like courts, scientists, and teachers) are struggling to do their work.
All of which is a bit of a long way of saying that I see a steelman and a strawman here, which makes this argument pretty weak overall. I also see ways you could improve both: by looking into the details (if they exist) behind your steelman, and by broadening your strawman to include not just theoretical downsides but real, existing ones.
But you made me think, and helped me articulate something that's been irritating me for many months; thank you for that!
[1] I object to the term “hallucination”; it’s inaccurate and offensive. I don’t love “fiction” either, but at least it’s accurate.