OpenAI is recklessly scaling AI. Besides accelerating “progress” toward mass extinction, its scaling causes increasing harms along the way. Many communities are now speaking up. In my circles alone, I count seven new books critiquing AI corporations. It’s what happens when you scrape everyone’s personal data to train inscrutable models (computed by polluting data centers) used to cheaply automate professionals out of work and to spread disinformation and deepfakes.
Could you justify the claim that it causes increasing harms? My intuition is that OpenAI is currently net-positive, without taking future risks into account. It’s just an intuition, though; I have not spent time thinking about it or writing down numbers.
(I agree it’s net-negative overall.)
I’d agree the OpenAI product line is net positive (though I’m not super hung up on that). Sam Altman demonstrating, in front of everyone’s eyes, what kind of actions you can get away with seems problematic.
Very much agreeing with this.
Appreciating your inquisitive question!
One way to think about it:
For OpenAI to scale further toward “AGI”, the corporation needs more data, more automatable work, more profitable uses for its working machines, and more hardware to run those machines.
If you look at how OpenAI has been increasing each of those four variables, you can notice harms associated with each. As the variables increase, so do the associated harms.
One obvious example: if they increase hardware, this also increases pollution (from mining, producing, installing, and running the hardware).
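The four-variable framing above can be sketched as a toy model (every coefficient and number here is hypothetical and purely illustrative, not a measured estimate): if each scaling input carries some nonzero harm, then scaling up all the inputs increases total harm.

```python
# Toy illustration of the argument, not an empirical model.
# Each scaling input (data, automatable work, profitable uses, hardware)
# is assigned a hypothetical nonzero harm coefficient.

def total_harm(data, automated_work, profitable_uses, hardware,
               coeffs=(1.0, 1.0, 1.0, 1.0)):
    """Sum a hypothetical harm term for each scaling input."""
    inputs = (data, automated_work, profitable_uses, hardware)
    return sum(c * x for c, x in zip(coeffs, inputs))

# Scaling every input from one period to the next increases total harm.
year1 = total_harm(1, 1, 1, 1)
year2 = total_harm(2, 2, 2, 2)
assert year2 > year1
```

The point of the sketch is only structural: as long as each harm coefficient is positive, growth in any of the four inputs raises the total, whatever the actual magnitudes turn out to be.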
Note that the above is not a claim that the harms outweigh the benefits. But if OpenAI & co continue down their current trajectory, I expect that most communities would look back and conclude that the harms to what they care about in their lives were not worth it.
I wrote a guide to broader AI harms, meant to resonate emotionally with laypeople, here.