The part I disagree with is the “this has already happened” part. I think it’s pretty clear that corporations aren’t completely in control of the world yet; also, corporations aren’t AIs, and they’re more inherently safe / aligned to human values than I expect AIs to be by default. (Even though, yes, corporations are dangerous and unaligned to human values.)
They may be more in control than you think. For many people in the US or EU, they’d be more harmed by Amazon, Google, and Apple banning their accounts (and preventing future accounts), or by a Twitter campaign to discredit them, than they would be by a short stint in jail. And the majority of concentrated compute power sits in Microsoft, Google, and Amazon data centers.
Still, I agree that it’s not a done deal. It COULD be the way AI takes over, but I don’t think it’s happened yet—today’s corporations haven’t exhibited the competence, or the ability to optimize their control, to the degree they could with true AGI.
The degree of misalignment is also definitely arguable. One of my main points in posting this is that “inescapable dystopia” doesn’t require an AI that is so obviously misaligned as to cackle evilly while developing grey goo for paperclips. It can be very bad with only a mildly-divergent-but-powerful optimizer.