It might be easier to explain if the approach is rooted in humans rather than in the dangers of AGI itself. (Let’s conveniently ignore the “direct” dangers of AGI until round 2.)
To be precise: what happens if, in the next 5 years, we have GPT-6 or GPT-10 wielded by Microsoft, OpenAI, Musk, Facebook, Google, etc., or, if you want to be unspecific, any for-profit company, or worse, governments?
What happens if a superfast, supersmart AGI is developed by a publicly traded company that has to worry about shareholder value, stock prices, and profits? It will use its goal-aligned AGI to pursue the company’s goals, beat the competition, and so on, and it’s an absolute certainty that those goals will collide with humanity’s. Just watch one of Satya Nadella’s interviews about Microsoft integrating AI into its development process and its products. The man is literally beaming with greed and giddiness in a manner that is very, very thinly veiled. At one point he almost blurts out how happy he is that he will be able to fire the expensive programmers and raise profits, but he pivots and calls it “the democratization of coding.” I have zero doubt that the current stock-market-driven economies will use AI and AGI to enhance profits and drive down costs, resulting in mass layoffs and global social turmoil.
Imagine a totalitarian regime or an extreme right-wing government getting a narrow AI trained for surveillance and control, and you get 1984 on steroids. I don’t even want to unpack the racist chain of thought; I don’t think I need to.
To sum it up: humanity, in its current state, is not ready for AGI, because our civilization is not mature enough to use it for the common good of the 8+ billion people in the world. Our economics and politics are based on scarcity and on privileged groups who live at the expense of the rest of humanity.
The above is scary in itself, and it comes with the assumption that we CAN align AGI with our goals at all.