Suppose it was easy to create automated companies and skim a bit off the top: AI algorithms are simply better at business than any startup founder. Soon some people create these algorithms, give them a few quid in seed capital, and leave them to trade and accumulate money. The algorithms rapidly increase their wealth and soon own much of the world economy. Humans are removed once the AIs have the power to do so at a profit. This ends in several superintelligences together tiling the universe with economium.
For this to happen, we need:
1) The doubling time of a fooming AI is months to years, allowing many AIs to be in the running.
2) It's fairly easy to set an AI to maximise money.
3) The people who care about complex human values can't effectively make an AI that pursues them.
4) Any attempt to stamp out fledgling AIs before they become powerful fails, helped along by anonymous cloud computing.
I don't really buy 1), though it is fairly plausible. I'm not convinced of 2) either, although it might not be hard to build a mesa-optimiser that cares about something sufficiently correlated with money that humans are beyond caring before any serious deviation from money-optimisation happens.
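To get a feel for what 1) implies, here is a minimal back-of-the-envelope sketch. The seed capital, the target wealth (roughly gross world product), and the doubling times are all illustrative assumptions of mine, not figures from the scenario:

```python
import math

# Back-of-the-envelope: how long until an AI that doubles its wealth
# every few months-to-years goes from seed capital to owning "much of
# the world economy"? All figures below are illustrative assumptions.

seed_capital = 1_000        # "a few quid" of seed capital (assumed)
target_wealth = 100e12      # roughly gross world product (assumed)

# Number of doublings needed, independent of the doubling time.
doublings = math.log2(target_wealth / seed_capital)  # ~36.5

for doubling_time_months in (1, 6, 24):  # "months to years"
    years = doublings * doubling_time_months / 12
    print(f"doubling every {doubling_time_months:>2} months -> "
          f"~{years:.0f} years to reach target")
```

On these assumptions, even a one-month doubling time gives roughly three years of takeoff, and a two-year doubling time gives most of a century; that slowness is what leaves room for many AIs to be in the running at once.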
If 2) were false, and everyone who tried to make such an AI got a paperclip maximiser instead, the long-run result would just be a world filled with paperclips rather than banknotes (although this might make coordinating to destroy the AIs a little easier?). The paperclip maximisers would still try to gain economic influence until they could snap their nanotech fingers.