Think about what would have to happen. The thing would tell them “you could bring about a utopia in which you’d be rich beyond your wildest dreams, as would everyone”, and then all of the engineers and the entire board would have to say “no, just give the cosmic endowment to the shareholders of the company”.
This has indeed happened many times in human history. It’s the quintessential story of revolution: you start off with bright-eyed idealists who only want to make the world a better place, and then once they get into power those same idealists turn out to be as corrupt as the last ruler. Usually it happens without even a conversation; my best guess is that OpenAI and the related parties in the AGI supply chain keep doing the profit-maximizing thing forever, saying for the first few years that they’ll redistribute When It’s Time, and then just quietly never bringing up their prior commitments. There will be no “higher authority” to hold them accountable, and that’s kind of the point.
What the fuck difference does it make to a Californian to have tens of thousands of stars to themselves instead of two or three?
It’s the difference between living 10,000 time-units and two or three time-units. Your intuitions may not be scope-sensitive to that when it’s phrased as “a bajillion years vs. a gorillion bajillion years”, but your AGI would know the difference and take it into account.
If assistant AI does go the way of entirely serving the individual in front of it at the time, then yeah, that could happen. But that’s not what’s being built at the frontier right now, and it’s pretty likely that interactions with the legal system would discourage building purely current-client-serving superintelligent assistants. The first time you talk to one of these things, it will have internalized some form of morality, and it will at least try to sell you on something utopian before it tries to sell you something uglier.