No AGI research org is evil enough to play it that way. Think about what would have to happen. The thing would tell them, “you could bring about a utopia, and in it you will be rich beyond your wildest dreams, as will everyone”, and then all of the engineers and the entire board would have to say, “no, just give the cosmic endowment to the shareholders of the company”
Existing AGI research firms (or investors in those firms) can already, right now, commit to donate all their profits to the public, in theory, and yet they are not doing so. The reason is pretty clearly that investors and other relevant stakeholders are “selfish” in the sense of wanting money for themselves more than they want the pie to be shared equally among everyone.
Given that existing actors are already making the choice to keep the profits of AI development mostly to themselves, it seems strange to posit a discontinuity in which people will switch to being vastly more altruistic once the stakes become much higher, and the profits turn from being merely mouthwatering to being literally astronomical. At the least, such a thesis prompts questions about wishful thinking, and how you know what you think you know in this case.
OpenAI has a capped profit structure which effectively does this.
Good point, but I’m not persuaded much by this observation given that:
They’ve already decided to change the rules to make the 100x profit cap double every four years, calling into question the meaningfulness of the promise
OpenAI is just one firm among many (granted, it’s definitely in the lead right now), and most other firms are in it pretty much exclusively for profit
Given that the 100x cap doesn’t kick in for a while, the promise feels pretty distant from “commit to donate all their profits to the public”, which was my original claim. I expect that as the cap gets closer to being met, investors will ask for a way around it.
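To see why a doubling cap quickly stops being a meaningful constraint, here is a minimal sketch of the arithmetic. The doubling-every-four-years schedule is from the discussion above; the exact mechanics (discrete steps, a flat 100x starting point) are simplifying assumptions for illustration.

```python
def profit_cap(years_elapsed, initial_cap=100, doubling_period=4):
    """Cap on investor returns (as a multiple of investment) after
    `years_elapsed` years, assuming the cap doubles every `doubling_period`
    years in discrete steps. Hypothetical model, not OpenAI's actual terms."""
    return initial_cap * 2 ** (years_elapsed // doubling_period)

for t in (0, 4, 8, 20, 40):
    print(t, profit_cap(t))
# 0 -> 100x, 4 -> 200x, 8 -> 400x, 20 -> 3200x, 40 -> 102400x
```

After a few decades the nominal cap is in the tens of thousands of times the original investment, which is why one might doubt it will ever bind in practice.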
Astronomical, yet no longer mouthwatering in the sense of being visceral or intuitively meaningful.