> OpenAI charges headfirst to AGI, and succeeds in building it safely. [...] The world transforms, and OpenAI goes from previously unprofitable due to reinvestment to an immensely profitable company.
You need a case where OpenAI successfully builds safe AGI, which may even go on to build safe ASI, and the world gets transformed… but OpenAI’s profit stream is nonexistent, effectively valueless, or captures a much smaller fraction of whatever AGI or ASI produces than you’d expect.
Business profits (or businesses) might not be a thing at all in a sufficiently transformed world, and it’s definitely not clear that preserving them is part of being safe.
In fact, a radical change in allocative institutions like ownership is probably the best case, because it makes no sense in the long term to allocate a huge share of the world’s resources and production to people who happened to own some stock when Things Changed(TM). In a transformed-except-corporate-ownership-stays-the-same world, I don’t see any reason such lottery winners’ portion wouldn’t increase asymptotically toward 100 percent, with nobody else getting anything at all: once human labor is worth nothing, all new production accrues to whoever already holds the capital, while everyone else can only spend down whatever they started with.
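A toy sketch of that dynamic, with entirely made-up numbers: if all new production accrues to capital holders, labor earns nothing, and non-owners spend down savings buying from the owners, the owners’ share converges toward 100 percent.

```python
# Toy model of ownership concentration (all numbers hypothetical).
# Assumes: AGI does all production, output accrues entirely to capital
# holders, human labor earns nothing, and non-owners spend savings on
# consumption purchased from the owners.

owners, others = 1.0, 99.0  # owners start with just 1% of total wealth
GROWTH = 0.5   # output per period, as a fraction of owners' capital
SPEND = 0.05   # fraction of their wealth non-owners consume per period

for year in range(1, 101):
    owners += GROWTH * owners  # all new production goes to capital holders
    transfer = SPEND * others  # non-owners buy goods/services from owners
    others -= transfer
    owners += transfer
    if year % 25 == 0:
        print(f"year {year:3d}: owners' share = {owners / (owners + others):.6f}")
```

The particular numbers don’t matter; any positive growth captured only by capital, combined with any nonzero consumption by non-owners, drives the owners’ share to 1.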
Radical change is also a likely case[1]. If an economy gets completely restructured in a really fundamental way, it would be strange for the allocation system not to change too. An economic transformation that fundamental, with the rules about who gets what left untouched, has never happened before.
Even without an overtly revolutionary restructuring, I kind of doubt “OpenAI owns everything” would fly. Maybe corporate ownership would stay exactly the same, but there’d be a 99.999995 percent tax rate.

[1] Contingent on the perhaps unlikely safe and transformative parts coming to pass.
> In a transformed-except-corporate-ownership-stays-the-same world, I don’t see any reason such lottery winners’ portion wouldn’t increase asymptotically toward 100 percent, with nobody else getting anything at all.

Well yeah, exactly.
> Even without an overtly revolutionary restructuring, I kind of doubt “OpenAI owns everything” would fly. Maybe corporate ownership would stay exactly the same, but there’d be a 99.999995 percent tax rate.

Taxes enforced by whom?
Well, that’s where the “safe” part comes in, isn’t it?
I think a fair number of people would say that ASI/AGI can’t be called “safe” if it’s willing to wage war to physically take over the world on behalf of its owners, or to go around breaking laws all the time, or to thwart whatever institutions are supposed to make and enforce the laws. I’m pretty sure that even OpenAI’s (present) “safety” department would have an issue if ChatGPT started saying stuff like “Sam Altman is Eternal Tax-Exempt God-King”.
Personally, I go further than that. I’m not sure about “basic” AGI, but I’m pretty confident that very powerful ASI, the kind that would be capable of really total world domination, can’t be called “safe” if it leaves really decisive power over anything in the hands of humans, individually or collectively, directly or via institutions. To be safe, it has to enforce its own ideas about how things should go. Otherwise the humans it empowers will probably send things irretrievably south fairly soon, and even if they don’t, they always still could, and you can’t call that safe.
Yeah, that means you get exactly one chance to get “its own ideas” right, and no, I don’t think success is likely. I don’t think we’re likely to have the technical ability to “align” it to any particular set of values. I also don’t think people or institutions would make good choices about what values to give it even if they could. AND I don’t think anybody can prevent it from getting built for very long. I put more hope in it being survivably unsafe (maybe because it just doesn’t usually happen to care to do anything to/with humans), or in intelligence just not being that powerful, or whatever. Or even in it just luckily happening to do something at least less boring or annoying than paperclipping the universe or mass torture.