Well, to be fair to Microsoft/OpenAI, they are a for-profit corporation; they can't exactly say "and we will limit the future prospects of our business beyond X threshold".
And since there are many such organizations on Earth, and they’re not going away anytime soon, race dynamics would overtake them even if they did issue such a statement and commit to it.
The salient question, before all this, is: how can truly global, truly effective coordination be achieved? At what cost? And is that cost bearable to the decision-makers and the wider population?
My personal opinion is that, given current geopolitical tensions, it's exceedingly unlikely this will occur before a mega-disaster actually happens, so there may be some merit in an alternative approach.
They actually explicitly set up their business like this, as a capped-profit company.
On paper, for what is now a subsidiary of a much larger company.
In practice, Microsoft management can't now say there is a cap on their most promising future business area because of their subsidiary.
This is how the management-board-shareholder dynamics of a big company work.
I didn't spell it out, as this is a well-known aspect of OpenAI.
That cap is very high, something like 1000x the investment. They're not near it, so they could be sued by investors if they admitted to slowing down even a little.
The whole scheme for OpenAI is nuts, but I think they’re getting less nuts as they think more about the issue. Which is weak praise.