I’m concerned about OpenAI’s behavior in the context of their stated trajectory toward level 5 intelligence: running an organization. If the model for a successful organization lies in the dissonance between actions intended to foster goodwill (open research, open source, non-profit status, safety concern, benefiting all of humanity) and virtuous paradigms that are in fact instrumental rather than intrinsic, requiring NDAs, financial pressure, and lobbying to be whitewashed, then scaling that up with AGI (which would have more intimate and expansive data, greater persuasiveness, more emotional detachment, and less moral hesitation) seems clearly problematic.
The next goodwill-inducing paradigm that has outlived its utility seems to be the concept of “AGI”. From here:
Oddly, that could be the key to getting out from under its contract with Microsoft. The contract contains a clause that says that if OpenAI builds artificial general intelligence, or A.G.I. — roughly speaking, a machine that matches the power of the human brain — Microsoft loses access to OpenAI’s technologies.
The clause was meant to ensure that a company like Microsoft did not misuse this machine of the future, but today, OpenAI executives see it as a path to a better contract, according to a person familiar with the company’s negotiations. Under the terms of the contract, the OpenAI board could decide when A.G.I. has arrived.
Despite being founded on the precept of developing AGI, and structuring the company and many major contracts around the idea (while never precisely defining it), OpenAI now seems to be deliberately distancing itself from the term, as evidenced here. Notably, Sam’s recent vision of the future, “The Intelligence Age”, does not mention AGI.
I expect more tweets like this from OpenAI employees in the coming weeks/months, expressing doubts about the notion of AGI, often taking care to frame the underlying motivations as altruistic/epistemic.
I categorically disagree with Eliezer’s tweet that “OpenAI fired everyone with a conscience”, and all of this might not be egregious as far as corporate sleights-of-hand/dissonance go. But scaled up recursively (e.g., when extended to principles relating to alignment, warning shots, surveillance, misinformation, or weapons), this does not bode well.
Resonating with you here! Yes, I think autonomous corporations (and other organisations) would result in society-wide extraction, destabilisation and totalitarianism.
Thanks! I should have been clearer that the trajectory toward level 5 (with all human virtue/trust being hackable for instrumental gains) is itself concerning, not just the eventual leap when it gets there.