They recently spun out a capped-profit company, which suggests the end goal is monetizing some of their recent advancements. The page linked in the previous sentence also has some stuff about safety and about how none of their day-to-day work is changing, but it doesn’t seem that encouraging.
I found this moderately encouraging instead of discouraging. So far I think OpenAI is 2 for 2 on managing organizational transitions in ways that seem likely to not compromise safety very much (or even improve safety) while expanding their access to resources; if you think the story of building AGI looks more like assembling a coalition that’s able to deploy massive resources to solve the problem than a flash of insight in a basement, then the ability to manage those transitions becomes a core part of the overall safety story.
That’s an interesting point. Why do you think the new organizational transition doesn’t compromise safety? (I haven’t formed an opinion on this, but it seems that adding economic incentives is dangerous by default.)
I agree that adding economic incentives is dangerous by default, but I think their safeguards are basically adequate to overcome that incentive pressure. At the time, I spent an hour trying to come up with improvements to the structure, and ended up not thinking of anything. Also remember that this sort of change, even if it isn’t a direct improvement, can be an indirect improvement by cutting off unpleasant possibilities; for example, before the move to the LP, there was some risk OpenAI would become a regular for-profit, and the LP move dramatically lowered that risk.
I also think for most of the things I’m concerned about, psychological pressure to think the thing isn’t dangerous is more important; like, I don’t think we’re in the cigarette case where it’s mostly other people who get cancer while the company profits; I think we’re in the case where either the bomb ignites the atmosphere or it doesn’t, and even in wartime the evidence was that people would abandon plans that posed a serious chance of destroying humanity.
Note also that economic incentives quite possibly push away from AGI towards providing narrow services (see Drexler’s various arguments that AGI isn’t economically useful, and so people won’t make it by default). If you are more worried about companies that want to build an AGI and then ask it what to do than about companies that want to build AIs to accomplish specific tasks, increased short-term profit motive makes OpenAI more likely to move in the second direction. [I think this consideration is pretty weak but worth thinking about.]
So if I understand your main point, you argue that OpenAI LP incentivizes new investment without endangering safety, thanks to the capped returns, and that this tradeoff looks like one of the best available, compared to becoming a for-profit or getting bought by a big for-profit company. Is that right?
I also think for most of the things I’m concerned about, psychological pressure to think the thing isn’t dangerous is more important; like, I don’t think we’re in the cigarette case where it’s mostly other people who get cancer while the company profits; I think we’re in the case where either the bomb ignites the atmosphere or it doesn’t, and even in wartime the evidence was that people would abandon plans that posed a serious chance of destroying humanity.
I agree with you that we’re in the second case, but that doesn’t necessarily mean there’s a fire alarm. And economic incentives might push you to go slightly further, to a point where everything still looks okay but we end up reaching transformative AI in a terrible way. [I don’t think this is actually the case for OpenAI right now; I’m just responding to your point.]
Note also that economic incentives quite possibly push away from AGI towards providing narrow services (see Drexler’s various arguments that AGI isn’t economically useful, and so people won’t make it by default). If you are more worried about companies that want to build an AGI and then ask it what to do than about companies that want to build AIs to accomplish specific tasks, increased short-term profit motive makes OpenAI more likely to move in the second direction.
Good point; I need to think more about that. A counterargument that springs to mind is that AGI research might push forward other kinds of AI, and thus bring transformative AI sooner even if it isn’t an AGI.
Out of the various mechanisms, I think the capped returns rank relatively low; the top of my list is probably the nonprofit board having control over decision-making (and, implicitly, the nonprofit board’s membership not being determined by investors, as would happen in a normal company).
This makes sense to me, given the situation you describe.