If we stand by while OpenAI violates its charter, it signals that their execs can get away with it. Worse, it signals that we don’t care.
what signals you send to OAI execs seems not relevant.
in the case where they really can’t get away with it, e.g. where the state will really arrest them, then sending them signals / influencing their information state is not what causes that outcome.
if your advocacy causes the world to change such that “they can’t get away with it” becomes true, this also does not route through influencing their information state.
OpenAI is seen as the industry leader, yet it is projected to lose $5 billion this year.
i don’t see why this would lead them to downsize, if “the gap between industry investment in deep learning and actual revenue has ballooned to over $600 billion a year”
what signals you send to OAI execs seems not relevant.
Right, I don’t occupy myself much with what the execs think. I do worry about stretching the “Overton window” for concerned/influential stakeholders broadly. If no one (not even AI Safety folks) acts to prevent OpenAI from continuing to violate its charter, then everyone kinda gets used to it being this way, and maybe assumes it can’t be helped or is actually okay.
i don’t see why this would lead them to downsize, if “the gap between industry investment in deep learning and actual revenue has ballooned to over $600 billion a year”
Note that by ‘investments’, I meant injections of funds to cover business capital expenditures in general, including just keeping their models running. My phrasing here is a little confusing, but I couldn’t find a more concise way to put it yet.
The reason OpenAI and other large-AI-model companies would stop attracting investment is similar to why dotcom companies stopped attracting investment (even though a few, like Amazon, went on to become trillion-dollar companies): investors become skeptical about the companies’ prospects of reaching break-even, and about whether they would still be able to offload their stake later (to even more investors willing to sink in their capital).
Let me rephrase that sentence to ‘industry expenditures in deep learning’.