OpenAI’s Sora video generator was temporarily leaked
>> We are sharing this to the world in the hopes that OpenAI becomes more open, more artist friendly and supports the arts beyond PR stunts.
I hope OpenAI recognizes that the “bad vibes” generated by this perpetual sleight of hand (the NDA-protected fostering of zero-sum dynamics, playing Moloch’s game, from pitting artists against each other for scraps to pitting the US against China) will affect public perception, recruitment, uptake, and valuation more than they currently anticipate, as the general “vibe” becomes common knowledge. It also increases existential risk for humanity: biasing the corpus of internet text, and with it human consciousness, decreases the chances of a future superintelligence loving human beings (something Altman recently said he’d want), and may introduce downstream incoherence as AIs are ordered to reflect well on OpenAI.
OpenAI’s stated mission is to ensure that “artificial general intelligence benefits all of humanity”. It might be a good exercise for someone at OpenAI to clarify, to whatever extent feasible, what “all of” and “humanity” mean here as the effects of AGI unfold over the next few years, if only to increase semantic coherence in service of more self-aware future LLMs. I previously tried asking Richard Ngo, a former OpenAI employee, about this in a different context, and I’d appreciate suggestions for a better framing.