I think (not sure!) the damage from people/orgs/states going “wow, AI is powerful, I will try to build some” is larger than the upside of people/orgs/states going “wow, AI is powerful, I should be scared of it”. It only takes one sufficiently strong actor of the former kind to kill everyone, and the latter are gonna have a very hard time stopping all of them.
If the public isn’t informed that AI is indeed powerful, awareness of that fact is disproportionately allocated to people who choose to think hard about it on their own, and thus that knowledge is more likely to end up in more reasonable hands (for example, those people are also more likely to think “hmm, maybe I shouldn’t build unaligned powerful AI”).
The same goes for cyborg tools, as well as general insights about AI: we should want them to be differentially accessible to alignment people rather than to the general public.
In fact, my biggest criticism of OpenAI is not that they built GPTs, but that they productized them, made them widely available, and created a giant public frenzy about LLMs. I think we’d have more time to solve alignment if they had kept the technology internal and the public weren’t thinking about AI nearly as much.