@Nathan Helm-Burger’s comment made me think it’s worthwhile to reiterate here the point that I periodically make:
Direct “technical AI safety” work is not the only way for technical people (who think that governance & politics, outreach, advocacy, and field-building work doesn’t fit them well) to contribute to the larger “project” of “ensuring that civilisation’s AI transition goes well”.
Now that powerful LLMs are available, it is a golden age for building innovative systems and tools to improve[1]:
Politics: see https://cip.org/, Audrey Tang’s projects
Social systems: innovative LLM/AI-first social networks that solve the social dilemma? (I don’t have a good existing example of such a project, though)
Psychotherapy, coaching: see Inflection
Economics: see Verses, One Project, the Gaia Consortium
Epistemic infrastructure: see Subconscious Network, Ought, the Cyborgism agenda, Quantum Leap (AI safety edtech)
Authenticity infrastructure: see Optic, proof-of-personhood projects
Cybersec/infosec: see various AI startups for cybersecurity, trustoverip.org
More?
I believe that if such projects are approached with integrity, thoughtful planning, and AI safety considerations at heart, rather than with short-term thinking (specifically, not considering how the project will play out if or when AGI is developed and unleashed on the economy and society) and profit-extraction motives, they could shape the trajectory of the AI transition in a positive way, and their impact may be comparable to that of some direct technical AI safety/alignment work.
In the context of this post, it’s important that the verticals and projects mentioned above could either be conventionally VC-funded, because they can promise direct financial returns to investors, or receive philanthropic or government funding that wouldn’t otherwise go to technical AI safety projects. Also, there are a number of projects in these areas that are already well-funded and hiring.
Joining such projects might also be a good fit for software engineers and other IT and management professionals who don’t feel they are smart enough, or have the right intellectual predispositions, to do good technical research anyway, even if there were enough well-funded “technical AI safety research orgs”. There should be some people who do science and some people who do engineering.
I didn’t do serious due diligence or impact analysis on any of the projects mentioned above; they are meant only to illustrate the respective verticals and are not endorsements.