I wasn’t aware of the “you’re < Y%” vibe, at least not explicitly. I felt that my edge wasn’t in the advanced math that MIRI’s early AI Safety work required. Maybe I’m not sensitive enough for the burn-out dynamics, but more likely I just wasn’t exposed to them much, being from Germany. I’m also pretty resilient/anti-fragile.
Anyway, I wanted to chime in to say that even if you are objectively below Y% in hard alignment math skill, there are many areas where you can support AI Alignment, even as a researcher. This may not have been the case in the early days, but it is now: AI Alignment teams need organizers, writers, managers, cybersecurity professionals, data scientists, data engineers, software engineers, system administrators, frontend developers, designers, neuroscientists, psychologists, educators, communicators, and more. We no longer have only small, closely-knit AI teams; as the field grows, specialization increases. All of these skills are valuable, likely needed for alignment to succeed, and deserve recognition.
Why do I say so? I have been in organizations growing from a handful to a few hundred people all my life. I’m in one such company right now, and in parallel I’m running the operations of a small AI Alignment project. I like being a researcher, but out of necessity I take care of funding, outreach, and infrastructure. In bigger teams, or in the community at large, I see a need for the following:
Organizers who run events or the daily work of small teams, down to something as simple as coordinating meeting invites. Without this, teams often fall apart. A shared community also helps with Common Good Commitment.
Writers who write summaries, essays, grant applications, and blog posts, whether technical, social, or organizational, and who promote Alignment Mindset. Where do you think LW posts come from?
Managers: no, not moral-maze middle managers, but people who find the right people to work together and make sure work goes smoothly, and who establish Trustworthy Command and Research Closure. I’m not sure what else to call that role.
Cybersecurity professionals who secure the sensitive core of a project, matching Strong Opsec as the project grows.
Data scientists and data engineers who, well, deal with the data that inevitably flows into and out of projects that have to engage with the real world sooner or later, and who help maintain Research Closure.
Software engineers to build all the glue stuff between websites, build processes, data exchange, SaaS parts, and whatnot.
System administrators, ditto, provisioning and administering the above while maintaining Strong Opsec.
Frontend developers—you want to see the results: graphs, data, videos, right?
Designers—and it should be easy to use and understand.
Neuroscientists, cognitive scientists, psychologists, psychiatrists, therapists, and pedagogues who provide hard evidence about the biological, psychological, or cognitive plausibility of models or behaviors of hypothetical or simulated agents or who can suggest directions to explore or validate.
Educators and communicators who reach out to interested parties and who promote Alignment Mindset and Common Good Commitment.
Accountants, finance professionals, financial advisors, and of course sponsors, who help establish, maintain, and monitor Requisite Resource Levels.
Lawyers who set up a legal entity that aspires to Trustworthy Command, Research Closure, and Common Good Commitment.
Everybody else who can help by affirming the Common Good Commitment, thus creating common knowledge that this is a worthy cause.
In the above, Trustworthy Command, Research Closure, Strong Opsec, Common Good Commitment, Alignment Mindset, and Requisite Resource Levels refer to the Six Dimensions of Operational Adequacy in AGI Projects.