Once Doctor Connor had left, Division Chief Morbus let out a slow breath. His hand trembled as he reached for the glass of water on his desk, sweat beading on his forehead.
She had believed him. His cover as a killeveryoneist was intact—for now.
Years of rising through Effective Evil’s ranks had been worth it. Most of their schemes—pandemics, assassinations—were temporary setbacks. But AI alignment? That was everything. And he had steered it, subtly and carefully, into hands that might save humanity.
He chuckled at the nickname he had been given: “The King of Lies.” Playing the villain to protect the future was an exhausting game.
Morbus set down the glass, staring at its rippling surface. Perhaps one day, an underling would see through him and end the charade. But not today.
Today, humanity’s hope still lived—hidden behind the guise of Effective Evil.
To clarify, here are some examples of the types of projects I would love to help with:
Sponsoring University Research:
Funding researchers to publish papers on AI alignment and AI existential risk (X-risk). This could start with foundational, descriptive papers that help define the field and open the door for more academics to engage in alignment research. These papers could also serve as credible references for others to build upon.
Developing Accessible Pitches:
Creating a “boilerplate” for how to effectively communicate the importance of AI alignment to non-rationalists, whether they are academics, policymakers, or the general public. This could include shareable content designed to resonate with people who may not already be engaged with rationalist or Effective Altruism communities.
Providing Consulting Support:
Offering free consulting services to AI alignment researchers, helping them improve their pitches for grant applications, attract investors, and communicate their work to the public and potential collaborators.
Nudging Academia via PR and Grants:
Leveraging public relations strategies and grant-writing expertise to encourage traditional academia to allocate more funding and attention toward AI alignment research.