This matches the internal experience that led me to bring a ton of resources into existence in the alignment ecosystem (with various collaborators):
aisafety.info—Man, there really should be a single point of access that lets people self-onboard into the effort. (Helped massively by Rob Miles’s volunteer community, soon to launch a paid distillation fellowship)
aisafety.training—Maybe we should have a unified place with all the training programs and conferences so people can find what to apply to? (AI Safety Support had a great database that just needed a frontend)
aisafety.world—Let’s make a map of everything in AI existential safety so people know what orgs, blogs, funding sources, resources, etc exist, in a nice sharable format. (Hamish did the coding, Superlinear funded it)
ea.domains—Wow, there sure are a lot of vital domains that could get grabbed by squatters. Let’s step in and save them for good orgs and projects.
aisafety.community—There’s no up-to-date list of online communities. This is an obvious missing resource.
Rob Miles videos are too rare, almost entirely bottlenecked on the research and scriptwriting process. So I built some infrastructure that lets volunteers collaborate in teams on scripts for him; it's being tested now.
Ryan Kidd said there should be a nice professional site that lists all the orgs in a format that helps people leaving SERI MATS decide where to apply. aisafety.careers is my answer, though it's not quite ready yet. Volunteers are wanted to help write up org descriptions in the Google Docs we have auto-syncing with the site!
Nonlinear wanted a prize platform, and that seemed like a pretty useful way to channel the firehose of money while FTXFF was still a thing, so I built Superlinear.
There's a lot of obvious low-hanging fruit here, and I need more hands. Let's set up a monthly call and a project database so I can easily pitch these to all the people who want to help save the world but don't know what to do. A bunch of great devs joined!
…and 6+ more major projects, as well as a ton of minor ones, but that's enough to list here.
I do worry I might be neglecting my actual highest-EV thing though, which is my moonshot formal alignment proposal (low chance of the research direction working out, but much more direct if it does). Fixing the alignment ecosystem is just so obviously helpful though, and has nice feedback loops.
I've kept updating in the direction of doing a bunch of little things that don't seem blocked or tangled on anything, even if they seem trivial in the grand scheme of things. In the process of doing those, you free up memory and learn a bunch about the nature of the bigger things that are blocked, while simultaneously revving up your own success spiral and action bias.
Hell yeah!
Yeah, that makes a lot of sense and fits my experience of what works.