Overall, I’m not sure what the takeaways are for the Alignment Newsletter.
On the one hand, it scored pretty highly in several categories, particularly “total usefulness”. It also probably takes less time to produce than some of the items ranked above it, making it fairly cost-effective if you just look at the raw numbers.
On the other hand, the main theory of impact is that people who work on alignment can use it to keep up to date on alignment work. But deriving reach as total usefulness / average usefulness, we see that at most 6/30 people who are paid to work on technical AI alignment research, and 16/51 people who spend time solving alignment problems, said that they engaged with the newsletter. Intuitively that feels pretty low, and a signal against the main theory of impact.
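The reach derivation above can be sketched as follows (the numbers here are hypothetical placeholders for illustration, not the actual survey data):

```python
# Sketch of the "reach" derivation: if the survey reports a total usefulness
# score and an average usefulness per engaged respondent, the implied number
# of engaged respondents is total / average.

def derive_reach(total_usefulness: float, average_usefulness: float) -> float:
    """Estimate how many respondents engaged, given total and average scores."""
    return total_usefulness / average_usefulness

# e.g. a total score of 12 at an average of 2.0 per engaged respondent
# implies roughly 6 engaged respondents
print(derive_reach(12.0, 2.0))  # -> 6.0
```

This is only an upper-bound heuristic: respondents who engaged but rated the newsletter's usefulness at zero would not show up in the derived count.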
Thanks for running this!