Thank you for this. This is very close to what I was hoping to find!
It looks like Benjamin Hilton makes a rough guess of the proportion of workers dedicated to AI x-risk at each organization. That seems appropriate for estimating a rough percentage across all organizations, but if we want to nudge organizations to put more people on alignment, I think we want to highlight exact figures.
E.g., we could ask each organization how many people they have working on alignment and then publish what they say, creating a sort of accountability feedback loop.