I was recently looking into donating to CLTR, and I'm curious why you're excited about it. My sense was that little of its work was directly relevant to x-risk (for example, this report on disinformation is essentially useless for preventing x-risk AFAICT), and the work that was relevant seemed weak or possibly counterproductive. For example, their report on "a pro-innovation approach to regulating AI" seemed bad to me on two counts:
1. There is a genuine tradeoff between accelerating AI-driven innovation and decreasing x-risk. So to the extent that this report's recommendations support innovation, they increase x-risk, which makes the report net harmful.
2. The report's recommendations are kind of vacuous: e.g., they recommend "reducing inefficiencies". Yes, that's a fully general good thing, but it's not actionable.
(So basically I think this report would be net negative if it weren't vacuous, but because it's vacuous, it's net neutral.)
This is the impression I get as someone who doesn't know anything about policy and is just trying to get a sense of orgs' work by reading their websites.
I don’t know. I’m not directly familiar with CLTR’s work — my excitement about them is deference-based. (Same for Horizon and TFS, mostly. I inside-view endorse the others I mention.)
I am excited about donations to all of the following, in no particular order:
- AI governance:
  - GovAI (mostly research) [actually I haven't checked whether they're funding-constrained]
  - IAPS (mostly research)
  - Horizon (field-building)
  - CLTR (policy engagement)
  - Edit: also probably The Future Society (policy engagement, I think) and others, but I'm less confident
- LTFF/ARM
- Lightcone