Gabriel Alfour (@gabe_cc)
Happy to have a public conversation on the topic. Just DM me on Twitter if you are interested.
By this logic, any instrumental action taken towards an altruistic goal would be “for personal gain”.
I think you are making a genuine mistake, and that I could have been clearer.
There are instrumental actions that favour everyone (raising epistemic standards), and instrumental actions that favour you (making money).
The latter are for personal gain, regardless of your end goals.
Sorry for not getting deeper into it in this comment. This is quite a vast topic.
I might instead write a longer post about the interactions of deontology & consequentialism, and egoism & altruism.
(I strongly upvoted the comment to signal-boost it, and to let people who agree with it express their agreement directly if they don’t have any specific meta-level observation to share.)
4% of alignment spending on this seems clearly way too much.
-
The main hoped-for benefits are “teach the scientific community new things” and “plausibly going viral repeatedly”.
For the first one, this seems like one more exploration among many others, on par with @janus’, for instance.
For the second one, as you put it, “more hype” is not what is missing in AI. People see Midjourney getting better over the years, people see ChatGPT getting better over the years, and companies are optimising quite hard for flashy demonstrations.
-
I guess I just dislike this type of “theory of change that relies on too many unwarranted assumptions to be meaningful, but somehow still manages to push capabilities and AI hype, and makes grabs for attention and money” in general.
This quote from the “Late 2027” ideal-success scenario illustrates what I mean by “too many unwarranted assumptions”:
If that’s where the value is in the best case, I would just put the money into “grassroots advocacy for an AI development pause or slowdown” directly.
If the pitch is “let the AIs do the grassroots advocacy by themselves because it’s more efficient”, then I would suggest instead doing it directly and thinking of where AIs could help in an aligned way.
If the pitch is “let’s invest money into AI hype, and then leverage it for grassroots advocacy for an AI development pause or slowdown”, I would not recommend it, because the default optimisation target of hype depletes many commons. And were I to recommend it, I would suggest doing it more directly, and likely checking first with people who have run successful hype campaigns.
-
I would recommend asking people doing grassroots advocacy how much they think a fun agency demo would help, and, more seriously, how much they’d be willing to pay for it (either in $$ or in time).
For instance, ControlAI (where I advise), but also PauseAI, or possibly even MIRI now, with their book tour and more public appearances.
-
There’s another thing I dislike, but it is harder to articulate. Two quotes that make it more salient:
“So, just a couple percent.” (to justify the $4M spend)
“I’m not too worried about adding to the hype; it seems like AI companies have plenty of hype already” (to justify why it’s ok to do more AI hype)
This looks to me like how we die by a thousand cuts: a lack of focus and a lack of coordination.
From my point of view, there should be a high threshold for the alignment community to seriously consider a project that is not fully focused on core problems: alignment itself (as opposed to evals, “AGI but safe”, or “AGI but for safety”), extinction-risk awareness (like the CAIS statement or AI 2027), or pause advocacy (as opposed to job loss, the meta-crisis, etc.).
We should certainly have a couple of meta-projects, analogous to an NGO’s ops budget: something like 10-15% of the budget on coordination tools (LW, regranters, etc.). But for the bulk of it, we should do the obvious thing.