Founder and executive director of ControlAI.
Andrea_Miotti
Anthropic CEO calls for RSI
The Compendium, A full argument about extinction risk from AGI
A Narrow Path: a plan to deal with AI extinction risk
In terms of explicit claims:
“So one extreme side of the spectrum is build things as fast as possible, release things as much as possible, maximize technological progress [...].
The other extreme position, which I also have some sympathy for, despite it being the absolutely opposite position, is you know, Oh my god this stuff is really scary.
The most extreme version of it was, you know, we should just pause, we should just stop, we should just stop building the technology for, indefinitely, or for some specified period of time. [...] And you know, that extreme position doesn’t make much sense to me either.”
Dario Amodei, Anthropic CEO, explaining his company’s “Responsible Scaling Policy” on the Logan Bartlett Podcast on Oct 6, 2023.
Starts at around 49:40.
Thanks for the kind feedback! Any suggestions for a more interesting title?
Priorities for the UK Foundation Models Taskforce
Conjecture: A standing offer for public debates on AI
Apologies for the 404 on the page, it’s an annoying cache bug. Try hard-refreshing your browser page (Cmd + Shift + R) and it should work.
The “1000” instead of “10000” was a typo in the summary.
In the transcript Connor states “SLT over the last 10000 years, yes, and I think you could claim the same over the last 150”. Fixed now, thanks for flagging!
Shah (DeepMind) and Leahy (Conjecture) Discuss Alignment Cruxes
Which one? All of them seem to be working for me.
Pessimism of the intellect, optimism of the will.
People from OpenPhil, FTX FF and MIRI were not interested in discussing at the time. We also talked with MIRI about moderating, but it didn’t work out in the end.
People from Anthropic told us their organization is very strict on public communications, and very wary of PR risks, so they did not participate in the end.
In the post I overgeneralized rather than go into full detail.
Yes, some people mentioned it was confusing to have two posts (I had originally posted the Summary and Transcript separately because they are very lengthy), so I merged them into one and added headers pointing to the Summary and Transcript for easier navigation.
Thanks, I was looking for a way to do that but didn’t know the space in italics hack!
Another formatting question: how do I make headers and sections collapsible? It would be great to have the “Summary” and “Transcript” sections as collapsible, considering how long the post is.
Christiano (ARC) and GA (Conjecture) Discuss Alignment Cruxes
Retrospective on the 2022 Conjecture AI Discussions
Thanks, fixed them!
Thanks! Do you still think the “No AIs improving other AIs” criterion is too onerous after reading the policy enforcing it in Phase 0?
In that policy, we developed the definition of “found systems” to have this measure only apply to AI systems found via mathematical optimization, rather than AIs (or any other code) written by humans.
This reduces the cost of the policy significantly, as it applies only to a very small subset of all AI activities, and leaves most innocuous software untouched.