As you know, I have huge respect for USG natsec folks. But there are (at least!) two flavors of them: 1) the cautious, measure-twice-cut-once sort that have carefully managed deterrence for decades, and 2) the “fuck you, I’m doing Iran-Contra” folks. Which do you expect will end up in control of such a program? It’s not immediately clear to me which ones would.
I think this is a (c) leaning (b), especially given that we’re doing it in public. Remember, the Manhattan Project was a highly-classified effort and we know it by an innocuous name given to it to avoid attention.
Saying publicly, “yo, China, we view this as an all-costs priority, hbu” is a great way to trigger a race with China...
But if it turned out that we knew from ironclad intel with perfect sourcing that China was already racing (I don’t expect this to be the case), then I would lean back more towards (c).
I’ll be in Berkeley Weds evening through next Monday, would love to chat with, well, basically anyone who wants to chat. (I’ll be at The Curve Fri-Sun, so if you’re already gonna be there, come find me there between the raindrops!)
Thanks, looking forward to it! Please do let us folks who worked on A Narrow Path (especially me, @Tolga, and @Andrea_Miotti) know if we can be helpful in bouncing around ideas as you work on the treaty proposal!
Is there a longer-form version with draft treaty language (even an outline)? I’d be curious to read it.
I think people opposing this have a belief that the counterfactual is “USG doesn’t have LLMs” instead of “USG spins up its own LLM development effort using the NSA’s no-doubt-substantial GPU clusters”.
Needless to say, I think the latter is far more likely.
I think the thing that you’re not considering is that when tunnels are more prevalent and more densely packed, the incentives to use the defensive strategy of “dig a tunnel, then set off a very big bomb in it that collapses many tunnels” get far higher. It wouldn’t always be infantry combat; it would often be a subterranean equivalent of indirect fires.
Ok, so Anthropic’s new policy post (explicitly NOT linkposting it properly since I assume @Zac Hatfield-Dodds or @Evan Hubinger or someone else from Anthropic will, and figure the main convo should happen there, and don’t want to incentivize fragmenting of conversation) seems to have a very obvious implication.
Unrelated, I just slammed a big AGI-by-2028 order on Manifold Markets.
Yup. The fact that the profession that writes the news sees “I should resign in protest” as their own responsibility in this circumstance really reveals something.
At LessOnline, there was a big discussion one night around the picnic tables with @Eliezer_Yudkowsky, @habryka, and some interlocutors from the frontier labs (you’ll momentarily see why I’m being vague on the latter names).
One question was: “does DC actually listen to whistleblowers?” and I contributed that, in fact, DC does indeed have a script for this, and resigning in protest is a key part of it, especially ever since the Nixon years.
Here is a usefully publicly-shareable anecdote on how strongly this norm is embedded in national security decision-making, from the New Yorker article “The U.S. Spies Who Sound the Alarm About Election Interference” by David Kirkpatrick, Oct 21, 2024:
(https://archive.ph/8Nkx5)

The experts’ chair insisted that in this cycle the intelligence agencies had not withheld information “that met all five of the criteria”—and did not risk exposing sources and methods. Nor had the leaders’ group ever overruled a recommendation by the career experts. And if they did? It would be the job of the chair of the experts’ group to stand up or speak out, she told me: “That is why we pick a career civil servant who is retirement-eligible.” In other words, she can resign in protest.
Does “highest status” here mean highest expertise in a domain generally agreed by people in that domain, and/or education level, and/or privileged schools, and/or from more economically powerful countries etc?
I mean, functionally all of those things. (Well, minus the country dynamic. Everyone at this event I talked to was US, UK, or Canadian, which is all sorta one team for purposes of status dynamics at that event)
I was being intentionally broad here. For purposes of this particular post, I am probably less interested in the “who controls the future” swerves and more interested in the “what else would interested, agentic actors do” questions.
It is not at all clear to me that OpenPhil is the only org that feels this way—I can think of several non-EA-ish charities that, if they genuinely 100% believed “none of the people you care for will die of the evils you fight if you can just keep them alive for the next 90 days,” would plausibly do some interestingly agentic stuff.
Oh, to be clear, I’m not sure this is at all actually likely, but I was curious if anyone had explored the possibility conditional on it being likely.
Basic Q: has anyone written much down about what sorts of endgame strategies you’d see just-before-ASI from the perspective of “it’s about to go well, and we want to maximize the benefits of it”?
For example: if we saw OpenPhil suddenly make a massive push to just mitigate mortality at the cost of literally every other development goal they have, I might suspect that they suspect that we’re about to all be immortal under ASI, and they’re trying to get as many people as possible to that future…
yup, as @sanxiyn says, this already exists. Their example is, AIUI, a high-end research one; an actually-on-your-laptop-right-now, though admittedly narrower, example is address space layout randomization (ASLR). A quick sketch of what that looks like in practice is below.
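For anyone who hasn’t bumped into ASLR before, here’s a minimal sketch of the “on your laptop right now” claim (my illustration, not anything from sanxiyn’s example; it assumes a modern Linux or macOS toolchain where binaries are built as position-independent executables by default):

```c
/* Minimal ASLR demo (illustrative sketch, not a security tool).
 * Run the program twice: the printed addresses should differ between runs,
 * because the OS randomizes where the stack, heap, and code are loaded,
 * which makes attacker exploits that rely on hard-coded addresses unreliable.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int on_stack = 0;            /* lives on the (randomized) stack */
    void *on_heap = malloc(16);  /* lives on the (randomized) heap  */

    printf("stack variable : %p\n", (void *)&on_stack);
    printf("heap allocation: %p\n", on_heap);
    printf("code (main)    : %p\n", (void *)&main); /* moves only if built as PIE */

    free(on_heap);
    return 0;
}
```

Compile with something like `cc demo.c -o demo` (the filename is just for illustration) and run `./demo` a couple of times to watch the addresses shift.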
Wild speculation: they also have a sort of we’re-watching-but-unsure provision about cyber operations capability in their most recent RSP update. In it, they say in part that “it is also possible that by the time these capabilities are reached, there will be evidence that such a standard is not necessary (for example, because of the potential use of similar capabilities for defensive purposes).” Perhaps they’re thinking that automated vulnerability discovery is at least plausibly on-net-defensive-balance-favorable*, and so they aren’t sure it should be regulated as closely, even if it is still in some informal sense “dual use”?
Again, WILD speculation here.
*A claim that is clearly seen as plausible by, e.g., the DARPA AI Grand Challenge effort.
It seems like the current meta is to write a big essay outlining your opinions about AI (see, e.g., Gladstone Report, Situational Awareness, various essays recently by Sam Altman and Dario Amodei, even the A Narrow Path report I co-authored).
Why do we think this is the case?
I can imagine at least 3 hypotheses:
1. Just path-dependence; someone did it, it went well, others imitated
2. Essays are High Status Serious Writing, and people want to obtain that trophy for their ideas
3. This is a return to the true original meaning of an essay, under Montaigne, that it’s an attempt to write thinking down when it’s still inchoate, in an effort to make it more comprehensible not only to others but also to oneself. And AGI/ASI is deeply uncertain, so the essay format is particularly suited for this.
What do you think?
Okay, I spent much more time with the Anthropic RSP revisions today. Overall, I think it has two big thematic shifts for me:
1. It’s way more “professionally paranoid,” but needs to be even more so on non-cyber risks. A good start, but it needs more on being able to stop human intelligence (i.e., good old-fashioned spies).
2. It really has an aggressively strong vibe of “we are actually using this policy, and We Have Many Line Edits As A Result.” You may not think that RSPs are sufficient—I’m not sure I do, necessarily—but I am heartened slightly that they genuinely seem to take the RSP seriously, to the point of having mildly-frustrated-about-process-hiccup footnotes about it. (Free advice to Anthropic PR: interview a bunch of staff about this on camera, cut it together, and post it; it will be lovely and humanizing and great recruitment material, I bet.)
It’s a small but positive sign that Anthropic sees taking 3 days beyond their RSP’s specified timeframe to conduct a process without a formal exception as an issue. Signals that at least some members of the team there are extremely attuned to normalization of deviance concerns.
Oh, it very possibly is the wrongest part of the piece! I put it in the original workshop draft as I was running out of time and wanted to provoke debate.
A brief gesture at a sketch of the intuition: imagine a different, crueler world with orders of magnitude more nation-states but, as in ours, only a few nuclear powers at the start, all on a 1950s-level tech base. If the few nuclear powers want to keep control, they’ll have to divert huge chunks of their breeder reactors’ output to pre-emptively nuking any site in the many, many non-nuclear-club states that could host an arms program, in order to prevent breakouts. As a result, any of the nuclear powers would have to wait a fairly long time to assemble an arms stockpile sufficient to launch a Project Orion into space.