I’m a PhD student at the University of Amsterdam. I have research experience in multivariate information theory and equivariant deep learning, and I recently became very interested in AI alignment. https://langleon.github.io/
Leon Lang
After the US election, the Twitter competitor Bluesky is suddenly getting a surge of new users:
https://x.com/robertwiblin/status/1858991765942137227
How likely are such recommendations usually to be implemented? Are there already Manifold markets on questions related to the recommendation?
In the Reuters article, they highlight Jacob Helberg: https://www.reuters.com/technology/artificial-intelligence/us-government-commission-pushes-manhattan-project-style-ai-initiative-2024-11-19/
He seems quite influential in this initiative and recently also wrote this post:
https://republic-journal.com/journal/11-elements-of-american-ai-supremacy/
Wikipedia has the following paragraph on Helberg:
“He grew up in a Jewish family in Europe.[9] Helberg is openly gay.[10] He married American investor Keith Rabois in a 2018 ceremony officiated by Sam Altman.”
Might this be an angle to understand the influence that Sam Altman has on recent developments in the US government?
Why I think scaling laws will continue to drive progress
Epistemic status: This is a thought I’ve had for a while. I never discussed it with anyone in detail; a brief conversation could convince me otherwise.
According to recent reports, there seem to be some barriers to continued scaling. We don’t know exactly what is going on, but it seems like scaling up base models doesn’t bring as much new capability as people hoped.
However, I think they’re probably still scaling the wrong thing in some way: the model learns to predict a static dataset scraped from the internet, whereas what it needs to do later is interact with users and the world. To perform well at that, the model needs to understand the consequences of its actions, which means modeling interventional distributions P(X | do(A)) instead of purely observational data P(X | Y). This is related to causal confusion as an argument against the scaling hypothesis.
This viewpoint suggests that if the big labs figure out how to predict observations in an online way, via ongoing interaction of the models with users and the world, then this should drive further progress. It’s possible that labs are already doing this, but I’m not aware of it, so I’d guess they haven’t fully figured out how to do it yet.
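To make the observational/interventional distinction concrete, here is a minimal toy sketch (the causal structure and all numbers are made up purely for illustration): an outcome X depends on both an action A and a hidden confounder U. In the static dataset, A correlates with U, so the observational conditional P(X | A) differs from the interventional P(X | do(A)) that an acting model actually needs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hidden confounder U influences both the logged "action" A and the outcome X.
u = rng.binomial(1, 0.5, n)

# Observational regime: in the static dataset, the action correlates with U
# (whoever generated the data acted on information the model can't see).
a_obs = rng.binomial(1, np.where(u == 1, 0.9, 0.1))
x_obs = rng.binomial(1, 0.2 + 0.3 * a_obs + 0.4 * u)

# Interventional regime: the model itself sets A = 1, independently of U.
x_do = rng.binomial(1, 0.2 + 0.3 * 1 + 0.4 * u)

print("P(X=1 | A=1)     ≈", x_obs[a_obs == 1].mean())  # ≈ 0.86
print("P(X=1 | do(A=1)) ≈", x_do.mean())               # ≈ 0.70
```

A model that only fits the static data learns the 0.86, but the number that matters once it starts acting is the 0.70.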
What triggered me to write this is that there is a new paper on scaling laws for world modeling that’s about exactly what I’m talking about here.
Do we know anything about why they were concerned about an AGI dictatorship created by Demis?
What’s your opinion on the possible progress of systems like AlphaProof, o1, or Claude with computer use?
“Scaling breaks down”, they say. By which they mean one of the following wildly different claims with wildly different implications:
When you train on a normal dataset with more compute/data/parameters, subtract the irreducible entropy from the loss, and plot the result in a log-log plot, you no longer see a straight line.
Same setting as before, but you do see a straight line; it’s just that downstream performance doesn’t improve.
Same setting as before, and downstream performance improves, but it improves so slowly that the economics no longer favor further scaling of this type of setup over doing something else.
A combination of one of the last three items and “btw., we used synthetic data and/or other higher-quality data, and it still didn’t help”.
Nothing in the realm of “pretrained models”, “reasoning models like o1”, and “agentic models like Claude with computer use” benefits from a scale-up in any reasonable sense.
Nothing that can be scaled up in the next 2-3 years, when training clusters are mostly locked in, will demonstrate a big enough success to motivate the next scale of clusters costing around $100 billion.
Be precise. See also.
Thanks for this compendium; I quite enjoyed reading it. It also motivated me to read the “Narrow Path” soon.
I have a bunch of reactions/comments/questions at several places. I focus on the places that feel most “cruxy” to me. I formulate them without much hedging to facilitate a better discussion, though I feel quite uncertain about most things I write.
On AI Extinction
The part on extinction from AI seems badly argued to me. Is it fair to say that you mainly want to convey a basic intuition, with the hope that the readers will find extinction an “obvious” result?
To be clear: I think that for literal god-like AI, as described by you, an existential catastrophe is likely if we don’t solve a very hard case of alignment. For levels below (superintelligence, AGI), I become progressively more optimistic. Some of my hope comes from believing that humanity will eventually coordinate to not scale to god-like AI unless we have enormous assurances that alignment is solved; I think this is similar to your wish, but you hope that we stop even before AGI is built.
On AI Safety
When we zoom out from the individual to groups, up to the whole of humanity, the complexity of “finding what we want” explodes: when different cultures, different religions, different countries disagree about what they want on key questions like state interventionism, immigration, or what is moral, how can we resolve these into a fixed set of values? If there is a scientific answer to this problem, we have made little progress on it.
If we cannot find, build, and reconcile values that fit with what we want, we will lose control of the future to AI systems that ardently defend a shadow of what we actually care about.
This is a topic where I’m pretty confused, but let me still try to formulate a counterposition: I think we can probably align AI systems to constitutions, which would make it unnecessary to resolve all value differences. Whenever someone uses the AI, the AI needs to act in accordance with the constitution, which already has mechanisms for resolving value conflicts.
Additionally, the constitution could have mechanisms for how to change the constitution itself, so that humanity and AI could co-evolve to better values over time.
Progress on our ability to predict the consequences of our actions requires better science in every technical field.
ELK might circumvent this issue: Just query an AI about its latent knowledge of future consequences of our actions.
Process design for alignment: [...]
This section seems quite interesting to me, but somewhat different from the technical discussions of alignment I’m used to. It seems to me that this section is about problems similar to “intent alignment” or creating valid “training stories”, except that you want to define alignment as working correctly in the whole world, instead of just for individual systems. Thus, the process design should also prevent problems like “multipolar failure” that might be overlooked by other paradigms. Is this a correct characterization?
Given that this section mainly operates at the level of analogies to politics, economics, and history, I think this section could profit from making stronger connections to AI itself.
Just as solving neuroscience would be insufficient to explain how a company works, even full interpretability of an LLM would be insufficient to explain most research efforts on the AI frontier.
That seems true, and it reminds me of deep deceptiveness, where an AI engages in deception without having any internal process that “looks like” deception.
The more powerful AI we have, the faster things will go. As AI systems improve and automate their own learning, AGI will be able to improve faster than our current research, and ASI will be able to improve faster than humanity can do science. The dynamics of intelligence growth means that it is possible for an ASI “about as smart as humanity” to move to “beyond all human scientific frontiers” on the order of weeks or months. While the change is most dramatic with more advanced systems, as soon as we have AGI we enter a world where things begin to move much quicker, forcing us to solve alignment much faster than in a pre-AGI world.
I agree that such a fast transition from AGI to superintelligence or god-like AI seems very dangerous. Thus, one either shouldn’t build AGI, or should somehow ensure that one has lots of time after AGI is built. Some possibilities for having lots of time:
Sufficient international cooperation to keep things slow.
A sufficient lead of the West over countries like China to have time for alignment.
Option 2 leads to a race against China, and even if we end up with a lead, it’s unclear whether it will be sufficient to solve the hard problems of alignment. It’s also unclear whether the West could already use AGI (pre-superintelligence) to gain a robust military advantage, and absent such an advantage, option 2 seems very unstable.
So a very cruxy question seems to be how feasible option 1 is. I think this compendium doesn’t do much to settle this debate, but I hope to learn more in the “Narrow Path”.
Thus we need to have humans validate the research. That is, even automated research runs into a bottleneck of human comprehension and supervision.
That seems correct to me. Some people in EA claim that AI Safety is not neglected anymore, but I would say that if we ever get confronted with the need to evaluate automated alignment research (possibly on a deadline), then AI Safety research might turn out to be extremely neglected.
AI Governance
The reactive framework reverses the burden of proof from how society typically regulates high-risk technologies and industries. In most areas of law, we do not wait for harm to occur before implementing safeguards.
My impression is that companies like Anthropic, DeepMind, and OpenAI talk about mechanisms that are proactive rather than reactive. E.g., responsible scaling policies define an ASL level before models at that level exist, including evaluations for these levels. Then, mitigations need to be in place once the level is reached. Thus, this framework decidedly does not want to wait until harm has occurred.
I’m curious whether you disagree with this narrow claim (that RSP-like frameworks are proactive), or whether you just want to make the broader claim that it’s unclear how RSP-like frameworks could become widespread, enforced regulation.
AI is being developed extremely quickly and by many actors, and the barrier to entry is low and quickly diminishing.
I think that the barrier to entry is not diminishing: to be at the frontier requires increasingly enormous resources.
Possibly your claim is that the barrier to entry for a given level of capabilities diminishes. I agree with that, but I’m unsure if it’s the most relevant consideration. I think for a given level of capabilities, the riskiest period is when it’s reached for the first time since humanity then won’t have experience in how to mitigate potential risks.
Paul Graham estimates training price for performance has decreased 100x in each of the last two years, or 10000x in two years.
If GPT-4’s training cost was 100 million dollars, then it could be trained and released for 10k dollars by March 2025. That seems quite cheap, so I’m not sure I believe the numbers.
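Spelling out the arithmetic behind that (taking GPT-4’s release in March 2023 and a ~$100M training cost as the assumptions): a 100x price drop per year, sustained for two years, gives

$$\$100{,}000{,}000 \times \tfrac{1}{100} \times \tfrac{1}{100} = \$10{,}000.$$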
The reactive framework incorrectly assumes that an AI “warning shot” will motivate coordination.
I have never seen this assumption explicitly expressed. Is your view that this is an implicit assumption?
Companies like Anthropic, OpenAI, etc., seem to have facilitated quite a lot of discussion with the USG even without warning shots.
But history shows that it is exactly in such moments that these thresholds are most contested – this shifting of the goalposts is known as the AI Effect and common enough to have its own Wikipedia page. Time and again, AI advancements have been explained away as routine processes, whereas “real AI” is redefined to be some mystical threshold we have not yet reached.
I would have found this paragraph convincing before ChatGPT. But now, with efforts like the USG national security memorandum, it seems like AI capabilities are being taken almost adequately seriously.
we’ve already seen competitors fight tooth and nail to keep building.
OpenAI thought that their models would be considered high-risk under the EU AI Act. I think arguing that this is inconsistent with OpenAI’s commitment to regulation would require looking at what the EU AI Act actually says. I haven’t engaged with it, but e.g. Zvi doesn’t seem to be impressed.
The AI Race
Anthropic released Claude, which they proudly (and correctly) describe as a state-of-the-art pushing model, contradicting their own Core Views on AI Safety, claiming “We generally don’t publish this kind of work because we do not wish to advance the rate of AI capabilities progress.”
The full quote in Anthropic’s article is:
“We generally don’t publish this kind of work because we do not wish to advance the rate of AI capabilities progress. In addition, we aim to be thoughtful about demonstrations of frontier capabilities (even without publication). We trained the first version of our headline model, Claude, in the spring of 2022, and decided to prioritize using it for safety research rather than public deployments. We’ve subsequently begun deploying Claude now that the gap between it and the public state of the art is smaller.”
This added context sounds quite different and seems to make clear that by “publish”, Anthropic means publishing the methods used to reach those capabilities. Additionally, I agree with Anthropic that releasing models now is less of a race-driver than it would have been in 2022, so the current decisions seem more reasonable.
These policy proposals lack a roadmap for government enforcement, making them merely hypothetical mandates. Even worse, they add provisions to allow the companies to amend their own framework as they see fit, rather than codifying a resilient system. See Anthropic’s Responsible Scaling Policy: [...]
I agree that it is bad that there is no roadmap for government enforcement. But without such enforcement, and assuming Anthropic is reasonable, I think it makes sense for them to change their RSP in response to new evidence about what works. After all, we want the version that eventually gets encoded in law to be as sensible as possible.
I think Anthropic also deserves some credit for communicating changes to the RSPs and what they have learned.
Mechanistic interpretability, which tries to reverse-engineer AIs to understand how they work, which can then be used to advance and race even faster. [...] Scalable oversight, which is another term for whack-a-mole approaches where the current issues are incrementally “fixed” by training them away. This incentivizes obscuring issues rather than resolving them. This approach instead helps Anthropic build chatbots, providing a steady revenue stream.
This doesn’t seem well argued. It’s unclear how mechanistic interpretability would be used to advance the race further (unless you mean that it leads to safety-washing, creating more government and public trust?). Also, scalable oversight is such a broad collection of strategies that I don’t think it’s fair to call them whack-a-mole approaches. E.g., I’d say many of the 11 proposals fall under this umbrella.
I’d be happy for any reactions to my comments!
Then the MATS stipend today is probably much lower than it used to be? (Which would make sense, since IIRC the stipend during MATS 3.0 was set before the FTX crash, so presumably when the funding situation was different?)
Is “CHAI” being a CHAI intern, PhD student, or something else? My MATS 3.0 stipend was clearly higher than my CHAI internship stipend.
I have a similar feeling, but there are some forces in the opposite direction:
Nvidia seems to limit how many GPUs a single competitor can acquire.
Training frontier models becomes cheaper over time. Thus, those who build competitive models some time behind the absolute frontier have to invest far fewer resources.
[Paper Blogpost] When Your AIs Deceive You: Challenges with Partial Observability in RLHF
My impression is that Dario (somewhat intentionally?) plays the game of saying things he believes to be true about the 5-10 years after AGI, conditional on AI development not continuing beyond that point.
What happens after those 5-10 years, or if AI gets even vastly smarter? That seems out of scope for the article. I assume he’s doing that since he wants to influence a specific set of people, maybe politicians, to take a radical future more seriously than they currently do. Once a radical future is more viscerally clear in a few years, we will likely see even more radical essays.
It is something I remember having been said on a podcast, but I don’t remember which one, and there is a chance that it was never said in the sense I interpreted it.
Also, quote from this post:
“DeepMind says that at large quantities of compute the scaling laws bend slightly, and the optimal behavior might be to scale data by even more than you scale model size. In which case you might need to increase compute by more than 200x before it would make sense to use a trillion parameters.”
Are the straight lines from scaling laws really bending? People are saying they are, but maybe that’s just an artefact of the fact that the cross-entropy is bounded below by the data entropy. If you subtract the data entropy, you obtain the Kullback-Leibler divergence, which is bounded below by zero, so in a log-log plot its curve can keep decreasing (toward negative infinity) instead of flattening out. I visualized this with the help of ChatGPT:
Here, f represents the Kullback-Leibler divergence, and g the cross-entropy loss with the entropy offset.
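For reference, a minimal sketch of the kind of plot I mean, with made-up numbers for the irreducible entropy and the scaling exponent (not fitted to any real model):

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative (made-up) scaling-law parameters.
H = 1.7            # irreducible data entropy, nats/token
A, alpha = 50.0, 0.1

compute = np.logspace(18, 26, 200)      # training compute (FLOP)
f = A * compute ** (-alpha)             # KL divergence = loss minus entropy
g = H + f                               # cross-entropy loss

plt.loglog(compute, f, label="f: KL divergence (loss minus entropy)")
plt.loglog(compute, g, label="g: cross-entropy loss")
plt.xlabel("training compute (FLOP)")
plt.ylabel("loss (nats/token)")
plt.legend()
plt.show()
```

In log-log coordinates, f stays a straight line with slope -alpha, while g flattens out toward the entropy H, which can look like the scaling law “bending”.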
Agreed.
To understand your usage of the term “outer alignment” a bit better: often, people have a decomposition in mind where solving outer alignment means technically specifying the reward signal/model or something similar. It seems that to you, the writeup of a model-spec or constitution also counts as outer alignment, which to me seems like only part of the problem. (Unless perhaps you mean that model specs and constitutions should be extended to include a whole training setup or similar?)
If it doesn’t seem too off-topic to you, could you comment on your views on this terminology?
“California Gov. Gavin Newsom has vetoed a controversial artificial-intelligence safety bill that pitted some of the biggest tech companies against prominent scientists who developed the technology.
The Democrat decided to reject the measure because it applies only to the biggest and most expensive AI models and leaves others unregulated, according to a person with knowledge of his thinking”
New Bloomberg article on data center buildouts pitched to the US government by OpenAI. Quotes:
- “the startup shared a document with government officials outlining the economic and national security benefits of building 5-gigawatt data centers in various US states, based on an analysis the company engaged with outside experts on. To put that in context, 5 gigawatts is roughly the equivalent of five nuclear reactors, or enough to power almost 3 million homes.”
- “Joe Dominguez, CEO of Constellation Energy Corp., said he has heard that Altman is talking about building 5 to 7 data centers that are each 5 gigawatts. “
- “John Ketchum, CEO of NextEra Energy Inc., said the clean-energy giant had received requests from some tech companies to find sites that can support 5 GW of demand, without naming any specific firms.”
Compare with the prediction by Leopold Aschenbrenner in Situational Awareness:
- “The trillion-dollar cluster—+4 OOMs from the GPT-4 cluster, the ~2030 training cluster on the current trend—will be a truly extraordinary effort. The 100GW of power it’ll require is equivalent to >20% of US electricity production”
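For a rough comparison (using Dominguez’s numbers above, and treating them as a single build-out): 5 to 7 data centers at 5 GW each would be 25-35 GW in total, i.e. on the order of a quarter to a third of the 100 GW single cluster Aschenbrenner projects for ~2030.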
OpenAI would have mentioned it if they had reached gold on the IMO.
Somewhat pedantic correction: they don’t say “one should update”. They say they update (plus some caveats).