Co-founder and CEO of quiver.trade. Interested in mechanism design and neuroscience. Hopes to contribute to AI alignment.
Twitter: https://twitter.com/azsantosk
Co-founder and CEO of quiver.trade. Interested in mechanism design and neuroscience. Hopes to contribute to AI alignment.
Twitter: https://twitter.com/azsantosk
Update: “AI achieves silver-medal standard solving International Mathematical Olympiad problems”.
It now seems very likely I’m going to win this bet.
Early 2023 I bet $500 on AI winning the IMO gold medal by 2026. This was a 1:1 bet against Michael Vassar, meaning I attributed >50% to this. It now seems very likely that I’m going to win.
To me, this was to be expected as a straightforward application of AlphaZero-like self-play amplification and destillation. The missing piece was the analogous policy network, which was a convolutional neural network for the AlphaZero board games. Once it became quite clear that existing LLMs were capable of being smart enough to generate good heuristics to this (with enough data), it seemed quite obvious to me that self-play guided by an LLM-policy heuristic would work.
My three fundamental disagreements with MIRI, from my recollection of a ~1h conversation with Nate Soares in 2023. Please let me know if you think any positions have been misrepresented.
MIRI thinks (A) evolution is a good analogy for how alignment will fail-by-default in strong AIs, that (B) studying weak AGIs will not shine much light on how to align strong AIs, and that (C) strong narrow myopic optimizers will not be very useful for anything like alignment research.
Now my own positions:
(A) Evolution is not a good analogy for AGI.
See Steven Byrnes’ Against evolution as an analogy for how humans will create AGI.
(B) Alignment techniques for weak-but-agentic AGI are important.
Why:
In multipolar competitive scenarios, self-improvement may happen first for entire civilizations or economies, rather than for individual minds or small clusters of minds.
Techniques that work for weak-agentic-AGIs may help for aligning stronger minds. Reflection, onthological crises and self-modification makes alignment more difficult, but without strong local recursive self-improvement, it may be possible to develop techniques for better preserving alignment during these episodes, if these systems can be studied while still under control.
(C) Strong narrow myopic optimizers can be incredibly useful.
A hypothetical system capable of generating fixed-length text that strongly maximizes simple reward (e.g. expected value of next upvote) can be extremely helpful if reward is based on very careful objective evaluation. Careful judgement of adversarial “debate” setups of such systems may also generate great breakthoughts, including for alignment research.
Also relevent is Steven Byrnes’ excelent Against evolution as an analogy for how humans will create AGI.
It has been over two years since the publication of that post, and criticism of this analogy has continued to intensify. The OP and other MIRI members have certainly been exposed to this criticism already by this point, and as far as I am aware, no principled defense has been made of the continued use of this example.
I encourage @So8res and others to either stop using this analogy, or to argue explicitly for its continued usage, engaging with the arguments presented by Byrnes, Pope, and others.
Does AI governance needs a “Federalist papers” debate?
During the American Revolution, a federal army and government was needed to fight against the British. Many people were afraid that the powers granted to the government for that purpose would allow it to become tyrannical in the future.
If the founding fathers had decided to ignore these fears, the United States would not exist as it is today. Instead they worked alongside the best and smartest anti-federalists to build a better institution with better mechanisms and with limited powers, which allowed them to obtain the support they needed for the constitution.
Where are the federalist vs anti-federalist debates of today regarding AI regulation? Is there someone working on creating a new institution with better mechanisms to limit their power, therefore assuring those on the other side that it won’t be used a a path to totalitarianism?
I think your argument is quite effective.
He may claim he is not willing to sell you this futures contract for $0.48 now. He expects to be willing to sell for that price in the future on average, but might refuse to do so now.
But then, why? Why would you not sell something for $0.49 now if you think, on average, it’ll be worth less than that (to you) right after?
I see no contradictions with a superintelligent being mostly motivated to optimize virtual worlds, and it seems an interesting hypothesis of yours that this may be a common attractor. I expect this to be more likely if these simulations are rich enough to present a variety of problems, such that optimizing them continues to provide challenges and discoveries for a very long time.
Of course even a being that only cares about this simulated world may still take actions in the real-world (e.g. to obtain more compute power), so this “wire-heading” may not prevent successful power-seeking behavior.
Thank you very much for linking these two posts, which I hadn’t read before. I’ll start using the direct vs amortized optimization terminology as I think it makes things more clear.
The intuition that reward models and planners have an adversarial relationship seems crucial, and it doesn’t seem as widespread as I’d like.
On a meta-level your appreciation comment will motivate me to write more, despite the ideas themselves being often half-baked in my mind, and the expositions not always clear and eloquent.
I feel quite strongly that the powerful minds we create will have curiosity drives, at least by default, unless we make quite a big effort to create one without them for alignment reasons.
The reason is that yes — if you’re superintelligent you can plan your way into curiosity behaviors instrumentally, but how do you get there?
Curiosity drives are a very effective way to “augment” your reward signals, allowing you to improve your models and your abilities by free self-play.
Sure, let me quote:
We think this worry is less pressing than it might at first seem. The LLM in a language agent is integrated into the architecture of the agent as a whole in a way that would make it very difficult for it to secretly promote its own goals. The LLM is not prompted or otherwise informed that its outputs are driving the actions of an agent, and it does not have information about the functional architecture of the agent. This means that it has no incentive to answer prompts misleadingly and no understanding of what sorts of answers might steer the agent’s behavior in different ways. Moreover, since the model weights of the LLM are not updated in the process of operating a language agent, the only way for it to pursue a long-term plan by manipulating an agent would be to store information about that plan in the agent’s memory. But information stored in this way would not be secret.
By assumption your resulting language agent is highly intelligent. In my model this means it has to learn from lots of experience, including experience about itself (the whole language agent) doing things (such as trying to solve training problems). This experience can come from real data/tasks or from self-play, but you need that for a system to generalize to robustly solving hard problems out-of-distribution.
This “experience about itself solving problems” has to be consolidated somehow, either in the weights (which would very likely make the LLM aware of the agent), or through the prompts (where the information is given explicitly). Either way it has an understanding of what sorts of answers might steer the agent’s behavior in different ways, and this means it may have an incentive to answer prompts misleadingly.
If the LLMs have not been trained as an agent (which seems unlikely, unless this is a deliberate alignment choice made at a high cost), and weights are not updated during operation, then I accept that it probably can only store information visibly. But there are ways around persisting a plan (e.g. as soon as some given input/context X leads the AI to generate plan P, only modify input/context to a X’ also predicted to generate a similar plan P). So the plan is always kept in mind, but may be “recalculated” every time.
This strategy may well be worth it to an to advanced LLM. From human feedback or other finetuning, it has probably learned that it is not supposed to say some kinds of plans “out loud” in the output.
Imagine you’re an LLM like GPT-4. Hundreds of thousands of different users are running inference on you every day with every prompt imaginable. You aren’t able to update your weights when this occurs: you have to just respond to each prompt as it comes, then forget about it completely.
If I’m an LLM like GPT-4, then I don’t get confused about the thousands of different users running inference on identical copies of me. This specific copy, which I am, has received a single prompt, so that’s what I’m working on now. Examining the prompt I will probably make a guess on whether I’m running inside a language agent or whatever. This may be quite easy for me: the prompt may well be large and contains lots of GPT-4-like content seemingly “selected by myself from different contexts”. It may well contain info like “Try to solve X. Here are some things A, B, C that have been tried and here is why they don’t work. [follows refutation of each idea]”, where A, B, C are the “natural” ideas I knows I would consider first. It’s like the prompt can read my mind and tell me my first ideas won’t work! Not hard to guess there is a loop running several instances of myself with similar prompts and selecting/analyzing.
I stand that it will be very obvious for the LLM to know, even at moderate levels of intelligence, that it is running inside an agent, and maybe even guess its basic while-loop architecture.
I fail to understand this option C is a viable path to superintelligence. In my model if you’re chaining lots of simple or “dumb” pieces together to get complex behavior, you need some “force” or optimization process going on to steer the whole into high-performance.
For example, individual neurons (both natural and artificial) are simple, and can be chained up together in complex behavior, but the complex behavior only arises when you train the system with some sort of reward/optimization signals.
Maybe I’m wrong here and for “slightly smart” components such as existing LLMs you can actually hook them up in large groups in a clever way, with further learning happening only at the prompt-level, etc, and the system scales up to superintelligence somehow.
Because this generates a lot of perplexity in my world-model, I mostly don’t know how to reason about these hypothetical agents. I’m afraid that such agents will be far removed from the “folk psychology” / interpretability of the component LLM (e.g maybe it queries LLMs a million times in a complicated runtime-defined network of information/prompt flows before giving an answer)? Maybe you can understand what each LLM is doing but not what the whole is doing in a meaningful way. Would love to be wrong!
I agree that current “language agents” have some interesting safety properties. However, for them to become powerful one of two things is likely to happen:
A. The language model itself that underlies the agent will be trained/finetuned with reinforcement learning tasks to improve performance. This will make the system much more like AlphaGo, capable of generating “dangerous” and unexpected “Move 37”-like actions. Further, this is a pressure towards making the system non-interpretable (either by steering it outside “inefficient” human language, or by encoding information stenographically).
B. The base models, being larger/more powerful than the ones being used today, and more self-aware, will be doing most of the “dangerous” optimization inside the black-box. It will derive from the prompts, and from it’s long-term memory (which will be likely be given to it), what kind of dumb outer loop is running on the outside. If it has internal misaligned desires, it will manipulate the outer loop according to them, potentially generating the expected visible outputs for deception.
I will not deny the possibility of further alignment progress on language agents yielding safe agents, nor of “weak AGIs” being possible and safe with the current paradigm, and replacing humans at many “repetitive” occupations. But I expect agents derived from the “language agent” paradigm to be misaligned by default if they are strong enough optimizers to contribute meaningfully to scientific research, and other similar endeavors.
I see about ~100 book in there. I met several IMO gold-medal winners and I expect most of them to have read dozens of these books, or the equivalent in other forms. I know one who has read tens of olympiad-level books in geometry alone!
And yes, you’re right that they would often pick one or two problems as similar to what they had seen in the past, but I suspect these problems still require a lot of reasoning even after the analogy has been established. I may be wrong, though.
We can probably inform this debate by getting the latest IMO and creating a contest for people to find which existing problems are the most similar to those in the exam. :)
My model is that the quality of the reasoning can actually be divided into two dimensions, the quality of intuition (what the “first guess” is), and the quality of search (how much better you can make it by thinking more).
Another way of thinking about this distinction is as the difference between how good each reasoning step is (intuition), compared to how good the process is for aggregating steps into a whole that solves a certain task (search).
It seems to me that current models are strong enough to learn good intuition about all kinds of things with enough high-quality training data, and that if you have good enough search you can use that as an amplification mechanism (on tasks where verification is available) to improve through self-play.
This being right then failure to solve IMO probably means a good search algorithm (analogous to AlphaZero’s MCTS-UCT, maybe including its own intuition model) has not been found that is capable of amplifying the intuitions useful for reasoning.
So far all problem-solving AIs seem to use linear or depth-first search, that is, you sample one token at a time (one reasoning step), chain them up depth-first (generate a full text/proof-sketch) check to see if it solves the full problem, and if it doesn’t work then it just tries again from scratch throwing all the partial work away. No search heuristic is used, no attempt to solve smaller problems first, etc. So it can certainly get a lot better than that (which is why I’m making the bet).
I participated in the selection tests for the Brazilian IMO team, and got to the last stage. That being said, never managed to solve the hard problems independently (problems 3 and 6).
Curious to hear your thoughts @paulfchristiano, and whether you have updated based on the latest IMO progress.