Co-founder @ Gladstone AI.
Contact: edouard@gladstone.ai
Website: eharr.is
This is really interesting. It’s hard to speak too definitively about theories of human values, but for what it’s worth these ideas do pass my intuitive smell test.
One intriguing aspect is that, assuming I’ve followed correctly, this theory aims to unify different cognitive concepts in a way that might be testable:
On the one hand, it seems to suggest a path to generalizing circuits-type work to the model-based RL paradigm. (With shards, which bid for outcomes on a contextually activated basis, being analogous to circuits, which contribute to prediction probabilities on a contextually activated basis.)
On the other hand, it also seems to generalize the psychological concept of classical conditioning (Pavlov’s salivating dog, etc.), which has tended to be studied over the short term for practical reasons, to arbitrarily (?) longer planning horizons. The discussion of learning in babies also puts one in mind of the unfortunate Little Albert Experiment, done in the 1920s:
For the experiment proper, by which point Albert was 11 months old, he was put on a mattress on a table in the middle of a room. A white laboratory rat was placed near Albert and he was allowed to play with it. At this point, Watson and Rayner made a loud sound behind Albert’s back by striking a suspended steel bar with a hammer each time the baby touched the rat. Albert responded to the noise by crying and showing fear. After several such pairings of the two stimuli, Albert was presented with only the rat. Upon seeing the rat, Albert became very distressed, crying and crawling away.
[...]
In further experiments, Little Albert seemed to generalize his response to the white rat. He became distressed at the sight of several other furry objects, such as a rabbit, a furry dog, and a seal-skin coat, and even a Santa Claus mask with white cotton balls in the beard.
A couple more random thoughts on stories one could tell through the lens of shard theory:
As we age, if all goes well, we develop shards with longer planning horizons. Planning over longer horizons requires more cognitive capacity (all else equal), and long-horizon shards do seem to have some ability to either reinforce or dampen the influence of shorter-horizon shards. This is part of the continuing process of “internally aligning” a human mind.
Introspectively, I think there is also an energy cost involved in switching between “active” shards. Software developers understand this as context-switching, actively dislike it, and evolve strategies to minimize it in their daily work. I suspect a lot of the biases you might categorize under “resistance to change” (projection bias, sunk cost fallacy and so on) have this as a factor.
I do have a question about your claim that shards are not full subagents. I understand that in general different shards will share parameters over their world-model, so in that sense they aren’t fully distinct — is this all you mean? Or are you arguing that even a very complicated shard with a long planning horizon (e.g., “earn money in the stock market” or some such) isn’t agentic by some definition?
Anyway, great post. Looking forward to more.
Nice. Congrats on the launch! This is an extremely necessary line of effort.
Interesting. The specific idea you’re proposing here may or may not be workable, but it’s an intriguing example of a more general strategy that I’ve previously tried to articulate in another context. The idea is that it may be viable to use an AI to create a “platform” that accelerates human progress in an area of interest to existential safety, as opposed to using an AI to directly solve the problem or perform the action.
Essentially:
1. A “platform” for work in domain X is something that removes key constraints that would otherwise have consumed human time and effort when working in X. This allows humans to explore solutions in X they wouldn’t have previously — whether because they’d considered and rejected those solution paths, or because they’d subconsciously trained themselves not to look in places where the initial effort barrier was too high. Thus, developing an excellent platform for X allows humans to accelerate progress in domain X relative to other domains, ceteris paribus. (Every successful platform company does this. e.g., Shopify, Amazon, etc., make valuable businesses possible that wouldn’t otherwise exist.)
2. For certain carefully selected domains X, a platform for X may plausibly be relatively easier to secure & validate than an agent that’s targeted at some specific task x ∈ X would be. (Not easy; easier.) It’s less risky to validate the outputs of a platform and leave the really dangerous last-mile stuff to humans, than it would be to give an end-to-end trained AI agent a pivotal command in the real world (e.g., “melt all GPUs”) that necessarily takes the whole system far outside its training distribution. Fundamentally, the bet is that if humans are the ones doing the out-of-distribution part of the work, then the output that comes out the other end is less likely to have been adversarially selected against us.
(Note that platforms are tools, and tools want to be agents, so a strategy like this is unlikely to arise along the “natural” path of capabilities progress other than transiently.)
There are some obvious problems with this strategy. One is that point 1 above is no help if you can’t tell which of the solutions the humans come up with are good, and which are bad. So the approach can only work on problems that humans would otherwise have been smart enough to solve eventually, given enough time to do so (as you already pointed out in your example). If AI alignment is such a problem, then it could be a viable candidate for such an approach. Ditto for a pivotal act.
Another obvious problem is that capabilities research might benefit from the same kinds of platforms that alignment research would. So actually implementing this in the real world might just accelerate the timeline for everything, leaving us worse off. (Absent an intervention at some higher level of coordination.)
A third concern is that point 2 above could be flat-out wrong in practice. Asking an AI to build a platform means asking for generalization, even if it is just “generalization within X”, and that’s playing a lethally dangerous game. In fact, it might well be lethal for any useful X, though that isn’t currently obvious to me. e.g., AlphaFold2 is a primitive example of a platform that’s useful and non-dangerous, though it’s not useful enough for this.
On top of all that, there are all the steganographic considerations — AI embedding dangerous things in the tool itself, etc. — that you pointed out in your example.
But this strategy still seems like it could bring us closer to the Pareto frontier for critical domains (the alignment problem, a pivotal act) than directly training an AI to do the dangerous action would.
Yep, I’d say I intuitively agree with all of that, though I’d add that if you want to specify the set of “outcomes” differently from the set of “goals”, then that must mean you’re implicitly defining a mapping from outcomes to goals. One analogy could be that an outcome is like a thermodynamic microstate (in the sense that it’s a complete description of all the features of the universe) while a goal is like a thermodynamic macrostate (in the sense that it’s a complete description of the features of the universe that the system can perceive).
This mapping from outcomes to goals won’t be injective for any real embedded system. But in the unrealistic limit where your system is so capable that it has a “perfect ontology” — i.e., its perception apparatus can resolve every outcome / microstate from any other — then this mapping converges to the identity function, and the system’s set of possible goals converges to its set of possible outcomes. (This is the dualistic case, e.g., AIXI and such. But plausibly, we should also expect a self-improving system to improve its own perception apparatus such that its effective goal-set becomes finer and finer with each improvement cycle. So even this partition over goals can’t be treated as constant in the general case.)
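A toy sketch of the mapping I have in mind (the setup and numbers here are purely illustrative, not anything from your comment):

```python
# Toy illustration: outcomes ("microstates") get coarse-grained into goals
# ("macrostates") by the system's perception apparatus.

from itertools import product

# Microstates: complete descriptions of a tiny universe with 3 binary features.
microstates = list(product([0, 1], repeat=3))

def perceive(microstate, resolution):
    """Coarse-grain a microstate down to the features the system can resolve."""
    return microstate[:resolution]

# With coarse perception, many microstates collapse to the same macrostate,
# so the outcomes-to-goals mapping is not injective.
coarse = {m: perceive(m, resolution=1) for m in microstates}
assert len(set(coarse.values())) < len(microstates)

# In the "perfect ontology" limit, the mapping becomes the identity and the
# goal-set coincides with the outcome-set.
perfect = {m: perceive(m, resolution=3) for m in microstates}
assert len(set(perfect.values())) == len(microstates)
```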
Gotcha. I definitely agree with what you’re saying about the effectiveness of incentive structures. And to be clear, I also agree that some of the affordances in the quote reasonably fall under “alignment”: e.g., if you explicitly set a specific mission statement, that’s a good tactic for aligning your organization around that specific mission statement.
But some of the other affordances aren’t as clearly goal-dependent. For example, iterating quickly is an instrumentally effective strategy across a pretty broad set of goals a company might have. That (in my view) makes it closer to a capability technique than to an alignment technique. i.e., you could imagine a scenario where I succeeded in building a company that iterated quickly, but I failed to also align it around the mission statement I wanted it to have. In this scenario, my company was capable, but it wasn’t aligned with the goal I wanted.
Of course, this is a spectrum. Even setting a specific mission statement is an instrumentally effective strategy across all the goals that are plausible interpretations of that mission statement. And most real mission statements don’t admit a unique interpretation. So you could also argue that setting a mission statement increases the company’s capability to accomplish goals that are consistent with any interpretation of it. But as a heuristic, I tend to think of a capability as something that lowers the cost to the system of accomplishing any goal (averaged across the system’s goal-space with a reasonable prior). Whereas I tend to think of alignment as something that increases the relative cost to the system of accomplishing classes of goals that the operator doesn’t want.
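If I had to put rough notation on that heuristic (my own shorthand, nothing rigorous):

```latex
% c(g): the system's cost of accomplishing goal g
% p(g): a reasonable prior over the system's goal-space G
% B \subset G: the classes of goals the operator doesn't want
\[
\text{capability gain:}\quad
  \mathbb{E}_{g \sim p}\!\left[\Delta\bigl(-c(g)\bigr)\right] > 0
\qquad
\text{alignment gain:}\quad
  \Delta\!\left(\mathbb{E}_{g \sim p(\cdot \mid g \in B)}\bigl[c(g)\bigr]
    - \mathbb{E}_{g \sim p(\cdot \mid g \notin B)}\bigl[c(g)\bigr]\right) > 0
\]
```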
I’d be interested to hear whether you have a different mental model of the difference, and if so, what it is. It’s definitely possible I’ve missed something here, since I’m really just describing an intuition.
Thanks, great post.
These include formulating and repeating a clear mission statement, setting up a system for promotions that rewards well-calibrated risk taking, and iterating quickly at the beginning of the company in order to habituate a rhythm of quick iteration cycles.
I may be misunderstanding, but wouldn’t these techniques fall more under the heading of capabilities rather than under alignment? These are tactics that should increase a company’s effectiveness in general, for most reasonable mission statements or products the company could have.
This is fantastic. Really appreciate both the detailed deep-dive in the document, and the summary here. This is also timely, given that teams working on superscale models with concerning capabilities haven’t generally been too forthcoming with compute estimates. (There are exceptions.)
As you and Alex point out in the sibling thread, the biggest remaining fudge factors seem to be:
Mixture models (or any kind of parameter-sharing, really) for the first method, which will cause you to systematically overestimate the “Operations per forward pass” factor; and
Variable effective utilization rates of custom hardware for the second method, which will cause an unknown distribution of errors in the “utilization rate” factor. (I’ve sketched my rough mental model of both methods below.)
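For concreteness, here is roughly how I think of the two methods (all constants and example numbers are illustrative only; the correction factors are exactly where these fudge factors bite):

```python
# Rough sketch of the two compute-estimation methods as I understand them.
# All constants and example numbers are illustrative, not from the document.

def compute_from_architecture(n_params, n_training_tokens):
    """Method 1: count operations per forward pass from the architecture.
    For a dense transformer, forward + backward is roughly 6 FLOPs per
    parameter per token; parameter-sharing (e.g. mixture models) breaks this
    assumption and pushes the estimate too high."""
    return 6 * n_params * n_training_tokens

def compute_from_hardware(n_chips, peak_flops_per_chip, utilization, seconds):
    """Method 2: multiply hardware peak throughput by an assumed utilization
    rate and the wall-clock training time. The effective utilization rate is
    the big unknown, especially for custom hardware."""
    return n_chips * peak_flops_per_chip * utilization * seconds

# Example: a GPT-3-scale run (~175B params, ~300B tokens), via both methods.
print(f"{compute_from_architecture(175e9, 300e9):.2e} FLOPs")                  # ~3.2e23
print(f"{compute_from_hardware(10_000, 100e12, 0.3, 15 * 86_400):.2e} FLOPs")  # ~3.9e23
```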
Nonetheless, my rough guess would be that your method is pretty much guaranteed to be right to within an OOM, and probably within a factor of 2 or less. That seems pretty good! It’s certainly an improvement over anything I’ve seen previously along these lines. Congrats!
It’s simply because we each (myself more than her) have an inclination to apply a fair amount of adjustment in a conservative direction, for generic “burden of proof” reasons, rather than go with the timelines that seem most reasonable based on the report in a vacuum.
While one can sympathize with the view that the burden of proof ought to lie with advocates of shorter timelines when it comes to the pure inference problem (“When will AGI occur?”), it’s worth observing that in the decision problem (“What should we do about it?”) the situation is reversed. The burden of proof in the decision problem probably ought instead to lie with advocates of non-action: when one’s timelines are >1 generation, it is a bit too easy to kick the can down the road in various ways — leaving one unprepared if the future turns out to move faster than expected. Conversely, someone whose timelines are relatively short may take actions today that leave us in a better position in the future, even if that future arrives more slowly than they originally believed.
(I don’t think OpenPhil is confusing these two, just that in a conversation like this it is particularly worth emphasizing the difference.)
This is an excellent point and it’s indeed one of the fundamental limitations of a public tracking approach. Extrapolating trends in an information environment like this can quickly degenerate into pure fantasy. All one can really be sure of is that the public numbers are merely lower bounds — and plausibly, very weak ones.
Yeah, great point about Gopher, we noticed the same thing and included a note to that effect in Gopher’s entry in the tracker.
I agree there’s reason to believe this sort of delay could become a bigger factor in the future, and may already be a factor now. If we see this pattern develop further (and if folks start publishing “model cards” more consistently like DM did, which gave us the date of Gopher’s training) we probably will begin to include training date as separate from publication date. But for now, it’s a possible trend to keep an eye on.
Thanks again!
A more typical example: I can look at a chain of options on a stock, and use the prices of those options to back out market-implied probabilities for each possible stock price at expiry.
Gotcha, this is a great example. And the fundamental reasons why this works are 1) the immediate incentive that you can earn higher returns by pricing the option more correctly; combined with 2) the fact that the agents who are assigning these prices have (on a dollar-weighted-average basis) gone through multiple rounds of selection for higher returns.
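(For anyone reading along who hasn’t seen the mechanics, here is a minimal sketch of one standard way to do the backing-out step, a discrete version of the Breeden-Litzenberger relation. The option prices are made up, and discounting is ignored.)

```python
# Minimal sketch: back out approximate market-implied probabilities from a
# chain of call prices using butterfly spreads (a discrete second derivative
# of call price with respect to strike). Prices are made up; discounting and
# bid-ask spreads are ignored for simplicity.

strikes = [90, 95, 100, 105, 110]      # strike prices
calls = [11.2, 7.1, 4.0, 2.0, 0.9]     # observed call prices at each strike

dK = strikes[1] - strikes[0]
implied_probs = {}
for i in range(1, len(strikes) - 1):
    # The butterfly spread's price approximates the probability mass in a
    # bin of width dK centered at strikes[i].
    butterfly = calls[i - 1] - 2 * calls[i] + calls[i + 1]
    implied_probs[strikes[i]] = round(butterfly / dK, 3)

print(implied_probs)  # {95: 0.2, 100: 0.22, 105: 0.18}
```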
(I wonder to what extent any selection mechanism ultimately yields agents with general reasoning capabilities, given tight enough competition between individuals in the selected population? Even if the environment doesn’t start out especially complicated, if the individuals are embedded in it and are interacting with one another, after a few rounds of selection most of the complexity an individual perceives is going to be due to its competitors. Not everything is like this — e.g., training a neural net is a form of selection without competition — but it certainly seems to describe many of the more interesting bits of the world.)
Thanks for the clarifications here btw — this has really piqued my interest in selection theorems as a research angle.
Okay, then to make sure I’ve understood correctly: what you were saying in the quoted text is that you’ll often see an economist, etc., use coherence theorems informally to justify a particular utility maximization model for some system, with particular priors and conditionals. (As opposed to using coherence theorems to justify the idea of EU models generally, which is what I’d thought you meant.) And this is a problem because the particular priors and conditionals they pick can’t be justified solely by the coherence theorem(s) they cite.
The problem with VNM-style lotteries is that the probabilities involved have to come from somewhere besides the coherence theorems themselves. We need to have some other, external reason to think it’s useful to model the environment using these probabilities.
To try to give an example of this: suppose I wanted to use coherence / consistency conditions alone to assign priors over the outcomes of a VNM lottery. Maybe the closest I could come to doing this would be to use maxent + transformation groups to assign an ignorance prior over those outcomes; and to do that, I’d need to additionally know the symmetries that are implied by my ignorance of those outcomes. But those symmetries are specific to the structure of my problem and are not contained in the coherence theorems themselves. So this information about symmetries would be what you would refer to as an “external reason to think it’s useful to model the environment using these probabilities”.
Is this a correct interpretation?
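For concreteness, the maxent step I have in mind is the standard one: with no constraint beyond normalization, and permutation symmetry over the N outcomes as the relevant transformation group, you land on the uniform prior.

```latex
% Maxent over N outcomes with only the normalization constraint:
\[
\begin{aligned}
  &\max_{p}\; H(p) = -\sum_{i=1}^{N} p_i \log p_i
    \qquad \text{s.t.} \qquad \sum_{i=1}^{N} p_i = 1, \\
  &\mathcal{L} = -\sum_i p_i \log p_i + \lambda\Bigl(\sum_i p_i - 1\Bigr)
    \;\Longrightarrow\; -\log p_i - 1 + \lambda = 0
    \;\Longrightarrow\; p_i = e^{\lambda - 1} = \tfrac{1}{N}.
\end{aligned}
\]
% The transformation-group argument gets there directly: requiring
% p_{\sigma(i)} = p_i for every permutation \sigma forces the same prior.
```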
Thanks so much for the feedback!
The ability to sort by model size etc would be nice. Currently sorting is alphabetical.
Right now the default sort is actually chronological by publication date. I just added the ability to sort by model size and compute budget at your suggestion. You can use the “⇅ Sort” button in the Models tab to try it out; the rows should now sort correctly.
Also the rows with long textual information should be more to the right and the more informative/tighter/numerical columns more to the left (like “deep learning” in almost all rows, not very informative). Ideally the most relevant information would be on the initial page without scrolling.
You are absolutely right! I’ve just taken a shot at rearranging the columns to surface the most relevant parts up front and played around a bit with the sizing. Let me know what you think.
“Date published” and “date trained” can be quite different. Maybe worth including the latter?
That’s true, though I’ve found the date at which a model was trained usually isn’t disclosed as part of a publication (unlike parameter count and, to a lesser extent, compute cost). There is also generally an incentive to publish fairly soon after the model’s been trained and characterized, so you can often rely on the model not being that stale, though that isn’t universal.
Is there a particular reason you’d be interested in seeing training dates as opposed to (or in addition to) publication dates?
Thanks again!
I’m surprised by just how much of a blindspot goal-inputs seem to be for today’s economists, AI researchers, etc. The coherence theorems usually cited to justify expected utility maximization models imply a quite narrow range of inputs to those utility functions: utilities are only over the outcomes on which agents can bet. Yet practitioners use utility functions over entire (unobservable) world states, world state trajectories, MDP states, etc, often without any way for the agent to bet on all of the outcomes.
It’s true that most of the agents we build can’t directly bet on all the outcomes in their respective world-models. But these agents would still be modelled by the coherence theorems (+ VNM) as betting on lotteries over such outcomes. This seems like a fine way to justify EU maximization when you’re unable to bet on every “microstate” of the world — so in what sense did you mean that this was a blind spot?
EDIT: Unless you were alluding to the fact that real-world agents’ utility functions are often defined over “wrong” ontologies, such that you couldn’t actually construct a lottery over real-world microstates that’s an exact fit for the bet the agent wants to make. Is that what you meant?
(FWIW, I agree with your overall point in this section. I’m just trying to better understand your meaning here.)
Personally speaking, I think this is the subfield to be closely tracking progress in, because 1) it has far-reaching implications in the long term and 2) it has garnered relatively little attention compared to other subfields.
Thanks for the clarification — definitely agree with this.
If you’d like to visualize trends though, you’ll need more historical data points, I think.
Yeah, you’re right. Our thinking was that we’d be able to do this with future data points or by increasing the “density” of points within the post-GPT-3 era, but ultimately it will probably be necessary (and more compelling) to include somewhat older examples too.
Interesting; I hadn’t heard of DreamerV2. From a quick look at the paper, it looks like one might describe it as a step on the way to something like EfficientZero. Does that sound roughly correct?
it would be great to see older models incorporated as well
We may extend this to older models in the future. But our goal right now is to focus on these models’ public safety risks as standalone (or nearly standalone) systems. And prior to GPT-3, it’s hard to find models whose public safety risks were meaningful on a standalone basis — while an earlier model could have been used as part of a malicious act, for example, it wouldn’t be as central to such an act as a modern model would be.
Yeah, these are interesting points.
Isn’t it a bit suspicious that the thing-that’s-discontinuous is hard to measure, but the-thing-that’s-continuous isn’t? I mean, this isn’t totally suspicious, because subjective experiences are often hard to pin down and explain using numbers and statistics. I can understand that, but the suspicion is still there.
I sympathize with this view, and I agree there is some element of truth to it that may point to a fundamental gap in our understanding (or at least in mine). But I’m not sure I entirely agree that discontinuous capabilities are necessarily hard to measure: for example, there are benchmarks available for things like arithmetic, which one can train on and make quantitative statements about.
I think the key to the discontinuity question is rather that 1) it’s the jumps in model scaling that are happening in discrete increments; and 2) everything is S-curves, and a discontinuity always has a linear regime if you zoom in enough. Those two things together mean that, while a capability like arithmetic might have a continuous performance regime on some domain, in reality you can find yourself halfway up the performance curve in a single scaling jump (and this is in fact what happened with arithmetic and GPT-3). So the risk, as I understand it, is that you end up surprisingly far up the scale of “world-ending” capability from one generation to the next, with no detectable warning shot beforehand.
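(To illustrate point 2 with a toy example; the curve and the numbers below are made up:)

```python
import math

def capability(log10_compute, midpoint=24.0, steepness=3.0):
    """Toy S-curve: task performance as a function of log10(training compute).
    The midpoint and steepness are made-up numbers for illustration."""
    return 1 / (1 + math.exp(-steepness * (log10_compute - midpoint)))

# If scaling happens in discrete jumps of ~1 OOM per generation, a single jump
# can take you from "basically can't do the task" to halfway up the curve.
for log10_compute in [22, 23, 24, 25]:
    print(log10_compute, round(capability(log10_compute), 3))
# 22 0.002
# 23 0.047
# 24 0.5
# 25 0.953
```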
“No one predicted X in advance” is only damning to a theory if people who believed that theory were making predictions about it at all. If people who generally align with Paul Christiano were indeed making predictions to the effect of GPT-3 capabilities being impossible or very unlikely within a narrow future time window, then I agree that would be damning to Paul’s worldview. But—and maybe I missed something—I didn’t see that. Did you?
No, you’re right as far as I know; at least I’m not aware of any such attempted predictions. And in fact, the very absence of such prediction attempts is interesting in itself. One would imagine that correctly predicting the capabilities of an AI from its scale ought to be a phenomenally valuable skill — not just from a safety standpoint, but from an economic one too. So why, indeed, didn’t we see people make such predictions, or at least try to?
There could be several reasons. For example, perhaps Paul (and other folks who subscribe to the “continuum” world-model) could have done it, but they were unaware of the enormous value of their predictive abilities. That seems implausible, so let’s assume they knew the value of such predictions would be huge. But if you know the value of doing something is huge, why aren’t you doing it? Well, if you’re rational, there’s only one reason: you aren’t doing it because it’s too hard, or otherwise too expensive compared to your alternatives. So we are forced to conclude that this world-model — by its own implied self-assessment — has, so far, proved inadequate to generate predictions about the kinds of capabilities we really care about.
(Note: you could make the argument that OpenAI did make such a prediction, in the approximate yet very strong sense that they bet big on a meaningful increase in aggregate capabilities from scale, and won. You could also make the argument that Paul, having been at OpenAI during the critical period, deserves some credit for that decision. I’m not aware of Paul ever making this argument, but if made, it would be a point in favor of such a view and against my argument above.)
I think what gwern is trying to say is that continuous progress on a benchmark like PTB appears (from what we’ve seen so far) to map to discontinuous progress in qualitative capabilities, in a surprising way which nobody seems to have predicted in advance. Qualitative capabilities are more relevant to safety than benchmark performance is, because while qualitative capabilities include things like “code a simple video game” and “summarize movies with emojis”, they also include things like “break out of confinement and kill everyone”. It’s the latter capability, and not PTB performance, that you’d need to predict if you wanted to reliably stay out of the x-risk regime — and the fact that we can’t currently do so is, I imagine, what brought to mind the analogy between scaling and Russian roulette.
I.e., a straight line in domain X is indeed not surprising; what’s surprising is the way in which that straight line maps to the things we care about more than X.
(Usual caveats apply here that I may be misinterpreting folks, but that is my best read of the argument.)
Good catch! I didn’t check the form. Yes, you’re right: the spoiler should say (1=Paul, 9=Eliezer), but the conclusion is the right way round.
Got it. That makes sense, thanks!