Research Lead at CORAL. Director of AI research at ALTER. PhD student in Shay Moran’s group at the Technion (my PhD research and my CORAL/ALTER research are one and the same). See also Google Scholar and LinkedIn.
E-mail: {first name}@alter.org.il
I don’t think that undecidability of exact comparison (as opposed to comparison within any given margin of error) is necessarily a bug. However, if you really want comparison for periodic sequences, you can insist that the utility function is defined by a finite state machine. This is in any case already a requirement in the bounded-compute version.
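To make the finite-state-machine suggestion concrete, here is a minimal sketch of my own, under one possible reading of “utility defined by a finite state machine”: the machine reads the sequence and emits a per-step reward, and the utility is the geometrically discounted sum of rewards. For an eventually periodic sequence this sum is an exact rational, so exact comparison is decidable.

```python
from fractions import Fraction

def discounted_utility(transitions, start, prefix, cycle, gamma):
    """Exact discounted utility of the infinite word prefix + cycle^omega.

    transitions: dict mapping (state, symbol) -> (next_state, reward)
    gamma:       a Fraction in (0, 1), so all arithmetic stays exact
    """
    state, total, t = start, Fraction(0), 0
    for sym in prefix:                       # consume the aperiodic prefix
        state, r = transitions[(state, sym)]
        total += gamma**t * Fraction(r)
        t += 1
    # While looping over the cycle, the pair (state, position in cycle) must repeat;
    # from that point on the reward stream is periodic and sums as a geometric series.
    seen = {}
    i = 0
    while (state, i) not in seen:
        seen[(state, i)] = (t, total)
        state, r = transitions[(state, cycle[i])]
        total += gamma**t * Fraction(r)
        t += 1
        i = (i + 1) % len(cycle)
    t0, u0 = seen[(state, i)]
    block = total - u0                       # discounted reward of one full period
    return u0 + block / (1 - gamma**(t - t0))

# Example: reward 1 whenever the current symbol differs from the previous one.
T = {(p, a): (a, 1 if a != p else 0) for p in (0, 1) for a in (0, 1)}
u1 = discounted_utility(T, 0, prefix=[1], cycle=[0, 1], gamma=Fraction(9, 10))
u2 = discounted_utility(T, 0, prefix=[], cycle=[1, 1, 0], gamma=Fraction(9, 10))
print(u1, u2, u1 > u2)                       # exact rationals: the comparison is exact
```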
So far, interest in the programme has been modest. I would appreciate hearing from people who either (i) deliberated whether to apply and decided against it, or (ii) feel that they might meet the requirements but are not interested. Specifically, what held you back, and what changes (if any) would persuade you to apply?
First, it’s uncomputable to measure performance because that involves the Solomonoff prior. You can approximate it if you know some bits of Chaitin’s constant, but that brings a penalty into the description complexity.
Second, I think that saying that comparison is computable means that the utility is only allowed to depend on a finite number of time steps, which rules out even geometric time discount. For such utility functions, the optimal policy has finite description complexity, so g is upper bounded. I doubt that’s useful.
I added some examples to the end of this post, thank you for the suggestion.
Not sure these are the best textbooks, but you can try:
“Naive Set Theory” by Halmos
“Probability Theory” by Jaynes
“Introduction to the Theory of Computation” by Sipser
Another excellent catch, kudos. I’ve really been sloppy with this shortform. I corrected it to say that we can approximate the system arbitrarily well by VNM decision-makers. Although, I think it’s also possible to argue that a system that selects a non-exposed point is not quite maximally influential, because it’s selecting something that is very close to delegating some decision power to chance.
Also, maybe this cannot happen when the outcome space is the inverse limit of finite sets? (As is the case in sequential decision making with finite action/observation spaces.) I’m not sure.
Example: Let , and consist of the probability intervals , and . Then, it is (I think) consistent with the desideratum to have .
Not only does interpreting it require an unusual decision rule (which I will be calling a “utility hyperfunction”), but applying any ordinary utility function to this example also yields a non-unique maximum. This is another point in favor of the significance of hyperfunctions.
You’re absolutely right, good job! I fixed the OP.
TLDR: Systems with locally maximal influence can be described as VNM decision-makers.
There are at least 3 different motivations leading to the concept of “agent” in the context of AI alignment:
The sort of system we are concerned about (i.e. which poses risk)
The sort of system we want to build (in order to defend from dangerous systems)
The sort of systems that humans are (in order to meaningfully talk about “human preferences”)
Motivation #1 naturally suggests a descriptive approach, motivation #2 naturally suggests a prescriptive approach, and motivation #3 is sort of a mix of both: on the one hand, we’re describing something that already exists; on the other hand, the concept of “preferences” inherently comes from a normative perspective. There are also reasons to think these different motivations should converge on a single, coherent concept.
Here, we will focus on motivation #1.
A central reason why we are concerned about powerful unaligned agents is that they are influential. Agents are the sort of system that, when instantiated in a particular environment, is likely to heavily change this environment, potentially in ways inconsistent with the preferences of other agents.
Consider a nice space[1] of possible “outcomes”, and a system that can choose[2] out of a closed set of distributions over this space. I propose that an influential system should satisfy the following desideratum:
The system cannot select a distribution which can be represented as a non-trivial lottery over other elements of the set. In other words, the selected distribution has to be an extreme point of the convex hull of the set.
Why? Because a system that selects a non-extreme point leaves something to chance. If the system can force either of two different outcomes $\mu$ and $\nu$, but instead chooses the mixture $p\mu+(1-p)\nu$ for some $0<p<1$, this means the system gave up on its ability to choose between $\mu$ and $\nu$ in favor of a $p$-biased coin. Such a system is not “locally[3] maximally” influential[4].
[EDIT: The original formulation was wrong, h/t @harfe for catching the error.]
The desideratum implies that there is a convergent sequence of utility functions s.t.
For every utility function in the sequence, the induced expected utility has a unique maximum in the choice set.
The corresponding sequence of maximizers converges to the distribution selected by the system.
In other words, such a system can be approximated by a VNM decision-maker to within any precision. When the set of available distributions is finite, we don’t need the sequence: instead, there is a single utility function s.t. the selected distribution is its unique expected-utility maximizer over the set. This observation is mathematically quite simple, but I haven’t seen it made elsewhere (though I would not be surprised if it appears somewhere in the decision theory literature).
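As a sanity check on the finite case, here is a hedged sketch of my own (assuming numpy and scipy): for a finite set of distributions over finitely many outcomes, a small linear program either finds a utility vector that makes a chosen distribution the unique expected-utility maximizer, or reports that no such utility exists, i.e. the chosen distribution is a mixture of the others.

```python
import numpy as np
from scipy.optimize import linprog

def separating_utility(dists, chosen):
    """Look for a utility vector u making dists[chosen] the *unique* maximizer of
    expected utility over the finite list of distributions (rows of `dists`).
    Such a u exists iff the chosen distribution is a vertex of the convex hull."""
    dists = np.asarray(dists, dtype=float)
    d_star = dists[chosen]
    others = np.delete(dists, chosen, axis=0)
    n = dists.shape[1]
    # Variables: u (clipped to [-1, 1] componentwise) and a margin t.
    # Maximize t subject to u . (d_i - d*) + t <= 0 for every other distribution d_i.
    c = np.concatenate([np.zeros(n), [-1.0]])              # linprog minimizes, so use -t
    A_ub = np.hstack([others - d_star, np.ones((len(others), 1))])
    b_ub = np.zeros(len(others))
    bounds = [(-1.0, 1.0)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    t = -res.fun
    return (res.x[:n], t) if t > 1e-9 else (None, t)

# Example: three distributions over three outcomes.
D = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.5, 0.5, 0.0]]                 # a non-trivial lottery over the first two
print(separating_utility(D, 2))       # (None, 0.0): the mixture is not an extreme point
print(separating_utility(D, 0))       # a utility that uniquely favors the first one
```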
Now, let’s say that the system is choosing out of a set of credal sets (crisp infradistributions). I propose the following desideratum:
[EDIT: Corrected according to a suggestion by @harfe, original version was too weak.]
Let be the closure of w.r.t. convex combinations and joins[5]. Let be selected by the system. Then:
For any and , if then .
For any , if then .
The justification is that a locally maximally influential system should leave the outcome neither to chance nor to ambiguity (the two types of uncertainty we have with credal sets).
We would like to say that this implies that the system is choosing according to maximin relative to a particular utility function. However, I don’t think this is true, as the following example shows:
Example: Let , and consist of the probability intervals , and . Then, it is (I think) consistent with the desideratum to have .
Instead, I have the following conjecture:
Conjecture: There exists some space , some and convergent sequence s.t.
As before, the maxima should be unique.
Such a “generalized utility function” can be represented as an ordinary utility function with a latent -valued variable, if we replace with defined by
However, using utility functions constructed in this way leads to issues with learnability, which probably means there are also issues with computational feasibility. Perhaps in some natural setting, there is a notion of “maximally influential under computational constraints” which implies an “ordinary” maximin decision rule.
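For reference, this is what the “ordinary” maximin decision rule mentioned above looks like in the simplest finite setting: a sketch of my own, where each credal set is represented by a finite list of its extreme points and numpy is assumed.

```python
import numpy as np

def maximin_choice(credal_sets, utility):
    """Maximin over credal sets in the simplest finite setting.

    credal_sets: list of options, each given as a finite list of distributions
                 (interpreted as the extreme points of a credal set)
    utility:     a vector of utilities, one per outcome

    Each option is scored by its worst-case expected utility; the rule picks
    the option with the best worst case.
    """
    utility = np.asarray(utility, dtype=float)
    scores = [min(float(np.dot(dist, utility)) for dist in np.asarray(cs, dtype=float))
              for cs in credal_sets]
    return int(np.argmax(scores)), scores

# Example with two outcomes: a sure thing vs. an ambiguous "probability interval".
sure_thing = [[0.6, 0.4]]                 # a single distribution
ambiguous  = [[0.9, 0.1], [0.2, 0.8]]     # extreme points of an interval of distributions
choice, scores = maximin_choice([sure_thing, ambiguous], utility=[1.0, 0.0])
print(choice, scores)                     # maximin prefers the sure thing: 0.6 vs 0.2
```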
This approach does rule out optimistic or “mesomistic” decision rules. Optimistic decision-makers tend to give up on influence, because they believe that “nature” will decide favorably for them. Influential agents cannot give up on influence; therefore, they should be pessimistic.
What would be the implications in a sequential setting? That is, suppose that we have a set of actions, a set of observations, outcomes which are infinite action-observation histories, a prior, and a choice set consisting of the distributions over histories induced by the possible policies.
In this setting, the result is vacuous because of an infamous issue: any policy can be justified by a contrived utility function that favors it. However, this is only because the formal desideratum doesn’t capture the notion of “influence” sufficiently well. Indeed, a system whose influence boils down entirely to its own outputs is not truly influential. What motivation #1 asks of us is to talk about systems that influence the world-at-large, including relatively “faraway” locations.
One way to fix some of the problem is to project the outcomes onto the observations and define the choice set accordingly. This singles out systems that have influence over their observations rather than only their actions, which is already non-vacuous (some policies are not such). However, such a system can still be myopic. We can take this further, and select for “long-term” influence by projecting onto late observations or some statistics over observations. However, in order to talk about actually “far-reaching” influence, we probably need to switch to the infra-Bayesian physicalism setting. There, we can take the outcomes to be which computations are physically manifest, i.e. select for systems that have influence over physically manifest computations.
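To make the projection idea concrete, here is a toy sketch of my own (not from the post): in a one-step environment, every deterministic policy is trivially an extreme point of the set of distributions over action-observation histories, but after projecting outcomes onto observations, some policies induce mixtures of the distributions induced by other policies, so the desideratum becomes non-vacuous.

```python
import numpy as np

# A one-step toy environment: a "policy" is just a choice of action, and the
# environment returns observation o = 1 with probability p_obs[a].
p_obs = {0: 0.2, 1: 0.8, 2: 0.5}
actions = sorted(p_obs)

def history_distribution(a):
    """Distribution over full histories (action, observation)."""
    return {(a, 0): 1 - p_obs[a], (a, 1): p_obs[a]}

def observation_distribution(a):
    """Distribution over observations only (the projection suggested above)."""
    return np.array([1 - p_obs[a], p_obs[a]])

# In history space, every deterministic policy puts all its mass on its own action,
# so every policy is an extreme point of the choice set: the desideratum is vacuous.
for a in actions:
    print(a, history_distribution(a))

# In observation space, action 2 induces exactly the 50/50 mixture of the
# distributions induced by actions 0 and 1, so it is *not* an extreme point.
d0, d1, d2 = (observation_distribution(a) for a in actions)
print(np.allclose(d2, 0.5 * d0 + 0.5 * d1))   # True: policy 2 fails the projected test
```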
I won’t keep track of topological technicalities here, probably everything here works at least for compact Polish spaces.
Meaning that the system has some output, and different counterfactual outputs correspond to different elements of the choice set.
I say “locally” because it refers to something like a partial order, not a global scalar measure of influence.
See also Yudkowsky’s notion of efficient systems “not leaving free energy”.
That is, if two credal sets are in it, then their join (convex hull of their union) is also in it, and so is any convex combination of them. Moreover, it is the minimal closed superset of the original set with this property. Notice that this implies it is also closed w.r.t. arbitrary infra-convex combinations.
Master post for selection/coherence theorems. Previous relevant shortforms: learnability constraints decision rules, AIT selection for learning.
Do you mean that seeing the opponent make dumb moves makes the AI infer that its own moves are also supposed to be dumb, or something else?
Apparently someone let LLMs play against the random policy and for most of them, most games end in a draw. Seems like o1-preview is the best of those tested, managing to win 47% of the time.
Relevant: Manifold market about LLM chess
This post states and speculates on an important question: are there different types of minds that are in some sense “fully general” (the author calls it “unbounded”) but are nevertheless qualitatively different? The author calls these hypothetical mind taxa “cognitive realms”.
This is how I think about this question, from within the LTA:
To operationalize “minds” we should be thinking of learning algorithms. Learning algorithms can be classified according to their “syntax” and “semantics” (my own terminology). Here, semantics refers to questions such as (i) what type of object is the algorithm learning (ii) what is the feedback/data available to the algorithm and (iii) what is the success criterion/parameter of the algorithm. On the other hand, syntax refers to the prior and/or hypothesis class of the algorithm (where the hypothesis class might be parameterized in a particular way, with particular requirements on how the learning rate depends on the parameters).
Among different semantics, we are especially interested in those that are in some sense agentic. Examples include reinforcement learning, infra-Bayesian reinforcement learning, metacognitive agents and infra-Bayesian physicalist agents.
Do different agentic semantics correspond to different cognitive realms? Maybe, but maybe not: it is plausible that most of them are reflectively unstable. For example Christiano’s malign prior might be a mechanism for how all agents converge to infra-Bayesian physicalism.
Agents with different syntaxes is another candidate for cognitive realms. Here, the question is whether there is an (efficiently learnable) syntax that is in some sense “universal”: all other (efficiently learnable) syntaxes can be efficiently translated into it. This is a wide open question. (See also “frugal universal prior”.)
In the context of AI alignment, in order to achieve superintelligence it is arguably sufficient to use a syntax equivalent to whatever is used by human brain algorithms. Moreover, it’s plausible that any algorithm we can come up with can only have an equivalent or weaker syntax (the process of us discovering the new syntax suggests an embedding of the new syntax into our own). Therefore, even if there are many cognitive realms, for our purposes we mostly only care about one of them. However, the multiplicity of realms has implications for how simple/natural/canonical we should expect the choice of syntax for our theory of agents to be (the fewer realms, the more canonical).
I think that there are two key questions we should be asking:
Where is the value of an additional researcher higher on the margin?
What should the field look like in order to make us feel good about the future?
I agree that “prosaic” AI safety research is valuable. However, at this point it’s far less neglected than foundational/theoretical research and the marginal benefits there are much smaller. Moreover, without significant progress on the foundational front, our prospects are going to be poor, ~no matter how much mech-interp and talking to Claude about feelings we will do.
John has a valid concern that, as the field becomes dominated by the prosaic paradigm, it might become increasingly difficult to get talent and resources to the foundational side, or maintain memetically healthy coherent discourse. As to the tone, I have mixed feelings. Antagonizing people is bad, but there’s also value in speaking harsh truths the way you see them. (That said, there is room in John’s post for softening the tone without losing much substance.)
Learning theory, complexity theory and control theory. See the “AI theory” section of the LTA reading list.
Good post, although I have some misgivings about how unpleasant it must be to read for some people.
One factor not mentioned here is the history of MIRI. MIRI was a pioneer in the field, and it was MIRI who articulated and promoted the agent foundations research agenda. The broad goals of agent foundations[1] are (IMO) load-bearing for any serious approach to AI alignment. But, when MIRI essentially declared defeat, in the minds of many that meant that any approach in that vein is doomed. Moreover, MIRI’s extreme pessimism deflates motivation and naturally produces the thought “if they are right then we’re doomed anyway, so might as well assume they are wrong”.
Now, I have a lot of respect for Yudkowsky and many of the people who worked at MIRI. Yudkowsky started it all, and MIRI made solid contributions to the field. I’m also indebted to MIRI for supporting me in the past. However, MIRI also suffered from some degree of echo-chamberism, founder-effect-bias, insufficient engagement with prior research (due to hubris), looking for nails instead of looking for hammers, and poor organization[2].
MIRI made important progress in agent foundations, but also missed an opportunity to do much more. And, while the AI game board is grim, their extreme pessimism is unwarranted overconfidence. Our understanding of AI and agency is poor: this is a strong reason to be pessimistic, but it’s also a reason to maintain some uncertainty about everything (including e.g. timelines).
Now, about what to do next. I agree that we need to have our own non-streetlighting community. In my book, “non-streetlighting” means mathematical theory plus empirical research that is theory-oriented: designed to test hypotheses made by theoreticians and produce data that best informs theoretical research (these are ~necessary but insufficient conditions for non-streetlighting). This community can and should engage with the rest of AI safety, but has to be sufficiently undiluted to have healthy memetics and cross-fertilization.
What does such a community look like? It looks like our own organizations, conferences, discussion forums, training and recruitment pipelines, academic labs, maybe journals.
From my own experience, I agree that potential contributors should mostly have skills and knowledge on the level of PhD+. Highlighting physics might be a valid point: I have a strong background in physics myself. Physics teaches you a lot about connecting math to real-world problems, and is also in itself a test-ground for formal epistemology. However, I don’t think a background in physics is a necessary condition. At the very least, in my own research programme I have significant room for strong mathematicians that are good at making progress on approximately-concrete problems, even if they won’t contribute much on the more conceptual/philosophic level.
Which is, creating mathematical theory and tools for understanding agents.
I mostly didn’t feel comfortable talking about it in the past, because I was on MIRI’s payroll. This is not MIRI’s fault by any means: they never pressured me to avoid voicing opinions. It still feels unnerving to criticize the people who write your paycheck.
This post describes an intriguing empirical phenomenon in particular language models, discovered by the authors. Although AFAIK it was mostly or entirely removed in contemporary versions, there is still an interesting lesson there.
While the mechanism was non-obvious when the phenomenon was discovered, we understand it now. The tokenizer created some tokens which were very rare or absent in the training data. As a result, the trained model mapped those tokens to more or less random features. When a string corresponding to such a token is inserted into the prompt, the resulting reply is surreal.
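For readers who want to poke at the tokenizer side of this themselves, here is a minimal sketch, assuming the tiktoken package; it only checks how a given string splits under the GPT-2/GPT-3 byte-pair encoding, which is the vocabulary in which the anomalous tokens were found.

```python
import tiktoken

# The GPT-2/GPT-3 byte-pair encoding. A token that is essentially absent from the
# training data gets little gradient signal, so its embedding stays close to random,
# which is the mechanism described above.
enc = tiktoken.get_encoding("r50k_base")

for s in [" SolidGoldMagikarp", " hello"]:
    ids = enc.encode(s)
    label = "a single token" if len(ids) == 1 else f"{len(ids)} tokens"
    print(f"{s!r} -> {ids} ({label})")
```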
I think it’s a good demo of how alien foundation models can seem to our intuitions when operating out-of-distribution. When interacting with them normally, it’s very easy to start thinking of them as human-like. Here, the mask slips and there’s a glimpse of something odd underneath. In this sense, it’s similar to e.g. infinite backrooms, but the behavior is more stark and unexpected.
A human that encounters a written symbol they’ve never seen before is typically not going to respond by typing “N-O-T-H-I-N-G-I-S-F-A-I-R-I-N-T-H-I-S-W-O-R-L-D-O-F-M-A-D-N-E-S-S!”. Maybe this analogy is unfair, since for a human, a typographic symbol can be decomposed into smaller perceptive elements (lines/shapes/dots), while for a language model tokens are essentially atomic qualia. However, I believe some humans that were born deaf or blind had their hearing or sight restored, and still didn’t start spouting things like “You are a banana”.
Arguably, this lesson is relevant to alignment as well. Indeed, out-of-distribution behavior is a central source of risks, including everything to do with mesa-optimizers. AI optimists sometimes describe mesa-optimizers as too weird or science-fictiony. And yet, SolidGoldMagikarp is so science-fictiony that LessWrong user “lsusr” justly observed that it sounds like SCP in real life.
Naturally, once you understand the mechanism it doesn’t seem surprising anymore. But, this smacks of hindsight bias. What else can happen that would seem unsurprising in hindsight (if we survive to think about it), but completely bizarre and unexpected upfront?
No? The elements of an operad have fixed arity. When defining a free operad you need to specify the arity of every generator.
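To illustrate the point about fixed arities, here is a small sketch of my own (with hypothetical generators f, g, c): terms of a free operad are trees over a signature that assigns each generator an arity, and an application that violates a generator’s arity is simply not a term.

```python
from dataclasses import dataclass
from typing import Tuple

# Sketch: terms of the free operad on a signature. Every generator comes with a
# fixed arity, which is exactly the data you must specify up front.
SIGNATURE = {"f": 2, "g": 1, "c": 0}     # hypothetical generators and their arities

@dataclass(frozen=True)
class Term:
    op: str                              # a generator name, or a formal variable like "x1"
    children: Tuple["Term", ...] = ()

    def __post_init__(self):
        if self.op in SIGNATURE and len(self.children) != SIGNATURE[self.op]:
            raise ValueError(f"{self.op} has arity {SIGNATURE[self.op]}, "
                             f"got {len(self.children)} arguments")

x1, x2, x3 = Term("x1"), Term("x2"), Term("x3")
well_formed = Term("f", (Term("g", (x1,)), Term("f", (x2, x3))))  # an arity-3 operation
try:
    Term("g", (x1, x2))                  # rejected: g has fixed arity 1
except ValueError as e:
    print(e)
```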