Director of AI research at ALTER, where I lead a group working on the learning-theoretic agenda for AI alignment. I’m also supported by the LTFF. See also LinkedIn.
E-mail: {first name}@alter.org.il
Do you mean that seeing the opponent make dumb moves makes the AI infer that its own moves are also supposed to be dumb, or something else?
Apparently someone let LLMs play against the random policy and for most of them, most games end in a draw. Seems like o1-preview is the best of those tested, managing to win 47% of the time.
Relevant: Manifold market about LLM chess
This post states and speculates on an important question: are there different mind types that are in some sense “fully general” (the author calls it “unbounded”) but are nevertheless qualitatively different. The author calls these hypothetical mind taxa “cognitive realms”.
This is how I think about this question, from within the LTA:
To operationalize “minds” we should be thinking of learning algorithms. Learning algorithms can be classified according to their “syntax” and “semantics” (my own terminology). Here, semantics refers to questions such as (i) what type of object is the algorithm learning (ii) what is the feedback/data available to the algorithm and (iii) what is the success criterion/parameter of the algorithm. On the other hand, syntax refers to the prior and/or hypothesis class of the algorithm (where the hypothesis class might be parameterized in a particular way, with particular requirements on how the learning rate depends on the parameters).
Among different semantics, we are especially interested in those that are in some sense agentic. Examples include reinforcement learning, infra-Bayesian reinforcement learning, metacognitive agents and infra-Bayesian physicalist agents.
Do different agentic semantics correspond to different cognitive realms? Maybe, but maybe not: it is plausible that most of them are reflectively unstable. For example Christiano’s malign prior might be a mechanism for how all agents converge to infra-Bayesian physicalism.
Agents with different syntaxes is another candidate for cognitive realms. Here, the question is whether there is an (efficiently learnable) syntax that is in some sense “universal”: all other (efficiently learnable) syntaxes can be efficiently translated into it. This is a wide open question. (See also “frugal universal prior”.)
In the context of AI alignment, in order to achieve superintelligence it is arguably sufficient to use a syntax equivalent to whatever is used by human brain algorithms. Moreover, it’s plausible that any algorithm we can come up with can only have an equivalent or weaker syntax (the process of us discovering the new syntax suggests an embedding of the new syntax into our own). Therefore, even if there are many cognitive realms, for our purposes we mostly only care about one of them. However, the multiplicity of realms has implications for how simple/natural/canonical we should expect the choice of syntax for our theory of agents to be (the fewer realms, the more canonical).
I think that there are two key questions we should be asking:
Where is the value of an additional researcher higher on the margin?
What should the field look like in order to make us feel good about the future?
I agree that “prosaic” AI safety research is valuable. However, at this point it’s far less neglected than foundational/theoretical research and the marginal benefits there are much smaller. Moreover, without significant progress on the foundational front, our prospects are going to be poor, ~no matter how much mech-interp and talking to Claude about feelings we will do.
John has a valid concern that, as the field becomes dominated by the prosaic paradigm, it might become increasingly difficult to get talent and resources to the foundational side, or maintain memetically healthy coherent discourse. As to the tone, I have mixed feelings. Antagonizing people is bad, but there’s also value in speaking harsh truths the way you see them. (That said, there is room in John’s post for softening the tone without losing much substance.)
Learning theory, complexity theory and control theory. See the “AI theory” section of the LTA reading list.
Good post, although I have some misgivings about how unpleasant it must be to read for some people.
One factor not mentioned here is the history of MIRI. MIRI was a pioneer in the field, and it was MIRI who articulated and promoted the agent foundations research agenda. The broad goals of agent foundations[1] are (IMO) load-bearing for any serious approach to AI alignment. But, when MIRI essentially declared defeat, in the minds of many that meant that any approach in that vein is doomed. Moreover, MIRI’s extreme pessimism deflates motivation and naturally produces the thought “if they are right then we’re doomed anyway, so might as well assume they are wrong”.
Now, I have a lot of respect for Yudkowsky and many of the people who worked at MIRI. Yudkowsky started it all, and MIRI made solid contributions to the field. I’m also indebted to MIRI for supporting me in the past. However, MIRI also suffered from some degree of echo-chamberism, founder-effect-bias, insufficient engagement with prior research (due to hubris), looking for nails instead of looking for hammers, and poor organization[2].
MIRI made important progress in agent foundations, but also missed an opportunity to do much more. And, while the AI game board is grim, their extreme pessimism is unwarranted overconfidence. Our understanding of AI and agency is poor: this is a strong reason to be pessimistic, but it’s also a reason to maintain some uncertainty about everything (including e.g. timelines).
Now, about what to do next. I agree that we need to have our own non-streetlighting community. In my book “non-streetlighting” means mathematical theory plus empirical research that is theory-oriented: designed to test hypotheses made by theoreticians and produce data that best informs theoretical research (these are ~necessary but insufficient conditions for non-streetlighting). This community can and should engage with the rest of AI safety, but has to be sufficiently undiluted to have healthy memetics and cross-fertilization.
What does such a community look like? It looks like our own organizations, conferences, discussion forums, training and recruitment pipelines, academic labs, maybe journals.
From my own experience, I agree that potential contributors should mostly have skills and knowledge on the level of PhD+. Highlighting physics might be a valid point: I have a strong background in physics myself. Physics teaches you a lot about connecting math to real-world problems, and is also in itself a test-ground for formal epistemology. However, I don’t think a background in physics is a necessary condition. At the very least, in my own research programme I have significant room for strong mathematicians that are good at making progress on approximately-concrete problems, even if they won’t contribute much on the more conceptual/philosophic level.
Which is, creating mathematical theory and tools for understanding agents.
I mostly didn’t feel comfortable talking about it in the past, because I was on MIRI’s payroll. This is not MIRI’s fault by any means: they never pressured me to avoid voicing opinions. It still feels unnerving to criticize the people who write your paycheck.
This post describes an intriguing empirical phenomenon in particular language models, discovered by the authors. Although AFAIK it was mostly or entirely removed in contemporary versions, there is still an interesting lesson there.
While the behavior was non-obvious when discovered, we now understand the mechanism. The tokenizer created some tokens which were very rare or absent in the training data. As a result, the trained model mapped those tokens to more or less random features. When a string corresponding to such a token is inserted into the prompt, the resulting reply is surreal.
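As a small illustration of the “atomic token” part of the mechanism (a sketch assuming the `tiktoken` package; the GPT-2 encoding is the vocabulary in which these glitch tokens were originally found):

```python
import tiktoken

# GPT-2 byte-pair-encoding vocabulary (the one in which " SolidGoldMagikarp" was found).
enc = tiktoken.get_encoding("gpt2")

for text in [" SolidGoldMagikarp", " gold magikarp"]:
    ids = enc.encode(text)
    pieces = [enc.decode([i]) for i in ids]
    print(f"{text!r} -> {len(ids)} token(s): {pieces}")

# Expected: the glitch string encodes as a single token, while the ordinary phrase
# splits into several pieces. A token that (almost) never appears in training data
# ends up with an essentially untrained embedding, hence the surreal behavior.
```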
I think it’s a good demo of how alien foundation models can seem to our intuitions when operating out-of-distribution. When interacting with them normally, it’s very easy to start thinking of them as human-like. Here, the mask slips and there’s a glimpse of something odd underneath. In this sense, it’s similar to e.g. infinite backrooms, but the behavior is more stark and unexpected.
A human that encounters a written symbol they’ve never seen before is typically not going to respond by typing “N-O-T-H-I-N-G-I-S-F-A-I-R-I-N-T-H-I-S-W-O-R-L-D-O-F-M-A-D-N-E-S-S!”. Maybe this analogy is unfair, since for a human, a typographic symbol can be decomposed into smaller perceptive elements (lines/shapes/dots), while for a language model tokens are essentially atomic qualia. However, I believe some humans that were born deaf or blind had their hearing or sight restored, and still didn’t start spouting things like “You are a banana”.
Arguably, this lesson is relevant to alignment as well. Indeed, out-of-distribution behavior is a central source of risks, including everything to do with mesa-optimizers. AI optimists sometimes describe mesa-optimizers as too weird or science-fictiony. And yet, SolidGoldMagikarp is so science-fictiony that LessWrong user “lsusr” justly observed that it sounds like SCP in real life.
Naturally, once you understand the mechanism it doesn’t seem surprising anymore. But, this smacks of hindsight bias. What else can happen that would seem unsurprising in hindsight (if we survive to think about it), but completely bizarre and unexpected upfront?
This is just a self-study list for people who want to understand and/or contribute to the learning-theoretic AI alignment research agenda. I’m not sure why people thought it deserves to be in the Review. FWIW, I keep using it with my MATS scholars, and I keep it more or less up-to-date. A complementary resource that became available more recently is the video lectures.
This post suggests an analogy between (some) AI alignment proposals and shell games or perpetuum mobile proposals. Perpetuum mobiles are an example of how an idea might look sensible to someone with a half-baked understanding of the domain, while remaining very far from anything workable. A clever arguer can (intentionally or not!) hide the error in the design wherever the audience is not looking at any given moment. Similarly, some alignment proposals might seem correct when zooming in on every piece separately, but that’s because the error is always hidden away somewhere else.
I don’t think this adds anything very deep to understanding AI alignment, but it is a cute example of how atheoretical analysis can fail catastrophically, especially when the designer is motivated to argue that their invention works. Conversely, knowledge of a deep theoretical principle can refute a huge swath of design space in a single move. I will remember this for didactic purposes.
Disclaimer: A cute analogy by itself proves little, any individual alignment proposal might be free of such sins, and didactic tools should be used wisely, lest they become soldier-arguments. The author intends this (I think) mostly as a guiding principle for critical analysis of proposals.
This post argues against alignment protocols based on outsourcing alignment research to AI. It makes some good points, but also feels insufficiently charitable to the proposals it’s criticizing.
John makes his case by an analogy to human experts. If you’re hiring an expert in domain X, but you understand little in domain X yourself, then you’re going to have 3 serious problems:
Illusion of transparency: the expert might say things that you misinterpret due to your own lack of understanding.
The expert might be dumb or malicious, but you will believe them due to your own ignorance.
When the failure modes above happen, you won’t be aware of this and won’t act to fix them.
These points are relevant. However, they don’t fully engage with the main source of hope for outsourcing proponents. Namely, it’s the principle that validation is easier than generation[1]. While it’s true that an arbitrary dilettante might not benefit from an arbitrary expert, the fact that it’s easier to comprehend an idea than invent it yourself means that we can get some value from outsourcing, under some half-plausible conditions.
The claim that the “AI expert” can be deceptive and/or malicious is straightforwardly true. I think that the best hope to address it would be something like Autocalibrated Quantilized Debate, but it does require some favorable assumptions about the feasibility of deception, and inner alignment is still a problem.
The “illusion of transparency” argument is more confusing IMO. The obvious counterargument is, imagine an AI that is trained to not only produce correct answers but also explain them in a way that’s as useful as possible for the audience. However, there are two issues with this counterargument:
First, how do we know that the generalization from the training data to the real use case (alignment research) is reliable, given that we cannot reliably test the real use case, precisely because we are alignment dilettantes?
Second, we might be following a poor metastrategy. It is easy to imagine, in the world we currently inhabit, that an AI lab creates catastrophic unaligned AI, even though they think they care about alignment, just because they are too reckless and overconfident. By the same token, we can imagine such an AI lab consulting their own AI about alignment, and then proceeding with the reckless and overconfident plans suggested by the AI.
In the context of a sufficiently cautious metastrategy, it is not implausible that we can get some mileage from the outsourcing approach[2]. Move one step at a time, spend a lot of time reflecting on the AI’s proposals, and also have strong guardrails against the possibility of superhuman deception or inner alignment failures (which we currently don’t know how to build!) But without this context, we are indeed liable to become the clients in the satiric video John linked.
I think that John might disagree with this principle. A world in which the principle is mostly false would be peculiar. It would be a world in which marketplaces of ideas don’t work at all, and even if someone fully solves AI alignment they will fail to convince most relevant people that their solution is correct (any more than someone with an incorrect solution would succeed in that). I don’t think that’s the world we live in.
Although currently I consider PSI to be more promising.
This post makes an important point: the words “artificial intelligence” don’t necessarily carve reality at the joints. The fact that something is true about a modern system we call AI doesn’t automatically imply anything about arbitrary future AI systems, any more than conclusions about e.g. Dendral or DeepBlue carry over to Gemini.
That said, IMO the author somewhat overstates their thesis. Specifically, I take issue with all the following claims:
LLMs have no chance of becoming AGI.
LLMs are automatically safe.
There is nearly no empirical evidence from LLMs that is relevant to alignment of future AI.
First, those points are somewhat vague because it’s not clear what counts as “LLM”. The phrase “Large Language Model” is already obsolete, at least because modern AI is multimodal. It’s more appropriate to speak of “Foundation Models” (FM). More importantly, it’s not clear what kind of fine-tuning does or doesn’t count (RLHF? RL on CoT? …)
Second, how do we know FM won’t become AGI? I’m imagining the argument is something like “FM is primarily about prediction, so it doesn’t have agency”. However, when predicting data that contains or implies decisions by agents, it’s not crazy to imagine that agency can arise in the predictor.
Third, how do we know that FMs are always going to be safe? By the same token that they can develop agency, they can develop dangerous properties.
Fourth, it seems really unfair to say existing AI provides no relevant evidence. The achievements of existing AI systems are such that it seems very likely they capture at least some of the key algorithmic capabilities of the human brain. The ability of relatively simple and generic algorithms to perform well on a large variety of different tasks is indicative of something in the system being quite “general”, even if not “general intelligence” in the full sense.
I think that we should definitely try learning from existing AI. However, this learning should be more sophisticated and theory-driven than superficial analogies or trend extrapolations. What we shouldn’t do is say “we succeeded at aligning existing AI, therefore AI alignment is easy/solved in general”. The same theories that predicted catastrophic AI risk also predict roughly the current level of alignment for current AI systems.
I will expand a little on this last point. The core of the catastrophic AI risk scenario is:
We are directing the AI towards a goal which is complex (so that correct specification/generalization is difficult)[1].
The AI needs to make decisions in situations which (i) cannot be imitated well in simulation, due to the complexity of the world (ii) admit catastrophic mistakes (otherwise you can just add any mistake to the training data)[2].
The capability required from the AI to succeed is such that it can plausibly make catastrophic mistakes (if succeeding at the task is easy but causing a catastrophe is really hard, then a weak AI would be safe and effective)[3].
The above scenario must be addressed eventually, if only to create an AI defense system against unaligned AI that irresponsible actors could create. However, no modern AI system operates in this scenario. This is the most basic reason why the relative ease of alignment in modern systems (although even modern systems have alignment issues), does little to dispel concerns about catastrophic AI risk in the future.
Even for simple goals inner alignment is a concern. However, it’s harder to say at which level of capability this concern arises.
It’s also possible that mistakes are not catastrophic per se, but are simultaneously rare enough that it’s hard to get enough training data and frequent enough to be troublesome. This is related to the reliability problems in modern AI that we indeed observe.
But sometimes it might be tricky to hit the capability sweet spot where the AI is strong enough to be useful but weak enough to be safe, even if such a sweet spot exists in principle.
This post provides a mathematical analysis of a toy model of Goodhart’s Law. Namely, it assumes that the optimization proxy $V$ is a sum of the true utility function $U$ and noise $X$ (i.e. $V = U + X$), such that:
$U$ and $X$ are independent random variables w.r.t. some implicit distribution on the solution space. The meaning of this distribution is not discussed, but I guess we might think of it as some kind of inductive bias, e.g. a simplicity prior.
The optimization process can be modeled as conditioning on a high value of $V$.
In this model, the authors prove that Goodhart occurs when $X$ is subexponential and its tail is sufficiently heavier than that of $U$. Conversely, when $X$ is sufficiently light-tailed, Goodhart doesn’t occur.
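To see the claimed behavior concretely, here is a minimal simulation of the toy model (my own sketch; the distributions are chosen for illustration, not taken from the post): we condition $V = U + X$ on increasingly high quantiles and compare a light-tailed error with a heavy-tailed one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# True utility U: standard normal in both scenarios.
u = rng.normal(size=n)

# Two choices of error X, independent of U; the proxy is V = U + X.
errors = {
    "light-tailed X (normal)": rng.normal(size=n),
    "heavy-tailed X (Pareto, shape 1.5, infinite variance)": rng.pareto(1.5, size=n),
}

for name, x in errors.items():
    v = u + x
    print(name)
    for q in (0.9, 0.99, 0.999, 0.9999):
        t = np.quantile(v, q)
        sel = v >= t
        print(f"  V >= {q:.2%} quantile: E[U | V >= t] ~ {u[sel].mean():+.2f} "
              f"({sel.sum()} samples)")
```

With the normal error, the conditional mean of $U$ keeps climbing as the threshold rises; with the heavy-tailed error it stays small rather than climbing, which is the Goodhart regime the post identifies.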
My opinion:
On the one hand, kudos for using actual math to study an alignment-relevant problem.
On the other hand, the modeling assumptions feel too toyish for most applications. Specifically, the idea that $U$ and $X$ are independent random variables seems implausible. Typically, we worry about Goodhart’s law because the proxy behaves differently in different domains. In the “ordinary” domain that motivated the choice of proxy, $V$ is a good approximation of $U$. However, in other domains $V$ might be unrelated to $U$ or even anticorrelated.
For example, ordinarily smiles on human-looking faces are an indication of happy humans. However, in worlds that contain many more inanimate facsimiles of humans than actual humans, there is no correlation.
Or, to take the example used in the post, ordinarily if a sufficiently smart expert human judge reads an AI alignment proposal, they form a good opinion on how good this proposal is. But, if the proposal contains superhumanly clever manipulation and psychological warfare, the ordinary relationship completely breaks down. I don’t expect this effect to behave like independent random noise at all.
Less importantly, it might be interesting to extend this analysis to a more realistic model of optimization. For example, the optimizer learns a function $\tilde{V}$ which is the best approximation to $V$ out of some hypothesis class $H$, and then optimizes $\tilde{V}$ instead of the actual $V$. (Incidentally, this might generate an additional Goodhart effect due to the discrepancy between $\tilde{V}$ and $V$.) Alternatively, the optimizer learns an infrafunction that is a coarsening of $V$ out of some hypothesis class and then optimizes this infrafunction instead.
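A minimal sketch of the first suggested extension (my own toy example): take the hypothesis class $H$ to be affine functions, fit $\tilde{V}$ to $V$ by least squares under a uniform distribution on a one-dimensional solution space, and compare optimizing $\tilde{V}$ with optimizing $V$ directly.

```python
import numpy as np

# One-dimensional solution space with a uniform "implicit distribution" on [0, 10].
s = np.linspace(0.0, 10.0, 1001)

# The proxy V (the gap between V and the true utility U is a separate issue).
V = s * np.exp(-s / 3.0)           # single interior peak at s = 3

# Hypothesis class H = affine functions; V_tilde = least-squares fit of V.
slope, intercept = np.polyfit(s, V, deg=1)
V_tilde = slope * s + intercept

i_true = np.argmax(V)              # optimizing V itself
i_learned = np.argmax(V_tilde)     # optimizing the learned approximation

print(f"argmax of V:       s = {s[i_true]:.2f}, V = {V[i_true]:.3f}")
print(f"argmax of V_tilde: s = {s[i_learned]:.2f}, V = {V[i_learned]:.3f}")
```

Since an affine function cannot represent the interior peak, its maximizer sits at a boundary of the solution space where $V$ is much lower: a Goodhart-like loss coming purely from the discrepancy between $\tilde{V}$ and $V$.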
This post attempts to describe a key disagreement between Karnofsky and Soares (written by Karnofsky) pertaining to the alignment protocol “train an AI to simulate an AI alignment researcher”. The topic is quite important, since this is a fairly popular approach.
Here is how I view this question:
The first unknown is how accurate the simulation is. This is not really discussed in the OP. On the one hand, one might imagine that with more data, compute and other improvements, the AI should ultimately converge on an almost perfect simulation of an AI alignment researcher, which is arguably safe. On the other hand, there are two problems with this. First, such a simulation might be vulnerable to attacks from counterfactuals. Second, the prior is malign, i.e. the simulation might converge to representing a “malign simulation hypothesis” universe rather than the intended null hypothesis / ordinary reality.
Instead, we can imagine a simulation that’s not extremely accurate, but that’s modified to be good enough by fine-tuning with reinforcement learning. This is essentially the approach in contemporary AI and is also the assumption of the OP. Karnofsky does say “a small amount of RL”, but I don’t know why he believes a small amount is sufficient. Perhaps RL seemed less obviously important then than it does now, with the recent successes of o1 and o3.
The danger (as explained in the OP by Soares, paraphrased by Karnofsky) is that it’s much easier to converge in this manner on an arbitrary agent that has the capabilities of the imaginary AI alignment researcher (which probably have to be a lot greater than the capabilities of human researchers to make it useful), but doesn’t have values that are truly aligned. This is because “agency” is (i) a relatively simple concept and (ii) a robust attractor, in the sense that any agent would behave similarly when faced with particular instrumental incentives, and it’s mainly this behavior that the training process rewards. On the other hand, human values are complex and some behaviors that are necessary to pinpoint them might be rare.
Karnofsky’s counterargument is twofold: First, he believes that merely avoiding catastrophic outcomes should be a lot easier than pinpointing human values. Second, he believes that AI alignment research can be done without much agency or reflection, and hence useful AI alignment research arises in the simulation before full-fledged agency.
Regarding the first counterargument, I’m not sure why Karnofsky believes it (it’s not really supported in the OP). I think he’s imagining something like “in the training data, AI alignment researchers never engineer nanobots that take over the world, hence the AI will also never engineer nanobots that take over the world”. However, this seems like relying on the simulation being sufficiently bad. Indeed, there are situations in which I would consider it correct to engineer nanobots that take over the world, they just seem to have never arisen in my life so far[1]. Hence, a sufficiently good simulation of me would also do that in some situation. The question then becomes whether the exact circumstances and the type of nanobots are captured by the simulation correctly, which is much more fraught.
Worse, even an accurate simulation of a human is not necessarily safe. I think that there are plenty of humans who, given unlimited power, would abuse it in a manner catastrophic for nearly everyone else. When it comes to fully aligned ASI, I’m mostly hoping for a collectively-good outcome due to some combination of:
ASI is aligned to the aggregate values of many people.
Acausal cooperation between the people that the ASI is aligned to and other people who supported or at least haven’t hindered the project.
A “virtue ethics” component of human values, where you don’t want to be “the kind of person who would do [thing]” even if [thing] is net-beneficial to you in an abstract sense. (But not all people have this!)
These sources of hope seem pretty brittle when it comes to an imperfect simulation of possibly a small number of people, who might not even correspond to any particular real people but be some kind of AI-generated characters.
Regarding the second counterargument, for now it mostly comes down to a battle of intuitions. That said, I think that metacognitive agents lend a lot of credence to the idea that even “purely mental” tasks require agency and reflection to master: you need to make and execute plans for thinking about the problem, and you need to reflect about the methods you use in your thinking. Anecdotally, I can testify that my thinking about AI alignment led me to much reflection about my values and high-level hopes for the future. Moreover, this is another case where Karnofsky seems to hope that the simulation will be bad.
Relying on the simulation being bad is a dangerous proposition. It means we are caught between the Scylla of “the simulation is too good to be safe” and the Charybdis of “the simulation is too bad to be useful” and it’s not clear the zone between them exists at all.
Overall, I would say that neither side has a slam dunk case, but ignoring the dangers without much stronger arguments seems deeply unwise.
As far as can be told from the public record; I neither confirm nor deny that I was ever in a situation in which I considered engineering nanobots that take over the world.
This post is a solid introduction to the application of Singular Learning Theory to generalization in deep learning. This is a topic that I believe to be quite important.
One nitpick: The OP says that it “seems unimportant” that ReLU networks are not analytic. I’m not so sure. On the one hand, yes, we can apply SLT to (say) GELU networks instead. But GELUs seem mathematically more complicated, which probably translates to extra difficulties in computing the RLCT and hence makes applying SLT harder. Alternatively, we can consider a series of analytic activation functions that converges to ReLU, but that probably also comes with extra complexity. Also, ReLUs have an additional symmetry (the scaling symmetry mentioned in the OP), and SLT kinda thrives on symmetries, so throwing that out might be bad!
It seems to me like a fascinating possibility that there is some kind of tropical geometry version of SLT which would allow analyzing generalization in ReLU networks directly and perhaps somewhat more easily. But, at this point it’s merely a wild speculation of mine.
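For concreteness, the scaling symmetry mentioned above is easy to verify numerically (a quick check of a standard fact, nothing SLT-specific): rescaling a ReLU layer’s incoming weights and biases by $\alpha > 0$ and its outgoing weights by $1/\alpha$ leaves the computed function unchanged, so whole positive-dimensional families of parameters implement the same function.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda z: np.maximum(z, 0.0)

# A tiny one-hidden-layer ReLU network: f(x) = w2 @ relu(W1 @ x + b1).
W1 = rng.normal(size=(4, 3))
b1 = rng.normal(size=4)
w2 = rng.normal(size=4)
x = rng.normal(size=3)

alpha = 2.7  # any positive scale works
f_original = w2 @ relu(W1 @ x + b1)
f_rescaled = (w2 / alpha) @ relu(alpha * W1 @ x + alpha * b1)

print(f_original, f_rescaled)  # equal up to floating point, for every x
```

GELU is not positively homogeneous, so it lacks this particular symmetry; switching activations genuinely changes the degeneracy structure that SLT gets to exploit.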
This post is a great review of the Natural Abstractions research agenda, covering both its strengths and weaknesses. It provides a useful breakdown of the key claims, the mathematical results and the applications to alignment. There’s also reasonable criticism.
To the weaknesses mentioned in the overview, I would also add that the agenda needs more engagement with learning theory. Since the claim is that all minds learn the same abstractions, it seems necessary to look into the process of learning, and see what kind of abstractions can or cannot be learned (both in terms of sample complexity and in terms of computational complexity).
Some thoughts about natural abstractions inspired by this post:
The concept of natural abstractions seems closely related to my informally conjectured agreement theorem for infra-Bayesian physicalism. In a nutshell, two physicalist agents in the same universe with access to “similar” information should asymptotically arrive at similar beliefs (notably this is false for cartesian agents because of the different biases resulting from the different physical points of view).
A possible formalization of the agreement theorem inspired by my richness of mathematics conjecture: Given two beliefs $\Theta$ and $\Xi$, we say that $\Theta \preceq \Xi$ when some conditioning of $\Xi$ on a finite set of observations produces a refinement of some conditioning of $\Theta$ on a finite set of observations (see linked shortform for mathematical details). This relation is a preorder. In general, we can expect an agent to learn a sequence of beliefs of the form $\Theta_0 \preceq \Theta_1 \preceq \Theta_2 \preceq \ldots$ Here, the sequence can be over physical time, or over time discount, or over a parameter such as “availability of computing resources” or “how much time the world allows you for thinking between decisions”: the latter is the natural asymptotic for metacognitive agents (see also logical time). Given two agents, we get two such sequences $\{\Theta_i\}$ and $\{\Xi_i\}$. The agreement theorem can then state that for every $i$ there exists $j$ s.t. $\Theta_i \preceq \Xi_j$ (and vice versa). More precisely, this relation might hold up to some known function $f$, i.e. with $j \leq f(i)$.
The “agreement” in the previous paragraph is purely semantic: the agents converge to believing in the same world, but this doesn’t say anything about the syntactic structure of their beliefs. This seems conceptually insufficient for natural abstractions. However, maybe there is a syntactic equivalent where the preorder is replaced by morphisms in the category of some syntactic representations (e.g. string machines). It seems reasonable to expect that agents must use such representations to learn efficiently (see also frugal compositional languages).
In this picture, the graphical models used by John are a candidate for the frugal compositional language. I think this might be not entirely off the mark, but the real frugal compositional language is probably somewhat different.
This post introduces Timaeus’ “Developmental Interpretability” research agenda. The latter is IMO one of the most interesting extant AI alignment research agendas.
The reason DevInterp is interesting is that it is one of the few AI alignment research agendas that is trying to understand deep learning “head on”, while wielding a powerful mathematical tool that seems potentially suitable for the purpose (namely, Singular Learning Theory). Relatedly, it is one of the few agendas that maintains a strong balance of theoretical and empirical research. As such, it might also grow to be a bridge between theoretical and empirical research agendas more broadly (e.g. it might be synergistic with the LTA).
I also want to point out a few potential weaknesses or (minor) reservations I have:
First, DevInterp places phase transitions as its central object of study. While I agree that phase transitions seem interesting, possibly crucial to understand, I’m not convinced that a broader view wouldn’t be better.
Singular Learning Theory (SLT) has the potential to explain generalization in deep learning, phase transitions or no. This in itself seems to be important enough to deserve the central stage. Understanding generalization is crucial, because:
We want our alignment protocols to generalize correctly, given the available data, compute and other circumstances, and we need to understand what conditions would guarantee it (or at least prohibit catastrophic generalization failures).
If the resulting theory of generalization is in some sense universal, then it might be applicable to specifying a procedure for inferring human values (as human behavior is generated from human values by a learning algorithm with similar generalization properties), or at least formalizing “human values” well enough for theoretical analysis of alignment.
Hence, compared to the OP, I would put more emphasis on these latter points.
Second, the OP does mention the difference between phase transitions during Stochastic Gradient Descent (SGD) and the phase transitions of Singular Learning Theory, but this deserves a closer look. SLT has IMO two key missing pieces:
The first piece is the relation between ideal Bayesian inference (the subject of SLT) and SGD. Ideal Bayesian inference is known to be computationally intractable. Maybe there is an extension of SLT that replaces Bayesian inference with either SGD or a different tractable algorithm. For example, it could be some Markov Chain Monte Carlo (MCMC) that converges to Bayesian inference in the limit. Maybe there is a natural geometric invariant that controls the MCMC relaxation time, similarly to how the log canonical threshold controls sample complexity.
The second missing piece is understanding the special properties of ANN architectures compared to arbitrary singular hypothesis classes. For example, maybe there is some universality property which explains why e.g. transformers (or something similar) are qualitatively “as good as it gets”. Alternatively, it could be a relation between the log canonical threshold of specific ANN architectures to other simplicity measures which can be justified on other philosophical grounds.
That said, if the above missing pieces were found, SLT would become straightforwardly the theory for understanding deep learning and maybe learning in general.
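To make the first missing piece a bit more concrete: one natural family of tractable stand-ins for Bayesian inference is stochastic-gradient MCMC, e.g. stochastic gradient Langevin dynamics (Welling & Teh 2011), which approximately samples from the posterior for small step sizes. Below is a minimal sketch on a toy conjugate model (my own illustration, not tied to any specific SLT result), where the exact posterior is available for comparison.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: x_i ~ Normal(theta, 1), prior theta ~ Normal(0, 10).
N = 1000
data = rng.normal(2.0, 1.0, size=N)

# Exact conjugate posterior over theta, for comparison.
post_prec = N + 1.0 / 10.0
post_mean = data.sum() / post_prec
post_std = post_prec ** -0.5

# Stochastic Gradient Langevin Dynamics with a fixed small step size:
# samples are only approximately posterior samples (minibatch noise and
# the nonzero step size both introduce bias).
eps, batch, steps, burn_in = 1e-4, 100, 20_000, 2_000
theta, samples = 0.0, []
for t in range(steps):
    idx = rng.integers(0, N, size=batch)
    grad_log_prior = -theta / 10.0
    grad_log_lik = (N / batch) * np.sum(data[idx] - theta)  # minibatch estimate
    theta += 0.5 * eps * (grad_log_prior + grad_log_lik) + rng.normal(0.0, np.sqrt(eps))
    if t >= burn_in:
        samples.append(theta)

samples = np.array(samples)
print(f"exact posterior: mean {post_mean:.3f}, std {post_std:.3f}")
print(f"SGLD estimate:   mean {samples.mean():.3f}, std {samples.std():.3f}")
```

The open question in the SLT context is of course not whether such samplers exist, but whether something like the log canonical threshold controls how long they take to mix on singular loss landscapes.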
This post is a collection of claims about acausal trade, some of which I find more compelling and some less. Overall, I think it’s a good contribution to the discussion.
Claims that I mostly agree with include:
Acausal trade in practice is usually not accomplished by literal simulation (the latter is mostly important as a convenient toy model) but by abstract reasoning.
It is likely to be useful to think about the “acausal economy” as a whole, rather than just about each individual trade separately.
Claims that I have some quibbles with include:
The claim that there is a strong relation between the prevalent acausal norms and human moral philosophy. I agree that there are likely to be some parallels: both processes are to some degree motivated by articulating mutually beneficial norms. However, human moral philosophy is likely to contain biases specific to humans and to human circumstances on Earth. Conversely, acausal norms are likely to be shaped by metacosmological circumstances that we don’t even know yet. For example, maybe there is some reason why most civilizations in the multiverse really hate logarithmic spirals. In this case, there would be a norm against logarithmic spirals that we are currently completely oblivious about.
The claim that the concept of “boundaries” is likely to play a key role in acausal norms. I find this somewhat plausible but far from clear. AFAIK, Critch so far produced little in the way of compelling mathematical models to support the “boundaries” idea.
It seems to be implicit in the post that an acausal-norm-following paperclip-maximizer would be “nice” to humans to some degree. (But Critch warns us that the paperclip-maximizer might easily fail to be acausal-norm-following.) While I grant that it’s possible, I think it’s far from clear. The usual trad-y argument to be nice to others is so that others are nice to you. However, (i) some agents are a priori less threatened by others and hence find the argument less compelling (ii) who exactly are the relevant “others” is unclear. For example, it might be that humans are in some ways not “advanced” enough to be considered. Conversely, it’s possible that human treatment of animals has already condemned us to the status of defectors (which can be defected-against in turn).
The technical notion that logical proofs and Löb/Payor are ultimately the right mathematical model of acausal trade. I am very much unconvinced, e.g. because proof search is intractable and also because we don’t know how to naturally generalize these arguments far beyond the toy setting of Fair Bots in Prisoner’s Dilemma. On the other hand, I do expect there to exist some mathematical justification of superrationality, just along other lines.
Master post for selection/coherence theorems. Previous relevant shortforms: learnability constraints decision rules, AIT selection for learning.