Rationalists are missing a core piece for agent-like structure (energy vs information overload)
The agent-like structure problem is a question about how agents in the world are structured. I think rationalists generally have an intuition that the answer looks something like the following:
We assume the world follows some evolution law, e.g. maybe deterministically like x_{t+1} = f(x_t), or maybe something stochastic. The intuition being that these are fairly general models of the world, so they should be able to capture whatever there is to capture. The world-state x here has some geometric structure, and we want to talk about areas of this geometric structure where there are agents.
An agent is characterized by a Markov blanket in the world that has informational input/output channels for the agent to get information to observe the world and send out information to act on it, intuitively because input/output channels are the most general way to model a relationship between two systems, and to embed one system within another we need a Markov blanket.
The agent uses something resembling a Bayesian model to process the input, intuitively because the simplest explanation that predicts the observed facts is the best one, yielding the minimal map that can answer any query you could have about the world.
And then the agent uses something resembling argmax to make a decision for the output given the input, since endless coherence theorems prove this to be optimal.
Possibly there’s something like an internal market that combines several decision-making interests (modelling incomplete preferences) or several world-models (modelling incomplete world-models).
There is a fairly-obvious gap in the above story, in that it lacks any notion of energy (or entropy, temperature, etc.). I think rationalists mostly feel comfortable with that because:
Such evolution laws are flexible enough to accommodate worlds that contain energy (even if they also accommodate other kinds of worlds where “energy” doesn’t make sense)
80% of the body’s energy goes to muscles, organs, etc., so if you think of the brain as an agent and the body as a mech that gets piloted by the brain (so the Markov blanket for humans would be something like the blood-brain barrier rather than the skin), you can mostly think of energy as something that is going on out in the universe, with little relevance for the agent’s decision-making.
I’ve come to think of this as “the computationalist worldview” because functional input/output relationships are the thing that is described very well with computations, whereas laws like conservation of energy are extremely arbitrary from a computationalist point of view. (This should be obvious if you’ve ever tried writing a simulation of physics, as naive implementations often lead to energy exploding.)
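To make that parenthetical concrete, here is a minimal sketch (step size and step count are arbitrary choices) of a harmonic oscillator integrated two ways: the naive explicit-Euler update steadily pumps energy into the system, while a symplectic variant keeps it roughly constant.

```python
# A unit-mass, unit-stiffness harmonic oscillator integrated two ways.
# The naive explicit-Euler update gains energy every step; the symplectic
# variant does not.

def energy(x, v):
    return 0.5 * v**2 + 0.5 * x**2

def explicit_euler(x, v, dt):
    # both updates use the old state
    return x + dt * v, v - dt * x

def symplectic_euler(x, v, dt):
    # update velocity first, then position with the *new* velocity
    v_new = v - dt * x
    return x + dt * v_new, v_new

dt, steps = 0.1, 1000
xe, ve = 1.0, 0.0
xs, vs = 1.0, 0.0
for _ in range(steps):
    xe, ve = explicit_euler(xe, ve, dt)
    xs, vs = symplectic_euler(xs, vs, dt)

print(f"initial energy:        {energy(1.0, 0.0):.2f}")  # 0.50
print(f"explicit Euler energy: {energy(xe, ve):.2f}")     # grows into the thousands
print(f"symplectic energy:     {energy(xs, vs):.2f}")     # stays close to 0.50
```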
Radical computationalism is killed by information overload
Under the most radical forms of computationalism, the “ideal” prior is something that can range over all conceivable computations. The traditional answer to this is Solomonoff induction, but it is not computationally tractable because it has to process all observed information in every conceivable way.
Recently with the success of deep learning and the bitter lesson and the Bayesian interpretations of deep double descent and all that, I think computationalists have switched to viewing the ideal prior as something like a huge deep neural network, which learns representations of the world and functional relationships which can be used by some sort of decision-making process.
Briefly, the issue with these sorts of models is that they work by trying to capture all the information that is reasonably non-independent of other information (for instance, the information in a picture that is relevant for predicting information in future pictures). From a computationalist point of view, that may seem reasonable since this is the information that the functional relationships are about, but outside of computationalism we end up facing two problems:
It captures a lot of unimportant information, which makes the models more unwieldy. Really, information is a cost: the point of a map is not to faithfully reflect the territory, because that would make it really expensive to read the map. Rather, the point of a map is to give the simplest way of thinking about the most important features of the territory. For instance, literal maps often use flat colors (low information!) to represent different kinds of terrain (important factors!).
It distorts important things, because to efficiently represent information, it is usually best to represent it logarithmically, since your uncertainty is proportional to the magnitude of the object you are thinking about. But this means rare-yet-important things don’t really get specially highlighted, even though they should.
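As a toy illustration of this second problem (the probabilities here are made up): under a logarithmic encoding, an event that is a thousand times rarer only earns about ten extra bits, no matter how much more it matters.

```python
import math

# Under a logarithmic (information-theoretic) encoding, the representation
# cost of an event scales with -log2(p), so a thousand-fold rarer event
# earns only about ten extra bits, regardless of its importance.

events = {
    "mundane, 1-in-1,000":      1e-3,
    "critical, 1-in-1,000,000": 1e-6,
}
for name, p in events.items():
    print(f"{name}: {-math.log2(p):.1f} bits")
# mundane:  ~10.0 bits
# critical: ~19.9 bits, i.e. only about twice the weight for a 1000x rarer event
```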
To some extent, human-provided priors (e.g. labels) can reduce these problems, but that doesn’t seem scalable, and really, humans sometimes struggle with these problems too. Plus, philosophically, this would kind of abandon radical computationalism.
“Energy”-orientation solves information overload
I’m not sure to what extent we merely need to focus on literal energy versus also on various metaphorical kinds of energy like “vitality”, but let me set up an example of a case where we can just consider literal energy:
Suppose you have a bunch of physical cubes whose dynamics you want to model. Realistically, you just want the rigid-body dynamics of the cubes. But if your models are supposed to capture information, then they have to model all sorts of weird stuff like scratches to the cubes, complicated lighting scenarios, etc. Arguably, more of the information about (videos of) the cubes may be in these things than in the rigid-body dynamics (which can be described using only a handful of numbers).
The standard approach is to say that the rigid-body dynamics constitute a low-dimensional component that accounts for the biggest chunk of the dynamics. But anecdotally this seems very fiddly and basically self-contradictory (you’re trying to simultaneously maximize and minimize information, admittedly in different parts of the model, but still). The real problem is that scratches and lighting and so on are “small” in absolute physical terms, even if they carry a lot of information. E.g. the mass displaced in a scratch is orders of magnitude smaller than the mass of a cube, and the energy in weird light phenomena is smaller than the energy of the cubes (at least if we count mass-energy).
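Some back-of-the-envelope numbers for this; every magnitude below is an assumption, chosen only to show the orders of magnitude involved.

```python
# Rough, assumed magnitudes comparing a scratch and ambient light to the
# cube itself.

C = 3e8                       # speed of light, m/s

cube_mass    = 1.0            # kg, a hand-sized cube (assumed)
scratch_mass = 1e-6           # kg, roughly a milligram of displaced material (assumed)

cube_rest_energy = cube_mass * C**2   # ~9e16 J of mass-energy
light_in_video   = 1e-6               # J of light energy reaching the camera (assumed)

print(f"scratch mass / cube mass:        {scratch_mass / cube_mass:.0e}")           # 1e-06
print(f"light energy / cube mass-energy: {light_in_video / cube_rest_energy:.0e}")  # ~1e-23
```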
So probably we want a representation that maximizes the correlation with the energy of the system, at least more so than we want a representation that maximizes the mutual information with observations of the system.
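As a toy sketch of what that could mean (not a worked-out proposal, and it assumes access to a ground-truth energy signal from a simulator, which is exactly the assumption questioned below): score candidate features by how strongly they correlate with the system’s energy rather than by how much observational information they retain.

```python
import numpy as np

# Candidate features are scored by correlation with a (simulated) ground-truth
# energy signal instead of by retained observational information.

rng = np.random.default_rng(0)
T = 500
true_energy     = rng.uniform(0.0, 10.0, T)               # kinetic energy of the cubes (simulated)
rigid_feature   = true_energy + rng.normal(0, 1.0, T)     # tracks the bulk motion, noisily
scratch_feature = rng.normal(0, 1.0, T)                   # pixel-rich but energetically irrelevant

def energy_score(feature):
    return abs(np.corrcoef(feature, true_energy)[0, 1])

print(f"rigid-body feature score: {energy_score(rigid_feature):.2f}")   # high
print(f"scratch feature score:    {energy_score(scratch_feature):.2f}") # near zero
```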
… kinda
The issue is that we can’t just tell a neural network to model the energy in a bunch of pictures, because it doesn’t have access to the ground truth. Maybe by using the correct loss function, we could fix it, but I’m not sure about that, and at the very least it is unproven so far.
I think another possibility is that there’s something fundamentally wrong with this framing:
An agent is characterized by a Markov blanket in the world that has informational input/output channels for the agent to get information to observe the world and send out information to act on it.
As humans, we have a natural concept of e.g. force and energy because we can use our muscles to apply a force, and we take in energy through food. That is, our input/output channels are not simply about information; they also cover energetic dynamics.
This can, technically speaking, be modelled with the computationalist approach. You can say the agent has uncertainty over the size of the effects of its actions, and as it learns to model these effect sizes, it gets information about energy. But actually formalizing this would require quite complex derivations with a recursive structure based on the value of information, so it’s unclear what would happen, and the computationalist approach really isn’t mathematically oriented towards making it easy.
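A minimal sketch of just the first step of that formalization, under an assumed Gaussian observation model: the agent starts with a broad prior over how big the effect of one action is, tries it repeatedly, and narrows down the effect size, which is a crude stand-in for learning how much energy its actions can impart.

```python
import numpy as np

# The agent repeatedly pushes an object and updates a Gaussian belief over
# how much velocity one push imparts.

rng = np.random.default_rng(1)
true_effect = 2.5     # velocity (m/s) a push actually imparts; hidden from the agent
obs_noise   = 1.0     # std of the agent's velocity sensor (assumed known)

mu, var = 0.0, 10.0   # broad Gaussian prior over the effect size
for obs in true_effect + rng.normal(0, obs_noise, 20):
    prior_prec = 1.0 / var
    obs_prec   = 1.0 / obs_noise**2
    var = 1.0 / (prior_prec + obs_prec)            # standard conjugate Gaussian update
    mu  = var * (prior_prec * mu + obs_prec * obs)

print(f"posterior over effect size: mean={mu:.2f}, std={var**0.5:.2f}")
```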
Mentioned in:
- The causal backbone conjecture (17 Aug 2024)
- Rationalist Gnosticism (10 Oct 2024)
- A comment on Stephen Fowler’s Shortform (23 Sep 2024)
I think this is as far away from truth as it can possibly be.
Also, conservation of energy is a consequence of fairly simple and nice properties of the environment, not arbitrary. The reason it’s hard to preserve in physics simulations is that accumulating errors in numerical approximations violate said properties (error accumulation is obviously not symmetric in time).
I think you are wrong in a purely practical sense. We don’t care about most energy. Oceans have a lot of energy in them, but we don’t care, because 99%+ of it is unavailable: it is in a high-entropy state. We care about exploiting free energy, which is present only in low-entropy, high-information states. And, as expected, we learn to notice such states very quickly because they are very cheap sources of uncertainty reduction in the world model.
I don’t mean that rationalists deny thermodynamics, just that it doesn’t take sufficient center stage, in particular when reasoning about larger-scale phenomena than physics or chemistry, where it’s hard to precisely quantify the energies, or especially when considering mathematical models of agency (as mentioned, rationalists usually use argmax + Bayes).
This post takes a funky left turn at the end, making it a lesson that forming accurate beliefs requires observations. That’s a strange conclusion because that also applies to systems where thermodynamics doesn’t hold.
Conservation of energy doesn’t just follow from time symmetry (it would be pretty nice if it did). It follows from time symmetry combined with either Lagrangian/Hamiltonian mechanics or quantum mechanics. There are several problems here:
The usual representations used in rationalist toy models, e.g. MDPs, do not get conservation of energy.
Lagrangian/Hamiltonian/quantum mechanics don’t really model dissipative phenomena. I’ve heard that there are some extensions that do, but they seem obscure.
Partly because of the above, and partly because of the intrinsic reductionism of these models, we don’t have anything even resembling them for higher-level phenomena like politics, nutrition, or programming, even though the point about energy and agency holds in those areas too.
Energy accounting is uninteresting unless it can be localized to specific phenomena, which is not guaranteed by this theorem.
It’s true that free energy is especially important, but I’m unconvinced rationalists jump as strongly onto it as you say. Free energy is pretty cheap, so between your power outlet and your snack cabinet you are pretty unconstrained by it.
Wrote a followup that maybe adds more clarity to it: The causal backbone conjecture.
I think this ties into modeling invariant abstractions of objects, and coming up with models that generalize to probable future states.
I think partly this is addressed in animals (including humans) by having a fraction of their brain devoted to predicting future sensations and forming a world model out of received sensations, but also having an action model that attempts to influence the world and self-models its own actions and their effects. So for things like the cubes, we learn a model of their motions not just from watching video of them, but by stacking them up and knocking them over. We play and explore, and these manipulations allow us to test hypotheses.
I expect that having a portion of a model’s training be interactive exploration of a simulation would help close this gap.
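For what it’s worth, here’s a toy version of that suggestion, with everything in it assumed: a 1D “cube” with an unknown mass, a learner that pushes it with random forces and watches the resulting accelerations, and a least-squares fit that recovers the mass from those interactions, something passive video alone wouldn’t pin down as directly.

```python
import numpy as np

# Interactive exploration in a toy 1D world: push with random forces,
# observe noisy accelerations, fit the mass by least squares.

rng = np.random.default_rng(2)
true_mass = 3.0                                           # kg, hidden from the learner
forces = rng.uniform(-5.0, 5.0, 100)                      # exploratory pushes, in newtons
accels = forces / true_mass + rng.normal(0, 0.05, 100)    # noisy acceleration observations

# model: a = (1/m) * F, so the fitted slope estimates 1/m
slope = np.sum(forces * accels) / np.sum(forces**2)
print(f"estimated mass: {1.0 / slope:.2f} kg")
```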
The thing is, your actions can lead to additional scratches to the cubes, so actions aren’t causally separated from scratches. And the scratches will be visible on future states too, so if your model attempts to predict future states, it will attempt to predict the scratches.
I suspect ultimately one needs to have an explicit bias in favor of modelling large things accurately. Actions can help nail down the size comparisons, but they don’t directly force you to focus on the larger things.
While I agree that physical laws like conservation of energy are extremely arbitrary from a computational standpoint, I do think that once we try to exhaust all the why questions of why our universe has the physical laws and constants that it does, a lot of the answer is "it's arbitrary, and we just happen to live in this universe instead of a different one."
Also, about this point in particular:
Yeah, this is probably one of the biggest differences that come up between idealized notions of computation/intelligence, like AIXI (at the weak end) and the Universal Hypercomputer model from the paper of the same name (at the strong end), and real agents, because of computation costs.
Idealized agents can often treat their maps as equivalent to a given territory, at least with full simulation/computation, while real agents must have differences between the map and the territory they’re trying to model, so the saying “the map is not the territory” is true for us.
I think a lot of this post can be boiled down to “Computationalism does not scale down well, and thus it’s not generally useful to try to capture all the information that is reasonably non-independent of other information, even if it’s philosophically correct to be a computationalist.”
And yeah, this is extremely unsurprising: Even theoretically correct models/philosophies can often be intractable to actually implement, so you have to look for approximations or use a different theory, even if not philosophically/mathematically justified in the limit.
And yeah, trying to have a prior over all conceivable computations is ridiculously intractable, especially if we want the computational model to be very expressive/general like the models in the papers below (abstracts linked), primarily because they can express almost everything in theory (ignore their physical plausibility for now, because this isn't intended to show we can actually build these):
https://arxiv.org/abs/1806.08747
https://www.semanticscholar.org/paper/The-many-forms-of-hypercomputation-Ord/2e1acfc8fce8ef6701a2c8a5d53f59b4fdacab3a
https://arxiv.org/abs/math/0209332
So yes, it is ridiculously intractable to focus on the class of all computational experiences ever, as well as their non-independent information.
So my guess is you’re looking for a tractable model of the agent-like structure problem that is still very general, but you are willing to put restrictions on its generality.
Is that right?
I think everyone is doing that; my point is more about what the appropriate notion of approximation is. Most people think the appropriate notion of approximation is something like KL-divergence, and I’ve discovered that to be false and that information-based definitions of “approximation” don’t work.
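As a toy illustration of what can go wrong with the information-based notion (the numbers are invented, not from the post): a model that has mostly dropped a rare, high-energy event pays almost nothing in KL-divergence, but a lot under an energy-weighted error.

```python
import numpy as np

# A model that nearly forgets a rare but high-energy outcome looks fine by
# KL-divergence, yet badly misrepresents where the energy is.

p = np.array([0.999, 0.001])        # true distribution: mundane vs. rare event
q = np.array([0.99999, 0.00001])    # model that has nearly dropped the rare event
energy = np.array([1.0, 1e6])       # joules at stake in each outcome (assumed)

kl = np.sum(p * np.log(p / q))
energy_weighted_error = np.sum(energy * np.abs(p - q))

print(f"KL(p || q):            {kl:.4f} nats")                # ~0.004, looks like a great fit
print(f"energy-weighted error: {energy_weighted_error:.0f}")  # ~990, the miss is huge
```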