I spent a few hours today just starting to answer this question, and only got as far as walking through what this “agency” thing is which we’re trying to understand. Since people have already asked for clarification on that topic, I’ll post it here as a standalone mini-essay. Things which this comment does not address, which I may or may not get around to writing later:
Really there should be a few more multi-agent phenomena at the end—think markets, organizations/firms, Schelling points, governance, that sort of thing. I ran out of steam before getting to those.
What might “understanding” each of these phenomena look like?
How might it all fit together into a coherent whole picture? (Though hopefully the parts below are enough to start to see the unifying structure.)
How would better understanding of each of these phenomena individually yield incremental progress on various alignment subgoals? (Basically any of them would be incrementally useful for multiple alignment approaches/subproblems.)
How would a unified understanding of all these pieces address the hard parts of alignment? In particular, how could they rule out large classes of potential unknown unknowns?
What “agenty” phenomena are we talking about?
Prerequisite: Boundaries
So there’s this thing where everything interacts with everything else, but mostly not directly. A sled’s motion down a hill is influenced, to varying degrees, by motions of far-off stars or by magma flows in the earth’s crust or by the fashion choices of teenagers at a nearby high school. But those effects are some combination of (a) small, and (b) mediated by things which interact with the sled more directly, like its weight or the coefficient of friction between sled and hill. This phenomenon—most interactions being mediated by a few factors—is a necessary precondition to science working at all in our high-dimensional world. Otherwise, reproducible outcomes would require that we control way too many things to ever realistically achieve reproducibility.
Building on that, there’s also this thing where a biological cell interacts with its surroundings mostly via specific sensors/channels on the membrane, despite all sorts of complex stuff happening inside. Or a deposit bank interacts with its customers mostly via fancy versions of “you put money in and take money out, the bank says ‘no’ if the amount you try to take out is greater than the amount you put in”, despite lots of complex stuff going on behind the scenes at the bank to make it work.
These are “boundaries”: some relatively-large/complex systems interact with the rest of the world only through relatively-narrow/simple information-channels. We need some notion of boundaries, and of interactions flowing across those boundaries, in order to carve out some subsystem to call an “agent”.
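The deposit-bank example above can be sketched as a toy module with complex internals but a narrow interface; the class and its behavior here are invented purely for illustration:

```python
class Bank:
    """Toy boundary: arbitrarily complex bookkeeping could live inside,
    but the rest of the world only interacts through two narrow channels:
    deposit() and withdraw()."""

    def __init__(self):
        self._ledger = []  # hidden internal state behind the boundary

    def deposit(self, amount):
        self._ledger.append(amount)

    def withdraw(self, amount):
        if amount > sum(self._ledger):
            return "no"  # the bank says "no" if you ask for too much
        self._ledger.append(-amount)
        return amount
```

Everything about how the ledger works is invisible from outside; only the narrow deposit/withdraw channel crosses the boundary.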
The Basics: Agency
So there’s this thing where a thermostat senses the initial temperature of a room, and then does different things (like e.g. activating heating or cooling) depending on the initial temperature, in such a way that the final temperature consistently ends up roughly the same, for many different initial temperatures?
Or a bacterium senses how sugar concentrations change as it swims along, and then does different things (like e.g. continuing forward or tumbling around to face a random new direction) depending on how the sugar concentration changes, in such a way that it ends up in an area with lots of sugar, for many different initial positions or sugar concentration landscapes?
Or most animals will look and listen and smell around themselves, and then do different things (like e.g. run or fly or swim different directions, or bite, or stay very still, or...) depending on what they see/hear/smell, in such a way that they end up eating food and not being eaten themselves (mostly, over short time horizons) and having children, for many different configurations of the trees and rocks and plants and animals around them?
That’s the most basic form of “agency”: taking different actions depending on observations, in order to achieve a consistent outcome (or class of outcomes).
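The thermostat version of this observation→action loop can be sketched in a few lines; the dynamics, setpoint, and rates are all invented for illustration, not a physical model:

```python
def run_thermostat(initial_temp, setpoint=20.0, steps=100):
    """Toy thermostat: each step, observe the temperature and pick an
    action (heat, cool, or nothing) so the final temperature ends up
    roughly the same for many different initial temperatures."""
    temp = initial_temp
    for _ in range(steps):
        if temp < setpoint - 0.5:    # too cold: heat
            temp += 1.0
        elif temp > setpoint + 0.5:  # too hot: cool
            temp -= 1.0
        # within the deadband: do nothing
    return temp

# Many different initial conditions...
finals = [run_thermostat(t0) for t0 in (-10.0, 5.0, 20.0, 42.0)]
# ...all converge to roughly the same final temperature.
```

The point is the structure, not the sophistication: different observations trigger different actions, funneling many initial conditions into one consistent outcome.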
The next few phenomena follow from that basic idea: they either allow a system to achieve a consistent outcome more robustly (i.e. across more initial conditions), or to achieve a more specific consistent outcome, or they’re the “easiest” way (in a statistical sense) to achieve a consistent outcome across many different conditions.
Modules
So there’s this thing where an animal or plant develops specialized organs, which interact with the rest of the organism only in relatively simple, specialized ways. Or an organization has many departments, which specialize in particular roles and present a simplified API to the rest of the company.
These are “modules”: subsystems with boundaries of their own, interacting with the rest of the system through relatively-limited/simple information channels.
Factorization
So there’s this thing where a human wants milk for their coffee and doesn’t have any, and they break this problem up into subproblems. One subproblem is to drive to the store. Another is to find the milk within the store. A third is to make enough money to pay for the milk. These subproblems are mostly-independent: the human mostly doesn’t need to think about the details of finding milk within the store in order to drive to the store, nor do they need to think about driving to the store in order to make money.
Or, an organism has organs/organelles with specialized roles which interact in relatively-limited/simple ways. In order for those organs to solve the organism’s problems, they must each handle subproblems which are mostly-independent of the others (else the organs would need to pass a lot more information between themselves to solve the organism’s top-level problems.) Same with departments of a company.
This is “factorization”, a dual in some sense to modules: when faced with a problem, break it up into subproblems which can be solved mostly-independently.
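The milk example can be written as a toy decomposition; each function below is a hypothetical subproblem solver (names and details invented), and the point is that none of them needs to know how the others work internally:

```python
def make_money():
    return 5  # dollars earned; independent of any particular errand

def drive_to_store(location):
    return "store"  # getting there doesn't depend on the store's layout

def find_milk(location, cash, price=3):
    # Only needs the *results* of the other subproblems, not their internals.
    return location == "store" and cash >= price

cash = make_money()
location = drive_to_store("home")
got_milk = find_milk(location, cash)
```

Each subproblem passes only a small result (some cash, a location) to the next, mirroring how modules interact through narrow channels.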
Coherence
So there’s this thing where a biological cell needs a handful of different metabolic resources—most obviously energy (i.e. ATP), but also amino acids, membrane lipids, etc. And often cells can produce some metabolic resources via multiple different paths, including cyclical paths—e.g. it’s useful to be able to turn A into B but also B into A, because sometimes the environment will have lots of B and other times it will have lots of A. But we also expect that the cell usually won’t spend energy to turn B into A and spend energy to turn A into B at the same time; energy is a scarce resource, and we expect that the cell can produce more progeny (on average) if it doesn’t waste resources that way. So, the cell can achieve higher fitness if it represses either the A → B pathway or the B → A pathway at any given time, depending on which metabolite is more abundant. (See here for more detail on this example and how it relates to utility maximization, plus a bunch of meta discussion.)
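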
Or, consider the toy example of a hospital administrator budgeting to save as many lives as possible. If the administrator spends $1M on a liver for someone who needs a transplant, but does not spend $100k on a dialysis machine which will save 3 lives, then the administrator has failed to budget in a way which saves as many lives as possible. They could save strictly more lives on the same budget by taking the dialysis machine over the liver.
That’s “coherence”: taking multiple actions at different times/places, in such a way that the actions together are Pareto optimal with respect to scarce resources.
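The hospital example can be made concrete with a few lines of arithmetic; the costs and lives-saved figures come from the text, while the "menu of options" framing is an illustration:

```python
# Toy version of the hospital administrator's budgeting problem.
options = {
    "liver_transplant": {"cost": 1_000_000, "lives": 1},
    "dialysis_machine": {"cost": 100_000, "lives": 3},
}

def lives_saved(chosen, budget):
    """Total lives saved by a set of choices, or None if over budget."""
    cost = sum(options[name]["cost"] for name in chosen)
    if cost > budget:
        return None
    return sum(options[name]["lives"] for name in chosen)

budget = 1_000_000
incoherent = lives_saved({"liver_transplant"}, budget)  # 1 life, whole budget
coherent = lives_saved({"dialysis_machine"}, budget)    # 3 lives, 10% of budget
```

The incoherent allocation is strictly dominated: the dialysis machine saves more lives and leaves $900k unspent, so the liver-only budget cannot be Pareto optimal.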
World Models
So there’s this thing where a human keeps a map in their head of the stuff around them, including outside their direct line of sight. (You can tell humans do this because if they turn around and see some big obvious thing behind them which was not there last time they looked, they will be surprised, whereas if they see some big obvious thing which was there, they will not be surprised.) And that map constantly updates as new information comes in, in such a way that the map continues to track stuff around the human pretty robustly, even if there’s weird stuff which messes it up for a little while.
Even E. coli, when swimming along a sugar gradient, has an internal molecular species whose concentration roughly tracks the rate-of-change of the external sugar concentration as the bacterium swims. It’s a tiny internal model of the E. coli’s environment. More generally, cells often use some internal molecular species to track some external state, and update that internal representation as new information comes in.
That’s a “world model”: some internal stuff which consistently tracks the state of (some parts of) the external world, and updates to continue tracking that external state as new information comes in.
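A minimal sketch of the "internal stuff tracks external state" idea: one internal number, nudged toward each noisy observation, ends up robustly tracking the external signal. The update rule and parameters are invented for illustration (loosely analogous to the internal molecular species, not a model of real chemotaxis):

```python
import random

def track(observations, rate=0.3):
    """Toy world model: an internal value moved toward each noisy
    observation, so it keeps tracking the external state even though
    any individual reading may be badly off."""
    internal = 0.0
    for obs in observations:
        internal += rate * (obs - internal)  # update toward the observation
    return internal

random.seed(0)
external = 10.0  # the true external state being tracked
noisy_readings = [external + random.gauss(0, 0.5) for _ in range(200)]
estimate = track(noisy_readings)
```

Despite every reading being corrupted by noise, the internal value settles near the true external state, which is the defining behavior of a (very tiny) world model.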
General-Purpose Search
So there’s this thing with humans where you can give them pretty arbitrary tasks, from assembling some furniture to coding an app to planning an invasion, and they’ll go figure out how to do it. In particular, humans can come up with plans to do pretty arbitrary tasks, before actually starting the tasks. (And of course competent humans usually iteratively update those plans as they try stuff and new information changes their world-model.) This is in contrast to fixed strategies, which can’t adapt to new tasks or adjust as new information comes in.
The part which comes up with the plan, and updates it in tandem with changes to the world-model, is “general-purpose search”: some internal method which can find strategies to achieve a wide variety of goals across a wide variety of (modeled) external world-states. (More on what general purpose search is/isn’t here.)
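The key property—one search method, many goals—can be sketched with a generic breadth-first planner over a (modeled) world. The room layout and action names are invented for illustration:

```python
from collections import deque

def plan(start, goal_test, actions):
    """Generic planner: breadth-first search over modeled world-states.
    Nothing here is specific to one task; it accepts any goal predicate
    and any transition function over the world model."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal_test(state):
            return path
        for action, result in actions(state):
            if result not in seen:
                seen.add(result)
                frontier.append((result, path + [action]))
    return None  # no plan exists in the modeled world

# Toy world model: rooms connected by doors.
doors = {"home": ["street"], "street": ["home", "store"], "store": ["street"]}

def actions(room):
    return [(f"go to {nxt}", nxt) for nxt in doors[room]]

# The same planner handles different goals over the same world model.
route = plan("home", lambda room: room == "store", actions)
```

This is of course a toy: real general-purpose search has to cope with enormous state spaces. But the signature is the point—the search is parameterized by the goal and the world model rather than hard-coding either.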
Reflection
So there’s this thing where some animals recognize themselves in a mirror, and some don’t. (You can tell this from the animal e.g. trying to fight with the reflection or scare it away, vs e.g. noticing something sneaking up behind the reflection and then turning to see what’s behind them.)
Or humans explicitly think about themselves, and talk about themselves, their own thought processes, how they’re perceived by others, yada yada yada. Indeed, it’s hard to get humans to stop thinking about themselves even for a short while.
This is “reflection”: a system represents itself, not just in the trivial way that everything “represents itself”, but within its own world-model, including representations of relationships to all the external stuff represented in the world model.
Language
So there’s this thing where you can show a toddler an apple and say “apple”, repeat with maybe three different apples, and from then on the toddler will mostly interpret “apple” the same way most other humans do. In the minds of two different humans, the word will map to internal representations of roughly-the-same stuff in the environment. Furthermore, words can be composed together in an exponentially huge variety of ways, and different humans will still end up mapping the words to internal representations of roughly-the-same stuff in the environment. (Not super consistently, unfortunately, but enough that humans are able to communicate at all, which is rather remarkable when dealing with an exponentially large space of potential meanings.)
This is “language”: two systems coordinate to pass signals between them which map to internal representations of roughly-the-same stuff in the environment.
Thanks for your response. There’s a lot of good material here, although some of these components like modules or language seem less central to agency, at least from my perspective. I guess you might see these as appearing slightly further down the stack?
They fit naturally into the coherent whole picture. In very broad strokes, that picture looks like selection theorems starting from selection pressures for basic agency, running through natural factorization of problem domains (which is where modules and eventually language come in), then world models and general purpose search (which finds natural factorizations dynamically, rather than in a hard-coded way) once the environment and selection objective have enough variety.
Even E. coli, when swimming along a sugar gradient, has an internal molecular species whose concentration roughly tracks the rate-of-change of the external sugar concentration as the bacterium swims. It’s a tiny internal model of the E. coli’s environment. More generally, cells often use some internal molecular species to track some external state, and update that internal representation as new information comes in.
Woah, this sounds incredibly fascinating, I’ve never heard of this — do you have a link to more info, or terminology to google?
“Chemotaxis” is the main relevant jargon.