This seems partially right, partially confused in an important way.
As I tried to point people to years ago, how this works is a quite complex process, in which some higher-level modelling (“I see a lion”) leads to a response in lower levels connected to body states, some chemicals are released, and this interoceptive sensation is re-integrated at the higher levels.
I will try to paraphrase/expand in a longer form.
The genome already discovered a ton of cybernetics before inventing neocortex-style neural nets.
Consider e.g. the problem of morphogenesis—that is, how one cell replicates into something like a quadrillion cells in an elephant, which end up reliably forming a particular body shape and cooperating in a highly complex way. It's a really impressive and hard optimization problem.
Inspired by Levin, I'm happy to argue it is also impossible without discovering a lot of powerful stuff from information theory and cybernetics, including various regulatory circuits, complex goal specifications, etc.
Note that there are many organisms without neural nets which still seek reproduction, avoid danger, look for food, move in complex environments, and, in general, live using fairly complex specifications of evolutionarily relevant goals.
This implies the genome had complex circuitry specifying many/most of the goal states it cares about before it invented the predictive processing brain.
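To make this concrete, here is a minimal toy sketch of the kind of pre-neural regulatory circuit meant here: a hard-coded setpoint plus a simple feedback rule already amounts to a goal specification, with no learned world model anywhere. All names and numbers are invented for illustration.

```python
# Toy homeostatic circuit: a goal state specified without any neural net.
# All names and constants are illustrative, not taken from any real organism.

def regulatory_step(nutrient_level, setpoint=1.0, gain=0.5):
    """One tick of a hard-wired feedback loop.

    The "goal" (keep nutrient_level near setpoint) lives entirely in the
    circuit's structure: error detection plus a corrective response.
    """
    error = setpoint - nutrient_level        # deviation from the goal state
    feeding_drive = max(0.0, gain * error)   # corrective action, e.g. move toward food
    return feeding_drive

# The "organism" seeks food whenever its internal state drops below the setpoint,
# even though nothing here models the external world explicitly.
print(regulatory_step(nutrient_level=0.4))  # positive feeding drive (~0.3)
print(regulatory_step(nutrient_level=1.0))  # 0.0: goal state reached, no drive
```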
Given these pre-existing circuits, what the genome did when developing the brain's predictive processing machinery likely wasn't hooking things up to “raw sensory inputs”, but hooking the PP machinery up to the existing cybernetic regulatory systems, often broadly localized “in the body”.
From the PP-brain-centric viewpoint, the variables of this evolutionarily older control system come in via a “sense” of interoception.
The very obvious hack the genome is using in encoding goals into the PP machinery is specifying the goals mostly in terms of interoceptive variables, utilizing the existing control circuits.
Predictive processing / active inference then goes on to build a complex world model and execute complex goal-oriented behaviours.
How these desirable states are encoded is what I called agenty subparts, but according to Friston it is basically the same thing as what he calls “fixed priors”: as the genome, you for example “fix the prior” on the variable “hunger” to “not being hungry”. (Note that a lot of the specification of what “hunger” is, is done by the older machinery.) Generic predictive processing principles then build circuitry “around” this “fixed prior” which, e.g., cares about objects in the world which are food. (Using the intentional stance, the fixed variable + the surrounding control circuits look like a sub-agent of the human, hence the alternative agenty-subparts view.)
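A minimal caricature of the “fixed prior” idea, under my own simplifying assumptions (this is not Friston's actual formalism; names, dynamics, and numbers are invented): the genome pins the expected value of an interoceptive variable, and action selection just tries to make predicted interoception match that pinned value.

```python
# Caricature of a "fixed prior" on an interoceptive variable.
# Not Friston's formalism; names, dynamics and numbers are invented for illustration.

FIXED_PRIOR_HUNGER = 0.0  # the genome "fixes the prior": expected hunger is zero

def predicted_hunger(current_hunger, action):
    """Crude forward model: eating lowers hunger, other actions leave it unchanged."""
    if action == "approach_food":
        return max(0.0, current_hunger - 0.8)
    return current_hunger

def select_action(current_hunger, actions=("approach_food", "wander", "rest")):
    """Pick the action whose predicted interoceptive state best matches the fixed prior.

    The circuitry that grows "around" the fixed variable (world models of food
    objects, policies for getting food) is compressed here into a single argmin.
    """
    return min(actions, key=lambda a: abs(predicted_hunger(current_hunger, a) - FIXED_PRIOR_HUNGER))

print(select_action(current_hunger=0.9))  # "approach_food": hunger is far from the prior
print(select_action(current_hunger=0.0))  # every action ties; hunger already matches the prior
```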
Summary:
- the genome solves the problem of aligning the predictive processing neural nets by creating a bunch of agenty subparts/fixed priors, caring about specific variables in the predictive processing world model. PP/active inference deals with how this translates to sensing and action.
- however, many critical variables used for this are not sensory inputs, but interoceptive variables, extracted from a quite complex computation.
This allows the genome to point to stuff like sex or love for family relatively easily, and to build “subagents” caring about this. Building complex policies out of this is then left to predictive-processing-style interactions.
Whether you would count this as “direct” or “indirect” seems unclear.
Here’s my stab at a summary of your comment: “Before complex brains evolved, evolution had already optimized organisms to trade off a range of complex goals, from meeting their metabolic needs to finding mates. Therefore, in laying down motivational circuitry in our ancient ancestors, evolution did not have to start from scratch, and already had a reasonably complex ‘API’ for interoceptive variables.”
This sounds right to me. Reasons like this also contribute to my uncertainty about how much weight to put on “But a sensory food-scent-detector would be simpler to specify than a world-model food-detector”, because “simpler” gets weird in the presence of uncertain initial conditions. For example, what kinds of “world models” did our nonhuman precursors have, and, over longer evolutionary timescales, could evolution have laid down some simpler circuitry which detected food in their simpler world models, which we inherited? It’s not that I find such possibilities probable on their own, but marginalizing over all such possibilities, I end up feeling somewhat uncertain.
That said, I don’t see how complex interoceptive variables + control systems help accomplish “love for family” more easily, although that one doesn’t seem very inaccessible to the genome anyway (in part since at least some of your family is usually proximate to sensory inputs).
I would correct “Therefore, in laying down motivational circuitry in our ancient ancestors, evolution did not have to start from scratch, and already had a reasonably complex ‘API’ for interoceptive variables.”
from the summary to something like this:
“Therefore, in laying down motivational circuitry in our ancient ancestors, evolution did not have to start by locating ‘goals’ and relevant world-features in the learned world models. Instead, it re-used the existing goal-specifying circuits and implicit world-models already present in older organisms. Most of the goal specification is done via “binding” the older and newer world-models through some important variables. From within the newer circuitry, an important part of the “API” between the models is interoception.”
(Another way to think about it: imagine a blurrier line between a “sensory signal” and a “reward signal”.)
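To illustrate the blur with a toy sketch (my own, with invented names and numbers): the quantity that plays the functional role of a reward signal below is computed exactly like any other prediction error, just on an interoceptive channel whose prediction is genome-fixed.

```python
# Toy illustration of the blurred line between "sensory signal" and "reward signal".
# All names and numbers are invented for illustration.

def prediction_error(observed, predicted):
    """The same error computation for every channel, exteroceptive or interoceptive."""
    return observed - predicted

# Exteroceptive channel: an ordinary sensory prediction error.
visual_error = prediction_error(observed=0.7, predicted=0.6)

# Interoceptive channel with a genome-fixed prediction ("not hungry"):
# its error plays the functional role of a (negative) reward signal.
hunger_error = prediction_error(observed=0.9, predicted=0.0)

print(visual_error, hunger_error)  # structurally the same kind of signal
```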
Jan—well said, and I strongly agree with your perspective here.
Any theory of human values should also be consistent with the deep evolutionary history of the adaptive origins and functions of values in general—from the earliest Cambrian animals with complex nervous systems through vertebrates, social primates, and prehistoric hominids.
As William James pointed out in 1890 (paraphrasing here), human intelligence depends on humans having more evolved instincts, preferences, and values than other animals, not fewer.