Do we have the right kind of math for roles, goals and meaning?
Preamble on LessWrong relevance: Agentic goals and values, which are of some interest here at LW, are a special case of a broader class of objects that have interested linguists, biologists, and social scientists for a while. This class of objects should really have a common basic mathematical language. It does not, by historical accident. As a consequence, low-hanging fruit and basic confusions abound: research relies on verbal intuitions, gets hung up on domain-specific peculiarities, and misses fundamental generalizations across systems and fields. I suspect the potential for progress ranges from Shannon-level to Newton-level, and unfortunately, no Shannons or higher are currently to be found working on it.
This is a broad introduction to give context for an upcoming sequence of increasingly technical posts. You can also try to skip ahead to “What should a telic science look like” for a different take on how to introduce the project.
Motivation: A mental black hole
The way mathematical models were introduced into the biological, cognitive and social sciences should be recognized as a great redirection: it spawned many new fascinating questions, but didn’t make it any easier to answer the old questions.
The old questions all have to do with something like function, role, goal, purpose or meaning: what’s the point of this organ, this gene, this word, this institution?
We have been made to believe that these questions are either so trivial as to admit simple verbal answers, or unscientific until converted into other questions that are the proper domain of science (such as “how is this implemented?” or “how could this evolve?”). This mindset is such a strong attractor that it has become a kind of intellectual black hole: all our attempts to move away lead back there, forgetting our original intent.
Everything I will say next is thus both exceedingly simple, and exceedingly hard—researchers who try to address this seriously tend to either go crackpot, or fall back into the black hole. It’s hard to imagine what success even looks like when it has never ever happened.
1) Two basic directions of scientific explanation
The behavior of many systems can be explained from at least two directions: what’s happening inside them, and what’s influencing them from the outside.
At one extreme, there’s usual physics: it deals with isolated systems or simple boundary conditions, and it ignores or minimizes the outside in order to focus on internal components and bulk laws.
At the other extreme, there’s the world of signals and encoding: we don’t care what a hard drive is made of, everything interesting about it (e.g. how the stored sequence of 0s and 1s will change in the future) is imposed and retrieved by outside users.
Somewhere in between, there’s biology: both external selection and internal processes are intricate and possibly important; interesting phenomena are often non-trivial interactions between these two forces.
Everything that we want to call function, role, goal, purpose or meaning—a cluster that I will label telos because why not—is some variation on explaining a system by selection from the outside.
Pitfall #1: Telos disappears when you look at a system in isolation. If your explanation still works when you remove the object from its context, then you are asking and answering a non-telic type of question.
Taking a book out of any context, you can ask about its “information content” in the sense of Shannon’s theory: how many characters it contains, which can encode that many bits, etc. But you cannot ask about its meaning. Meaning is not a property of the book in itself. The fact that a certain sequence of characters means the story of Hamlet to you is, at the very least, an interaction between the text and you. The same signal could mean many different things to many different lifeforms. Conversely, the same meaning could be carried by many different media—a physical book, an email, acting out the story, etc.
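To make the contrast concrete, here is a minimal toy sketch (my own illustration, not part of any formal proposal): a Shannon-style quantity is computable from the string alone, while nothing in that computation can touch what the string means to a particular reader.

```python
from collections import Counter
from math import log2

def information_bits(text: str) -> float:
    """Empirical character entropy times length: a rough upper bound on how
    many bits this string can carry as a signal, computed from the string
    alone, with no context whatsoever."""
    counts = Counter(text)
    n = len(text)
    return n * -sum((c / n) * log2(c / n) for c in counts.values())

print(information_bits("To be, or not to be, that is the question"))
# The output is a property of the text in itself. That the text evokes Hamlet
# for you is a fact about the text-reader interaction, and no function of the
# string alone can compute it.
```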
In brief, there is a many-to-many relationship between what a thing is in itself, and what its role is in various contexts. Not every thing can play every role (a short book cannot convey a long meaning), but each thing has potential for a multiplicity of roles, and each role can be filled by many things.
Confusing the thing-in-itself and the role-in-context is how systems biology and many other fields keep failing to say anything interesting about telos. A certain biological pattern is, in itself, a feedback or feedforward loop, a controller, a switch, a memory, etc., irrespective of context, so these labels are not and can never be functions/teloi.
2) Recursing down or up
Each of the two approaches above becomes a scientific method when we apply it recursively until we hit something we can treat as an axiomatic starting point:
Reductionism is going down to ever smaller parts, to smaller space and shorter time scales, until we have found simple particles that we can fully understand, and then going back up to explain the system.
Telism* is going up to ever broader contexts, to larger space and longer time scales, until we have found simple selection rules that we can fully understand, and then going back down to explain the system.
* (coined here because “functionalism” or “selectionism”, let alone “teleology”, are pretty loaded)
Darwin’s idea is a mostly typical instance of telism: when we don’t understand an organ or a behavior, we go up to the organism, or even further up to the lineage or ecosystem, until we find a simple purpose like reproduction or persistence. Think also of strategy games: to explain why a chess player selected a move, we go up to the simple overall goal of winning the game.
We can then go back down to explain the big parts within the whole, and the small parts within the big: the goals of each phase of the game, of each tactical idea within a phase, and finally the complex purpose of a move with respect to all these nested contexts.
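As a toy illustration of this up-then-down move (a tiny made-up game tree, not a claim about how chess players or engines actually reason): in plain minimax, a move has no intrinsic worth, and everything flows down from the single terminal criterion of winning.

```python
# Hypothetical toy game tree: a node is either a terminal outcome for the root
# player (+1 win, 0 draw, -1 loss) or a dict mapping named moves to subtrees.
def value(node, root_to_move=True):
    """The worth of any move is derived entirely from the outer goal,
    i.e. the win/draw/loss values at the leaves; nothing else is consulted."""
    if isinstance(node, int):
        return node
    best = max if root_to_move else min
    return best(value(child, not root_to_move) for child in node.values())

tree = {
    "sacrifice": {"accept": 1, "decline": 0},
    "quiet move": {"counterattack": -1, "defend": 0},
}
# Evaluate each root move assuming the opponent then replies optimally.
print({move: value(subtree, root_to_move=False) for move, subtree in tree.items()})
# -> {'sacrifice': 0, 'quiet move': -1}: the "purpose" of the sacrifice only
# exists relative to the goal encoded at the leaves.
```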
Whichever approach works best depends on whether the nearest/strongest source of simplicity and predictability is toward the inside or the outside. For reductionism, larger systems are typically more complex; for telism, smaller systems are typically more complex: any component of your heart should ensure that your heart functions, but should also avoid killing you in any other way (e.g. making byproducts that are toxic to your liver). Constraints can pile up—or rather, pile down—until the atom, like the individual in social systems, is possibly the most complicated thing to explain as it can play many roles in many nested contexts.
Various systems that appear complex from a reductionist point of view are likely to be better understood through telism (or historicism), as our gut has been telling us for centuries regarding questions of biology, sociology, or linguistics.
3) The missing method
Everything I’ve called telos can be given some interpretation in well-established formal frameworks such as optimization, variational calculus, statistical mechanics, etc. What is missing is a general method telling us how to use any of these languages to answer the right type of questions.
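For concreteness, these frameworks typically take one of the following textbook shapes, quoted here only to fix notation (an optimization, a variational principle, a Boltzmann-type distribution):

$$x^{\star} = \operatorname*{arg\,max}_{x \in \mathcal{X}} U(x), \qquad \delta \int L(q, \dot{q})\, dt = 0, \qquad p(x) \propto e^{-E(x)/k_B T}$$

Each of these reads as “the system is selected for something”, which is why they keep getting recruited as languages for telos; but none of them says where $U$, $L$ or $E$ should come from in a given context, and that is exactly the missing method.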
There is a rigorous scientific method for understanding how systems are made of parts, how those parts are made of smaller parts, and how they all combine. The heart of this was the Cartesian revolution: every qualitative property of any physical system (its color, warmth, taste, etc.) can ultimately be boiled down to spatio-temporal geometry, to quantitative information on the configurations and dynamics of a small set of universal particles.
There is no rigorous scientific method for understanding how contexts and purposes arise from larger and simpler contexts, nor how they interact with each other to generate complex selection on subsystems. This is done verbally in many fields such as engineering (system architecture, refactoring, etc.), but we don’t have a formal science for how to reverse-engineer the functional structure of an existing system.
This is where every field to date has failed to produce interesting formal theory:
i) Generality: For reductionism, the formalism of dynamical equations allows me to directly compare, say, the propagation of sound waves and the spread of diseases (a concrete side-by-side is sketched after point ii). For telos, no formalism tells me how the role of a word in a sentence compares to the role of an institution in society, or of a neuron in the brain. In particular, nothing tells me what should be universal (like the Navier-Stokes equation for fluids) and what should be system-specific (like its coefficients, which depend on the fluid).
ii) Autonomy: Reductionism provides a complete description of the universe that does not ever need to invoke purpose (it may be impractical or even pragmatically impossible to compute, but it is still valid in principle). Likewise, a true telic science would be able to describe purposes and functions for anything in the universe without referring to any of the objects of reductionism.
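For point (i), here is the kind of shared form I mean, again standard textbook equations quoted only to show the universal/system-specific split:

$$\partial_t^2 p = c^2 \nabla^2 p \;\;\text{(acoustic waves)}, \qquad \dot{S} = -\beta S I,\;\; \dot{I} = \beta S I - \gamma I,\;\; \dot{R} = \gamma I \;\;\text{(SIR epidemics)}$$

Both fit the single template “state plus law of evolution”, $\dot{x} = f(x)$ or its field-theoretic analogue; the form is universal, while $c$, $\beta$ and $\gamma$ carry the system-specific content. As far as I can tell, nothing comparable exists on the telic side.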
Pitfall #2: Starting toward telism and then deviating back toward reductionism.
Optimization and evolutionary theory put a lot of emphasis on the constraints bearing on a system, but then always end up asking fundamentally reductionist questions about that system: how is optimization or evolution implemented in components, when is it successfully achieved by certain dynamical rules, etc.
Even the select few who talk unashamedly about multi-level selection or top-down causation end up inexorably caught in the trap of “how does this fit into reductionism” rather than “how to talk about this on its own terms” as soon as they start doing math.
Conclusion: Why new math
In this post, I argued that
- There really is a broad class of non-reductionist “telic” questions for which we simply do not have the right type of formalization/math/data structures.
- The reason it hasn’t been solved yet is that it is terribly hard to think about it for more than a few minutes without being drawn back to the wrong type of questions and answers, and trapped in irrelevant technical mazes. The more of a mathy/physicsy background one has, the worse it gets.
Let’s discuss these statements.
I feel it is rather easy to observe that we are lacking a general and self-sufficient theory for explaining any object by its role in ever-broader contexts, under ever-wider selection rules. As far as I know, only linguistics has ever attempted to base a whole formalism on this idea, and that formalism still falls short of capturing the essence of linguistic function.
It may seem more audacious to claim that we do not even have the right math.
Many scientists do ponder related questions and use math (optimization, information theory, dynamics, statistical mechanics). Sometimes, they get interesting results. Yet the math conspicuously does not help with the core of the question: it comes in after the fact, raising and solving a whole other set of issues. The scientist must do all the heavy lifting in her brain.
Relevant if slightly tangential example: the mathematical machinery of game theory comes into play once we’ve decided who the agents are, and what set of actions and goals they are equipped with. Many social scientists, I believe, would argue that getting to this “starting point” is the entire purpose of their field, and everything of value is lost if we bungle these assumptions. For them, cranking out the math of subsequent agent behavior is almost always missing the point. Of course, the same social scientists produce very little general theory themselves (or might even be offended at the prospect).
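A minimal sketch of that division of labor (a textbook prisoner’s dilemma with made-up payoffs, not tied to any particular field): everything the social scientist cares about is packed into the inputs, and the code that follows is the routine part.

```python
import itertools

# All the "telic" work is already done by the time these inputs exist:
# who the players are, what counts as an action, why these payoffs.
actions = ("cooperate", "defect")
payoffs = {  # (row action, column action) -> (row payoff, column payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

def is_nash(row, col):
    """Neither player can gain by deviating unilaterally."""
    row_ok = all(payoffs[(row, col)][0] >= payoffs[(d, col)][0] for d in actions)
    col_ok = all(payoffs[(row, col)][1] >= payoffs[(row, d)][1] for d in actions)
    return row_ok and col_ok

print([pair for pair in itertools.product(actions, actions) if is_nash(*pair)])
# -> [('defect', 'defect')]: the formal machinery cranks through mechanically
# once the agents, actions and goals have been decided elsewhere.
```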
This whole issue was recognized in the 1950s and 60s, amid waves of cybernetics and “systems” thinking, and this recognition led to many ideas in engineering and approximately zero progress in basic science. Scientific fields that claim descent from that tradition, e.g. systems biology, simultaneously advertise the problem and fall back into the black hole, answering the “wrong” questions.
I feel that trying to answer “telic” questions with maths born of reductionism (dynamical systems, game theory, probabilities/set theory...) is as tricky as trying to build physics on top of Euclidean geometry alone. It did work for Galileo, and kudos to him. But sooner or later, we need to invent something like differentials and integrals, coordinates and vector spaces, something really geared toward the kind of questions we want to answer. This will probably involve some metamathematics that is well above my pay grade.
In the meantime, many existing tools may turn out to be practical, like optimization or variational calculus or stat mech, but they will still tend to naturally point us in all the wrong directions. Trying to use them without getting lost will be my main purpose for the rest of this sequence.
For LW-relevant questions, such as agents and goals and values, my foremost message here will be that some deep traps might be avoided by keeping in mind a much broader class of issues, including some that are potentially much simpler and more likely to be solved on a reasonable time scale.
Follow-up posts include:
Basic methodology/philosophy: What should a telic science look like
Reference points from various fields: Telic intuitions across the sciences (WIP)
Actually doing something (or starting to): Building a Rosetta stone for reductionism and telism. (WIPier)
Broader context:
Historicism in the math-adjacent sciences