On Meaning and Machines
Cross-posted from mybrainsthoughts.com
Merriam Webster:
meaning \ˈmē-niŋ\ noun
1 a : the thing one intends to convey especially by language
  b : the thing that is conveyed especially by language
2 : something meant or intended
3 : significant quality
4 a : the logical connotation of a word or phrase
  b : the logical denotation or extension of a word or phrase
Meaning is an interesting concept. In everyday conversation, when the “meaner” is a person, the idea of meaning fits effortlessly into our worldview; other people’s words and actions mean things, and we’re able to understand that meaning and generate our own in return. However, when we look at a finer level of granularity, the idea of meaning becomes less clear. Your brain is made up of ~100 billion neurons and ~100 to 1,000 trillion synapses, which all combine to make “you” – where does meaning reside within those connections? Humans have been wondering about our inner workings since ancient times, and while the progress of neuroscience has shed some light on this question, it’s still far from solved. Over the last ~100 years, we’ve had the chance to see this problem in a different light with the invention of computers. With their extreme complexity, it feels as though these systems may also be able to have meaning. Though there have been significant efforts in this area, progress has been limited, and computers still seem to be far away from understanding the world as we do. This post will take a deeper look at meaning and how it relates to machines, first laying some foundations for the concept of meaning, and then looking more explicitly at how we might get this meaning “into” machines.
As in previous posts, it will be helpful to build up a foundational concept of meaning from first principles, due to the ambiguity of the general concept. Zooming way in, we can picture the world as a vast number of particles moving around, bumping into each other and exchanging energy according to certain laws. From this point of view, there is no meaning, only things happening and particles bumping into other particles. However, zooming out a bit, we can see that certain patterns emerge in the interactions of these particles. The lowest level particles (that we’re aware of) settle into stable (or semi-stable) configurations, forming atoms, which settle into patterns of molecules, which settle into patterns of “objects” (solids, liquids, gases, etc.). This higher level viewpoint better jibes with our experience of the world, but there’s still no meaning at this level. There’s an order to the bumping of particles, but no meaning. To get meaning, we first need “meaners”, or agents. We can view an agent as a particular pattern which can emerge from the soup of particles; a pattern which “acts” to perpetuate itself (with a relatively simple example being a virus – though these types of patterns started off far more simply). Note that there’s no real “line” between an agent and the rest of the environment – we can choose to draw an artificial line from a particular viewpoint (and doing so makes the full system easier to understand), but from a different viewpoint the entire system is still simply particles bumping into other particles.
Agents get us closer to meaning, but the concept isn’t relevant for all types of agents. To really attribute meaning, we need a particular type of agent / pattern – one which functions in such a way as to mirror / represent the tendencies of the outside world inside itself. Essentially, we need agents which are capable of building an internal world model. Our brains function in this way, but far simpler organisms also create some sort of representation – even C. elegans, the microscopic nematode, can represent certain facets of the world in its 302 neurons. When a C. elegans finds food, this means something to it, in that the internal world model housed in its brain is updated and the worm’s future actions are modified based on that fact. Compare this to a virus: when a virus gets into a host cell, certain processes are activated which release the genetic information of the virus into the cell. There’s no (or extremely limited, depending on your point of view) world modeling / pattern recognition / abstraction going on. The evolution of nervous systems allowed these representations to get started, and the further centralization of neurons in brains allowed the representations to become increasingly powerful. While a C. elegans has some level of meaning in its 302 neurons, it pales in comparison to the meaning more advanced animals are capable of, particularly humans. The internal world model is the key element for meaning to arise; meaning is the isomorphism between that model and the actual world.
Looking further at the brain, we can see that meaning arises when certain neural patterns represent patterns of the outside world. Put differently, there’s a mapping between the way in which neurons are connected / update their connections and the events of the outside world which are perceived through the sensory organs. Evolution has crafted our genome in such a way as to create some level of alignment, or accuracy of mapping, in the initial formation of our brains (in the womb); the major foundation is then constructed during the first few years of life, as our brains adapt to the high level order of the world (objects, causation, etc.). From that point forward, we can mean things and understand meaning; meaning requires a certain (non-conscious) foundational understanding of the world, upon which more detailed events of the world can be understood. To take a simple example – if you say “Hi, I’m John” to a baby, is there any meaning being conveyed? You know what you mean by creating the specific utterance of “haɪ aɪm ʤɑn” (presumably, if you’re an English speaker, you mean that your name is John – although that phonetic sequence might have a different meaning in other languages, or you could just be using English in an unconventional way, or the baby’s name might be “Eimjon”, or… only you know what meaning you attach to the utterance), but the baby derives no meaning from it; the mapping in their brain is not yet developed enough to extract meaning from such abstract methods of communication. The baby does have some understanding of meaning; for example, if they see a person, they may understand that it is a person, and not some other object or aspect of the world, or, if they cry, they might mean that they’re hungry. This rudimentary ability to mean things is an important step on the path towards general human meaning, which is built up over our lifetimes (with the first few years being the most formative) as our brains build up a network of concepts and associations that accurately model the world.
Now that we’ve built up a bit of a foundation for human meaning, let’s take a look at machines (specifically computers). For most of their history, we’ve put our own meaning into computers, with their operations serving as an extension of that meaning. For example, say we want to solve a math problem, and we know all the steps – it just takes a while to calculate. We set up the computer, and let it run through the entire calculation. There’s no meaning in the computer; while there’s a mapping from patterns of the world to internal representations (the math problem), there’s no foundational understanding of the world built up for this mapping to sit within. The representations mean something to us because we have that foundational understanding, which the symbols and operations are built on top of. We then design the computer in such a way as to act in accordance with our meaning – which is different from the computer “having meaning” itself. Standing alone, a mapping can’t do very much. The image below highlights the differences between a human and a computer adding “1+1” – for the human, this operation happens in the context of the deep web of concepts and relations which constitutes meaning, whereas the computer simply passes some voltages through some gates (which are structured in such a way as to provide outputs which align with human conceptions).
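To make the gate-level picture concrete, here is a minimal sketch (in Python, standing in for actual circuitry) of a half adder – the logic-gate arrangement that computes 1+1. The gates produce the right bits, but nothing in them refers to quantities in the world; the reference lives entirely in us.

```python
# A toy stand-in for the hardware: "adding 1+1" is just bits pushed
# through gates. The structure yields outputs that align with human
# arithmetic, but the symbols mean nothing to the circuit itself.
def half_adder(a, b):
    """Add two one-bit values; return (sum_bit, carry_bit)."""
    return a ^ b, a & b  # XOR gives the sum bit, AND gives the carry

sum_bit, carry_bit = half_adder(1, 1)
print(carry_bit, sum_bit)  # 1 0  ->  binary "10", i.e. two
```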
It’s easy for humans to fall into the trap of thinking we can imbue our systems with meaning by giving them relational data to use; for example, Doug Lenat’s Cyc, which “aims to assemble a comprehensive ontology and knowledge base that spans the basic concepts and rules about how the world works”. On the surface, this type of approach seems reasonable – computers obviously lack common sense, so if we can “build in” most common things which humans know, computers will be able to act more appropriately. Looking specifically at Cyc, Lenat has worked to encode a huge variety of “common” information about the world; things like “a dog has four legs” or “Barack Obama was a president”. As a human looking at this system, the million plus encoded pieces of information seem powerful. However, this power comes more from the meaning we bring to these relations than from any meaning within the relations themselves. When we read “a dog has four legs”, we can’t help but bring to mind our concepts for each of the words, and the interplay between those concepts. The sentence means something to us because each of the terms has been built up from a deep foundation. We see the “a” and our minds focus on a particular instance of a thing, then we read “dog” and that particular instance slides into focus as a hairy creature with perky ears and a narrow snout (or whatever your personal conception of “a dog” is). We see “has” and our minds slip into the possessive form, attributing a particular thing to the conceived “a dog”, and we then read “four legs” and draw to mind the idea of a thin appendage, next to another, with another pair behind. At this point, the appendages imagined are furry and dog-like due to the setup of the earlier part of the sentence. The key point here is that the sentence is not words to us, it is meaning, with each word representing a particular pattern of matter / perception (or a pattern of patterns, etc.). All this happens “under the hood”, so to speak, so we feel as though the words themselves carry the meaning, and feel as though providing a system with the logical relation “a dog has four legs” adds to its common sense. All the system actually gets, however, is a meaningless (in the sense of lacking meaning) link; an association from one node to another.
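As a toy illustration (not Cyc’s actual CycL syntax – just a hypothetical sketch of the general shape), here is what such assertions look like from the machine’s side: bare links between symbols, with all of the apparent meaning supplied by the human reading them.

```python
# Hypothetical mini knowledge base of subject-relation-object triples.
# To the machine, "dog" and "leg" are ungrounded tokens; the link is
# just an association from one node to another.
knowledge_base = [
    ("dog", "has_part", "leg"),
    ("dog", "number_of_legs", 4),
    ("BarackObama", "held_office", "President"),
]

def query(subject, relation):
    """Return every object linked to `subject` by `relation`."""
    return [obj for s, r, obj in knowledge_base if s == subject and r == relation]

# The system can "answer" questions about dogs, but 'leg' is just
# another symbol to it -- no perception or concept sits underneath.
print(query("dog", "has_part"))  # ['leg']
```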
Concepts can’t be built up through top-down construction of a fact base, as it’s the underlying foundation which gives them so much power. The brain has evolved over millions of years to coherently pull together experiences in a way that makes sense of the world, allowing the organism to operate effectively within it. There’s some a priori “knowledge”, or “potential for knowledge”, built into the structure of the brain, and then experience acts to shape the brain in a way which mirrors the world. The most basic level is not language, or relationships between linguistic objects – we can look at any other animal to see that. Rather, the base level is made up of patterns and regularities – matter (and the light reflecting from it) tends to clump together in “objects”, and these “objects” tend to stay clumped, unless certain other clumps interact with them. As babies, we start by viewing and understanding the world in these types of broad strokes – then we learn that certain objects are “alive”, that certain ones are “friendly” (or that they will provide for our needs), etc. Only after this initial foundation has been laid do we make the jump to language. By the time we’re saying the word “dog”, we aren’t learning about the clump of matter, or about the object; we’re learning that a particular sound is “attached” to that clump / object. Throughout this whole process, the knowledge is being put together in a particular way from the ground up, dictated by the algorithms of the brain that allow it all to “fit together”.
One analogy I’ve found helpful is to picture pieces of knowledge, or bits of experience, as Tetris pieces. Millions of them flow into the system, of all different shapes and sizes. The brain has a method for handling this inflow and “making sense of it”, sorting and filtering through the pieces and getting them into useful positions. The pieces are the base level, not the “words” level. This stands in stark contrast to attempts like Cyc, which simply ignore the base (sub-language) level, and thereby ignore any actual meaning. The word sequence “a dog has four legs” only means something to humans because of the deeper (sub-language) associations each word brings to mind for us; it doesn’t denote any meaning itself. The graphic below tries to highlight this idea – individual pieces of perceptual input (e.g. a single cone firing, or a single bipolar / ganglion cell, or some low-level circuit in the cortex) are symbolized as Tetris pieces, and the “world modeling algorithm” serves to make sense of all these perceptions and fit them into place, resulting in a coherent worldview, as shown on the left. The right side zooms out a bit, showing the large number of perceptual inputs that must be made sense of, and pointing specifically to how the “dog” concept might be put together from the inputs.
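As a loose, hypothetical stand-in for this “world modeling algorithm” (not a claim about how the brain actually does it), here is a toy sketch in which unlabeled perceptual fragments are grouped into recurring patterns – proto-concepts – with no words or labels involved.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy stand-in for the "world modeling algorithm": raw, unlabeled
# perceptual fragments (here, random 8x8 "patches" flattened to 64
# values) flow in, and an unsupervised procedure sorts them into
# recurring patterns without any linguistic input.
rng = np.random.default_rng(0)

# Simulate a stream containing two recurring kinds of perceptual input.
patches_a = rng.normal(loc=0.2, scale=0.05, size=(500, 64))
patches_b = rng.normal(loc=0.8, scale=0.05, size=(500, 64))
stream = np.vstack([patches_a, patches_b])
rng.shuffle(stream)

# "Make sense" of the stream by discovering its recurring structure.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(stream)

# Each cluster center is a crude, label-free summary of one pattern in
# the stream -- Tetris pieces clicked into place.
print(model.cluster_centers_.mean(axis=1))
```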
For us, the concept of “dog” contains all the perceptual information which makes it up – even without words, by knowing what the “dog” concept is, we know a great deal about dogs. We eventually also learn to attach a word (“dog”) to the concept, but this is of secondary importance in forming the concept itself. When we give a computer the word “dog”, however, it has none of this foundation – all it has is a single symbol, “dog”. There’s no meaning attached to it. Importantly, the situation doesn’t improve much even if we give the computer additional sentences describing dogs, or pictures of dogs. With sentences, we run into the same issue of no meaning behind the terms; with pictures, we run into a different version of the same issue. When we see a picture of a dog, we easily recognize the dog, but we don’t realize how much is going on “behind the scenes” to make that possible. It’s a great computational feat to pull together all the sensory inputs and recognize them as a specific instance of a general class of objects. Without taking this step, a picture of a dog (or even many pictures of many dogs) doesn’t help a computer get at the meaning of “dog” – all it ends up with is a vast amount of relatively useless pixel data. However, though Cyc-type approaches seem doomed, we’ve recently made significant strides in figuring out how to get computers to take a version of this step.
Deep learning has been a step in the right direction, as it (to some degree) allows the system to build up its own “meaning”, and relies less on human encodings (this post offers a deeper dive into how deep learning works). For example, consider a neural network set up to identify dogs. The programmer specifies all the individual nodes and layers of the network and the update behavior, but they don’t add in any information or structure related to dogs (the connections are all randomized to start). The “dog” concept instead comes from the training data – the network is shown a great number of pictures of dogs (and of things that are not dogs), and after each batch the connection weights are updated in such a way as to be slightly better at identifying dogs (this is possible because the whole system is differentiable, and so we can calculate what direction to move each connection for better performance). Looking back at the Tetris analogy, we can see that this training process (gradient descent driven by backpropagation) plays a role similar to that of the “world modeling algorithm”, in that it builds up concepts from the foundational sense data (pixels, in this case). A neural network set up to identify dogs doesn’t know what the word “dog” means, but it does have a fairly robust concept of “dog”, as its connections are tuned in such a way as to effectively represent what constitutes a dog (enough to recognize dogs in pictures). To some degree, it seems fair to say that the concept of “dog” means something to the system, in that the system has a representation of it. However, the degree of meaning is far closer to that of C. elegans than to that of humans, with this narrowness a consequence of the methods we use to get the meaning “into” the system.
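Here is a minimal sketch of that setup (assuming PyTorch; the architecture, data, and sizes are placeholders): the programmer fixes the network structure and the update rule, but nothing dog-specific – the “dog” concept has to be carved out of pixel data by gradient descent.

```python
import torch
import torch.nn as nn

# Minimal sketch of a dog / not-dog classifier. Nothing here encodes
# anything about dogs; the weights start random and are nudged by the
# training signal.
class DogClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 2)  # two outputs: dog / not-dog

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = DogClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Stand-in batch: 8 random 64x64 RGB "images" with random labels.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))

# One training step: because every operation is differentiable,
# backpropagation tells us which direction to nudge each weight.
logits = model(images)
loss = loss_fn(logits, labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```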
The central issue facing current strategies is that they use performance on a specific objective to update connection weights (and thereby “model the world”). The dog recognizer, for example, updates after looking at a batch of dog and non-dog images, moving its connections in the direction of being more correct. This process is dependent on labeled images of dogs and non-dogs – but even more importantly, it’s dependent on the singular target of dog-understanding. This dependency holds even as the specific objective becomes more complex; whether the system is structured to identify dogs, select the best Go moves, predict protein structures, or play Atari games, the goal / objective is of a singular and narrow nature. We can see this difference more clearly by looking at how the human brain works. Our brains don’t have a specific goal to target, instead working generally to minimize prediction error / accurately model the (entire) world. This is not to say we lack specific goals (like breathing or eating), but rather that the update process of the brain is more general than any one of these goals. Take eating as an example. If our brain functioned like a machine learning system, with eating as the objective, then our brain would run for a period of time and then update its connections in the direction of whatever resulted in more eating (this is an extreme simplification, and backpropagation doesn’t apply directly to the brain, which isn’t differentiable in the way these systems are – but the analogy still proves helpful). The point isn’t that the brain would reinforce activities which resulted in eating (it does do that) – it’s that the metric of eating would be the only thing governing how the brain updates (i.e. learns). If a concept did not directly result in more eating, it would not make its way into the brain. Framed in this way, we can see how different our brains are, for while they do reinforce eating (and other specific objectives), the general update process of the brain seems to be driven by a separate algorithm, one centered around modeling the (entire) world. It is this general update process that allows us to build up meaning, as it works to make sense of all facets of the world (not just those tied to a particular objective) and results in the deep web of associations and concepts required.
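To make the contrast concrete, here is a hedged sketch (again assuming PyTorch, with made-up model and data names) of the two kinds of objective: a narrow, task-specific loss versus a more general, world-modeling-style loss that scores the system on predicting its own future input.

```python
import torch
import torch.nn as nn

# Shared "perception" stage; everything below is a toy placeholder.
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128), nn.ReLU())

frame = torch.randn(8, 3, 64, 64)        # current sensory input
next_frame = torch.randn(8, 3, 64, 64)   # whatever the senses deliver next

# (1) Narrow objective: is this a dog or not? Requires labels, and only
# dog-relevant structure ever affects the weights.
classifier_head = nn.Linear(128, 2)
label = torch.randint(0, 2, (8,))
narrow_loss = nn.CrossEntropyLoss()(classifier_head(encoder(frame)), label)

# (2) General objective: predict the next moment of experience. No
# task-specific labels; any regularity in the world can lower the loss.
predictor_head = nn.Linear(128, 3 * 64 * 64)
prediction = predictor_head(encoder(frame))
general_loss = nn.MSELoss()(prediction, next_frame.flatten(1))

# The same machinery (gradients, weight updates) serves both, but only
# the second pushes the weights toward modeling the world as a whole.
```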
Another issue with our current strategies is that although they allow systems to build up representations of particular aspects of human meaning (e.g. recognizing dogs), they lack the broader context required to “do anything” with these concepts (apart from what humans choose to do with them). Computers are able to recognize our concept of “dog”, but they lack the associations which make it a useful concept for us. The fact that they can build up to a particular human concept from the ground up is impressive, but meaning really requires building up all concepts from the ground up. By picking a particular concept from our framework, we offer a shortcut, essentially asking the computer to “pick out the features of this particular pattern of matter which has proven itself a useful concept” rather than to “pick out patterns of matter which are useful concepts”. As shown in the image below, the concept of “dog” is just one small part of a human’s framework of meaning, and it is this framework we need computers to target for true understanding.
While it’s clear our current approaches are limited in their ability to give computers meaning, that may change as the field continues to advance. The objectives we use to train deep learning systems have become increasingly general, and we may figure out how to train on the objective of “accurately model the world”. We can see these ideas starting to emerge in systems like GPT-3, which had the objective of predicting the next word in a sequence of text. This goal was sufficiently low-level to generate a great variety of interesting high-level behavior (writing prose, poetry, and code), and we could imagine a similar type of low-level, world-modeling objective leading to powerful results when paired with a system that receives visual input and has an ability to interact with the world. Additionally, we’re continuing to make progress on understanding the brain (albeit slowly), and will one day understand its algorithms well enough to implement them in silico. For now, however, meaning remains limited to biology’s domain.
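As a closing illustration, here is a minimal sketch of the next-word (next-token) prediction objective behind systems like GPT-3 – toy vocabulary and model, not the real architecture, but the same shape of objective: the “label” is simply whatever token comes next.

```python
import torch
import torch.nn as nn

# Toy next-token prediction. The real thing differs in scale and
# architecture (a large transformer), not in the shape of the objective.
vocab_size, seq_len, batch = 1000, 32, 4

model = nn.Sequential(
    nn.Embedding(vocab_size, 64),
    nn.Linear(64, vocab_size),  # stand-in for a full transformer stack
)

tokens = torch.randint(0, vocab_size, (batch, seq_len))
logits = model(tokens[:, :-1])   # predict from each prefix position
targets = tokens[:, 1:]          # the "label" is just the next token

loss = nn.CrossEntropyLoss()(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
# Minimizing this loss rewards modeling whatever regularities in the
# text help predict what comes next -- a very general objective.
```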