A Study of Scarlet: The Conscious Mental Graph
Sequel to: Seeing Red: Dissolving Mary’s Room and Qualia
Seriously, you should read these first: Dissolving the Question, How an Algorithm Feels From Inside
In the previous post, we introduced the concept of qualia and the thought experiment of Mary’s Room, set out to dissolve the question, and decided that we were seeking a simple model of a mind which includes both learning and a conscious/subconscious distinction. Since for now we’re just trying to prove a philosophical point, we don’t need to worry whether our model corresponds well to the human mind (though it would certainly be convenient if it did); we’ll therefore pick an abstract mathematical structure that we can analyze more easily.
The Mobile Graph
Let’s consider a graph, or a network of simple agents[1]; nodes can correspond to concepts, representations of objects and people, emotions, memories, actions, etc. These nodes are connected to one another, and the connections have varying strengths. At any given moment, some of the nodes are active (changing their connections and affecting nearby nodes), and others are not. This skein of nodes and connections will serve to direct the actions of some organism; let’s call her Martha, and let’s call this graph Martha’s mind.
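To make this concrete, here’s a minimal sketch in Python of the sort of structure I have in mind. Everything in it (the class, the node names, the choice of symmetric weighted links) is an illustrative assumption of mine, not a claim about real neural architecture:

```python
from collections import defaultdict

class MentalGraph:
    """A toy mind: named nodes, weighted links, and a set of
    currently active nodes. Purely illustrative."""

    def __init__(self):
        self.weights = defaultdict(float)  # (node, node) -> connection strength
        self.active = set()                # nodes firing right now

    def connect(self, a, b, strength):
        # Symmetric links for simplicity; footnote 3 gestures at how
        # typed or directed relationships might be handled instead.
        self.weights[tuple(sorted((a, b)))] += strength

    def neighbors(self, node):
        for (a, b), w in self.weights.items():
            if node == a:
                yield b, w
            elif node == b:
                yield a, w

mind = MentalGraph()
mind.connect("hunger", "eat nearby food", 0.9)
mind.connect("social constraints", "eat nearby food", -0.6)  # inhibitory link
```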
It’s important to note that the graph is all there is to Martha’s mind; when the mental agent for “hunger” is activated, that doesn’t mean that some homunculus within her mind becomes hungry, but rather that the agents corresponding to eating nearby food are strongly activated (unless otherwise inhibited by nodes pertaining to social constraints, concerns of sanitation, etc), that other nodes which visualize and evaluate plans of action to obtain food are activated, that other mental processes are somewhat inhibited to save energy and prevent distraction, and so on. (The evolutionary benefits of such an admittedly complicated system directing an organism are relatively significant.)
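Continuing the sketch, here is one hedged guess at how that “hunger” dynamic might run: activation spreads along connections, with negative weights doing the inhibiting. The threshold update rule is my own arbitrary choice:

```python
from collections import defaultdict

def step_activation(mind, threshold=0.5):
    """One tick of spreading activation: an inactive node switches on
    when its net input from active neighbors crosses a threshold;
    negative (inhibitory) links push that net input back down."""
    net_input = defaultdict(float)
    for node in mind.active:
        for neighbor, w in mind.neighbors(node):
            net_input[neighbor] += w
    newly_active = {n for n, total in net_input.items()
                    if total >= threshold and n not in mind.active}
    mind.active |= newly_active
    return newly_active

mind.active = {"hunger"}
print(step_activation(mind))  # {'eat nearby food'} -- no inhibitors are firing
```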
Since we’ll be discussing experience and learning, the graph will need to change over time, and perhaps change structure in large ways. As different agents are activated, connections can form, strengthen, weaken, or sever, and the graph as a whole can rearrange itself in response[2]. Imagine, as a geometric analogy, that the nodes repel each other and the connections pull them together. Like a protein folding itself as it is assembled or a rope assuming surprising new shapes as we continually twist one end, the graph seeks the lowest-energy state and occasionally makes cascades of rearrangements in response to one new connection being added. This is going to be important, so let’s consider an example.
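The folding analogy can be sketched too. Below, nodes get positions on a line, every pair repels, and every connection acts like a spring; both the energy function and the greedy descent are stand-ins I made up for whatever rearrangement dynamics a real mind uses:

```python
import random

def layout_energy(positions, mind):
    """Toy energy: all pairs of nodes repel, while each connection
    pulls its endpoints together in proportion to its strength."""
    nodes = list(positions)
    energy = 0.0
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            dist = abs(positions[a] - positions[b]) + 1e-9
            energy += 1.0 / dist  # universal repulsion
            energy += mind.weights.get(tuple(sorted((a, b))), 0.0) * dist  # springs
    return energy

def relax(positions, mind, steps=2000, jitter=0.1):
    """Greedy descent: nudge one node at a time, keeping only moves
    that lower the energy. Adding a single strong connection and
    re-running this can cascade into a very different shape."""
    for _ in range(steps):
        node = random.choice(list(positions))
        before = layout_energy(positions, mind)
        old = positions[node]
        positions[node] += random.uniform(-jitter, jitter)
        if layout_energy(positions, mind) > before:
            positions[node] = old  # reject uphill moves
    return positions

positions = {n: random.random() for pair in mind.weights for n in pair}
relax(positions, mind)
```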
Let’s say that Martha is listening to a story of long ago and far, far away. The LONE STALWART has trained himself in the mystical ways of his teacher, the OLD WARRIOR. The OLD WARRIOR, a generation before, had taught both the LONE STALWART’s father (the AMAZING STARFIGHTER) and another pupil, who turned to the ways of darkness and became the DEADLY VILLAIN. This DEADLY VILLAIN killed the AMAZING STARFIGHTER and, soon after the LONE STALWART began to train, struck down the OLD WARRIOR himself.
Within Martha’s mind might be the following subgraph, where different arrows represent these various relationships[3]:

[Figure: the four character nodes, connected by arrows for the teacher/pupil, father/son, and killer/victim relationships.]
Of course, there are various other connections to ideas: Martha sees the OLD WARRIOR as wise and virtuous, and expects the story to end with the LONE STALWART killing the DEADLY VILLAIN to avenge both his father and his mentor.
But then, when the LONE STALWART finally duels the DEADLY VILLAIN, the latter reveals a shocking secret: he is the LONE STALWART’s father! As this piece of information is processed, the graph undergoes a revolution: two distinct nodes abruptly merge.
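In the toy graph, the merge might look like this; the function below is my guess at minimal merge semantics (reroute every connection to the survivor, sum strengths where they collide), not the only way such a thing could work:

```python
def merge_nodes(mind, keep, absorb):
    """Fold `absorb` into `keep`: reroute each of the absorbed node's
    connections to the survivor, summing strengths where both already
    linked to the same neighbor, and dropping any self-loop."""
    for (a, b), w in list(mind.weights.items()):
        if absorb in (a, b):
            other = b if a == absorb else a
            del mind.weights[(a, b)]
            if other != keep:
                mind.connect(keep, other, w)
    mind.active.discard(absorb)

story = MentalGraph()
story.connect("LONE STALWART", "OLD WARRIOR", 1.0)           # pupil/teacher
story.connect("LONE STALWART", "AMAZING STARFIGHTER", 1.0)   # son/father
story.connect("DEADLY VILLAIN", "AMAZING STARFIGHTER", 1.0)  # killer/victim
merge_nodes(story, keep="DEADLY VILLAIN", absorb="AMAZING STARFIGHTER")
# The villain now carries the father link to the LONE STALWART directly.
```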
Part of the reason that this revelation is so significant, compared to other facts in the story (such as when the DEADLY VILLAIN hires bounty hunters to track the LONE STALWART’s companions, or when he unilaterally alters the deal he has cut with a fearful LOCAL COMMISSAR) is that it leads to a larger cascade of changes within Martha’s mind: the OLD WARRIOR must have lied to the LONE STALWART, who might be in real danger of corruption after all (hence the symbolic darkening of those two nodes in the picture above); it would no longer be an unambiguously happy ending if the LONE STALWART killed the DEADLY VILLAIN, et cetera.
On the other hand, just because something has a large effect on Martha’s mind doesn’t automatically imply that Martha ‘notices’ the effect in the way we might think, any more than a vacuum-tube computer ‘notices’ when a moth dies inside a relay; it’s only by dint of Herculean engineering labors that our modern computers can tell us when and how they’ve gone awry (most of the time). What we can say, however, is that this bit of information changes the structure of Martha’s mind more than did the other pieces of information in the story.
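We can at least put a crude number on “changes the structure more”: snapshot the connection strengths before and after a piece of information is processed, and total the differences. The metric below is one arbitrary choice among many:

```python
def structural_change(before, after):
    """Total absolute change in connection strength, counting links
    that appeared or vanished outright."""
    keys = set(before) | set(after)
    return sum(abs(after.get(k, 0.0) - before.get(k, 0.0)) for k in keys)

snapshot = dict(story.weights)
story.connect("OLD WARRIOR", "liar", 0.8)  # one strand of the revelation's fallout
print(structural_change(snapshot, story.weights))  # 0.8; the merge itself scores far higher
```

By a measure like this, the paternity revelation (a merge plus its cascade) dwarfs the bounty-hunter subplot (a few new links of modest strength), whether or not anything in Martha “notices” the difference.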
So far, so good. It’s now time to introduce the second key element of our model: the conscious/subconscious distinction.
The Conscious Subgraph
What exactly do we mean by conscious and subconscious parts of Martha’s mind? Is it just a label we affix to some nodes and connections, but not others? Why should such a thing matter to Martha or to us?
To create a plausible role for such a distinction, it may help to think of the evolutionary dynamics of communication. Let’s say that Martha’s species has gradually evolved the ability to communicate in great detail through language. So there is a significant part of her mind devoted to language, and any agent that connects to this part of the graph can cause Martha to speak (or write, etc.) in particular ways.
But of course, not everything is useful to communicate. It may be very helpful to tell each other that a certain food tastes good, but (unless one starts to get sick) not worth the trouble of communicating the details of digestion. The raw data of vision might take forever to explain, when all that’s really relevant is that the speaker just spotted a tiger or a mammoth. And there can be social reasons why it’s best to keep some of one’s mental processes away from the means of explicit communication.
So within Martha’s graph, there’s a relatively small subgraph that’s hooked up to the language areas; we’ll call this her conscious subgraph. Again, there’s no homunculus located here; every agent simply performs its simple function when activated. The conscious subgraph isn’t kept separate from the rest of the graph, nor did it evolve to be otherwise particularly special. The only difference between it and the remainder is that the agents in the conscious subgraph can articulate their activation, causing Martha to speak it, write it, point to an object, think it[4], or otherwise render it effable.
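In the sketch, this needs no extra machinery at all: a node counts as conscious exactly when it is wired into the language areas. The single LANGUAGE hub node below is a simplifying assumption of mine (the real language machinery would itself be a large subgraph):

```python
LANGUAGE = "language areas"  # stand-in hub for all the linguistic agents

def conscious_nodes(mind):
    """A node is 'conscious' iff it connects (positively) into the
    language areas -- a structural fact, not a special substance."""
    return {n for n, w in mind.neighbors(LANGUAGE) if w > 0}

def report(mind):
    """What Martha can currently articulate: the active nodes that
    happen to lie in the conscious subgraph."""
    return mind.active & conscious_nodes(mind)
```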
When something like color vision is activated, most of Martha’s processing happens subconsciously and only the reportable details about objects appear on her conscious subgraph. For example, Martha’s mind might contain the following pattern:

[Figure: the large “Detect Red Object” agent, outside the conscious subgraph, linked to a cluster of conscious nodes centered on the node for the color red.]
Here, the vast visual-processing agent “Detect Red Object” is not part of the conscious subgraph, but it can affect many nodes that are, and it thereby binds them all closer together than they would be on the strength of conscious connections alone[5]. In the middle of this cluster is the extra node representing the color red, which turns out to be an especially effective shorthand for communication. And since most of Martha’s processing of color is done subconsciously, I’ve shaded the diagram accordingly; she can talk about particularities of color by giving pointers (red like a firetruck) or analogies (an angry red) shaped by subconscious processes, but the level of detail pales in comparison to what she can do by (for example) painting the object she’s thinking of, using the same subconscious agents to determine whether the canvas matches the visualization.
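Putting the pieces of the sketch together gives a small demonstration of that binding effect: the “Detect Red Object” agent is never itself reportable, but switching it on lights up the whole conscious cluster at once. The particular nodes and weights are, as ever, made up:

```python
g = MentalGraph()
for concept in ("red", "firetruck", "blood"):
    g.connect(LANGUAGE, concept, 1.0)             # these nodes are conscious
    g.connect("Detect Red Object", concept, 0.8)  # the big subconscious agent is not

g.active = {"Detect Red Object"}
step_activation(g)
print(report(g))  # {'red', 'firetruck', 'blood'} -- never 'Detect Red Object'
```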
Now, it’s pretty clear where we ought to look for qualia in Martha’s mind: we should focus on those subconscious agents whose activation has significant effects on the conscious subgraph, like the “Detect Red Object” agent above. In the next post we’ll consider one crucial agent that plays a key role in the Mary’s Room experiment, and then we’ll be ready to observe what goes through Martha’s mind when she first sees in color.
TO BE CONCLUDED...
Footnotes:
1. More complicated nodes are actually networks of simpler ones; the node for “catch a ball” is actually a connected tangle combining hundreds of particular muscle movements with visual perception of the moving ball, which itself breaks down into smaller nodes of visual processing. But as a first approximation, we can treat even large nodes as basic objects. (Also, I’ve cribbed the network of mental agents idea from Minsky’s Society of Mind, though it’s also a pretty common meme around here by now.)
2. Also, the graph will form new nodes, at least by taking new recurring patterns of simple nodes to be new complicated nodes. Martha’s mind may not have started out with a node for a concept like coconuts, but upon seeing them, a collection of visual perceptions gets bound together as a new object, and then a kind of object; the texture and taste are added to this node when they are discovered, along with ideas like the coconut tree. For our discussion, though, we’ll try to elide this complication, and stick to the analysis of changing connections between nodes, or combining two existing nodes, etc.
3. We could have different types of connections, or we could just mean that the connection between the LONE STALWART and the AMAZING STARFIGHTER is itself linked to nodes representing the concept of fatherhood as well. It’s of course much easier to represent the former in simple images.
4. That is, those agents could activate the agents in charge of articulating speech and suppress those in charge of actually speaking it, and the agents that articulate speech could pass on information to the agents in charge of hearing, which would activate in imitation of how the words would sound if spoken. This too is a useful feature for Martha’s mind to have, so that she can try out different articulations of a concept and imagine their effect on listeners before selecting one.
5. Similarly, Martha’s self-serving subconscious desires can knit together a collection of conscious moral notions much more tightly than is warranted by her noble-sounding conscious arguments for them. But I digress.
Comments:

This has nothing at all to do with the actual content of the article, but I loved your schematic character names. Nicely done!
Thanks! I stole the schema from Homestuck, and the characters from somewhere else.
And for that matter, the URL of the post. http://lesswrong.com/lw/5op/qualia_strike_back/
Nah, it’s http://lesswrong.com/lw/5op/qualia_wars/ . Or maybe it’s http://lesswrong.com/lw/5op/i_just_made_this_up_and_it_could_be_anything/ .
(The actual URL I get by “normal” methods just has the title in it.)
Which is exactly why one should have fun with the links when writing a sequence.
The URL I posted was the one I saw in my URL bar while reading the post; I copy-pasted it. Checking again just gets me the title.
I’m guessing you navigated to this page from the previous article.
Yeah, that would be it.
Style and explanation level: Yudkowsky