Outline of a lower bound for consciousness
This is a summary of an article I’m writing on consciousness, and I’d like to hear opinions on it. It is the first time anyone has been able to defend a numeric claim about subjective consciousness.
ADDED: Funny no one pointed out this connection, but the purpose of this article is to create a nonperson predicate.
1. Overview
I propose a test for the absence of consciousness, based on the claim that a necessary, but not sufficient, condition for a symbol-based knowledge system to be considered conscious is that it has exactly one possible symbol grounding, modulo symbols representing qualia. This supposition, plus a few reasonable assumptions, leads to the conclusion that a symbolic artificial intelligence using Boolean truth-values and having an adult vocabulary must have on the order of 10^5 assertions before we need worry whether it is conscious.
Section 2 will explain the claim about symbol-grounding that this analysis is based on. Section 3 will present the math and some reasonable assumptions for computing the expected number of randomly-satisfied groundings for a symbol system. Section 4 will argue that a Boolean symbol system with a human-level vocabulary must have on the order of a hundred thousand assertions in order for it to be probable that no spurious symbol groundings exist.
2. Symbol grounding
2.1. A simple representational system
Consider a symbolic reasoning system whose knowledge base K consists of predicate logic assertions, using atoms, predicates, and variables. We will ignore quantifiers. In addition to the knowledge base, there is a relatively small set of primitive rules that say how to derive new assertions from existing assertions; these rules are not asserted in K, but are implemented by the inference engine interpreting K. Any predicate that occurs in a primitive rule is called a primitive predicate.
The meaning of primitive predicates is specified by the program that implements the inference engine. The meaning of predicates other than primitive predicates is defined within K, in terms of the primitive predicates, in the same way that LISP code defines a semantics for functions based on the semantics of LISP’s primitive functions. (If this is objectionable, you can devise a representation in which the only predicates are primitive predicates (Shapiro 2000).)
This still leaves the semantics of the atoms undefined. We will say that a grounding g for that system is a mapping from the atoms in K to concepts in a world W. We will extend the notation so that g(P) more generally indicates the concept in W arrived at by mapping all of the atoms in the predication P using g. The semantics of this mapping may be referential (e.g., Dretske 1985) or intensional (Maida & Shapiro 1982). The world may be an external world, or pure simulation. What is required is a consistent, generative relationship between symbols and a world, so that someone knowing that relationship, and the state of the world, could predict what predicates the system would assert.
2.2. Falsifiable symbol groundings and ambiguous consciousness
If you have a system that is meant to simulate molecular signaling in a cell, I might be able to define a new grounding g’ that re-maps its atoms to things in W, so that for every predication P in K, g’(P) is still true in W; but now the statements in K would be interpreted as simulating traffic flow in a city. If you have a system that you say models disputes between two corporations, I might re-map it so that it is simulating a mating ritual between two creatures. But adding more information to your system is likely to falsify my remapping, so that it is no longer true that g’(P) is true in W for all propositions P in K. The key assumption of this paper is that a system need not be considered conscious if such a currently-true but still falsifiable remapping is possible.
A falsifiable grounding is a grounding for K whose interpretation g(K) is true in W by chance, because the system does not contain enough information to rule it out. Consider again the system you designed to simulate a dispute between corporations, which I claim is simulating mating rituals. Let’s say for the moment that the agents it simulates are, in fact, conscious. Since both mappings are consistent, we don’t get to choose which mapping they experience. Perhaps each agent has two consciousnesses; perhaps each settles on one interpretation arbitrarily; perhaps each flickers between the two like a person looking at a Necker cube.
Although in this example we can say that one interpretation is the true interpretation, our knowledge of that can have no impact on which interpretation the system consciously experiences. Therefore, any theory of consciousness that claims the system is conscious of events in W under the intended grounding must also admit that it is conscious of an entirely different set of events in W under the accidental grounding, until new information rules the latter out.
2.3. Not caring is as good as disproving
I don’t know how to prove that multiple simultaneous consciousnesses don’t occur. But we don’t need to worry about them. I didn’t say that a system with multiple groundings couldn’t be conscious. I said it needn’t be considered conscious.
Even if you are willing to consider a computer program with many falsifiable groundings to be conscious, you still needn’t worry about how you treat that computer program. Because you can’t be nice to it no matter how hard you try. It’s pointless to treat an agent as having rights if it doesn’t have a stable symbol-grounding, because what is desirable to it at one moment might cause it indescribable agony in the next. Even if you are nice to the consciousness with the grounding intended by the system’s designer, you will be causing misery to an astronomical number of equally-real alternately-grounded consciousnesses.
3. Counting groundings
3.1. Overview
Let g(K) denote the set of assertions about the world W that are produced from K by a symbol-grounding mapping g. The system fails the unique symbol-grounding test if there is a permutation function f (other than the identity function) mapping atoms into other atoms, such that both g(K) and g(f(K)) are true in W. Given K, what is the probability that there exists such a permutation f?
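As a concrete illustration, the test can be brute-forced for a toy knowledge base: try every permutation of the atoms and check whether the remapped assertions all still hold in W. This is only a sketch; the atoms, predicates, and world facts below are invented, and the intended grounding is taken to be the identity mapping.

```python
from itertools import permutations

# Toy knowledge base: atoms and world facts are invented for illustration.
atoms = ["alice", "bob", "acme", "globex"]

# K: the system's assertions, written as (predicate, arg1, arg2).
K = {
    ("employs", "acme", "alice"),
    ("employs", "globex", "bob"),
}

# W: the facts that actually hold, under the intended (identity) grounding.
W = set(K)

def spurious_groundings(atoms, K, W):
    """Return every non-identity permutation f of the atoms under which K stays true in W."""
    found = []
    for image in permutations(atoms):
        f = dict(zip(atoms, image))
        if all(f[a] == a for a in atoms):
            continue  # skip the intended grounding
        if {(pred, f[x], f[y]) for (pred, x, y) in K} <= W:
            found.append(f)
    return found

print(len(spurious_groundings(atoms, K, W)))  # 1: swap alice<->bob and acme<->globex together

# Adding one more assertion falsifies that remapping, as described in section 2.2.
K.add(("sues", "acme", "globex"))
W = set(K)
print(len(spurious_groundings(atoms, K, W)))  # 0: the grounding is now unique
```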
We assume Boolean truth-values for our predicates. Suppose the system represents s different concepts as unique atoms. Suppose there are p predicates in the system, and a assertions made over the s symbols using these p predicates. We wish to know that it is not possible to choose a permutation f of those s symbols, such that the knowledge represented in the system would still evaluate to true in the represented world.
We will calculate the probability P(p,a) that all of the a assertions, made using the p predicates, evaluate to true under a randomly-chosen grounding. We will also calculate the number N(s) of possible groundings of the symbols in the knowledge base. We can then calculate the expected number E of random symbol groundings, in addition to the one intended by the system builder, as
E = N(s) × P(p, a)
Equation 1: Expected number of accidental symbol groundings
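For orientation, here is a minimal numerical sketch of Equation 1, worked in log space. The specific forms N(s) = s! and P(p, a) = p^(-a/2) are assumed here because they reproduce Equation 7 below; they are not the full paper's derivation.

```python
import math

# A sketch of Equation 1 in log space, under two assumed forms:
#   N(s) = s!            -- every permutation of the s atoms is a candidate grounding
#   P(p, a) = p**(-a/2)  -- each of the a Boolean assertions survives a random
#                           regrounding with probability p**(-1/2)
# Taking logs of E = N(s) * P(p, a) and using Stirling's approximation
# ln(s!) ~ s*ln(s) - s turns the condition E < 1 into Equation 7.

def log_expected_groundings(s, p, a):
    log_N = math.lgamma(s + 1)         # ln(s!)
    log_P = -(a / 2) * math.log(p)     # ln(p^(-a/2))
    return log_N + log_P               # ln(E)

# The section 4 figures sit essentially at the E = 1 threshold: both terms are
# around 4.9e5 and nearly cancel.
print(log_expected_groundings(s=50_000, p=11_279, a=105_242))
```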
3.2. A closed-form approximation
This section, which I’m saving for the full paper, shows that, with certain reasonable assumptions, the requirement that E < 1 reduces to
s·ln(s) − s < a·ln(p)/2
Equation 7: The consciousness inequality for Boolean symbol systems
As s and p should be similar in magnitude, we can set p ≈ s; the inequality then becomes 2s·(1 − 1/ln(s)) < a, which for s around 50,000 is approximated to within about ten percent by 2s < a.
4. How much knowledge does an AI need before we need worry whether it is conscious?
How complex must a system that reasons something like a human be to pass the test? By “something like a human” we mean a system with approximately the same number of categories as a human. We will estimate this from the number of words in a typical human’s vocabulary.
Goulden et al. (1990) studied Webster’s Third International Dictionary (1963) and concluded that it contains fewer than 58,000 distinct base words. They then tested subjects for their knowledge of a sample of base words from the dictionary, and concluded that native English speakers who are university graduates have an average vocabulary of around 17,000 base words. This accounts for concepts that have their own words. We also have concepts that we can express only by joining words together (“back pain” occurs to me at the moment); some concepts that we would need entire sentences to communicate; and some concepts that share the same word with other concepts. However, some proportion of these concepts will be represented in our system as predicates, rather than as atoms.
I used 50,000 as a ballpark figure for s. This leaves a and p unknown. We can estimate a from p if we suppose that the least-common predicate in the knowledge base is used exactly once; then a/(p·ln(p)) = 1, so a = p·ln(p). We can then compute the smallest a such that Equation 7 is satisfied.
Solving for p analytically would be difficult. Using 100 iterations of Newton’s method (from the starting guess p=100) finds p=11,279, a=105,242. This indicates that a pure symbol system having a human-like vocabulary of 50,000 atoms must have at least 100,000 assertions before one need worry whether it is conscious.
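The calculation can be reproduced with a short sketch: Newton's method applied to Equation 7 at equality, with a = p·ln(p) substituted in, using the starting guess and iteration count given above.

```python
import math

# Sketch of the section 4 computation: with a = p*ln(p), Equation 7 at equality
# becomes F(p) = p*ln(p)**2 / 2 - (s*ln(s) - s) = 0.  Solve for p by Newton's
# method from the starting guess given in the text.

s = 50_000
lhs = s * math.log(s) - s               # left-hand side of Equation 7

def F(p):
    return p * math.log(p) ** 2 / 2 - lhs

def dF(p):
    # d/dp [ p*ln(p)^2 / 2 ] = ln(p)^2 / 2 + ln(p)
    return math.log(p) ** 2 / 2 + math.log(p)

p = 100.0
for _ in range(100):                    # 100 iterations, as in the text
    p -= F(p) / dF(p)

a = p * math.log(p)
print(round(p), round(a))               # roughly p = 11,279 and a = 105,242, as quoted above
```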
Children are less likely to have this much knowledge. But they also know fewer concepts. This suggests that the rate at which we learn language is limited not by our ability to learn the words, but by our ability to learn enough facts using those words for us to have a conscious understanding of them. Another way of putting this is that you can’t short-change learning. Even if you try to jump-start your AI by writing a bunch of rules à la Cyc, you need to put in exactly as much data as would have been needed for the system to learn those rules on its own in order for it to satisfy Equation 7.
5. Conclusions
The immediate application of this work is that scientists developing intelligent systems, who may have (or be pressured to display) moral concerns over whether the systems they are experimenting with may be conscious, can use this approach to tell whether their systems are complex enough for this to be a concern.
In popular discussion, people worry that a computer program may become dangerous when it becomes self-aware. They may therefore imagine that this test could be used to tell whether a computer program posed a potential hazard. This is an incorrect application. I suppose that subjective experience somehow makes an agent more effective; otherwise, it would not have evolved. However, automated reasoning systems reason whether they are conscious or not. There is no reason to assume that a system is not dangerous because it is unconscious, any more than you would conclude that a hurricane is not dangerous because it is unconscious.
More generally, this work shows that it is possible, if one considers representations in enough detail, to make numeric claims about subjective consciousness. It is thus an existence proof that a science of consciousness is possible.
References
Fred Dretske (1985). Machines and the mental. Proceedings and Addresses of the American Philosophical Association 59: 23-33.
Robin Goulden, Paul Nation, John Read (1990). How large can a receptive vocabulary be? Applied Linguistics 11: 341-363.
Anthony Maida & Stuart Shapiro (1982). Intensional concepts in propositional semantic networks. Cognitive Science 6: 291-330. Reprinted in Ronald Brachman & Hector Levesque, eds., Readings in Knowledge Representation, Los Altos, CA: Morgan Kaufmann 1985, pp. 169-189.
William Rapaport (1988). Syntactic semantics: Foundations of computational natural-language understanding. In James Fetzer, ed., Aspects of Artificial Intelligence (Dordrecht, Holland: Kluwer Academic Publishers): 81-131; reprinted in Eric Dietrich (ed.), Thinking Computers and Virtual Persons: Essays on the Intentionality of Machines (San Diego: Academic Press, 1994): 225-273.
Roger Schank (1975). The primitive ACTs of conceptual dependency. Proceedings of the 1975 workshop on Theoretical Issues in Natural Language Processing, Cambridge MA.
Stuart C. Shapiro (2000). SNePS: A logic for natural language understanding and commonsense reasoning. In Lucja Iwanska & Stuart C. Shapiro (eds.), Natural Language Processing and Knowledge Representation: Language for Knowledge and Knowledge for Language (Menlo Park, CA/Cambridge, MA: AAAI Press/MIT Press): 175-195.
Yorick Wilks (1972). Grammar, Meaning, and the Machine Analysis of Language. London.
I disagree. If multiple consciousnesses are instantiated in a single physical system, you should figure out what each of them is, and be nice to as many of them as you can. The existence of an astronomical number of alternatively real beings is no excuse to throw up your hands and declare it impossible to figure out what they all want; the number of humans is already pretty big, but I’m not about to give up on pleasing them.
The alternatively real beings you’re talking about are probably pretty similar to each other, so you’re unlikely to cause agony to one while pleasing another.
For example, I have a bowl containing 180 white Go stones, and I don’t know anything about any of them that I don’t know about the others. Thus there are at least 180! possible groundings for my knowledge base. Regardless of which grounding you choose, my preferences are about the same.
Also, as the above example demonstrates, probably all humans have multiple groundings for their knowledge, and therefore lack consciousness according to your criterion!
What you’re saying is essentially correct, but I didn’t deal with it in the short version. I haven’t worked out how to incorporate this into the math. It may change the results drastically.
In the particular case of the go stones, you have this difficulty only because you’re using an extensional grounding rather than an intensional grounding. I didn’t want to get into the extensional/intensional debate, but my opinion is that extensional grounding is simply wrong. And I don’t think I have enough space to get into the issue of representations of indistinguishable objects.
Can you explain this?
An extensional grounding is one in which things in the representation map 1-1 to things in the world. This doesn’t work for intelligent agents. They often have inaccurate information about the world; they don’t know about the existence of some objects, or think two things are the same object when they are really different, or think two things are different when they are really the same object, or can’t distinguish between objects (such as go stones). They can also reason about hypothetical entities.
In an intensional grounding, you can’t have separate representations for different extensional things (like go stones) that you think about in exactly the same way. Some people claim that they are distinguished only contextually, so that you can have a representation of “the stone at G7”, or “the leftmost stone”, but they all collapse into a single mental object when you put them back in the bag.
I’m reading a review of studies using “inverting spectacles”, which “invert” the visual field. Experimenters then see whether subjects report having the same subjective experience as originally after adapting to the inverting spectacles.
The problem is that I can’t tell from the paper what inverting spectacles do! It’s vital to know whether they invert the visual field around just one axis, or around two axes at the same time (which is what a lens or pinhole does). If it’s around two, then the effect is to rotate the visual field 180 degrees. This would not cause the wearer to experience a mirror-image world.
My friend says they’re up/down reversing glasses unless otherwise specified, though left/right versions do exist. He hasn’t heard of anyone doing experiments involving pinhole-type reversing glasses. He also says to tell you to google “Richard Held”.
I know a retired neuroscientist who specialized in vision. I’ll ask him about this when I see him—probably tomorrow.
My understanding is that it’s only an upside-down inversion. I could only find two crappy links on the subject, though. 1 2
ETA: Page 4 of this paper refers to both horizontal and vertical inversion goggles.
Let us know if you come up with details on the spectacles. I’ve daydreamed about building a pair of these to try on myself, but the design I came up with is physically awkward and has a very narrow field of view. (I had in mind the mirror-image kind, not the rotation kind, but either would be interesting.)
I don’t see your footnote 2 - the only superscript 2 I see is part of math.
Thanks. It’s a footnote from a deleted section, that accused some readers of being zombies.
Thanks for clearing that up, I had the same question.
(FWIW, I’m not a zombie. And I would definitely tell you guys if I were. I mean, I’d feel bad to falsely lead you all on like that.)
I see what you did there.
Really, because I claim that I am a zombie. At least, that is the claim I make to all of my philosophy profs.
I don’t know why I didn’t think of this the last time we discussed this claim you like to make, but it just occurred to me that there are actually people who do make this claim and mean it: they have Cotard’s delusion. (Funnier version here.)
That’s kind of funny considering that I have a significant portion of my shins that are putrefying in fact (this is the origin of the “I am a Zombie” claim). I have gaping wounds on my shins from a motorcycle accident.
That is funny that they have a real delusion for it. I’ll bet that one of my ex’s would know about it. She’s a for real necrophiliac, and was actually fired once for having sex with a corpse (She’s a mortician/embalmer). She used to drool over my legs and would always ask me to not change the bandages for a few days so that the stink would increase… That relationship didn’t last long.
I think this is the most unexpectedly surreal LW comment I have ever seen.
Can you post a link to the most expectedly surreal LW comment?
I would’ve appreciated a TMI warning.
I shall have to make note of that fact.
My life has been a collection of extremes, from extremely secretive (even Spooky in all senses that could be imagined) to highly public (minor celebrity attained from falling asleep at just the right moment)…
Add to the fact that almost all of it has been lived in a quasi-Punk Rock lifestyle, and the phrase “Too much information” was something that I had never heard a person say until I left both of those lives behind (neither by choice, but for the better)
Okay, I’ll bite… How did you become a celebrity by falling asleep at the right moment?
It’s really pretty boring.
I used to have a fairly provocative appearance. I fell asleep one night in my car (which itself drew rather a lot of attention), and a reporter from a magazine in Dallas Texas saw me in the car. He knocked on the window and wanted an interview. I gave him one, and he later dragged me to a photographer to have my photo taken. One of my professors (my first time through school for a sculpture degree) said to be careful of the reporter, and I called him and told him that he couldn’t use the interview (which turned out to be a good thing)… Only, I didn’t think about the photos. So, on my way to school one morning, I stopped into a Safeway to buy a coke before school, and saw this on the magazine racks in front of every check stand.
The photo eventually got me several extra, bit and small part roles in movies (although, I really preferred stunt work). I’ve been in all three Robocop movies (compare this photo from Robocop’s visor at a club in Dallas TX where it was shot. I’m not recognizable in the other Robocop movies due to doing stunt work instead of eye-candy).
There are one or two other things that drew a lot of attention my way too, but those will have to wait.
All that, just because of the spiky hair?
Well… No...
But, to comment on the hair… It was 1987 when that picture was taken, and spiky hair was nowhere near as common then as it is now. There were other things to be sure, but the hair and other similar aspects of my life made for a much greater splash in 1983-88/89 than they would have in this day and age. The comment “all that just because of the spiky hair” is rather similar to listening to a Beatles song in this day and age and saying “I don’t get it, they hardly rock at all”.
You have to remember, there was a day and time when getting on the cover of a rather widely distributed periodical was a pretty freaking big deal… Not so much today.
There were also a few bands that I was in, or did work for, and several side businesses that I ran.
Edit: Oh! Yes, and I became well known in Dallas for decorating my city without the permission of the City Council. They all failed to see the beauty in my freeway beautification project… Something about the color of the dinosaurs.
If it helps, it didn’t occur to me that it was TMI till it was brought up.
Till what was brought up?
Restated for clarity: I didn’t notice that your comment contained “too much information” until AdeleneDawner mentioned that your comment contained “too much information”.
“Too much information” isn’t really the right term, actually, but I find it hard to be eloquent when I’m trying not to hurl.
*picks jaw up off of floor*
quick, reattach it before she notices.
I’m not sure if I should believe that story about your ex...
She’s pretty well known in the Houston Texas Goth scene (or, rather, she was 10 years ago). You’ve never been involved in some of the earlier musical sub-cultures have you (And I don’t mean the watered down 90s/00s versions that some of these became)?
The Punk scene in the 70s/80s was pretty much like that, only Renee missed the boat by a few years.
If you don’t wish to believe it, however, then don’t...
Believe it? Heck, I wish I’d never known about it :-P
That reminds me though: we need to have a Texas LW meetup.
You’re covering up your leg before you come, though.
I’m in SF for most of the year… And, I keep both legs fully covered. Most people don’t believe me about the legs either until they actually see them. Of course, they don’t look nearly as bad now as they did 10 years ago right after the accident.
I just thought of one additional point here.
When I make the claim that I am a Zombie, I am not really making the claim in the same sense that someone with Cotard’s Delusion would make, but rather as a response to people like Searle who posit that there is some extra special stuff that human consciousness has, and to which Dennett (and others) have replied with the Zombie counter to this claim.
So, I maintain that I am a zombie in the Dennett sense of the word. I am just like a normal human with the extra special stuff, just I don’t have that extra special stuff and thus, I am not really conscious, but just mimicking consciousness; thus, a Zombie.
Are you actually claiming to be unconscious and to not be aware of experiences? Or do you just mean you’re denying that consciousness is anything metaphysically special?
I am not entirely sure what the claim would actually consist of.
Dennett claims that a human-looking zombie, which acted exactly like a person, yet failed to have this special stuff that Searle claims is necessary for consciousness, would be indistinguishable from a person who really did have whatever it is that Searle claims is the Magic Ingredient to consciousness. It’s really more just a claim to mock Searle’s Chinese Room than anything else.
I guess though, that if I had to take a stance, it would be that there is nothing metaphysically special about consciousness. Now, this is just my naive assumption based on only a few years of reading, and none of it at the levels I will be forced to face in the coming years as I complete my degree. However, it is the sense that I get from what I have read and considered.
We can distinguish behavioral zombies, which just act like a person but are not conscious, from physical zombies, which are an exact physical duplicate of a conscious person but are not conscious.
For instance, people have reported doing all sorts of things while unconscious after taking the medication Ambien. They may be behavioral zombies, in the sense that they appear fully conscious but are not, but they aren’t physical zombies, because their brains are physically different from a conscious person’s (e.g., because of the Ambien).
The existence of physical zombies requires denying materialism: you need some sort of magic stuff that makes ordinary matter conscious. But even though materialism is true, it’s entirely possible that you are a behavioral zombie. Once neuroscience is advanced to the point where we understand consciousness better, we’ll be able to look at your brain and find out.
Something occurred to me while in a dream (and I nearly forgot to transition it to my waking consciousness).
In the post above, you seem to imply that consciousness is only to be applied to that part of our mind which is conscious, and not to that part of our mind of which we are not cognizant.
I would maintain that Consciousness still exists even without a conscious mind (at least as it seems to be applied above)… Case in point, my realization during a dream.
A person dreaming is still Conscious (meaning they have subjective experience). They aren’t “conscious” (meaning awake). These are completely different meanings that unfortunately share a word.
Are you sure it was you who had the realization during the dream? Is it possible you just dreamed that you had one? Or that when you woke up, your mind reconstructed a narrative from the random fragments of your dream, and that narrative included having a realization?
Interesting questions… it’s definitely possible for consciousness to flicker on and off. Marijuana and alcohol, for instance, can both have the effect of making time seem discrete instead of continuous, so you have flashes of awareness within an unconscious period.
The other problem is distinguishing consciousness from the capability of consciousness. Without a conscious mind, you may be capable of consciousness, but it’s not clear to me that you are in fact conscious.
On the subject of dreams… Yesterday I was swapping emails with a famous science-fiction writer and he sent me this interesting document. Then I woke up and I was ruing that it wasn’t real—but then I found a printout of the correspondence from the dream, and I was like, wow, what does this say about reality? Then I woke up again.
Then this morning I was thinking back on yesterday’s “false awakening” (as such events are called), while I browsed a chapter in a book about the same writer. Then I woke up again.
Exactly. This is why I mock the argument from Searle. Eventually, science, not philosophy, will provide an answer to the problem.
Ya, a p-zombie. Wikipedia reports (with a [citation needed]) that sufferers of severe Cotard delusion deny that they exist at all. Presumably such individuals would claim to have no body, no mind, no consciousness, etc. So I agree, not a Zombie in Dennett’s sense per se, since they claim to have/be nothing at all, never mind the extra special stuff.
In fact it may be worth stressing that, by definition, someone’s claim to be a p-zombie would be zero evidence that they were a p-zombie.
Do they actually claim to be zombies? I thought they claimed to be dead or nonexistent.
On one level, with a little artistic license, one could say that a person who claims to be dead and putrefying considers themselves a zombie. On another level, a person who claims to be literally non-existent is almost claiming to be a p-zombie.
Depends on the exact claim of nonexistence. Something claiming to be a p-zombie might claim to exist, in the sense of being physically present in the world, but claim to not have conscious experience. I’m not clear on what types of nonexistence claims people with Cotard’s actually make.
Me neither; I’m just being sloppy. Although I do have a vague memory of reading about a man who was conscious but could not experience it in a manner analogous to the way that those with blindsight can see but are not aware of it. However, I have confabulated similar memories in the past, so even I take this memory with a large chunk of salt.
You mean that guy who couldn’t consciously see, but could name the pictures on flashcards at much better than chance frequencies? And they eventually talked him into trying to walk down a hall with obstacles in it and he did (but it took some significant effort for them to talk him into it)? I remember that, too—should have saved it. I’m pretty sure it wasn’t taken as any kind of delusion, though.
Blindsight is discussed in Consciousness Explained.
That’s blindsight. The memory (?) I have is of a man who was to all appearances perfectly conscious (while awake) and conducted himself in a perfectly sensible manner but reported feeling as if at every moment the world was dissolving into a dream-like morass, so that each moment was somehow disconnected from every other. He had no conscious access to his internal narrative, even though he carried out plans that he made, went to appointments, etc.
You’re not by any chance thinking of Peter Watts’s novel “Blindsight”, which has zombies as characters, are you?
That sounds like Depersonalization or Derealization.
No, I haven’t read Blindsight yet (it’s in my queue). Those other terms seem close, but unless I can find the actual account I seem to recall, my report should only have a negligible weight of evidence.
So… uh… how can I use this to see if, say, rats are guaranteed to not be conscious?
Short answer: It won’t guarantee that, because rats learn most of what they know. The equation I developed turns out to be identical to an equation saying that the amount of information contained in facts and data must be at least as great as the amount of information that it takes to specify the ontology. So any creature that learns its ontology, automatically satisfies the equation.
… Could we take as the input the most a rat could ever learn?
I don’t understand the question. It’s an inequality, and in cases where the inequality isn’t satisfied, the answer it gives is “I don’t know”. The answer for a rat will always be “I don’t know”.
I must profess I didn’t understand most of what you’ve said, but did I guess the following right? The equation says that
IF my knowledge is “bigger” than my ontology THEN I might be conscious
And in the case of learning my ontology, it means that my ontology is a subset of my knowledge and thus never bigger than the latter.
Right.
Exactly.
That’s like watching the Wright Brothers fly their airplane at Kitty Hawk, then asking how to fly to London.
If the numbers came up to say that rats don’t need to be considered conscious, I would think the numbers were probably wrong.
The number in the conclusion just changed by a factor of 40. I double-checked the math in the last section, and found it was overly complex and completely wrong. I also forgot to remove the number of predicates p from the dictionary-based estimate for s.
BTW, note that even if you can prove that an AI isn’t conscious, that doesn’t prove that it isn’t dangerous.
Oops. Original math was correct.
Are you treating the arguments in the assertions as independent random variables? Because the same symbol will show up in several assertions, in which case the meanings aren’t independent.
I think you mean that the probability of multiple predicates involving the same variable being true under a different grounding are not independent.
I hadn’t thought of that. That could be devastating.
It makes what you did a first approximation. But I think you can go further. Call S the set of all s symbols and P the set of all p predicates. Each proposition links a p in P to some subset of S. A set of such propositions can be represented by some sort of hypergraph built on S and P. So there’s a hypergraph for g(K), another hypergraph for W, and you want to estimate the number of sub-hypergraphs of W which are isomorphic to g(K). It’s complicated but it may be doable, especially if you can find a way to make that estimate given just statistical properties of the hypergraphs.
I doubt that thinking of the arguments as forming a subset of S is the end, because predicates typically have more structure than just their arity. E.g. the order of arguments may or may not matter. So the sets need to be ordered, or the links from P to S need to be labeled so as to indicate the logical form of the predicate. But one step at a time!
I don’t follow.
A predicate logic representation is a hypergraph on P and S. g(K) maps that hypergraph into a hypergraph on W, kind of (but K is intensional and W is extensional, so simple examples using English-level concepts will run into trouble).
When you ask about finding sub-hypergraphs of W that are isomorphic to g(K), it sounds like you’re just asking if g(K) evaluates to true.
I just skimmed your definitions so I might be getting something wrong. But the idea is that your symbolic reasoning system, prior to interpretation, corresponds to a system of propositional schemas, in which the logical form and logical dependencies of the propositions have been specified, but the semantics has not. Meanwhile, the facts about the world are described by another, bigger system of propositions (whose semantics is fixed). The search for groundings is the search for subsystems in this description of the world which have the logical structure (form and dependency) of those in the uninterpreted schema-system. The stuff about hypergraphs was just a guess as to the appropriate mathematical framework. But maybe posets or lattices are appropriate (for the deductive structure).
I haven’t thought about intension vs extension at all. That might require a whole other dimension of structure.
This might not be the best phrasing to use, considering that almost everyone experiences exactly that on a regular basis. I’d guess that it would be important to emphasize that the dream-like periods would occur more arbitrarily than our nice regular sleep cycle.
Now I’m stuck trying to distinguish a grounding from a labeling. I thought of a grounding as being something like “atom G234 is associated with objects reflecting visible light of wavelengths near 650nm most strongly”, and a labeling as being “the atom G234 has the label red”.
But this grounding is something that could be expressed within the knowledge base. This is the basic symbol-grounding problem: Where does it bottom out? The question I’m facing is whether it bottoms out in a procedural grounding like that given above, or whether it can bottom out in an arbitrary symbol-mapping. If the latter, then I’m afraid there is no distinction between labelings and groundings if the grounding is intensional rather than extensional/referential.
If I say a grounding is procedural, the argument falls apart, because then you can inspect the system and determine its grounding! But if I don’t, I’m back to having people object that you can swap qualia without changing truth-values.
Qualia are giving me a headache.
BTW, the treatment of truth-value in this outline is shoddy. It’s not really observer-independent, but it takes paragraphs to explain that.
I think the answer lies in drawing two rather arbitrary boundaries: One between the world and the sensory system, and one between the sensory system and the mind. The actual grounding g is implemented by the sensory system. A spurious grounding g’ is one that is not implemented, but that would produce the same propositions in K; so that the mind of K does not know whether it uses g or g’.
Does it matter if K could, in theory, dissect itself and observe its g? Would that in fact be possible; or does the fact that it needs to observe the implementation of its grounding g as viewed through g mean that it cannot distinguish between g and g’, even by “autosection”?
WTF? This is not supposed to be published. It’s supposed to be a draft.
It was saved to “LessWrong” instead of “Drafts”. I put it back in Drafts, hopefully.
I can’t tell from here whether doing that took it off the NEW page, so I fixed it up and put it back in LessWrong.
It did go off the New page, although, for strange and mysterious reasons, people can always see their own unpublished drafts on the New page.
The basic problem with these equations is that the test is more stringent for large a than for small a, because parameters that gave an expected 1 random grounding for a assertions would give only ¼ of a random grounding for a/4 assertions.
Actually, not!
I spent 2 weeks devising a more-sophisticated approach, that doesn’t share this problem. The shape of the resulting curve for assertions needed as a function of dictionary size turns out to be exactly the same. The reason is that the transition between zero alternate groundings, and an astronomical number of groundings, is so sharp, that it doesn’t matter at all that the test performed for a dictionary size of 100,000 effectively allows only half as many errors as for a dictionary size of 50,000. Increasing the number of assertions by 1 decreases the number of possible alternate groundings by about a factor of 10; decreasing the number of assertions by 1 increases it by about a factor of 10. So this translates into a difference between 200,000 assertions and 200,001.
I’m surprised no one has come up with the objection that we believe things that aren’t true. That’s a tough one to handle.
When you wrote that a grounding maps atoms to “concepts in a world W”, I assumed you were talking about an abstract world that was not necessarily the real world. If I had to pick one of the many ways that it falls short of describing humans* to criticize, I’d go with “fails to implement learning”.
* which is just fine given that it seeks to define a necessary but not sufficient condition for consciousness.
Something worries me about this. It seems to say that Consciousness is a quantifiable thing, and as such, certain living things (babies, cows, dogs/cats, fish) may not meet the standards for Consciousness?
Or, am I reading it wrong??? I’m probably reading it wrong, please tell me I’m reading it wrong (if that is what I am doing).
I can understand your worries. It doesn’t actually say that you can prove a lack of consciousness; but that you can prove that, under certain conditions, the meaning of the concepts in the mind would be underdetermined; and argues that you don’t need to treat such an entity as conscious.
A fuller treatment would talk about what parts of the mind are conscious, or which concepts are accessible to consciousness. I don’t think consciousness is a 0/1 binary distinction for an entire brain.
I’m going to go out on a limb and guess you don’t want people quoting you on that...
Too conservative?
How about, “further study is probably needed”?
(deleting comment, hopefully before google spiders it up)
Want me to delete mine?
Okay. It seems innocuous to me, but when I imagine what would happen if Richard Dawkins had made the same joke online somewhere, deleting it seems like a good idea.
Upon reading it again (and with some input from a friend who is a logician, who said this was something I should have been able to understand myself; upon going over it with him again, I discovered he was right), I get that there is this distinction now.
I don’t think consciousness is necessarily a binary distinction for any part of the brain, or any thing for that matter. This does mean that it could be 0/1, but that it is likely that most things capable of exhibiting conscious behavior lie between the two degrees.
What do you mean by a “concept” in a world?
If I said a “thing” in a world, people would think of physical objects; whereas I also want to include events, attributes, and so on. I don’t know a good way of putting this.
I think this article would be better if it started with a definition of consciousness (or maybe even two definitions, a formal one and an operational one). It’s hard to figure out how much sense the calculations make if we don’t know what the results mean.
Well sure, a definition of consciousness would make this the best article ever.
The most exciting discovery in the history of time is always the next one, but it might not look more important than Newtonian gravity in retrospect.
Isn’t the most exciting discovery in the history of time the one I’m most excited by at the moment? And of what use is the consideration of future evaluations?
A full, generally agreed-upon definition may indeed be hard to formulate, which makes it even more important to formulate a good operational definition in the context of this article. That’s not too much to ask, is it? Otherwise, what do these numbers even mean?
So if I understand correctly, your basic claim underlying all of this is that a system can be said not to be conscious if its set of beliefs remains equally valid when you switch the labels on some of the things it has beliefs about. I have a few concerns about this point, which you may have already considered, but which I would like to see addressed explicitly. I will post them as replies to this post.
If I am mischaracterizing your position, please let me know, and then my replies to this post can probably be ignored.
Doesn’t this fail independence of irrelevant alternatives? That is to say, couldn’t I take a conscious system and augment it with two atoms, then add one fact about each atom such that switching the labels on the two atoms maintains the truth of those facts? It seems to me that in that case, the system would be provably unconscious, which does not accord with my intuition.
Yes; I mentioned that in the full version. The brain is full of information that we’re not conscious of. This is necessarily so when you have regions of the graph of K with low connectivity. A more complete analysis would look for uniquely-grounded subsets of K. For example, it’s plausible that infants thrashing their arms around blindly have knowledge in their brains about where their arms are and how to move them, but are not conscious of that knowledge; yet are conscious of simpler sensations.
What does this all mean physically? You talk about a symbolic reasoning system consisting of logic assertions and such, but any symbolic reasoning system ultimately has to be made out of atoms. How can I look at one lump of atoms and tell that it’s a symbolic reasoning system, and another lump and tell that it’s just random junk?
You can’t, because you can interpret any system as a symbolic reasoning system. You don’t need to ask whether a system is a symbolic reasoning system; you need to ask whether it’s conscious.
How can one grounding be falsifiable and another not, and the two groundings still be entirely indistinguishable? If there is a difference, shouldn’t there be some detectable difference? How would they flicker back and forth, as you say, like a Necker cube? Wouldn’t there be some truth of the matter?
I don’t think they can. I wanted to accommodate people who believe that qualia are part of groundings, and that you would have a different grounding if you swapped the experience of blue with the experience of red, rather than argue about it.
That’s how I used to phrase it, but now I would say instead that you switch what the things are mapped to. I think of the labels themselves as qualia, so that switching just the labels would be like switching the experience of “blue” with the experience of “red”.
The number of assertions needed is now so large that it may be difficult for a human to acquire that much knowledge. Does anyone have an estimate for how many facts a human knows at different ages? Vocabulary of children entering grade school is said to be around 3000 words, IIRC.
An interesting result is that it suggests that the rate at which we can learn new concepts is not limited by our ability to learn the concepts themselves, but by our ability to learn enough facts using the concepts that we can be truly conscious of that knowledge. Or: if you suddenly loaded all of the concepts that an adult possesses into the mind of a child, without a large number of facts using those concepts, that child might be able to use the concepts without any conscious comprehension of them. It suggests an interesting reply to Searle’s Chinese Room argument.
Especially given these are likely significantly lower bounds, and don’t account for the problems of running on spotty evolutionary hardware, I suspect that the discrepancy is even greater than it first appears.
What I find intriguing about this result is that essentially it is one of the few I’ve seen that has a limit description of consciousness: you have on one hand a rating of complexity of your “conscious” cognitive system and on the other you have world adherence based on the population of your assertions. Consciousness is maintained if, as you increase your complexity, you maintain the variety of the assertion population.
It is possible that the convergence rates for humans and prospective GAI will simply be different, however. Which makes a certain amount of sense. Ideal consciousness in this model is unachievable, and approaching it faster is more costly, so there are good evolutionary reasons for our brains to be as meagerly conscious as possible, even to fake consciousness when the resources would not otherwise be missed.
I don’t think I understand this.
Hopefully it’s more clear now. I wanted to see what people thought of my ideas on qualia, but they were confusing and tangential.
Yes, it is. I don’t know if it really means anything, but I can follow the article now without getting that “This isn’t going to make any sense unless I try very hard to decode it” feeling.
I haven’t finished reading, but have noted a place where you write “a qualia” instead of “a quale”.
Yes, that stuck out like a sore thumb for me also. I would also suggest saying “conscious experience” instead, to avoid the confused concept of qualia.
I can’t make it less confusing by using a new word for it.
Dennett argues that the whole concept is confused, because the term “qualia” is used to mean something with many other characteristics than just conscious experience. The two are not synonyms.
Oops. Twice. Good catch.