GAZP vs. GLUT
In “The Unimagined Preposterousness of Zombies”, Daniel Dennett says:
To date, several philosophers have told me that they plan to accept my challenge to offer a non-question-begging defense of zombies, but the only one I have seen so far involves postulating a “logically possible” but fantastic being — a descendent of Ned Block’s Giant Lookup Table fantasy...
A Giant Lookup Table, in programmer’s parlance, is when you implement a function as a giant table of inputs and outputs, usually to save on runtime computation. If my program needs to know the multiplicative product of two inputs between 1 and 100, I can write a multiplication algorithm that computes the product each time the function is called, or I can precompute a Giant Lookup Table with 10,000 entries and two indices. There are times when you do want to do this, though not for multiplication—times when you’re going to reuse the function a lot and it doesn’t have many possible inputs; or when clock cycles are cheap while you’re initializing, but very expensive while executing.
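A minimal sketch of that tradeoff in Python (the names MULT_TABLE, multiply_computed, and multiply_glut are illustrative, not from the post):

```python
# Option 1: compute the product every time the function is called.
def multiply_computed(a, b):
    return a * b

# Option 2: precompute a Giant Lookup Table once; calls become pure lookups.
# Two inputs between 1 and 100 -> 100 * 100 = 10,000 entries, two indices.
MULT_TABLE = {(a, b): a * b for a in range(1, 101) for b in range(1, 101)}

def multiply_glut(a, b):
    return MULT_TABLE[(a, b)]  # no arithmetic at call time, just an index

assert multiply_computed(37, 42) == multiply_glut(37, 42) == 1554
```

The table costs memory and setup time up front in exchange for doing no arithmetic at call time, which is exactly the trade described above.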
Giant Lookup Tables get very large, very fast. A GLUT of all possible twenty-ply conversations with ten words per remark, using only 850-word Basic English, would require 7.6 * 10^585 entries.
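A back-of-the-envelope check of that figure (my own arithmetic and variable names, not from the post; the count is simply 850 choices for each of the 10 × 20 word slots):

```python
import math

vocab, words_per_remark, plies = 850, 10, 20
slots = words_per_remark * plies            # 200 word choices per conversation
log10_entries = slots * math.log10(vocab)   # ~585.88
print(f"~10^{log10_entries:.2f} entries")   # ~10^585.88, i.e. roughly 7.6 * 10^585
```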
Replacing a human brain with a Giant Lookup Table of all possible sense inputs and motor outputs (relative to some fine-grained digitization scheme) would require an unreasonably large amount of memory storage. But “in principle”, as philosophers are fond of saying, it could be done.
The GLUT is not a zombie in the classic sense, because it is microphysically dissimilar to a human. (In fact, a GLUT can’t really run on the same physics as a human; it’s too large to fit in our universe. For philosophical purposes, we shall ignore this and suppose a supply of unlimited memory storage.)
But is the GLUT a zombie at all? That is, does it behave exactly like a human without being conscious?
The GLUT-ed body’s tongue talks about consciousness. Its fingers write philosophy papers. In every way, so long as you don’t peer inside the skull, the GLUT seems just like a human… which certainly seems like a valid example of a zombie: it behaves just like a human, but there’s no one home.
Unless the GLUT is conscious, in which case it wouldn’t be a valid example.
I can’t recall ever seeing anyone claim that a GLUT is conscious. (Admittedly my reading in this area is not up to professional grade; feel free to correct me.) Even people who are accused of being (gasp!) functionalists don’t claim that GLUTs can be conscious.
GLUTs are the reductio ad absurdum to anyone who suggests that consciousness is simply an input-output pattern, thereby disposing of all troublesome worries about what goes on inside.
So what does the Generalized Anti-Zombie Principle (GAZP) say about the Giant Lookup Table (GLUT)?
At first glance, it would seem that a GLUT is the very archetype of a Zombie Master—a distinct, additional, detectable, non-conscious system that animates a zombie and makes it talk about consciousness for different reasons.
In the interior of the GLUT, there’s merely a very simple computer program that looks up inputs and retrieves outputs. Even talking about a “simple computer program” is overshooting the mark, in a case like this. A GLUT is more like ROM than a CPU. We could equally well talk about a series of switched tracks by which some balls roll out of a previously stored stack and into a trough—period; that’s all the GLUT does.
A spokesperson from People for the Ethical Treatment of Zombies replies: “Oh, that’s what all the anti-mechanists say, isn’t it? That when you look in the brain, you just find a bunch of neurotransmitters opening ion channels? If ion channels can be conscious, why not levers and balls rolling into bins?”
“The problem isn’t the levers,” replies the functionalist, “the problem is that a GLUT has the wrong pattern of levers. You need levers that implement things like, say, formation of beliefs about beliefs, or self-modeling… Heck, you need the ability to write things to memory just so that time can pass for the computation. Unless you think it’s possible to program a conscious being in Haskell.”
“I don’t know about that,” says the PETZ spokesperson, “all I know is that this so-called zombie writes philosophical papers about consciousness. Where do these philosophy papers come from, if not from consciousness?”
Good question! Let us ponder it deeply.
There’s a game in physics called Follow-The-Energy. Richard Feynman’s father played it with young Richard:
It was the kind of thing my father would have talked about: “What makes it go? Everything goes because the sun is shining.” And then we would have fun discussing it:
“No, the toy goes because the spring is wound up,” I would say. “How did the spring get wound up?” he would ask.
“I wound it up.”
“And how did you get moving?”
“From eating.”
“And food grows only because the sun is shining. So it’s because the sun is shining that all these things are moving.” That would get the concept across that motion is simply the transformation of the sun’s power.
When you get a little older, you learn that energy is conserved, never created or destroyed, so the notion of using up energy doesn’t make much sense. You can never change the total amount of energy, so in what sense are you using it?
So when physicists grow up, they learn to play a new game called Follow-The-Negentropy—which is really the same game they were playing all along; only the rules are mathier, the game is more useful, and the principles are harder to wrap your mind around conceptually.
Rationalists learn a game called Follow-The-Improbability, the grownup version of “How Do You Know?” The rule of the rationalist’s game is that every improbable-seeming belief needs an equivalent amount of evidence to justify it. (This game has amazingly similar rules to Follow-The-Negentropy.)
Whenever someone violates the rules of the rationalist’s game, you can find a place in their argument where a quantity of improbability appears from nowhere; and this is as much a sign of a problem as, oh, say, an ingenious design of linked wheels and gears that keeps itself running forever.
The one comes to you and says: “I believe with firm and abiding faith that there’s an object in the asteroid belt, one foot across and composed entirely of chocolate cake; you can’t prove that this is impossible.” But, unless the one had access to some kind of evidence for this belief, it would be highly improbable for a correct belief to form spontaneously. So either the one can point to evidence, or the belief won’t turn out to be true. “But you can’t prove it’s impossible for my mind to spontaneously generate a belief that happens to be correct!” No, but that kind of spontaneous generation is highly improbable, just like, oh, say, an egg unscrambling itself.
In Follow-The-Improbability, it’s highly suspicious to even talk about a specific hypothesis without having had enough evidence to narrow down the space of possible hypotheses. Why aren’t you giving equal air time to a decillion other equally plausible hypotheses? You need sufficient evidence to find the “chocolate cake in the asteroid belt” hypothesis in the hypothesis space—otherwise there’s no reason to give it more air time than a trillion other candidates like “There’s a wooden dresser in the asteroid belt” or “The Flying Spaghetti Monster threw up on my sneakers.”
In Follow-The-Improbability, you are not allowed to pull out big complicated specific hypotheses from thin air without already having a corresponding amount of evidence; because it’s not realistic to suppose that you could spontaneously start discussing the true hypothesis by pure coincidence.
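One hedged way to make that rule quantitative (my numbers and function name, not the post’s): singling out one hypothesis from roughly 2^N equally plausible rivals takes about N bits of evidence, i.e. observations whose combined likelihood ratio is around 2^N.

```python
import math

def bits_of_evidence_needed(prior, target_posterior=0.5):
    """Bits of evidence (log2 of the likelihood ratio) needed to raise a
    hypothesis from `prior` to at least `target_posterior` via Bayes' rule."""
    prior_odds = prior / (1 - prior)
    target_odds = target_posterior / (1 - target_posterior)
    return math.log2(target_odds / prior_odds)

# Locating "chocolate cake in the asteroid belt" among ~2^100 equally
# (im)plausible rivals takes on the order of 100 bits of evidence.
print(bits_of_evidence_needed(prior=2**-100))  # -> ~100.0
```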
A philosopher says, “This zombie’s skull contains a Giant Lookup Table of all the inputs and outputs for some human’s brain.” This is a very large improbability. So you ask, “How did this improbable event occur? Where did the GLUT come from?”
Now this is not standard philosophical procedure for thought experiments. In standard philosophical procedure, you are allowed to postulate things like “Suppose you were riding a beam of light...” without worrying about physical possibility, let alone mere improbability. But in this case, the origin of the GLUT matters; and that’s why it’s important to understand the motivating question, “Where did the improbability come from?”
The obvious answer is that you took a computational specification of a human brain, and used that to precompute the Giant Lookup Table. (Thereby creating uncounted googols of human beings, some of them in extreme pain, the supermajority gone quite mad in a universe of chaos where inputs bear no relation to outputs. But damn the ethics, this is for philosophy.)
In this case, the GLUT is writing papers about consciousness because of a conscious algorithm. The GLUT is no more a zombie, than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing.
“All right,” says the philosopher, “the GLUT was generated randomly, and just happens to have the same input-output relations as some reference human.”
How, exactly, did you randomly generate the GLUT?
“We used a true randomness source—a quantum device.”
But a quantum device just implements the Branch Both Ways instruction; when you generate a bit from a quantum randomness source, the deterministic result is that one set of universe-branches (locally connected amplitude clouds) see 1, and another set of universes see 0. Do it 4 times, create 16 (sets of) universes.
So, really, this is like saying that you got the GLUT by writing down all possible GLUT-sized sequences of 0s and 1s, in a really damn huge bin of lookup tables; and then reaching into the bin, and somehow pulling out a GLUT that happened to correspond to a human brain-specification. Where did the improbability come from?
Because if this wasn’t just a coincidence—if you had some reach-into-the-bin function that pulled out a human-corresponding GLUT by design, not just chance—then that reach-into-the-bin function is probably conscious, and so the GLUT is again a cellphone, not a zombie. It’s connected to a human at two removes, instead of one, but it’s still a cellphone! Nice try at concealing the source of the improbability there!
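A sketch of that accounting (hypothetical names; bin_of_tables and human_spec stand in for whatever the selector actually consults): a selector that finds a human-matching table by design, rather than by a ~2^-N fluke, has to compare candidates against something that already encodes the human’s input-output behavior, and that is where the “missing” improbability lives.

```python
import random

def random_draw(bin_of_tables):
    # Pure chance: hits a human-matching table with probability ~2^-N for
    # N-bit tables -- far too improbable to explain anything.
    return random.choice(bin_of_tables)

def reach_into_bin(bin_of_tables, human_spec):
    # By design: the selector compares candidates against a specification of
    # the human's input-output behavior, so the "missing" improbability lives
    # in human_spec and in whatever process produced it.
    for table in bin_of_tables:
        if all(table.get(i) == o for i, o in human_spec.items()):
            return table
    return None
```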
Now behold where Follow-The-Improbability has taken us: where is the source of this body’s tongue talking about an inner listener? The consciousness isn’t in the lookup table. The consciousness isn’t in the factory that manufactures lots of possible lookup tables. The consciousness was in whatever pointed to one particular already-manufactured lookup table, and said, “Use that one!”
You can see why I introduced the game of Follow-The-Improbability. Ordinarily, when we’re talking to a person, we tend to think that whatever is inside the skull, must be “where the consciousness is”. It’s only by playing Follow-The-Improbability that we can realize that the real source of the conversation we’re having, is that-which-is-responsible-for the improbability of the conversation—however distant in time or space, as the Sun moves a wind-up toy.
“No, no!” says the philosopher. “In the thought experiment, they aren’t randomly generating lots of GLUTs, and then using a conscious algorithm to pick out one GLUT that seems humanlike! I am specifying that, in this thought experiment, they reach into the inconceivably vast GLUT bin, and by pure chance pull out a GLUT that is identical to a human brain’s inputs and outputs! There! I’ve got you cornered now! You can’t play Follow-The-Improbability any further!”
Oh. So your specification is the source of the improbability here.
When we play Follow-The-Improbability again, we end up outside the thought experiment, looking at the philosopher.
That which points to the one GLUT that talks about consciousness, out of all the vast space of possibilities, is now… the conscious person asking us to imagine this whole scenario. And our own brains, which will fill in the blank when we imagine, “What will this GLUT say in response to ‘Talk about your inner listener’?”
The moral of this story is that when you follow back discourse about “consciousness”, you generally find consciousness. It’s not always right in front of you. Sometimes it’s very cleverly hidden. But it’s there. Hence the Generalized Anti-Zombie Principle.
If there is a Zombie Master in the form of a chatbot that processes and remixes amateur human discourse about “consciousness”, the humans who generated the original text corpus are conscious.
If someday you come to understand consciousness, and look back, and see that there’s a program you can write which will output confused philosophical discourse that sounds an awful lot like humans without itself being conscious—then when I ask “How did this program come to sound similar to humans?” the answer is that you wrote it to sound similar to conscious humans, rather than choosing on the criterion of similarity to something else. This doesn’t mean your little Zombie Master is conscious—but it does mean I can find consciousness somewhere in the universe by tracing back the chain of causality, which means we’re not entirely in the Zombie World.
But suppose someone actually did reach into a GLUT-bin and by genuinely pure chance pulled out a GLUT that wrote philosophy papers?
Well, then it wouldn’t be conscious. IMHO.
I mean, there’s got to be more to it than inputs and outputs.
Otherwise even a GLUT would be conscious, right?
Oh, and for those of you wondering how this sort of thing relates to my day job...
In this line of business you meet an awful lot of people who think that an arbitrarily generated powerful AI will be “moral”. They can’t agree among themselves on why, or what they mean by the word “moral”; but they all agree that doing Friendly AI theory is unnecessary. And when you ask them how an arbitrarily generated AI ends up with moral outputs, they proffer elaborate rationalizations aimed at AIs of that which they deem “moral”; and there are all sorts of problems with this, but the number one problem is, “Are you sure the AI would follow the same line of thought you invented to argue human morals, when, unlike you, the AI doesn’t start out knowing what you want it to rationalize?” You could call the counter-principle Follow-The-Decision-Information, or something along those lines. You can account for an AI that does improbably nice things by telling me how you chose the AI’s design from a huge space of possibilities, but otherwise the improbability is being pulled out of nowhere—though more and more heavily disguised, as rationalized premises are rationalized in turn.
So I’ve already done a whole series of posts which I myself generated using Follow-The-Improbability. But I didn’t spell out the rules explicitly at that time, because I hadn’t done the thermodynamic posts yet...
Just thought I’d mention that. It’s amazing how many of my Overcoming Bias posts would coincidentally turn out to include ideas surprisingly relevant to discussion of Friendly AI theory… if you believe in coincidence.
Whether the belief happens to be true is irrelevant. What matters is whether the person can justify the belief. If the conviction is spontaneously generated, the person doesn’t have a rational argument that shows how the claim arises from previously-accepted statements. Thus, asserting that claim is wrong, regardless of whether it happens to be true or not.
It’s not about truth! It’s about justification!
No. It’s about truth. Cognitive engines don’t run because you justified them into existence. They run when they systematically produce truth, whether the philosophers agree it ought to or not.
To produce truth systematically—by a method known to generate truth reliably—is to produce justified truth.
For sane values of “justified”.
What insane values did you have in mind? “Justified” is pretty much a success-word in philosophy.
I mean, there’s got to be more to it than inputs and outputs.
Otherwise even a GLUT would be conscious, right?
Eliezer, I suspect you are not being 100% honest here. I don’t have any problems with a GLUT being conscious.
If the GLUT is conscious, then there is consciousness in the Zombie World. There cannot be consciousness in the Zombie World, therefore the GLUT cannot be conscious. If the GLUT is not conscious, something else must be looking up the inputs in the GLUT, and that thing must be conscious. Rinse—Repeat.
This is why the epiphenomenal argument is logically impossible—either the Zombie World is not exactly the same as ours (precluded by the framing of the thought experiment) or there is consciousness in the Zombie World (also precluded by the framing of the thought experiment). They are mutually exclusive. A Zombie Master with a GLUT does not solve the problem for the epiphenomenal position—it’s just Zombie Masters and GLUTs all the way down.
This is an incorrect synthesis, and likely an incorrect conclusion. Eliezer is saying it is the process by which the table is populated that involves consciousness, not the thing that does the picking.
Right, I was arguing Roland’s point, not Eliezer’s, and I don’t see where I disagreed with Eliezer in any way.
Roland just said the GLUT is conscious, which means by definition it isn’t in a Zombie World, because the definition of a Zombie World is one that is apparently identical to ours, minus consciousness.
I’m not sure where I screwed the synthesis up here, Eliezer’s post doesn’t really come into it except for framing the subject of Zombie Worlds, GLUTs, and consciousness.
I was just saying if the GLUT is conscious then the Anti-Zombie position automatically wins the Zombie World GLUT argument, by definition. Perhaps I should have just said it exactly like that?
Please reread the bit I quoted. I am not trying to be pedantic, and it’s possible that either I am misreading you, or that you just didn’t write quite what you’d intended. Speaking of the case where the GLUT itself is not conscious, which was the sole focus of my comment, it seems to me that you said that the thing that is “looking up the inputs in the GLUT” must be conscious. This seems to mean “thing that is performing the lookup operation”, which is different from “thing that stored the data to be looked up.” Did I misunderstand you?
I didn’t misspeak; even though the argument I gave wasn’t the exact same one that Eliezer gave, it is essentially the same argument. The “rinse, repeat” was meant to suggest you keep going with it ad infinitum, and the very next step I came up with was the exact same as Eliezer’s. It’s a reference to washing directions on shampoo bottles, and I’ve honestly never had anyone get confused by it, so I apologize.
The point is, if the thing looking up the inputs (to continue where I left off) isn’t conscious, then the thing that created the thing that looks up the inputs is probably conscious. If the thing that created the thing that created the GLUT isn’t conscious (e.g. a true random code generator that happens to produce the GLUT that matches our universe) then the thing that chose that GLUT out of the multitude of others is probably conscious. This is exactly Eliezer’s argument, unless I have completely misunderstood it, and if I have I would love to be corrected (I’ve only been actively engaged in this kind of thinking for a little over a year now, so I’m still very much a newbie). As it is, I don’t see what is different in principle between my point and Eliezer’s.
My point was in regard to Roland’s argument, which was that he didn’t mind the GLUT being conscious in his Zombie World. I was attempting to point out that if the GLUT is conscious then the anti-zombie principle is automatically validated on the grounds that it’s not a Zombie World in that case.
In order for epiphenomenalists to effectively argue the Zombie World using a GLUT, it cannot be conscious itself. Eliezer argued (in a nutshell) that if there was a conscious mind behind the creation of the GLUT, then the GLUT was simply a tool of the conscious mind, and that the GLUT wasn’t actually running things, the consciousness behind it was. This is true regardless of where in the process the consciousness is; the point is that it is there somewhere and has a meaningful effect on the universe it exists in. Any world that has this kind of connection to a consciousness by definition can’t be a real Zombie World.
If consciousness has no effect on the universe then it is meaningless. This has been my understanding of Eliezer’s position throughout this entire series.
“Otherwise even a GLUT would be conscious, right?”
I have to admit that this sounds crazy, and that I don’t really understand what’s going on. But it looks like it’s logically necessary that lookup tables can be conscious. As far as we know, the Universe, and everything in it, can be simulated on a giant Turing machine. What is a Turing machine, if not a lookup table? Granted, most Turing machines use a much smaller set of symbols than a GLUT- base 5 or base 10 instead of base 10^10^50- but how would that change a system from being “non-conscious” to being “conscious”? And while a Turing machine has a state register, this can be simulated by just using N lookup tables instead of one lookup table. It seems like we have to believe that 1), the mathematical structure of a UTM relative to a giant lookup table, which is very minimal indeed, is the key element required for consciousness, or 2), the Universe is not Turing-computable, or 3), consciousness does not exist.
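To make the “Turing machine as lookup table” point above concrete (a toy machine of my own, not from the comment): the machine’s entire control program is a finite table from (state, symbol) to (new state, symbol to write, head move).

```python
# A toy Turing machine whose entire "program" is a lookup table:
# (state, symbol) -> (new_state, symbol_to_write, head_move).
# This one flips bits until it hits a blank, then halts.
TRANSITIONS = {
    ("flip", "0"): ("flip", "1", +1),
    ("flip", "1"): ("flip", "0", +1),
    ("flip", "_"): ("halt", "_", 0),
}

def run(tape, state="flip", head=0):
    tape = list(tape)
    while state != "halt":
        state, write, move = TRANSITIONS[(state, tape[head])]  # pure lookup
        tape[head] = write
        head += move
    return "".join(tape)

print(run("1011_"))  # -> "0100_"
```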
But it is not logically necessary that such a universe would contain genuine consciousness. It might be a zombie universe.
or (4) the universe is not simulated or (5) consciousness is not simulable, even if physics is.
(4) is indistinguishable from (2) (until we make something more powerful than a Turing machine) and (5) is a pretty wishy-washy argument; if you can simulate a human completely, then surely that human would be conscious, or else not completely simulated?
Hrm… as for no one actually being willing to jump in and say “a GLUT can be/is conscious”… What about Moravec and Egan? (Egan in Permutation City, Moravec in Simulation, Consciousness, Existence)… I don’t recall them explicitly coming out and saying it, but it does seem to have been implied.
Anyways, I think I’m about to argue it… Or at least argue that there’s something here that’s seriously confusing me:
Okay, so you say that it’s the generating process of the GLUT that has the associated consciousness, rather than the GLUT itself. Fine...
But exactly where is the breakdown between that and, say, the process that generates a human equivalent AI? Why not say that process is where the consciousness resides rather than the AI itself? if one takes at least some level of functionalism, allowing some optimizations and so on in the internal computations, then the internal “levers” can end up looking algorithmically very very different than the external, even if the behavior is identical.
In other words, as I start with the “correct” rods and levers to produce consciousness, then optimize various bits of it incrementally… when does the optimization process itself contain the majority of the consciousness?
More concretely, let’s do something analogous to that hashlife program, creating a bunch of minigluts for clusters of neurons rather than a single superglut for the entire brain.
What’s going on there? is the location of the consciousness now kinda spread out and scrambled in spacetime, a la Permutation City?
As we make the sizes of the clusters we’re precomputing all possible states for larger, or basically grouping clusters and making megaclusters out of them… does the localization of the consciousness start to incrementally concentrate in spacetime toward the optimization process?
To perhaps make this really concrete… implement a Turing machine in a Life universe, implement a brain simulation on that, and then start with a regular Life simulation, then regular hashlife, and then incrementally “optimize” with larger and larger clusters of cells, so you end up with ever larger lookup tables. (I.e., run the sim for a bit, then pause, do a step of optimization of the Life CA algorithm (Life → regular hashlife), run for a bit, pause, make a hashhashlife or make larger clusters, continue running, etc.)
This isn’t so much an argument for a specific perspective so much as a thought experiment and question. I’m honestly not entirely sure how to view this. “Simplest” seems to be Permutation City style “scrambled in spacetime” consciousness.
Hi Caledonian. Hi Stephen. If I remember correctly, this is where the program that is the three of us having college bull sessions goes HALT and we never get any further, is it not? Once again, Eliezer says clearly what Caledonian was thinking and articulated through metaphor in one-on-one conversations (namely “Well, then it wouldn’t be conscious. IMHO.” ) but is predictably not understood by same, while I am far from sure. Eliezer: You don’t know how much I wanted to see you type essentially the line “Ordinarily, when we’re talking to a person, we tend to think that whatever is inside the skull, must be “where the consciousness is”. It’s only by playing Follow-The-Improbability that we can realize that the real source of the conversation we’re having, is that-which-is-responsible-for the improbability of the conversation—however distant in time or space, as the Sun moves a wind-up toy.”. Honestly, to me that summarizes the essence of not falling into the actual extremely common philosophical heresy of scientism, a heresy which I consider Chappell and Chalmers, for instance, to belong to (rather than positivism, which Chappell calls ‘scientism’ and which Caledonian doesn’t actually believe in based on my personal communications with him despite his ‘belief in belief’ in it).
“The GLUT is no more a zombie, than a cellphone is a zombie because it can talk about consciousness while being just a small consumer electronic device. The cellphone is just transmitting philosophy speeches from whoever happens to be on the other end of the line. A GLUT generated from an originally human brain-specification is doing the same thing.”
You begin by saying that you are using “zombie” in a broader-than-usual sense, to denote something that “behave[s] exactly like a human without being conscious”. The GLUT was constructed by observing googols of humans, but no human being plays a part in its operation. Are you going to call it conscious just because humans were an input to the design process? And even that’s not true, in the extremely improbable but still possible case where the GLUT is generated by a random process. Is the presence of consciousness supposed to depend on the manner of creation, even though the result be physically identical?
It is possible that the point of this essay was just to say that if something talks with facility about being conscious, then with overwhelming probability the real thing is somewhere causally upstream, and that you were not taking a stand as to whether the GLUT in itself is conscious or not. But the evidence suggests otherwise: you do say that the randomly generated GLUT is not conscious, and you say that the GLUT generated by brute-force observation “is not a zombie”. In which case I ask again, Is the presence of consciousness supposed to depend on the manner of creation, even though the result be physically identical?
And a bonus question: Suppose we incrementally modify the GLUT so that more and more of its responses are generated through computation, rather than just being looked up. Evidently there is something of a continuum between pure GLUT and shortest possible program implementing exactly the same responses. Where in this continuum is the boundary between consciousness and unconsciousness?
Isn’t the state-space of problems like this known to exceed the number of atoms in the Universe? There is a term for problems which are rendered unsolvable because there just isn’t enough possible state-storing matter to represent them, but I can’t think of it now.
Pardon me if this is a stupid question, my experience with AI is limited. Funny Eliezer should mention Haskell, I’ve got to get back to trying to wrap my brain around ‘monads’.
I’m not sure what you mean by a GLUT? A static table obviously wouldn’t be conscious, since whatever the details consciousness is obviously a process. But, the way you use GLUT suggests that you are including algorithms for processing the look-ups, how would that be different from other algorithmic reasoning systems using stored data (memories)?
The lookup algorithms in question are not processing the meaning of the inputs and generating a response as needed. The lookup algorithms simply string-match the conversational history to the list of inputs and output the next line in the conversation.
An algorithmic reasoning system, on the other hand, would seem to be something that actually reasons about the meaning of what’s been said, in the sense of logically processing the input as opposed to string-matching it.
A GLUT AGI need not be a UTM, since most people have limited ability to execute programmes in their heads. You can write in “huh? I’ve lost track” for most answers to “what do you get if you execute the following programme steps”.
There was something like a random-yet-working GLUT picked out by sheer luck—abiogenesis. And it did eventually become conscious. The original improbability is a small jump (comparatively) and the rest of the improbability was pumped in by evolution. Still, it’s an existence proof of sorts—I don’t think you can argue conscious origin as necessary for consciousness. There needs to be an optimizer, or enough time for luck. There doesn’t really need to be any mind per se.
Now, if we could only get a non-conscious optimizer to do it, or not, on command, we’d have the nonperson predicate we need.
A simple GLUT cannot be conscious and/or intelligent because it has no working memory or internal states. For example, suppose the GLUT was written at t = 0. At t = 1, the system has to remember that “x = 4”. No operation is taken since the GLUT is already set. At t = 2 the system is queried “what is x?”. Since the GLUT was written before the information that “x = 4” was supplied, the GLUT cannot know what x is. If the GLUT somehow has the correct answer then the GLUT goes beyond just having precomputed outputs to precomputed inputs. Somehow the GLUT author also knew an event from the future, in this case that “x = 4” would be supplied at t = 1.
It would have to be a Cascading Input Giant Lookup Table (CIGLUT). E.g.:
At t = 1, input = “1) x = 4”
At t = 2, input = “1) x = 4 // previous inputs
what is x?” // + new inputs
We would have to postulate infinite storage and reaffirm our commitment to ignoring combinatorial explosions.
Think about it. I need to go to sleep now, it’s 3 AM.
Eliezer covered some of this in describing the twenty-ply GLUT as not infinite, but still much larger than the universe. The number of plies in the conversation is the number of “iterations” simulated by the GLUT. For an hour-long Turing test, the GLUT would still not be infinite (i.e., it would still describe the Chinese Room thought experiment), and, for the purposes of the thought experiment, it would still be computable without infinite resources.
Certainly, drastic economies could be had by using more complicated programming, but the outputs would be indistinguishable.
The rule of the rationalist’s game is that every improbable-seeming belief needs an equivalent amount of evidence to justify it.
Aren’t you already breaking it by allowing what you consider improbable GLUTs with no evidence?
Also how would you play this game with someone with a vastly different prior?
Any process can be replaced by a sufficiently-large lookup table with the right elements.
If you accept that a process can be conscious, you must acknowledge that lookup tables can be.
There is no alternative. Resistance is useless.
Let me be the first in this thread to suggest that, for the purposes of GLUTs, we should taboo the word “conscious.” This post, in my opinion, is a shining example of Eliezer’s ability to verbally carve reality at its joints. After a remarkably clear discussion of the real problem, the question of “conscious” GLUTs seems like a silly near-boundary case.
Is there a technical reason I should think otherwise?
PK is right. I don’t think a GLUT can be intelligent, since it can’t remember what it’s done. If you let it write notes in the sand and then use those notes as part of the future stimulus, then it’s a Turing machine.
The notion that a GLUT could be intelligent is predicated on the good-old-fashioned AI idea that intelligence is a function that computes a response from a stimulus. This idea, most of us in this century now believe, is wrong.
Wow, a lot of things to say at this point.
Eliezer Yudkowsky: First, as I started reading, I was going to correct you and point out that Daniel Dennett thinks a GLUT can be conscious, as that is exactly his response to Searle’s Chinese Room argument, thinking that I didn’t need to read further. Fortunately, I did read the whole thing and found that, when I look at the substance of what the two of you believe, it’s the same. While Dennett would say that the GLUT running in the Chinese Room is conscious, what you were really asking was, what is the source of the consciousness? Since that GLUT would have to be written by a consciousness, you two are in agreement.
Second, I don’t think you have ruled out (shown to be low enough) the possibility of randomly picking out a GLUT that just happens to be conscious. While there is a low probability of picking just the right GLUT that happens to implement just the right lookup table, it’s no different than any of the other unlikely things that had to happen for us to all be here. I mean, a certain group of people will point to the low probability of physical constants being just right/self-replicating molecules forming/single-celled organisms becoming multicellular/wing or flagellum or cell or blood clotting evolving, as evidence it couldn’t have happened by chance (that there was a consciousness behind it). In response, one can just point to the anthropic principle—why wouldn’t that apply here? We could only be here to observe the universes where random processes grabbed that one GLUT that implemented something functionally similar to consciousness.
Finally, I had assumed through this series of posts that you were taking some position sharply divergent from Dennett. I mean, if the whole concept of qualia is incoherent, a universe lacking that incoherence isn’t so impossible, right?
No. No. No. No. No.
The probability of picking the “just the right” GLUT is vastly smaller than any mere physical chain of events – there’s no chance!
“Any process can be replaced by a sufficiently-large lookup table with the right elements.”
That misses my point. A process is needed to do the look-ups or the table just sits there.
If you abstract away the low-level details of how neurons work, couldn’t the brain be considered a very large, multidimensional look-up table with a few rules regarding linkages and how to modify strengths of connections?
I will step up and claim that GLUTs are conscious. Why wouldn’t they be?
Because consciousness is precluded in the thought experiment. The whole idea is that the Zombie World is identical in every way—except it doesn’t have this ephemeral consciousness thing.
Therefore the GLUT cannot be conscious, by the very design of the thought experiment it cannot be so. Yet there isn’t any logical explanation for the behavior of the zombies without something, somewhere, that is conscious to drive them. That’s why the GLUT came into the discussion in the first place—something has to tell the zombies what to do, and that something must be conscious (except it can’t be, because the thought experiment precludes it).
Thus, an identical world without consciousness is inconceivable.
So does that mean a GLUT in the zombie world cannot be conscious, but a GLUT in our world (assuming infinite storage space, since apparently we were able to assume that for the zombie world) can be conscious?
Phil: GLUTs can certainly learn. A GLUT’s program is this:
while (true) {
    x = sensory input
    y, z = GLUT(y, x)
    muscle control output = z
}
Everything a GLUT has learned is encoded into y. Human GLUTs are so big that even their indices are huge.
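Written out as a runnable sketch (the table contents are a toy example, not a real brain specification), the loop above needs no memory beyond the state value threaded back in on the next iteration, which also answers the earlier “x = 4” objection:

```python
# Toy state-carrying lookup table: (state, input) -> (next_state, output).
GLUT = {
    ("start", "what is x?"):   ("start", "you haven't told me yet"),
    ("start", "x = 4"):        ("knows_x", "noted"),
    ("knows_x", "what is x?"): ("knows_x", "x is 4"),
}

def respond(state, sensory_input):
    return GLUT[(state, sensory_input)]

state = "start"
for line in ["x = 4", "what is x?"]:
    state, output = respond(state, line)
    print(output)   # -> "noted", then "x is 4"
```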
Is the entity that results from gerrymandering together neural firings from different people’s brains, so as to produce a pattern of neural firings similar to a brain but not corresponding to any “real person” in this Everett branch, conscious? How about gerrymandering together instructions occurring in different CPUs? Atomic motions in random rocks?
Consider a tiny look-up table mapping a few (input sentence, state) pairs to (output sentence, state) pairs—one small enough to practically be constructed, even. So long as you stick to the few sentences it accepts in the current state, it behaves exactly like a GLUT. If a GLUT is conscious, either this smaller table is conscious too, or it’s the never activated entries that make the GLUT conscious.
Personally my response to the one would be similar to Caledonian’s; perhaps more extreme. I think the linguistic analysis of philosophers is essentially worthless. Language is a means of communication, and the referents a word has are a matter of convention; meaning is a psychological property of no particular value. What concerns me is the person doing the communication. Where have they been and what have they done? You can, of course, follow the improbability on that. But my maxim is just,
Maxim: Language is a means of communication.
If somebody comes to you with just words; ignore them. Even if they’re words about things. There isn’t some metaphysical relation of reference reaching out from the noises leaving their mouth and connecting them to physical objects. There isn’t some great Eternal Registry where “the problem of the chocolate cake in the asteroid belt” is suddenly registered as soon as somebody mentions the possibility of a chocolate cake in the asteroid belt. We do not, from that point on, have to solve the mystery of the chocolate cake or, and this is important, account for it in any way.
Philosophy and religion are very similar in their reverence for language. They both make the same essential mistake: they endow language with a power it does not have. It’s the same view of language the shaman and the witchdoctor have. Language does something. It establishes something. The mere utterance of a word has some effect in this world or some other. For the shaman it’s the spirit world; for the philosopher it’s the non-actual possible world (or whatever happens to be in fashion this week). We know all this is not true as an empirical point of fact; language only affects the listener. This is why I reject philosophy outright.
Well said.
I think you will find that philosophers are very well aware of the limitations of language. More so than just about anyone else.
Philosophers currently talk about PWs (possible worlds), but that does not mean they reify them. (What would “non-actual” mean?) And don’t forget that some scientists, and some Less Wrongers, and EY himself, do believe in alternate worlds. You need a better example. Or a different theory.
“But suppose someone actually did reach into a GLUT-bin and by genuinely pure chance pulled out a GLUT that wrote philosophy papers?”
This event has a probability that is best thought of as being smaller than 1 over the biggest number that most people can think of, so I don’t particularly care.
Ah, I see you’re not familiar with the works of Jorge Luis Borges. Permit me to hyperlink: The Library of Babel
PK, Phil Goetz, and Larry D’Anna are making a crucial point here but I’m afraid it is somewhat getting lost in the noise. The point is (in my words) that lookup tables are a philosophical red herring. To emulate a human being they can’t just map external inputs to external outputs. They also have to map a big internal state to the next version of that big internal state. (That’s what Larry’s pseudocode means.)
If there was no internal state like this, a GLUT couldn’t emulate a person with any memory at all. But by hypothesis, it does emulate a person (perfectly). So it must have this internal state.
And given that a GLUT is maintaining a big internal state it is equivalent to a Turing machine, as Phil says.
But that means that it can implement any computationally well-defined process. If we believe that consciousness can be a property of some computation then GLUTs can have consciousness. This isn’t even a stretch, it is totally unavoidable.
The whole reason that philosophers talk about GLUTs, or that Searle talks about the Chinese room, is to try to trick the reader into being overwhelmed by the intuition that “that can’t possibly be conscious” and to STOP THINKING.
Looking at this discussion, to some extent that works! Most people didn’t say “Hmmm, I wonder how a GLUT could emulate a human...” and then realize it would need internal state, and the internal state would be supporting a complex computational process, and that the GLUT would in effect be a virtual machine, etc.
This is like an argument where someone tries to throw up examples that are so scary, or disgusting, or tear jerking, or whatever that we STOP THINKING and vote for whatever they are trying to sneak through. In other words it does not deserve the honor of being called an argument.
This leaves the very interesting question of whether a computational process can support consciousness. I think yes, but the discussion is richer. GLUTs are a red herring and don’t lead much of anywhere.
Internal state is not necessary. Consider a function f mapping strings to strings by means of a lookup table. Here are some examples of f evaluated with well-chosen inputs:
f(“Hi, Dr. S here, how are you now that you’re a lookup table?”) = “Very well, thank you. I notice no difference.”
f(“Hi, Dr. S here, how are you now that you’re a lookup table? Really, none at all?”) = “Yes, really no differences at all.”
f(“Hi, Dr. S here, how are you now that you’re a lookup table? You have insulted my entire family!”) = “I know you well enough to know that my last reply could not possibly have insulted you; someone must be feeding me fake input histories again.”
There should probably be timestamps in the input histories but that’s an implementation detail. For what it’s worth, I hold that f is conscious.
Of course a GLUT can be conscious. A problem some may have with it would be that it is not self-modifying, for the table is set in stone, right? Well, consider it from this perspective:
First of all, I assume that all or some of the output is fed back into the input, directly or indirectly (or is that cheating? why?). Then, we can divide the GLUT into two parts, A and B, that differ only in one input: the fact that the “zombie” has previously heard a particular phrase, for example “You are not conscious, you ugly zombie!”.
There is no need for the being to have any other kind of “memory” apart from the GLUT, because we can postulate that from the point that that phrase is heard, and produces an output in the “B” zone of the table, there is no possible combination of feedback plus external inputs that goes out of the GLUT via the “A” zone. With a truly Giant LUT, we can keep all the state we need.
Then we can easily say that the table has been “changed”, for the outputs are coming from an entirely separated zone (“B”) of the table, and it cannot go back to “A”, so we might as well discard that part and say that the table has changed.
People who want to read more about this topic online may find that it is sometimes referred to as a “humongous” (slang for huge) lookup table or HLUT. Googling on that term will find some additional hits.
Psy-Kosh’s point about implementations that use lookup tables internally of various sizes I think echoes Moravec’s point in Mind Children. The idea is that you could replace various sub-parts of your conscious AI with LUTs, ranging all the way from trivial substitutions up to a GLUT for the whole thing. Then as he says, when and where is the consciousness?
I would suggest that the answer is meaningless, that consciousness cannot necessarily be localized in the same way as some other properties. Where, after all, in our own brains, is the consciousness, if we zoom in and look at individual neurons? Is there a “consciousness scalar field” where we can indicate, at each point in the brain, how much consciousness is present there? I doubt it.
One other question this raises is the issue of implementation. There is an extensive philosophical debate (also involving Chalmers) on when a given system can be said to implement a given computation, in particular a conscious computation. I recall several years back Eliezer writing on these topics and at the time he saw this as a major stumbling block for functionalism. I would be interested in hearing how his thoughts have evolved, and I hope he can write about this soon.
The more I think about it, the more I am convinced that if any GLUT could ever be made it would be an unspeakably horrible abomination. To explicitly represent the brain states of all the worst things that could happen to a person is a terrible thing. Whether the “internal state” variable is actually pointing at one doesn’t seem to make a big moral difference. GLUTs are torture. They are the worst form of torture I’ve ever heard of. I’m glad they’re almost certainly impossible.
I recall several years back Eliezer writing on these topics and at the time he saw this as a major stumbling block for functionalism. I would be interested in hearing how his thoughts have evolved, and I hope he can write about this soon.
Very, very strongly seconded.
Larry gives me another idea. Say the GLUT is implemented as a giant book with a person following instructions a la the Chinese Room. In the course of looking up the current (sentence, state) pair in the book, many other entries will inevitably impinge on the operator’s retinas and enter their mind, but not be reported. Do the experience-moments associated with them occur? Or say it’s implemented as a delay line memory that constantly cycles and discards entries until it reaches the current input, which it reports. Do the experience-moments associated with all the non-reported entries occur?
(I have a feeling that’s a very Wrong Question.)
Hal: Yeah, I actually am inclined toward thinking that something like Permutation City style cosmology/consciousness is actually valid… HOWEVER
If so, that seems to separate consciousness and material reality to the point that one may as well say “what material reality?”
But then, one could say
“hrm, okay, so let’s say that physics as we know it is the wrong reduction, and instead there’s some other principle that ends up implying/producing consciousness, and something about that fundamental principle and so on causes statistical patterns/regularities in the types of conscious experience that can exist, allowing an apparent ‘well-behaved material reality’”
Of course, when I then continue reasoning along those lines, it seems to semi-implode as soon as I ask myself “hrm… so instead of assuming particle fields and stuff as fundamental, assume some fundamental principles that eventually give rise to consciousness and patterns in it and so on… Some fundamental principles like, say… the physics of particle fields? ;)”
Pretty much the main thing I feel I can solidly say on this whole matter is that I’m very confused.
Oh well… hopefully the GAZP implies that as soon as neuroscience actually solves the “easy problem”, that’ll automatically hand us the answer to the “hard problem” (whatever form that answer may take)
Greg Egan says, in the Permutation City FAQ:
Nick: oh, hey, cool, thanks. Didn’t know about the existence of such a FAQ.
Yeah, the uniformity thing (which I thought of in terms of existence of structure in experience) does seem to be a hit against it, and something I’ve spent time thinking about, still without conclusion though.
On the other hand, the chain of reasoning leading to it seems hard to argue against.
I.e., what would have to be true for something like the dust theory to be false? I have trouble thinking of any way of having the dust theory be false and yet also keeping anything like zombies disallowed.
Incidentally, I note that the uniformity/structure problem is also, near as I can tell, a hit against Tegmark style “all possible mathematical structures” multiverse. (Actually, that is something I’m inclined to group together with Dust/Moravec cosmologies)
But, it seems to be that we end up with the same sort of problem with Boltzmann brains.
Why, near as I can tell, aren’t most of “me” (for any reasonably applicable reference class) made of Boltzmann brains or something analogous? (obviously, there would be some “me” that says the same thing even if the genuine majority of “me”s were such entities, but from my perspective, if the majority of me were such, finding that I’m experiencing something now, and continuing to experience, and not dying after a moment of awareness as a Boltzmann brain suggests that the majority of “me” isn’t such an entity. And you can presumably use the same argument on yourself to convince yourself that the “majority of you” isn’t either.)
Incidentally, I note that the uniformity/structure problem is also, near as I can tell, a hit against Tegmark style “all possible mathematical structures” multiverse
Not necessarily. Tegmark suggests that mathematical structures with higher algorithmic complexity [in what encoding?] have lower weight [is there a Mangled Worlds-like phenomenon that turns this weight into discrete objective frequencies?], and that laws producing an orderly universe have lower complexity than chaotic universes or especially encodings of specific chaotic experiences.
Does Tegmark provide any justification for the lower weight thing or is it a flat out “it could work if in some sense higher complexity realities have lower weight”?
For that matter, what would it even mean for them to be lower weight?
I’d, frankly, expect the reverse. The more “tunable parameters”, the more patterns of values they could take on, so...
For that matter, if some means of different weights/measures/whatever could be applied to the different algorithms, why disallow that sort of thing being applied to different “dust interpretations”?
And any thoughts at all on why it seems like I’m not (at least, most of me seemingly isn’t) a Boltzmann brain?
Well, the first point is to discard the idea that orderly perceptions are less probable than chaotic ones in the Dust.
The second is to recognize that probability doesn’t matter to the anthropic principle at all. You don’t exist in the chaotic perspectives, so you never see them.
Psy-Kosh:
Does Tegmark provide any justification for the lower weight thing or is it a flat out “it could work if in some sense higher complexity realities have lower weight”?
It’s the same justification as for the Kolmogorov prior: if you use a prefix-free code to generate random objects, less complex objects will come up more frequently. Descriptions of worlds with more tunable parameters must include those parameters, which adds complexity. (But, yes, if complexity/weight/frequency is ignored, there are infinitely more worlds above any complexity bound than below it.)
For that matter, what would it even mean for them to be lower weight?
Good question. With MWI, there’s Robin’s “mangled worlds” proposal (and maybe others) to generate objective frequencies; I don’t know of any such suggestion for Tegmark’s multiverse.
And any thoughts at all on why it seems like I’m not (at least, most of me seemingly isn’t) a Boltzmann brain?
From Wikipedia: “Boltzmann proposed that we and our observed low-entropy world are a random fluctuation in a higher-entropy universe. Even in a near-equilibrium state, there will be stochastic fluctuations in the level of entropy. The most common fluctuations will be relatively small....” So we have strong evidence that this is false; there must be some reason to expect large, low-entropy universes to be more common than you would naively predict. Still, I would expect Boltzmann brains to outnumber ‘normal’ observers even within our universe, because there’s only a narrow window of time for ‘normal’ observers to exist, but an infinity of heat death for Boltzmann brains to arise in, so I’m still confused.
Caledonian:
Well, the first point is to discard the idea that orderly perceptions are less probable than chaotic ones in the Dust.
Could be, but there doesn’t seem to be any prior reason to suppose this. It seems that the dust should generate observer-moments with probability according to their algorithmic complexity, which would produce many more chaotic than normal ones. But it would solve the problem.
The second is to recognize that probability doesn’t matter to the anthropic principle at all. You don’t exist in the chaotic perspectives, so you never see them.
For every ‘normal’ possible world, there exist a huge number exactly like it but with small but glaring anomalies, like I have two sets of inconsistent memories, or all coin flips come up heads, or… Observers could still exist in these partially-chaotic perspectives. There are also worlds that are almost entirely chaotic but with an island of order just big enough for one observer.
No one in particular: even if it’s possible to account for why the dust wouldn’t produce consciousness, the same arguments would still seem to apply to a non-conscious, purely computational Bayesian decision system (it would be surprised to observe order, etc.) I suspect this is actually a doubly wrong question, resulting from confusion about both consciousness and anthropic reasoning.
Psy-Kosh: “Yeah, the uniformity thing (which I thought of in terms of the existence of structure in experience) does seem to be a hit against it, and something I’ve spent time thinking about, still without reaching a conclusion, though.
On the other hand, the chain of reasoning leading to it seems hard to argue against.
I.e., what would have to be true for something like the dust theory to be false? I have trouble thinking of any way of having the dust theory be false and yet also keeping anything like zombies disallowed.”
Psy-Kosh, that isn’t a chain of reasoning leading to it. Your premise is that zombies are disallowed, which has not been established.
In other words, our evidence for the falsity of the Dust Theory—even if it is possible that someone may sometime present a Revised Dust Theory consistent with our evidence—is also evidence that Eliezer is wrong, and something like zombies are possible.
The full version of the Library of Babel can be generated by “walking” through the versions with a limited number of texts, each of finite length. It contains every possible string that can be composed of a given set of symbols—infinitely many strings, each infinitely long. Any finite string that can appear in the Library, does appear—infinitely many times.
In the English version, in any of the truncated (and sufficiently long) versions of the Library, the sequence “AB” is much more common than “CDEFG”. It doesn’t matter whether the texts are ten thousand letters long, or ten billion—the first is less complex and thus more probable than the second.
In the FULL version, “AB” and “CDEFG” are equally probable. Each appears infinitely often, and the two infinities are of the same cardinality.
It’s interesting that Eliezer had never heard anyone say that a GLUT is conscious before now, yet nearly all the commenters here are saying that the GLUT is conscious. What is the meaning of this?
Unknown: I was unclear. I meant “rejecting the assumptions involved in the chain of reasoning that leads to the dust hypothesis would seem to require accepting things very much like zombies, and in ways that seem rather preposterous, at least to me”
Yes, obviously if ~zombie → dust, then ~dust → zombie. Either way, I know I’m very confused about this whole matter.
Caledonian: Yes, AB will be more common than CDEFG as a substring. But ABABABABABAB will be less common than AB(insert-random-sequence-here).
In other words, the number of “me”s that also observe an externally structured world that persistently seems to be structured ought to, at least on the surface level, be somewhat smaller than the number of “me”s that experience chaos.
It’s the same problem as the Boltzmann brain thing.
Dangnabbit, I want to know all the answers about all the big questions about reality.
I don’t know all the answers about all the big questions about reality.
Reality is being mean to me. waaaaaaaaaa! :)
Would you argue that odd numbers are as probable as even numbers in the set of natural numbers, because the infinities they belong to have the same cardinality?
How about squares (1, 4, 9, 16, 25, …) versus non-square numbers? Prime numbers versus composite numbers?
It depends on how you order it. With the natural numbers in ascending order, squares are less common. Interleaving them like {1, 2, 4, 3, 9, 5, 16, 6, 25, 7, …}, they’re equally common. With a different order type like {2, 3, 5, 6, 7, …, 1, 4, 9, 16, 25, …}, I have no idea. This is a problem.
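A quick illustrative sketch of that order-dependence (my own toy code, not anything from the thread; weave and density are invented helpers): under the ascending enumeration the fraction of squares among the first n naturals shrinks toward zero, while under the interleaved enumeration it stays at one half.

isSquare :: Integer -> Bool
isSquare k = let r = floor (sqrt (fromIntegral k :: Double)) in r * r == k

ascending :: [Integer]
ascending = [1 ..]                                  -- 1,2,3,4,...: squares thin out

interleaved :: [Integer]
interleaved = weave [k * k | k <- [1 ..]] (filter (not . isSquare) [1 ..])
  where weave (x:xs) (y:ys) = x : y : weave xs ys   -- 1,2,4,3,9,5,...: strict alternation

density :: Int -> [Integer] -> Double
density n xs = fromIntegral (length (filter isSquare (take n xs))) / fromIntegral n
-- density 100000 ascending   is about 0.003, and keeps falling as n grows
-- density 100000 interleaved is exactly 0.5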
See also Nick Bostrom’s Infinite Ethics [PDF].
Aw, Nick, you spoiled the punchline! ;-)
As far as I understand, the sets of odd numbers, squares, and primes are all countable.
As such, a one-to-one correspondence can be established between them and the counting numbers. Therefore, considered across infinity, there are just as many primes as there are odd numbers.
There are as many examples of ABABABAB as there are examples of AB[random sequence of English letters six symbols long] in the full Library. There are as many examples of ABABABAB as there are examples of AB[random sequence of English letters ten-thousand symbols long] in the full Library.
I acknowledge that this is very counterintuitive. But isn’t the point of this blog to move beyond mere intuition and look at what rationality has to say?
Caledonian,
The part I have a problem with is where you go from the cardinality of the sets to a judgment of “equally probable”.
Let me put it this way: you wrote that any finite string that can appear in the Library does appear, infinitely many times.
The “any” is the problem. I can construct a truncated version of the Library where your assertion doesn’t hold, just like I can fiddle with the order of a conditionally convergent series to get any limiting value I want. So when you say, “In the FULL version...”, you’ve left out a key piece of information, to wit: what is the limiting process which takes finite versions of the Library to the infinite version?
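(The classic example of that order-dependence: the alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + … converges to ln 2 in its given order, but by Riemann’s rearrangement theorem its terms can be reordered to converge to any real number you like, or to diverge. An “infinite Library” with no specified limiting process is in the same position.)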
My statement doesn’t hold in ANY truncated version of the Library—it’s not difficult to construct an example, because any finite version automatically serves.
But we’re not DEALING with a finite version of the Library. We are dealing with the infinite version. And infinity wreaks some pretty serious havoc on conventional concepts of probability.
So why do you say that all sentences have equal probability, rather than that the probability is undefined, which would seem to be the default option?
Hmmmm...
The set of Turing machines is countably infinite.
If I ran a computer program that systematically emulated every Turing machine, would I thereby create every possible universe?
For example:
int n = 1;
int max = 1;
while (1) {
    emulate_one_instruction(n);   /* advance Turing machine #n by one step */
    n = n + 1;
    if (n > max) {                /* finished a pass over machines 1..max */
        max = max + 1;            /* widen the pass... */
        n = 1;                    /* ...and start over from machine #1 */
    }
}
(In other words, the pattern of execution goes 1,1,2,1,2,3,1,2,3,4, and so on. If you wait long enough, this sequence will eventually repeat any number you specify as many times as you specify.)
Of course, you’d need infinite resources to run this for an infinite number of steps...
Congratulations, you have reinvented the universal dovetailer
Some of those instructions won’t halt, so eventually you’ll get hung up in an infinite loop without outputting anything. And the Halting Problem has no general solution...
First, I haven’t seen how this figures into an argument, and I see that Eliezer has already taken this in another direction, but...
What immediately occurs to me is that there’s a big risk of a faulty intuition pump here. He’s describing, I assume, a lookup table large enough to describe your response to every distinguishable sensory input you could conceivably experience during your life. The number of entries is unimaginable. But I suspect he’s picturing and inviting us to picture a much more mundane, manageable LUT.
I can almost hear the Chinese Room Fallacy already. “You can’t say that a LUT is conscious, it’s just a matrix”. Like ”...just some cards and some rules” or ”...just transistors”. That intuition works in a common-sense way when the thing is tiny, but we just said it wasn’t.
And let’s not slight other factors that make the thing either very big and hairy or very, very, very big.
To work as advertised, it needs some sense of history. Perhaps every instant in our maybe-zombie’s history has its own corresponding dimension in the table, or perhaps some field(s) of the table’s output at each instant is an additional input at the next instant, representing one’s entire mental state. Either way, it’s gotta be huge enough to represent every distinguishable history.
The input and output formats also correspond to enormous objects capable of fully describing all the sensory input we can perceive in a short time, all the actions we can take in a short time (including habitual, autonomic, everything), and every aspect of our mental state.
This ain’t your daddy’s 16 x 32 array of unsigned ints.
Cyan: not true. As you can see, the non-halting processes don’t prevent the others from running; they slow them down, but who cares when you have an infinite computer?
Tom: what do you think of my previous comment about a tiny look-up table?
That’s a good strategy and I recommend you stick to it.
The infinities are absolutely needed, here.
Caledonian: But do we here need to go beyond “well behaved limit defined infinities”?
Nick, you’re right. I just misread/misinterpreted the pseudo-code “emulate_one_instruction”.
You do if you want to talk about certain sets. Some of those sets are relevant to the Dust hypothesis. Therefore, if you want to talk about the Dust hypothesis, you have to be willing to discuss infinities in a more complex way.
Short answer: yes.
Paul and Patricia Churchland, and Jerry Fodor, and others, have argued that GLUTs would be conscious.
They would be conscious. But they need memory, because the past provides context that changes proper responses to future questions / dialogue.
Amendment: I said GLUTs need memory based on the idea of perfectly duplicating the behavior of some other conscious being, like Eliezer, who does have memory. But there are brain-damaged people with various deficiencies in long- and/or short-term memory who still have conscious experience, so a GLUT without the ability to store new memories could be conscious like those people. Anyhoo.
A person’s thoughts are underdetermined by their actions—there’s no way, probably even in principle, to know nearly as much about my current thoughts as I do by observing my macro-level behavior (as opposed to micro-scale heat/EM wave output), and definitely no way to do so by observing what I type, even over a long period of interaction. So, since a GLUT is purely behavioral, which of the many possible experiences corresponding to my behavior would arise from a GLUT simulating me?
Nick: a GLUT wouldn’t just be a list of actions though, it’d be a list, basically, of all possible outputs for all possible inputs.
In other words, if I simply knew your actions, that might underdetermine you, but if I knew all the ways you would have acted in all possible circumstances, well, it’s not obvious to me that that would underdetermine you.
It seems likely to me that even that, for reasonable definitions of “action”, couldn’t distinguish between e.g. me and a very good improviser with a rich model of my mind (and running at a high subjective speedup) but completely different private thoughts, or a group of such people, or between me and me plus some secret thought I would never tell anyone or act on but regularly think about.
Nick: Are you even reasonably confident that such an impostor wouldn’t, effectively, have instantiated a version of you in their head?
Even if they did (and I doubt they would have to, but am less confident), they would also have thoughts that weren’t mine.
I’m sure this will come across as naïve or loony, but is anyone else here occasionally terrified by the idea that they might ‘wake up’ as a Boltzmann brain at some point, with a brain arranged in such a way as to subject them to terrible agony?
Perhaps a GLUT cannot actually pass the Turing Test. Consider the following extension to the thought experiment.
I have a dilemma. I must conduct a Turing Test. I have two identical rooms. You will be in one room. A GLUT will be in the other. At the end of the experiment, I must destroy one of the two rooms. The Turing Test forbids me to peer inside the rooms, and I only communicate with simple textual question/responses.
What can I do to save your life? What I would want to do is create a window between the two rooms. It would allow all the information in each room to be visible to the other. I’m not sure if this illegitimately mutates the Turing Test or not, but it does seem to avoid violating the critical rule in the Turing Test that the experimenter must not peer into the room. I then ask one of the two rooms randomly: “Please give me a single question/response I should expect from the other room.”
Assuming you are a rational person who actually wants to save your life, if I ask you this question, you will examine the GLUT, pick a single lookup, and give me the question/response. I will then ask the GLUT the question you gave me. The GLUT, being a helplessly deterministic lookup table, will have no option but to respond accordingly. I will then destroy the GLUT and save your life. Conversely, if I ask the GLUT the question, I should expect that you, who want to save your life and who know by looking through the window what the GLUT said you’d say, will answer anything other than what the GLUT said you would say. Either way, I can differentiate between you and the GLUT.
[Update: ciphergoth and FAW do a great job spotting the error in this intuition pump. To summarize, the GLUT, like you, can also include data from the window as input to its lookup table.]
I’m afraid this is just a misleading intuition pump. Eliezer has GLUT-reading powers, does he? Well, the GLUT has a body that it uses to type its responses in the Turing Test, and that body is capable of scanning the complete state of Eliezer’s brain, from which the GLUT’s enormous tables predict what he’s going to say next.
When does the GLUT’s scan occur? Before or after it has to start the Turing Test? If it does it beforehand, then it suffers predictability. But it can’t do it afterwards, without ceasing to fit the definition of a lookup table.
The point I’m making is that the difference you’re drawing between people and GLUTs isn’t really to do with their essential nature: it’s a more trivial asymmetry on things like how readable their state is and whether they have access to a private source of randomness. Fix these asymmetries and your problem case goes away.
Thanks ciphergoth; I updated the original comment to allude to the error you spotted.
A lookup table is stateless. The human is stateful. RAM beats ROM. This is not a trivial asymmetry but a fundamental asymmetry that enables the human to beat the GLUT. The algorithms:
Stateless GLUT:
Question 1 → Answer 1
Question 2 → Answer 2
Question 3 → Answer 3
…
Stateful Human:
Any Question → Any Answer other than what the GLUT said I’d say
The human’s algorithm is bulletproof against answering predictably. The GLUT’s algorithm can only answer predictably.
P.S. I wasn’t entirely sure what you meant by “private source of randomness”. I also apologize if I’m slow to grasp any of your points.
GLUT:
Task + Question + state of the human → “Any Answer other than what the GLUT said I’d say”
If the human has looked up that particular output as well then that’s another input for the GLUT, and since the table includes all possible inputs this possibility is included as well, to infinite recursion.
The problem for the GLUT is that the “state of the human” is a function of the GLUT itself (the window causes the recursion).
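A toy sketch of that circularity (invented names, nothing from the actual experiment), in which each side’s answer is defined in terms of the other’s, so evaluating either definition never bottoms out:

glutEntry :: String -> String
glutEntry humanState = humanState               -- stand-in for the table lookup

anythingOtherThan :: String -> String
anythingOtherThan s = if s == "heads" then "tails" else "heads"

glutAnswer :: String
glutAnswer = glutEntry humanAnswer              -- the table entry depends on the human's state...

humanAnswer :: String
humanAnswer = anythingOtherThan glutAnswer      -- ...and the human's answer depends on the entry

-- Forcing either glutAnswer or humanAnswer just loops forever: neither definition is well-founded.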
And the human has exactly the same problem.
You’re right; got it. That’s also what ciphergoth was trying to tell me when he said that the asymmetries could be melted away.
Thanks for update! By “private source of randomness” I mean one that’s not available to the person on the other side of the window. Another way to look at it would be the sort of randomness you use to generate cryptographic keys—your adversary mustn’t have access to the randomness you draw from it.
Surely the ‘bottom line’ is this:
Once you’ve described what a GLUT is and what it does, it’s a mistake to think that there’s anything more to be said about whether it’s “really conscious”. (Agreeing with Dennett against Chalmers:) consciousness isn’t a fundamental property like electric charge but a ‘woolly’, ‘high level’ one like health or war. Clearly there’s no reason to think that for every physical system, there is a well-defined answer to the question “is it healthy?” (or “is a war in progress?”) You can devise scenarios sufficiently weird that the questions become baffling and unanswerable. Same for consciousness.
But anyway, here’s a fantasy you might enjoy:
You teach the ‘person at the other end’ of the GLUT how to build a ‘teleport exit’ machine that can recreate a physical object from a long stream of binary data. You yourself build a corresponding teleport entrance (for simplicity, let’s suppose it’s the kind of teleporter that destroys what it’s teleporting). Then you teleport yourself and have the resulting data passed across to the GLUT (i.e. appended to the conversation so far).
Then you can shake hands with ‘the person at the other end’ and inspect the parallel universe they’re living in. Or at least, that’s the story your buddies back on earth will hear about as they continue chatting with the GLUT. Eventually, each side builds another teleport machine so that you can come home, and you even bring ‘the person at the other end’ back with you.
mulla on pippeli se haiseva pippeli on se ei ole tietoinen (“I have a willy, that smelly willy, it is not conscious”)
If anyone’s curious, google informs me that the above is in Finnish, and is both unrelated and rather rude.
Part of the brain’s function is to provide output to itself. Consequently, even though I would be quite happy saying C-3PO is conscious, I wouldn’t be so quick to say that about a GLUT.
Still, it seems remarkable to me that everyone is treating consciousness as an either/or. Homo sapiens gradually became conscious after species that weren’t. Infants gradually become conscious after a fertilized egg that was not. Let us put essentialism to rest.
And as an aside, I would state roughly that an organism is conscious iff it has theory of mind. That is, consciousness is ToM applied to oneself.
A GLUT consciousness would need to store an internal state for the consciousness it is modeling. This could be as detailed as the region of configuration space describing an equivalent brain. You have a mapping from (sensation, state) to (external output, state). Since this is essentially a precomputed physical simulation, it’s trivially capable of consciousness.
Eliminating the state parameter would lead to non-consciousness.
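A minimal sketch of that (sensation, state) to (output, state) mapping, with invented type names; this is only the shape of the idea, since the real table could never actually be stored:

import qualified Data.Map as M

type Sensation = [Int]      -- digitized sense input for one tick
type Action    = [Int]      -- digitized motor output for one tick
type MindState = Integer    -- index into the modeled brain's configuration space

type StatefulGLUT = M.Map (Sensation, MindState) (Action, MindState)

-- One tick: look up the precomputed (output, next state) for the current
-- (input, state) pair; Nothing means the table has no entry for that pair.
step :: StatefulGLUT -> MindState -> Sensation -> Maybe (Action, MindState)
step glut s input = M.lookup (input, s) glut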
“Follow The Improbability” is a wonderful thing. Thank you.
Ahemhem. Haskell is as fine a Turing-complete language as any; we just like to have our side effects explicit!
Also, can we just conclude that “consciousness” is the leakiest of surface generalizations ever? If I one day get the cog-psy skills, I am going to run a stack-trace on what makes us say “consciousness” without knowing diddly about what it is.
As a budding AI researcher, I am frankly offended by philosophers pretending to be wise like that. No. There is no such thing as “consciousness”, because it is not even a bucket to put things in. It’s metal shreds. You are some sort of self-introspective algorithm implemented on a biochemical computing substrate, so let’s make the blankness of our maps self-evident by calling it “magic” or something.
Glaring redundancy aside, isn’t “self-introspective” just as intensionally valid or void as “conscious”?
Yes, probably. It is a really good idea to taboo any and all of “conscious,” “self-,” “introspective,” “thinking,” and so on when doing AI work, or so I heard.
I don’t think we even need Turing-completeness, really. Take a total language and give it a loop-timeout of, say, 3^^^^3. It can compute anything we care about, and is still guaranteed to terminate in bounded time.
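A hedged sketch of that fuel idea (boundedIterate is an invented name): thread a step budget through the computation and stop once it is spent, so every call terminates even when the budget is astronomically large.

boundedIterate :: Integer -> (a -> Either a b) -> a -> Maybe b
boundedIterate fuel step x
  | fuel <= 0 = Nothing                               -- budget exhausted: give up
  | otherwise = case step x of
      Left x' -> boundedIterate (fuel - 1) step x'    -- not finished, keep going
      Right y -> Just y                               -- finished within the budget

-- e.g. a budgeted Collatz run:
-- boundedIterate 1000 (\n -> if n == 1 then Right "reached 1"
--                            else Left (if even n then n `div` 2 else 3 * n + 1)) 27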
(I’m reminded of Gödel’s letter discussing P=NP—should such an algorithm be found, mathematicians could be replaced by a machine that searches all proofs with less than a few million symbols; anything that couldn’t be proved with that many symbols would be of little to no interest to us.)
Or, at least, they could be replaced by people who can understand and seek out proofs that are relevant to us out of the arbitrarily large number of useless ones. So mathematicians basically.
Unless for any NP problem there exists an algorithm which solves it in just O(N^300) time.
NP=P is not the same as NP=Small P
Likewise, EXPTIME doesn’t mean Large EXPTIME—an algorithm running in exp(1e-15*N) seconds is asymptotically slower than one running in N^300 seconds, but it is faster for pretty much all practical purposes.
I once read a Usenet post or Web page along the lines of “There are two kinds of numbers: those smaller than Graham’s number and those larger than Graham’s number. Computational complexity theory traditionally only concerns itself with the latter, but only the former are relevant to real-world problems.”
Do you mean faster?
No, slower. N^300 is polynomial, exp(1e-15*N) is exponential.
Maybe it’s easier to understand them if you take the log: log(N^300) = 300 * log(N), while log(exp(1e-15 * N)) = 1e-15 * N.
Whatever positive multiplicative constants a and b you choose, a * N will at some point become bigger (so, slower) than b * log(N). Here, that happens roughly when N gets somewhat above 10^15, around 10^20 according to a simple guess.
Complexity theorists don’t know anything, but they at least know that it’s impossible to solve all NP problems in O(N^300) time. In fact they know it’s impossible to solve all P problems in O(N^300) time.
http://en.wikipedia.org/wiki/Time_hierarchy_theorem
I think Yudkowsky meant big omega.
I think the charitable interpretation is that Eliezer meant someone might figure out an O(N^300) algorithm for some NP-complete problem. I believe that’s consistent with what the complexity theorists know, it certainly implies P=NP, but it doesn’t help anyone with the goal of replacing mathematicians with microchips.
I don’t think that interpretation is necessary. A better one is that even if all NP problems could be solved in O(N^300) time, we’d still need mathematicians.
Sewing-Machine correctly pointed out, above, that this contradicts what we already know.
Are you saying that the counterfactual implication contradicts what we already know, or that the antecedent of the counterfactual implication contradicts what we already know?
I’d be surprised by the former, and the latter is obvious from that it is a counterfactual.
I’m not really comfortable with counterfactuals, when the counterfactual is a mathematical statement. I think I can picture a universe in which isolated pieces of history or reality are different; I can’t picture a universe in which the math is different.
I suppose such a counterfactual makes sense from the standpoint of someone who does not know the antecedent is mathematically impossible, and thinks rather that it is a hypothetical. I was trying to give a hypothetical (rather than a counterfactual) with the same intent, which is not obviously counterfactual given the current state-of-the-art.
To elaborate on this a little bit, you can think of the laws of physics as a differential equation, and the universe as a solution. You can imagine what would happen if the universe passed through a different state (just solve the differential equation again, with new initial conditions), or even different physics (solve the new differential equation), but how do you figure out what happens when calculus changes?
When I first read that TDT was about counterfactuals involving logically impossible worlds I was uncomfortable with that but I wasn’t sure why, and when I first read about the five-and-ten problem I dismissed it as wordplay, but then it dawned on me that the five-and-ten problem is indeed what you get if you allow counterfactuals to range over logically impossible worlds.
I was having trouble articulating why I was uncomfortable reasoning under a mathematical counterfactual, but more comfortable reasoning under a mathematical hypothesis that might turn out to be false. This comment helped me clarify that for myself.
I’ll explain my reasoning using Boolean logic, since it’s easier to understand that way, but obviously the same problem must occur with Bayesian logic, since Bayes generalizes Boole.
Suppose we are reasoning about the effects of a mathematical conjecture P, and conclude that P → Q and ¬P → Q’. Let’s assume Q and Q’ can’t both be true, because we’re interested in the difference between how the world would look if P were true and how it would look if ¬P were true. Let’s also assume we don’t have any idea which of P or ¬P is true. We can’t have also concluded P → Q’, because then the contradiction would allow us to conclude ¬P, and for the same reason we can’t have also concluded ¬P → Q. When we assume P, we have only one causal chain leading us to distinguish between Q and Q’, so we have an unambiguous model of how the universe will look under assumption P. This is true even if P turns out to be false, because we are aware of a chain of causal inferences beginning with P and leading to only one conclusion.
However, the second we conclude ¬P, we have two contradictory causal chains starting from P: P → Q, and P → ¬P → Q’, so our model of a universe where P is true is confused. We can no longer make sense of this counterfactual, because we are no longer sure which causal inferences to draw from the counterfactual.
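(Put in terms of material implication: once ¬P is established, P → X is vacuously true for every X, so P → Q and P → Q’ both hold and plain Boolean logic no longer singles out one of them. Whatever lets us prefer Q over Q’ under the counterfactual has to come from structure beyond the bare conditionals, which is exactly where the confusion sets in.)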
Are you sure? Maybe there exist concise, powerful theorems that have really long proofs.
Oh good grief, since everyone here is intent on nitpicking the observation to death, here is his bloody letter: http://rjlipton.wordpress.com/the-gdel-letter/
The philosopher is clearly simulating our universe, since as Eliezer already observed, a Giant Lookup Table won’t fit in our universe. So he may as well be simulating 10^10^10^20 copies of our universe, each with a different Giant Lookup Table, so that every possible Giant Lookup Table gets represented in some simulation. Now the improbability just comes from the law of large numbers, rather than any conscious being. The end result still talks about consciousness, but the root cause of this talking-about-consciousness is no longer a conscious entity, but the mere fact that in a large enough pool of numbers, some of them happen to encode what looks like the output of a conscious being.
Or is it that, for example, the Godel number of a Turing machine computation of a conscious entity is actually conscious? Actually, now that I think of it, I suppose it must be. Weird.
Counting should be illegal.
Your number is no more conscious than a paused Turing machine. It seems to me that whatever we mean by “consciousness” requires some degree of active reflection.
The Godel number of a Turing computation encodes not just a single configuration of the machine, but every configuration the machine passes through from beginning to end, so it’s more than just a paused Turing machine. It’s true that there’s no dynamics, but after all there are no dynamics in a timeless universe either, yet there’s reason to suspect we might live in one.
The later configurations reflect on the earlier configurations, which is, for all intents and purposes, active reflection.
To be pedantic, perhaps I should say the configurations coded by the exponents of larger primes reflect on the configurations encoded by the exponents of smaller primes, since we have the entire computation frozen in amber, as it were.
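For concreteness, a toy sketch of that encoding (assuming each machine configuration has already been coded as a positive integer): the whole computation history is frozen into one number, with the i-th configuration stored as the exponent of the i-th prime.

primes :: [Integer]
primes = sieve [2 ..] where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]

godelNumber :: [Integer] -> Integer
godelNumber history = product (zipWith (^) primes history)
-- godelNumber [3, 1, 4] == 2^3 * 3^1 * 5^4 == 15000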
(I know this is an old article; let me know if commenting on it is a faux pas of some sort)
Well, I’d definitely claim it. If we could somehow disregard all practical considerations, and conjure up a GLUT despite the unimaginably huge space requirements—then we could, presumably, hold conversations with it, read those philosophy papers that it writes, etc. How is that different from consciousness? Sure, the GLUT’s hardware is weird and inefficient, but if we agree that robots and zombies and such can be conscious, then why not GLUTs?
I can’t possibly be the only person in the world who’d ever made this observation...
My reluctance to treat GLUTs as conscious primarily has to do with the sense that, whatever conscious experience the GLUT might have, there is no reason it had to wait for the triggering event to have it; the data structures associated with that experience already existed inside the GLUT’s mind prior to the event, in a way that isn’t true for a system synthesizing new internal states that trigger/represent conscious experience.
That said, I’m not sure I understand why that difference should matter to the conscious/not-conscious distinction, so perhaps I’m just being parochial. (That is in general the conclusion I come to about most conscious/not-conscious distinctions, which mostly leads me to conclude that it’s a wrong question.)
IMO that’s an implementation detail. The GLUT doesn’t need to synthesize new internal states because it already contains all possible states. Synthesizing new internal states is an optimization that our non-GLUT brains (and computers) use in order to get around the space requirements (as well as our lack of time-traveling capabilities).
Yeah, consciousness is probably just a philosophical red herring, as far as I understand...
Yeah, I don’t exactly disagree (though admittedly, I also think intuitions about whether implementation details matter aren’t terribly trustworthy when we’re talking about a proposed design that cannot conceivably work in practice). Mostly, I think what I’m talking about here is my poorly grounded intuitions, rather than about an actual thing in the world. Still, it’s sometimes useful to get clear about what my poorly grounded intuitions are, if only so I can get better at recognizing when they distort my perceptions or expectations.
Yeah, the whole GLUT scenario is really pretty silly to begin with, so I don’t exactly disagree (as you’d say). Perhaps the main lesson from here is that it’s rather difficult, if not impossible, to draw useful conclusions from silly scenarios.
Not conscious? I’d say the GLUT is not only conscious, it has godlike powers. It can solve NP-hard problems in one lookup. It can prove anything in under a second.
It’s easy for a human to confuse epsilon for zero. In most cases this would be a useful simplification, but a GLUT can take that simplification and use it against you. A lookup table doesn’t warp space and time? Well, actually it does; it’s just that a normal one would warp it by an insignificant amount. We wouldn’t normally think of a lookup table as threatening a Death Star, but even a “small” GLUT of 10^500 entries has enough mass-energy to destroy a Death Star from 10 billion light-years away. Just by warping space and time!
Most arguments that involve a GLUT go something like this:
A GLUT is just a lookup table, and a lookup table is obviously not …
It’s anything but obvious. A book is not conscious? How do you know? Maybe consciousness isn’t a binary property; maybe we’ve just arbitrarily set a threshold, above which we call something conscious. A GLUT would have that in spades. Or maybe not. How can you be 100% confident that a lookup table has zero consciousness when you don’t even know for sure what consciousness is?
It can do what the mind it is made from can. No more, no less.
I may have missed the part where this is specified, but I imagine reading the GLUT would actually take longer than solving most problems, since it’s unimaginably large.
Why not just define consciousness in a rational, unambiguous, non-contradictory way, and then use it consistently throughout? If we are talking thought experiments here, it is up to us to make assumption(s) in our hypothesis. I don’t recall EY giving HIS definition of consciousness for his thought experiment.
However, if the GLUT behaves exactly like a human, and humans are conscious, then by definition the GLUT is conscious, whatever that means.
Things that are true “by definition” are generally not very interesting.
If consciousness is defined by referring solely to behavior (which may well be reasonable, but is itself an assumption) then yes, it is true that something that behaves exactly like a human will be conscious IFF humans are conscious.
But what we are trying to ask, at the high level, is whether there is something coherent in conceptspace that partitions objects into “conscious” and “unconscious” in something that resembles what we understand when we talk about “consciousness,” and then whether it applies to the GLUT. Demonstrating that it holds for a particular set of definitions only matters if we are convinced that one of the definitions in that set accurately captures what we are actually discussing.
If my goal is to talk about something with a particular definition, then I prefer not to use an existing word to refer to it when that word doesn’t refer unambiguously to the definition I have in mind. That just leads to confusing conversations. I’d rather just make up a new term to go with my new made-up definition and talk about that.
Conversely, if my goal is to use the word “consciousness” in a way that respects the existing usage of the term, coming up with an arbitrary definition that is unambiguous and non-contradictory but doesn’t respect that existing usage won’t quite accomplish that. I mean, I could define “consciousness” as the ability to speak fluent English; that would be unambiguous and non-contradictory and there’s even some historical precedent for it, but I consider it a poor choice of referent.
Well, casual conversation is not the same as using key terms (or words) in a scientific hypothesis, so that’s a different subject, but new terms to define new ideas are fine if it’s your hypothesis. In conversation, new definitions for old words would be confusing, and defining old words in a new way could be confusing as well. That’s not what I am saying.
Words can have multiple meanings and the dictionary gives the most popular usages. If we are appealing to the popular use then we still need to define the word. At any rate, whatever key terms we use in our hypothesis must be precise, unambiguous, non-circular, non-contradictory and used consistently throughout our presentation.
I’m saying it is important what EY meant by consciousness. If the person I quoted says we don’t know what it is… then that person doesn’t know what the existing usage of the word is, or it is not well defined.
Anyways, why would you use a poor choice of a referent?
I’m personally okay with circular definition when used appropriately. For instance, there’s the Haskell definition sketched below, which tells you how to build the natural numbers in terms of the natural numbers.
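A minimal sketch, assuming the standard corecursive definition is the one that was meant:

nats :: [Integer]
nats = 0 : map (+ 1) nats    -- the naturals: zero, followed by every natural plus one
-- take 5 nats == [0, 1, 2, 3, 4]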
If my goal is to clarify some confusing aspects of what people think about when they use the word “consciousness”, then if I end up talking about something other than what people think about when they use the word “consciousness” (for example, if I come up with some precise, unambiguous, non-circular, non-contradictory definition for the term) there’s a good chance that I’ve lost sight of my goal.
Thanx! TheOtherDave:
The point of defining one’s terms is to avoid confusion in the first place. It doesn’t matter what anyone else thinks consciousness means. Only the meaning as defined in the theorist’s hypothesis is important at this stage of the scientific method.
That’s something I don’t understand (with epistemic rationality: “The art of choosing actions that steer the future toward outcomes ranked higher in your preferences”).
This is fine when a person is making personal choices on how to act, but when it comes to knowledge (and especially the scientific method)… it seems like ultimately one would be interested in increasing one’s understanding regardless of an individual’s goals, preferences or values.
Oh well, at least we aren’t using Weber’s affectual rationality involving feelings here.
I would agree that if what I want to do is increase my understanding regardless of my ability to communicate effectively with other people (which isn’t true of me, but might be true of others), and if communicating effectively with others doesn’t itself contribute significantly to my understanding (which isn’t true of me, but might be true of others), then choosing definitions for my words that maximize my internal clarity without reference to what those words mean to others is a pretty good strategy.
You started out by asking why EY doesn’t do that, and I was suggesting that perhaps it’s because his goals weren’t the goals you’re assuming here.
Reading between the lines a bit, I infer that the question was rhetorical in the first place, and your point is that maximizing individual understanding without reference to other goals, preferences, values, or communication with others should be what EY is doing… or perhaps that it is what he’s doing, and he’s doing a bad job of it.
If so, I apologize for misunderstanding.
@TheOtherDave:
Anotherblackhat said:
In response Monkeymind said:
Not being 100% confident what consciousness is seemed to be a concern to anotherblackhat. Defining consciousness would have removed that concern.
No need to “read between the lines” as it was a straightforward question. I really didn’t understand why the definition of consciousness wasn’t laid out in advance of the thought experiment.
Defining terms allows one to communicate more effectively with others which is really important in any conversation but essential in presenting a hypothesis.
I was informed by Dlthomas that conceptspace is different than thingspace, so I think I get the gist of it now.
However, my point was, and is, that the theorist’s defs are crucial to the hypothesis and hypotheses don’t care at all about goals, preferences, and values. Hypotheses simply illustrate the actors, define the terms in the script and set the stage for the first act. Now we can move on to the theory and hopefully form a conclusion.
No need to apologize, it is easy to misunderstand me, as I am not very articulate to begin with, and as usual, I don’t understand what I know about it!
ADDED: And I still need to learn how to narrow the inferential gap!
Agreed that hypotheses don’t care about goals, preferences, or values.
Agreed that for certain activities, well-defined terms are more important than anything else.
This seems to exactly contradict your first paragraph. What if I define “conscious” as “made of cells”?
If you don’t know for sure what consciousness is, you define it as best you can, and proceed forward to see if your hypothesis is rational and the theory is possible. If you define conscious as made of cells, then everyone knows right away a GLUT is not conscious (that is, if it is not made of cells) by YOUR def., and tells you you are being irrational, please go back to the drawing board!
As far as I can tell, GLUTs have to fail Turing tests for relativistic reasons.
Presumably lookup tables need to be stored somewhere in the universe. The number of possible lookups a GLUT might have to do to respond to whatever’s happened in a Turing test so far grows exponentially with time, so the distance information has to travel from some part of the lookup table to an output device also grows exponentially with time (and Grover’s algorithm doesn’t change this). Since the information can’t travel faster than the speed of light, before long a tester would notice an exponential slowdown in the GLUT’s response times to questions...
Algorithms as dumb as GLUTs can’t sensibly respond to their environments in constant time even in principle.
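A rough version of the arithmetic (the numbers are illustrative, not from the comment): if there are, say, 10^5 distinguishable remarks per exchange, then after t exchanges the table needs on the order of 10^(5t) entries. Packed into ordinary three-dimensional space at any fixed density, the farthest entries sit at a distance growing like 10^(5t/3) cell-widths, so the light-speed round trip to fetch them, and with it the GLUT’s response latency, grows exponentially with the length of the conversation.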
I don’t see where the top posting is going, on the whole. P-zombies are always supposed to be logically possible, as Dennett says. There may be a lot of things wrong with logical possibility: it may be impossible to derive real-world consequences from it, it may not exist... but whatever it is, it is not a level of probability, even a small one. Tell a zombiephile that p-zombies are highly unlikely, and she’ll reply “sure, but they’re still logically possible”.
GLUTs pose a challenge to the GAZP because they make a kind of p-zombie (not exactly: I call them c-zombies) remotely plausible to people with physicalist and computationalist inclinations. Finding the mistake that makes c-zombies seem likely does a certain amount of work towards the GAZP, but it does nothing at all to refute the claim that zombies of some sort are logically possible. Because logical possibility is not a level of probability—even a small one.
In my (limited) understanding of the way the universe began, it was all pretty random.
Evolution seems to have been pretty random, too.
So how did we end up being conscious?
And I was also wondering, does “randomness” exist? Or was the history of the universe set from the moment of the big bang?
(Please, I’m not trying to be clever, I just want to know the answer!)
A good counter to this argument would be to find a culture with morals strongly opposed to our own, and demonstrate that it is logical and internally consistent. My inability to think of such a culture could be interpreted as evidence that a sufficiently-powerful AI would be moral. But I think it’s more likely that the morals we agree on are properties common to most moral frameworks that are workable in our particular biological and technological circumstances. You should be able to demonstrate that an AI need not be moral by our standards, by writing a story that takes place in a world with technology and biology different enough so that our morals are substandard. But nobody would publish it.
I can’t help but notice that almost all the comments here are dealing with whether or not the GLUT is conscious. Apparently the community didn’t find the “It’s completely improbable” argument satisfying, and were left still asking the question. If I thought the explanation was correct and complete, just not satisfying, I would try to reason out what sort of mind would even ask that question, and why. I didn’t find the explanation to be complete, though, so I’ll try to answer the question instead.
As James_C points out, the GLUT can be treated as a function that outputs the proper response given some input:
But we can take this a step further:
So really, the GLUT contains all information about the outputs of the consciousness it is emulating, from the moment the emulation begins until death. It’s the perfect video recording. But it’s more than that, because there isn’t just one possible time stream you can look up. You can literally look up the responses along all possible timestreams, like a choose-your-own-adventure book.
But is it conscious? Well, if consciousness is a process, then no. This is just a snapshot of that process. It’s like a 4-stroke engine frozen in the compression stroke, sitting next to an engine frozen in the power stroke, sitting next to an engine frozen in the exhaust stroke, sitting next to an engine frozen in the induction stroke, sitting next to a bunch more engines frozen in every state in between. Each state is like a frozen waterfall, and it doesn’t count as a waterfall if none of the water is actually falling. It only counts as consciousness if it is actually thinking things like “I am aware that I am aware”, not if it just has a series of thoughts frozen in place. Thoughts frozen on a sheet of paper, or on a computer screen, are just representations of what was once an active process of thinking.
As for how consciousness can emerge from “just” atoms, I don’t know. I know that it does, for many reasons, such as that brain damage affects our consciousness. I know the extremely high-level details of my thoughts, and I can intuit some details about how they seem to be interacting with my consciousness. We know some of the details about how individual neurons function, and how they are connected. We’re missing a huge gap in between those two, though. All we can do at the moment is narrow the solution space and box the problem in.
Would you deny that the function f(x) = x and the set of ordered pairs {...,(-1,-1),(0,0),(1,1),...} are merely two different representations of the same thing?
Sure, at least for integer values of x. :p
That doesn’t answer your point, though, which I presume was to appeal to a notion of interchangeable parts being equivalent, as Turing suggested. I think it would be inaccurate to say that GLUT = Bob, even if F_GLUT(input) = F_Bob(input). It’s not like comparing the same software or function running on two different operating systems. It’s comparing a function programmed in C# with throwing sand on a table and noticing that, if you interpret the pattern as dots and dashes, the Morse code happens to give the same result.
A Turing test seems like it should be valid in all real instances. The randomly generated GLUT, by definition, is one among countless trillions and trillions which gives a false positive on a Turing test. It’d be like giving 10^^^^^^^^^^^2 Turing tests via phone to an empty room, and having the random static just happen to sound like words in some of them, and having those words happen to form coherent sentences in a subset of those, and then having those coherent sentences happen to be actual rational answers to the examiner’s questions.
The difference is that, with the GLUT, you’ve created a list of all possible answers, and then rejected everything but the coherent ones that match a certain personality type and resemble a single person. You’ve then taken just these pre-recordings, from among countless trillions of trillions, and labeled them alone as your conscious GLUT and used them to pass a Turing test. However, it’s not fair to look at just this one pass, any more than it is reasonable to look at just the one pass out of trillions from the random noise on the phone line. You have to also consider the trillions and trillions of failed attempts. If all the phone-line static “words” and “sentences” had been pre-recorded before the trillions of tests, would you then say that the one tape that passed was truly conscious?
Would you say that you are conscious? I mean, after all you’re nothing but one result that came out of countless experiments done by natural selection over the past ~4 billion years.
Yes, I am conscious, and so are most or all other humans. I am aware of my own existence, therefore I am conscious by definition. It would be improbable for the universe to make me conscious but not other people, despite the physical similarities, so I’m almost 100% certain that you aren’t a zombie. It’s not clear to me when “unconscious” people are and are not conscious, in the sense of being aware of their own existence. I could probably do a web search and uncover some data that hinted at where the line should be drawn, but that would be going off on a tangent.
The distinction I am trying to make is between random chance creating a thing in itself, and random chance resulting in the same outcomes as the thing would have had. Random chance can erode a rock with lines that look like writing, or random chance can create a self replicating set of molecules that eventually evolve into intelligent life. If we look at writing on a stone, we presume that the writer is conscious, but that’s not the case if there is no writer.
I’m trying to make a map/territory distinction. Consciousness is something that actually exists, in physical form, in the real world. There are some combinations of atoms which are consciousness, and some that aren’t. When we draw our map, we naturally assume that if it looks like a duck and quacks like a duck, it’s a duck. But philosophers have asked us an interesting question:
I don’t know that I’m conscious. (And to avoid the inevitable snark, using “I” in written text doesn’t demonstrate I do.)
If I wanted to know that something is, say, a triangle, you could tell me what it means for something to be a triangle. I could then check things and say “yup, that one is a triangle”.
If I were to instead be puzzled about “red”, you couldn’t describe being red to me, but you could at least show me examples of things that are all red. I could verify that the perceptions I have of those things are similar. Furthermore, I could then point to other objects, say “these produce the same perceptions in me as the first objects”, and discover that you agree with me on those objects producing the same sensation, even if I can’t directly compare the sensation in my head to the sensation in yours.
But even this isn’t possible for “consciousness”. Nobody can perceive more than one example of consciousness. Even assuming that I am conscious and can perceive that, there’s no commonality—there aren’t any sets of things which both of us will perceive as consciousness, that I can use to generalize from to figure out whether some unknown thing I can perceive is also consciousness. If I only ever saw one red object in my lifetime, and the only red object you ever saw in your lifetime was a different object, how could either of us know that the two are the same color?
Are you arguing that you don’t know if you are conscious because you can’t be sure that the consciousness you experience and observe matches the consciousness other people claim to experience and observe?
I would argue that the word “red” is poorly defined, since it predates a modern understanding of light. For the purposes of discussion, we might define an object as “red” if the majority of the energy of the visible spectrum (430-790nm) light it reflects or otherwise emits (under uniform illumination across the visible spectrum) has a wavelength between 620 and 750 nm.
But you say this:
That may have been true at some point in history, but today I can describe being red to you in great detail, as I did above. Before we had that knowledge, though, we had weaker evidence that we were talking about the same thing, but there was still evidence. Without knowing about photons, it would still seem unlikely that we were talking about different “reds”. Occam’s razor would favor a single “red” property or set of properties over something that appeared different to different people based on some complex set of rules. The counter-evidence is colorblindness, of course, but the complexity added by having to claim that some people have eye problems is significantly less than the complexity that would be added by the theory that each person has their own version of red.
I would argue that we have a similar level of knowledge about consciousness, which is further hindered, as you point out, by the fact that we can only observe our own consciousness with high fidelity. To observe other people’s consciousnesses, we have to examine the things they say and write about their own consciousness. It’s a bit like observing the sun, and observing the stars, and trying to deduce whether the sun is a star.
Different cultures seem to have independently come up with similar sounding ideas about consciousness, so it seems like most people are more or less talking about the same sort of thing. I’m sure there are minute differences, just as there are different shades of red, and different classifications of stars. After all, the atoms and neurons in our brains are all configured slightly differently, so it would be surprising if our consciousnesses were exactly identical, neuron for neuron. Then again, it would also be surprising if our sun was exactly the same as some other star, atom for atom. But that’s why we use words like “star”, “dog”, “red” and even “consciousness” to refer to entire classes of things. In this case, “consciousness” refers to the sensation of existing, and the thing that causes us to talk about consciousness. That’s not a full definition, but it’s a start. We’ll have to wait on neuroscience, or perhaps AI research, before we can get a more precise definition.
Sort of, but not quite.
In the case of “red”, I can’t be sure that someone’s mental sensation when seeing red is the same as my mental sensation when seeing red. They’re private, after all. But I can at least be fairly sure that these sensations, however different they may be privately, still point to the same set of things. They are operationally the same. Since my perceptions of red correlate with the other person’s perceptions of red, it makes sense to conclude (with less than perfect certainty) that red objects have something in common with each other—that is, that redness is a natural category.
But I can’t apply this to consciousness. There are no consciousnesses that we can both see—we can each see at most one, and we can never see the same one that the other can. So the factor that leads me to conclude that redness is a natural category is absent for consciousness.
Have they? Different cultures have come up with similar sounding ideas on how to conclude that something has consciousness, but they (or their members) cannot ever make direct observations of two consciousnesses and say that they observe similarities between them. So the example of different cultures agreeing only lets us be pretty sure that “consciousness-labelled-observed-behavior” is a real thing, but not that one person’s direct observation of their own consciousness is the same as another person’s direct observation.
Ah, so you were talking about the possible mismatch between our perceptions of the redness of red. I could try to guess at a technical answer, since it would be highly immoral to experiment with actual people. I’m not sure it would make any difference to the consciousness argument, though.
It sounds like you do experience some sort of sensation of existing, but that you don’t talk about this sensation with words like “consciousness”, or anything else, because you can’t draw a logical link between different people’s consciousnesses to show that they are the same thing.
But I’m not talking about formal logic. I’d agree with you that, given what we know, we can’t deduce that everyone is talking about the same “consciousness”. However, we have tools in our bag besides just formal logic. One such tool is Bayes’ theorem. Do you really assign less than a 50% probability to the hypothesis that our ideas of “consciousness” are similar, rather than entirely random things? Maybe it isn’t above a 95% certainty, or 99.9%, or whatever arbitrary threshold you would choose before you can safely say that you “know” something.
Personally, I would assign a low probability to the idea that our consciousnesses are identical, but a quite high probability to the idea that they are at least similar in nature. People seem to talk about consciousness in much different ways than they talk about potatoes or space-time. There are enough differences in the rest of our brains that I would be surprised if consciousnesses were identical, but there are still patterns that are similar between most human brains. It strikes me as an unsolved but bounded question, rather than an unknowable one.
To perceive at all, regardless of the nature of that perception, is consciousness. So I think the “I” snark is warranted.
Ah. I think I understand your position a bit better now; thanks. Now let me ask you the following question:
Suppose I take a certain volume of space large enough to hold a human brain—say, a 1-by-1-by-1-cubic-meter space. Now let us suppose that I fill that space with a random arrangement of quarks and electrons. This will almost certainly produce nothing more than a shapeless blob of matter. But now suppose that I continue doing this, over and over again, until finally, after perhaps quintillions upon quintillions of trials, I finally manage to construct a human brain—simply out of random chance. (This is actually a real phenomenon speculated about by physicists, known as the Boltzmann brain.)
Assuming that this brain doesn’t die immediately due to being created in a vacuum, would you agree that it is conscious?
The vast majority of such brains would not be. They’d just be hunks of dead meat, no different from the brain of a cadaver. A tiny subset, however, would be conscious, at least until they ran out of oxygen or whatever and died.
I’m not objecting to the manner in which the GLUT is created, but merely observing that it doesn’t have a form which seems like it would give rise to consciousness. Without knowing the exact mechanism by which human brains give rise to consciousness, it is difficult to say precisely where to draw the line between calling something conscious or not conscious, but a GLUT doesn’t seem to be structured in a way that could think. I’m arguing that it is possible, at least in principle, to cheat a Turing test with a GLUT.
I gave a few more comments in response to blossom’s question if you are interested.
Probably I am not right, but it looks to me like consciousness can go on without any “inputs” and “outputs”. If I sit in a dark room alone and think about some sort of problem, then I am neither taking in any inputs at that moment nor generating any output, unless I decide to think aloud :) So if you believe I am not a zombie, then I am conscious regardless of whether there are any inputs/outputs.
One more thing. Suppose there is a GLUT and I can talk to it. So I can ask a question: “GLUT, is there a question which you cannot answer?” What do you guys think the GLUT will tell me?
Something an ordinary human being might tell you when asked the same question. Maybe “Depends on what you mean by ‘cannot answer’, but certainly there are plenty of questions I can’t answer well.”
One of the best examples of the GLUT which I used to find very convincing, is by Jaron Lanier. (https://youtu.be/RgfFFRFPvyw) Instead of randomly pulling a computer out of nowhere, it’s just the finite set of all possible computers. He uses this not to argue for zombies, but to introduce confusion and show how since hailstorms and asteroids can’t be conscious, nobody really knows what they’re talking about, therefore dualism is just as valid as reductionism. I now see where the error in reasoning is, thanks.
This is by far the silliest part of the sequences for me. Within this blog post, Yudkowsky briefly went insane and decided thought experiments have to be “probable” or “realistic” in order to be engaged with. He then refuses to answer the prompt until the last four sentences, wherein he basically admits that he doesn’t have a framework for answering it.
Suppose someone, that someone indeed being a conscious agent, creates a GLUT and then swiftly dies a horrible death so you can stop focusing on the person who made the GLUT or how it got there and answer the damn question. Is the GLUT conscious?