The AIXI algorithm amounts to a formal mathematical definition of intelligence, but in plain English we can just say intelligence is a capacity for modelling and predicting one’s environment.
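For concreteness, here is a sketch of what that formal definition looks like. This is Hutter’s expectimax equation for AIXI up to notation, reproduced from memory, so treat it as a sketch rather than a citation:

```latex
% AIXI's action choice at cycle k, with horizon m: expectimax over every
% program q that, run on the universal Turing machine U together with the
% agent's actions, reproduces the observation/reward history -- each
% environment weighted by its simplicity prior 2^{-l(q)}.
\[
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \bigl( r_k + \cdots + r_m \bigr)
       \sum_{q \,:\, U(q,\, a_{1:m}) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
\]
```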
This relates to the computability of physics and the materialist computationalist assumption in the SA itself. If we figured out the exact math underlying the universe (and our current theories are pretty close) and ran that program on an infinite or near-infinite computer, that system would be indistinguishable from the universe itself from the perspective of observers inside the simulation. Thus it would recreate the universe (albeit embedded in a parent universe). If you were to look inside that simulated universe, it would have entire galaxies, planets, humans or aliens pondering their consciousness, writing on websites, and so on.
… that system would be indistinguishable from the universe itself from the perspective of observers inside the simulation. Thus it would recreate the universe (albeit embedded in a parent universe).
I worry that there may be an instance of the Mind Projection Fallacy involved here. You are assuming there is a one-place predicate E(X) ⇔ {X has real existence}. But maybe the right way of thinking about it is as a two-place predicate J(A, X) ⇔ {Agent A judges that X has real existence}.
Example: In this formulation, Descartes’s “cogito ergo sum” might best be expressed as leading to the conclusion J(me, me). Perhaps I can also become convinced of J(you, you) and perhaps even J(sim-being, sim-being). But getting from there to E(me) seems to be Mind Projection; getting to J(me, you) seems difficult; and getting to J(me, sim-being) seems very difficult. Especially if I can’t also get to J(sim-being, me).

Very coherent; thank you.
Do your claims really depend on the optimality of AIXI? It seems to me that, using your logic, if I ran the exact math underlying the universe on, say, Wolfram Alpha, or a TI-86 graphing calculator, the simulated inhabitants would still have realistic experiences; they would just have them more slowly relative to our current frame of reality’s time-stream.
No, computationalism is separate, and was more or less assumed. I discussed AIXI as interesting just because it shows that universal intelligence is in fact simulation, and so future hyperintelligences will create beings like us just by thinking/simulating our time period (in sufficient detail). And moreover, they won’t have much of a choice (if they really want to deeply understand it).
As to your second thought, Turing machines are Turing machines, so it doesn’t matter what form the machine takes as long as it has sufficient space and time. That rules out your examples, though: you’ll need something just a tad bigger than a TI-86 or Wolfram Alpha (on today’s machines) to simulate anything on the scale of a planet, let alone a single human brain.
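A minimal sketch of that point in Python, assuming nothing beyond the standard library: the same Turing-complete computation (Rule 110 is a toy stand-in of my choosing, not anything specific to the SA) produces a bit-identical history on a fast host and a slow one, so the substrate only sets the exchange rate between simulated time and parent time.

```python
# Toy demonstration that "Turing machines are Turing machines": the same
# computation yields a bit-identical history on any substrate; a slower
# machine stretches wall-clock time without changing the computed world.
# Rule 110 is used as the stand-in computation because it is known to be
# Turing-complete; the delay parameter plays the role of a weak substrate.
import time

def rule110_step(cells):
    """One synchronous update of the Rule 110 cellular automaton,
    with fixed zero boundaries."""
    n = len(cells)
    out = [0] * n
    for i in range(n):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < n - 1 else 0
        neighborhood = (left << 2) | (cells[i] << 1) | right
        out[i] = (110 >> neighborhood) & 1  # read the bit out of rule number 110
    return out

def run(cells, steps, delay=0.0):
    history = [cells]
    for _ in range(steps):
        time.sleep(delay)  # a slow substrate only adds latency
        cells = rule110_step(cells)
        history.append(cells)
    return history

start = [0] * 31 + [1]              # a single live cell
fast = run(start, 20, delay=0.0)    # run on a fast computer
slow = run(start, 20, delay=0.01)   # run on a TI-86-grade computer
assert fast == slow                 # identical histories, different runtimes
```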
I think I’m finally starting to understand your article. I will probably have to go back and vote it up; it’s a worthwhile point.
computationalism is separate, and was more or less assumed
Do you have the link for that? I think there’s an article somewhere, but I can’t remember what it’s called.
If there isn’t one, why do you assume computationalism? I find it stunningly implausible that the mere specification of formal relationships among abstract concepts is sufficient to reify those concepts, i.e., to cause them to actually exist. For me, the very definitions of “concept,” “relationship,” and “exist” are almost enough to justify an assumption of anti-computationalism. A “concept” is something that might or might not exist; it is merely potential existence. A “relationship” is a set of concepts. I either don’t know of or don’t understand any of the insights that would suggest that everything that potentially exists and is computed therefore actually exists—computing, to me, just sounds like a way of manipulating concepts, or, at best, of moving a few bits of matter around, perhaps LED switches or a Turing tape, in accordance with a set of concepts. How could moving LED switches around make things real?
By “real,” I mean made of “stuff.” I get through a typical day and navigate my ordinary world by assuming that there is a distinction between “stuff” (matter-energy) and “ideas” (ways of arranging the matter-energy in space-time). Obviously thinking about an idea will tend to form some analog of the idea in the stuff that makes up my brain, and, if my brain were so thorough and precise as to resemble AIXI, the analog might be a very tight analog indeed, but it’s still an analog, right? I mean, I don’t take you to mean that an AIXI ‘brain’ would literally form a class-M planet inside its CPU so as to better understand the sentient beings on that planet. The AIXI brain would just be thinking about the ideas that govern the behavior of the sentient beings...and thinking about ideas, even very precisely, doesn’t make the ideas real.
I might be missing something here; I’d appreciate it if you could point out the flaw(s) in my logic.
Substrate independence, functionalism, even the generalized anti-zombie principle—all of these have been covered in some depth on LessWrong before. Much of it is in the Sequences, like nonperson predicates and some of the links from it.
If you don’t believe an emulated mind can be conscious, do you believe that your mind is noncomputable or that meat has special computational properties?
A highly detailed model of me, may not be me. But it will, at least, be a model which (for purposes of prediction via similarity) thinks itself to be Eliezer Yudkowsky. It will be a model that, when cranked to find my behavior if asked “Who are you and are you conscious?”, says “I am Eliezer Yudkowsky and I seem to have subjective experiences” for much the same reason I do.
I buy that. That sort of model could probably exist.
Your “zombie”, in the philosophical usage of the term, is putatively a being that is exactly like you in every respect—identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion—except that your zombie is not conscious.
That sort of zombie can’t possibly exist.
If you don’t believe an emulated mind can be conscious, do you believe that your mind is noncomputable or that meat has special computational properties?
It’s not that I don’t believe an emulated mind can be conscious. Perhaps it could. What boggles my mind is the assertion that emulation is sufficient to make a mind conscious—that there exists a particular bunch of equations and algorithms such that when they are written on a piece of paper they are almost certainly non-conscious, but when they are run through a Turing machine they are almost certainly conscious.
I have no opinion about whether my mind is computable. It seems likely that a reasonably good model of my mind might be computable.
I’m not sure what to make of the proposition that meat has special computational properties. I wouldn’t put it that way, especially since I don’t like the connotation that brains are fundamentally physically different from rocks. My point isn’t that brains are special; my point is that matter-energy is special. Existence, in the physical sense, doesn’t seem to me to be a quality that can be specified in an equation or an algorithm. I can solve Maxwell’s equations all day long and never create a photon from scratch.
That doesn’t necessarily mean that photons have special computational properties; it just means that even fully computable objects don’t come into being by virtue of their having been computed. I guess I don’t believe in substrate independence?
What boggles my mind is the assertion that emulation is sufficient to make a mind conscious—that there exists a particular bunch of equations and algorithms such that when they are written on a piece of paper they are almost certainly non-conscious, but when they are run through a Turing machine they are almost certainly conscious
There are several reasons this is mind-boggling, but they stem from a false intuition pump—consciousness like your own requires vastly more information than could be written down on a piece of paper.
Here is a much better way of thinking about it. From physics, neuroscience, and so on, we know that the pattern identity of human-level consciousness (as consciousness isn’t a simple Boolean quality) is essentially encoded in the synaptic junctions, and corresponds to about 10^15 bits (roughly). Those bits are you.
Now if we paused your brain activity with chemicals, or we froze it, you would cease to be conscious, but would still exist because there is the potential to regain conscious activity in the future. So consciousness as a state is an active computational process that requires energy.
So at the end of the day, consciousness is a particular computational process (energy) on a particular arrangement of bits (matter).
There are many other equivalent ways of representing that particular arrangement, and the generality of Turing machines is such that a sufficiently powerful computer is an arrangement of mass (bits) that with sufficient energy (computation) can represent any other system that can possibly exist. Anything. Including human consciousness.
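As a back-of-envelope check on where a figure like 10^15 comes from (a sketch; the neuron and synapse counts are rough order-of-magnitude estimates, and one bit per synapse is a deliberate simplification):

```python
# Back-of-envelope reconstruction of the ~10^15-bit figure. All three
# parameters are rough order-of-magnitude assumptions, not settled numbers.
neurons = 1e11             # ~10^11 neurons in a human brain
synapses_per_neuron = 1e4  # ~10^4 synaptic junctions per neuron
bits_per_synapse = 1       # crude: treat each junction as ~1 bit of state

total_bits = neurons * synapses_per_neuron * bits_per_synapse
print(f"{total_bits:.0e} bits")       # 1e+15 bits
print(f"{total_bits / 8e12:.0f} TB")  # ~125 terabytes if stored naively
```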
Thanks; voted up (along w/ the other replies) for clarity & relevance.
the pattern identity of human-level consciousness (as consciousness isn’t a simple Boolean quality) is essentially encoded in the synaptic junctions, and corresponds to about 10^15 bits (roughly). Those bits are you.
How confident are you that those 10^15 bits are you? For example, suppose I showed you the 10^15 bits on a high-fidelity but otherwise ordinary bank of supercomputers, allowed you to verify to your heart’s content that the bits matched high-fidelity scans of your wetware, and then offered to anesthetize you, remove your brain, and replace it with a silicon-based computer that would implement those 10^15 bits with the same efficiency and fidelity as your current brain. All your medical expenses would be covered and your employer(s) have agreed to provide unpaid leave. You would be sworn to secrecy, the risk of bio-incompatibility/immuno-rejection is essentially zero, and the main benefit is that every engineering test of the artificial brain has shown it to be immune to certain brain diseases such as mad cow and Alzheimer’s. On the flip side, if you’re wrong, and those 10^15 bits are not quite you, you would either cease to be conscious or have a consciousness that would be altered in ways that might be difficult to predict (unless you have a theory about how or why you might be wrong).

Would you accept the surgery? Would you hesitate?
How confident are you that those 10^15 bits are you?
Reasonably confident.
[snip mind replacement scenario]
Would you accept the surgery? Would you hesitate?
I wouldn’t accept the surgery, but not for purely philosophical reasons. I have a much lower confidence bound in the particular technology you described. I’m more confident in my philosophical position, but combine the two and it would be an unacceptable risk.
And in general even a small risk of death is to be strongly minimized.
All of that of course could change if, say, I had some brain disease.
I have a simple analogy that I think captures much of the weight of the patternist / functionalist philosophy.
What is Hamlet? I mean really, what is it? When Shakespeare wrote it into his first manuscript, was Hamlet that manuscript? Did it exist before then?
Like Hamlet, we are not the ink or the pages, but we are actually the words themselves.
Up to this moment every human mind is associated with exactly one single physical manuscript, and thus we confuse the two, but that is a limitation of our biological inheritance, not an absolute physical limitation.
I have some thought experiments that illustrate why I adopt the functionalist point of view, mainly because it results as the last consistent contender.
I have some thought experiments that illustrate why I adopt the functionalist point of view, mainly because it results as the last consistent contender.
I will read them soon.
What is Hamlet? I mean really, what is it? When Shakespeare wrote it into his first manuscript, was Hamlet that manuscript? Did it exist before then?
To stretch your analogy a bit, I think that words are the first approximation of what Hamlet is, certainly more so than a piece of paper or a bit of ink, but that the analysis cannot really end with words. The words were probably changed a bit from one edition or one printing to the next. The meaning of the words has changed some over the centuries. By social convention, it is legitimate for a director or producer of a classic play to interpret the play in his or her own style; the stage directions are incomplete enough to allow for considerable variation in the context in which the scripted lines are delivered, and yet not all contexts would be equally acceptable, equally well-received, equally deserving of the title “Hamlet.” Hamlet has been spoofed, translated, used as the unspoken subtext of instrumental music or wordless dance; all these things are also part of what it is for something to be “Hamlet.” Hamlet in one sense existed as soon as Shakespeare composed most of the words in his head, and in another sense is still coming into being today.
Likewise, your consciousness and my consciousness are certainly made up of neurons, which in turn are made of quarks and things, but it is unlikely that all my consciousness is stored in my brain; some is in my spine, some is in my body, in the way that various cells have had their epigenetic markers moved so as to activate or deactivate particular codons at particular pH levels, in the way that other people remember us and interact with us, and in the way that a familiar journal entry or scent can revive particular memories or feelings. Quarks themselves may be basic, or they may be composed of sub-sub-subatomic particles, which in turn are composed of still smaller things; perhaps it is tortoises all the way down. And if we essentially have no idea of how it is that the neurons in our brain give rise to consciousness, why should we expect a model that is accurate only to the nearest millionth of a picometer to capture us in enough fidelity to replicate consciousness?
If a toneless machine read aloud the bare words of Hamlet with the rhythm of a metronome, would it really be Hamlet? Would an adult who understood 16th century English but who had no previous exposure to drama be able to understand such a Hamlet?
What is Hamlet? I mean really, what is it? When Shakespeare wrote it into his first manuscript, was Hamlet that manuscript? Did it exist before then?
To stretch your analogy a bit, I think that words are the first approximation of what Hamlet is, certainly more so than a piece of paper or a bit of ink, but that the analysis cannot really end with words.
[..]
Hamlet in one sense existed as soon as Shakespeare composed most of the words in his head, and in another sense is still coming into being today.
I think we would agree then that the ‘substance’ of Hamlet is a pattern of ideas—information. As is a mind.
Likewise, your consciousness and my consciousness are certainly made up of neurons
Err no! No more than Hamlet is made up of ink! Our consciousness is a pattern of information, in the same sense as Hamlet. It is encoded in the synaptic junctions, in the same sense that Hamlet can be encoded on your computer’s hard drive. The neurons have an active computational role, but are also mainly the energy engine—the great bulk of the computation is done right at the storage site—in the synapses.
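A minimal sketch of that encoding claim, assuming only Python’s standard library: one line of Hamlet stored on three different ‘substrates’ is still the same pattern of information, and the pattern, not the medium, is what identifies it.

```python
# The Hamlet analogy in code: one pattern of information, three different
# "manuscripts". The identity of the pattern survives every change of medium.
import base64

line = "To be, or not to be, that is the question"

utf8_copy = line.encode("utf-8")        # manuscript no. 1: raw bytes
b64_copy = base64.b64encode(utf8_copy)  # manuscript no. 2: base64 text
int_copy = [ord(ch) for ch in line]     # manuscript no. 3: a list of numbers

# Decoding each medium recovers the identical pattern.
assert utf8_copy.decode("utf-8") == line
assert base64.b64decode(b64_copy).decode("utf-8") == line
assert "".join(map(chr, int_copy)) == line
```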
if we essentially have no idea of how it is that the neurons in our brain give rise to consciousness
We do have ideas, and this picture is getting clearer every year. Understanding consciousness is synonymous with reverse engineering the brain and building a brain simulation AI. I suspect that many people want a single brilliant idea that explains consciousness, like an e=mc^2 you can write on bumper stickers. But unfortunately it is much more complex than that. The brain has some neat tricks that are that simple (the self-organizing Hebbian dynamics in the cortex could be explained in a few equations perhaps), but it is a complex engine built out of many, many components.
If you haven’t read them already, I recommend Daniel Dennett’s “Consciousness Explained” and Hawkins’s “On Intelligence”. If you don’t have as much time, just check out the latter. Reading both gives a good understanding of the scope of consciousness, and the latter especially is a layman-friendly summary of the computational model of the brain emerging from neuroscience. Hawkins has a background that mixes neuroscience, software, and hardware—which I find is the appropriate mix for really understanding consciousness.
You don’t really understand a principle until you can actually build it.
That being said, On Intelligence is something of an advertisement for Hawkins’s venture and is now 6 years old, so it must be taken with a grain of salt.
why should we expect a model that is accurate only to the nearest millionth of a picometer to capture us in enough fidelity to replicate consciousness?
For the same reason that once you understand the architecture of a computer, you don’t need to simulate it down to the molecular level to run its software.
A similar level of scale separation exists in the brain, and moreover it must exist for our brains to perform effective computation at all. Without scale separation you just have noise, chaos, and no computational capability to accurately simulate and predict your environment.
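A toy version of that scale-separation argument (the half-adder is my own stand-in example, not anything from neuroscience): once the gate-level behavior is captured, nothing below the gates ever needs to be simulated.

```python
# Scale separation in miniature: a 1-bit adder simulated at two levels.
# Level 1: "device level" -- everything built from NAND gates.
# Level 2: "architecture level" -- the abstract arithmetic the gates implement.
# If the levels are cleanly separated, the coarse model reproduces the fine
# one exactly, so the physics below the gate level can be ignored entirely.

def nand(a, b):
    return 1 - (a & b)

def half_adder_gates(a, b):
    """Sum and carry of two bits, built only from NAND gates."""
    n1 = nand(a, b)
    s = nand(nand(a, n1), nand(b, n1))  # XOR built from four NANDs
    c = nand(n1, n1)                    # AND built from two NANDs
    return s, c

def half_adder_abstract(a, b):
    """The same device, one level of abstraction up."""
    return (a + b) % 2, (a + b) // 2

for a in (0, 1):
    for b in (0, 1):
        assert half_adder_gates(a, b) == half_adder_abstract(a, b)
```

Thanks for the reading recommendations! I will get back to you after reading both books in about 3 months.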
I think you’ve successfully analyzed your beliefs, as far as you’ve gone—it does seem that “substrate independence” is something you don’t believe in. However, “substrate independence” is not an indivisible unit; it’s composed of parts which you do seem to believe in.
For instance, you seem to accept that the highly detailed model of EY, whether that just means functionally emulating his neurons and glial cells, or actually computing his Hamiltonian, will claim to be him, for much the same reason he does. If we then simulate, at whatever level is appropriate to our simulated EY, a highly detailed model of his house and neighborhood that evolves according to the same rules that the real-life versions do, he will think the same things about them that the real-life EY does.
If we go on to simulate the rest of the universe, including all the other people in it, with the same degree of fidelity, no observation or piece of evidence other than the anthropic could tell them they’re in a simulation.
Bear in mind that nothing magical happens when these equations go from paper to computer: if you had the time, a low mathematical error rate, and enough notebook space to sit down and work everything out on paper, the consequences would be the same. It’s a slippery concept to wrap one’s intuition around, but xkcd #505 gives as good an intuition pump as I’ve seen.

what is this btw?

xkcd #505.
By “real,” I mean made of “stuff.” I get through a typical day and navigate my ordinary world by assuming that there is a distinction between “stuff” (matter-energy) and “ideas” (ways of arranging the matter-energy in space-time).
I don’t think you can make this distinction meaningful. After all, what’s an electron? Just a pattern in the electron field...
If there isn’t one, why do you assume computationalism? I find it stunningly implausible that the mere specification of formal relationships among abstract concepts is sufficient to reify those concepts, i.e., to cause them to actually exist.
This isn’t actually what I meant by computationalism (although I was using the word from memory, and my concept may differ from the philosopher’s definition).
The idea that mere specification of formal relationships, that mere math in theory, can cause worlds to exist is a separate position from basic computationalism, and I don’t buy it.
A formal mathematical system needs to actually be computed to be real. That is what causes time to flow in the child virtual universe. And in our physics, that requires energy in the parent universe. It also requires mass to represent bits. So computation can’t just arise out of nothing—it requires computational elements in a parent universe organized in the right way.
khafra’s replies are delving deeper into the philosophical background, so I don’t need to add much more.
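One way to make the energy-and-mass requirement concrete is Landauer’s principle, which puts a floor of kT ln 2 joules on erasing one bit. A back-of-envelope sketch (the 10^15-bit figure is carried over from upthread; room temperature is an assumption):

```python
# Landauer's principle: erasing one bit dissipates at least k*T*ln(2) joules.
# A sketch of what that floor implies for the ~10^15-bit pattern discussed
# upthread; room temperature (300 K) is an assumed substrate temperature.
import math

k = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0          # assumed temperature of the computing substrate, K
bits = 1e15        # the rough synaptic-state estimate from upthread

e_per_bit = k * T * math.log(2)
print(f"{e_per_bit:.2e} J per bit erased")                    # ~2.87e-21 J
print(f"{e_per_bit * bits:.2e} J to erase the full pattern")  # ~2.9e-6 J
```

At roughly three microjoules per full erasure the cost is tiny, but the point stands: it is nonzero, and it has to be paid in the parent universe.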