Where is lesswrong.com? “On the internet” would be the naive answer, but there’s no part of the internet we could naively recognize as being lesswrong.com. A bunch of electrical impulses get interpreted as ones and zeroes, which get translated into a certain language, which converts them into another language (English), which each mind interacting with the site translates in its own way. At the base level, before minds get involved, there’s nothing more complex than a bunch of magnets and electric signals and some servers and so on (I’m not a computer person, so cut me some slack on the details). Yet, out of all of that emerges your post, this comment, and so on.
I know that it is in principle possible to understand how all of this comes together, but I also know that I do not in fact understand it. If I were really to look at how complex this site is—down to the level of the chemist who makes the fertilizer to supply the farmer who feeds the truck driver who delivers the petroleum that gets refined into the plastic that makes the keyboard of the engineer who maintains the power plant that keeps the server running—I have absolutely no idea what’s going on, and probably never would, even if I devoted my entire life to understanding how this website comes together. In fact, I have good reason to believe there are parts of what’s going on that I don’t even know are parts of what’s going on—I don’t even understand the basic underlying structure at a complete level. But if a bunch of people were really dedicated to it, they could probably figure it out, so that by asking the group of them, you could figure out what you needed to know about how the site works; in other words, it is in principle understandable, even if no one understands it in its entirety.
There is thus nothing particularly problematic about saying, “So, I don’t get how this whole consciousness thing works, but there’s probably no magic involved,” just as there’s no magic (excepting EY’s magic) involved in putting this site together. Saying, “I can’t naively figure out how some extremely complicated system works, therefore, the answer is: magic!” is simply not a reasonable solution. It is possible that there is something more going on in the brain than we can currently understand, but it seems exceedingly unlikely that it is in principle un-understandable.
If I were to say to you that negative numbers can be made by adding together positive numbers (you just have to add them together in the right way), that would sound strange and wrong, yes? If you start at 1, and keep adding 1, you do not expect your sum to equal −1 (or the square root of −1, or an apple) at any stage. When people say that they do not see how piling up atoms can give rise to color, meaning, consciousness, etc., they are engaged in this sort of reasoning. They’re saying: I may not know every property that very large numbers / very large piles of atoms would exhibit, but it would be magic to get that property from those ingredients.
The problem with the analogy is that we know a whole lot about numbers—math is an artificial language that we created and whose axioms we decided upon. How do you know enough about matter and neurons to know that it relates to consciousness in the way that adding positive numbers relates to negative numbers or apples? But I’ve made this point before.
What I would find more interesting is an explanation of what magic would do here. It seems obvious that our perception of a homogeneous shade of pink is, in some significant way, related to lightwave frequencies, retinas, and neurons. Let’s assume there is some “magic” involved that in turn converts these physical phenomena into an experience. Wouldn’t it have to interact with neurons and such, so that it generates an experience of pink and not an experience of strawberry-rhubarb pie? If it’s epiphenomenal, how could it accomplish this, and how could it be meaningful? If it’s not epiphenomenal, how does it interact with actual matter? Why can’t we detect it?
It’s quite clear that when it comes to how consciousness works, the current best answer is, “We don’t get it, but it has something to do with the brain and neurons.” Answering, “We don’t get it, but it has something to do with the brain and neurons and magic” appears to be an inferior answer.
This may be a cheap shot around these parts, but the non-materialist position feels a lot like an argument for the existence of God.
This is perfect and I’m not sure there is much more to say.
How do you know enough about matter and neurons to know that it relates to consciousness in the way that adding positive numbers relates to negative numbers or apples?

It’s our theories of matter which are the problem—and which are clear enough for me to say that something is missing. My position as stated here actually is an identity theory. Experiences are a part of the brain and are causally relevant. But the ontology of physics is wrong, and the attempted reduction of phenomenology to that ontology is also wrong. Instead, phenomenology is giving us a glimpse of the true ontology. All that we see directly is the inner ontology of the conscious experience itself, but one supposes that there is some relationship to the ontology of everything else.
$\sum_{n=0}^{\infty} 2^n$ “=” $-1$.
That is a bit tongue in cheek, but there are divergent sums that are used in serious physical calculations.
I’m curious about this. More details please!
These mostly crop up in quantum field theory, where various formal expressions have infinite values. These can often be “regularized” to give finite results, or at least turned into a form that, while still infinite, can be “renormalized” by such means as considering various terms as referring to observed values, rather than the “bare values”, which are carefully tweaked (often taking limits as they go to zero) in a coordinated way, so that the observed values remain okay.
Letting s be the sum above, in some sense what we’re “really” saying is that s = 1 + 2s, which can be seen by formal manipulation. This has two solutions in the one-point compactification of the complex numbers: infinity, and −1. When doing things like summing Feynman diagrams, we can have similar things where a physical propagator is essentially described as a bare propagator plus perturbative terms that should be written in terms of products of propagators, leading again to infinite series that diverge (several interlocked infinite series, actually—the photon propagator should include terms with each charged particle, the electron should include terms with photon intermediates, etc.).
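To spell out the formal manipulation (my own working, just making the step above explicit): $s = 1 + 2 + 4 + 8 + \cdots = 1 + 2(1 + 2 + 4 + \cdots) = 1 + 2s$, so formally $s = -1$. (In the 2-adic numbers the series actually converges, and its limit really is −1.)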
IIRC, the Casimir effect can be explained by using zeta function regularization to sum up the contributions of an infinite number of vacuum modes, though it is certainly not the only way to perform the calculation.
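To give a concrete flavour of that trick (my own illustration of the standard zeta-regularization identity, not part of the comment above): the divergent mode sum is replaced by the analytic continuation of the Riemann zeta function, $\sum_{n=1}^{\infty} n$ “=” $\zeta(-1) = -\tfrac{1}{12}$. In the one-dimensional toy version of the Casimir calculation, the vacuum energy $\tfrac{1}{2}\sum_n \hbar\omega_n$ with $\omega_n = n\pi c/d$ is thereby assigned the finite value $-\pi\hbar c/(24d)$; the full parallel-plate calculation uses $\zeta(-3) = \tfrac{1}{120}$ in the same way.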
http://cornellmath.wordpress.com/2007/07/28/sum-divergent-series-i/ and the next two posts are a nice introduction to some of these methods.
Wikipedia has a fair number of examples:
http://en.wikipedia.org/wiki/1_−_2_%2B_3_−_4_%2B_·_·_·
http://en.wikipedia.org/wiki/1_−_2_%2B_4_−_8_%2B_·_·_·
http://en.wikipedia.org/wiki/1_%2B_1_%2B_1_%2B_1_%2B_·_·_·
http://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_·_·_·
Explicit physics calculations I do not have at the ready.
EDIT: please do not take the descriptions of the physics above too seriously. It’s not quite what people actually do, but it’s close enough to give some of the flavor.
wnoise hits it out of the park!
Can you clarify why

When people say that they do not see how piling up atoms can give rise to color, meaning, consciousness, etc., they are engaged in this sort of reasoning.

does not also apply to the piling up of degrees of freedom in a quantum monad?
I have another question, which I expect someone has already asked somewhere, but I doubt I’ll be able to find your response, so I’ll just ask again. Would a simulation of a conscious quantum monad by a classical computation also be conscious?
Answer to the first question: everything I say here about redness applies to the other problem topics. The final stage of a monadic theory is meant to have the full ontology of consciousness. But it may have an intermediate stage in which it is still just mathematical.
Answer to the second question: No. In monadology (at least as I conceive it), consciousness is only ever a property of individual monads, typically in very complex states. Most matter consists of large numbers of individual monads in much simpler states, and classical computation involves coordinating their interactions so that the macrostates implement the computation. So you should be able to simulate the dynamics of a single complex monad using many simple monads, but that’s all.
And would such a computation claim to be conscious, p-zombie-style?
If the conscious being it was simulating would do so, then yes.
On the general topic of simulation of conscious beings, it has just occurred to me… Most functionalists believe a simulation would also be conscious, but a giant look-up table would not be. But if the conscious mind consists of physically separable subsystems in interaction—suppose you try simulating the subsystems with look-up tables, at finer and finer grains of subdivision. At what point would the networked look-up-tables be conscious?
Would a silicon-implemented Mitchell Porter em, for no especial reason (lacking consciousness, it can have none), attempt to reimplement itself in a physical system with a quantum monad?
In terms of current physics, a monad is supposed to be a lump of quantum entanglement, and there are blueprints for a silicon quantum computer in which the qubits are dopant phosphorus atoms. So consciousness on a chip is not in itself a problem for me, it just needs to be a quantum chip.
But you’re talking about an unconscious classical simulation. OK. The intuition behind the question seems to be: because of its beliefs about consciousness, the simulation will think it can’t be conscious in its current form, and will try to make itself so. It doesn’t sound very likely. But it’s more illuminating to ask a different question: what happens when an unconscious simulation of a conscious mind, holding a theory about consciousness according to which such a simulation cannot be conscious, is presented with evidence that it is such a simulation itself?
First, we should consider the conscious counterpart of this, namely: an actually conscious being, with a theory of consciousness, is presented with evidence that it is the sort of thing that cannot be conscious according to its theory. To some extent this is what happened to the human race. The basic choice is whether to change the theory or to retain it. It’s also possible to abandon the idea of consciousness; or even to retain the concept of consciousness but decide that it doesn’t apply to you.
So, let’s suppose I discover that my skull is actually full of silicon chips, not neurons, and that they appear to only be performing classical computations. This would be a rather shocking discovery for a lot of mundane reasons, but let’s suppose we get those out of the way and I’m left with the philosophical problem. How do I respond?
To begin with, the situation hasn’t changed very much! I used to think that I had a skull full of neurons which appear to only be performing classical computations. But I also used to think that, in reality, there was probably something quantum happening as well, and so took an interest in various speculations about quantum effects in the brain. If I find my brain to in fact be made of silicon chips, I can still look for such effects, and they really might be there.
To take the thought experiment to its end, I have to suppose that the search turns up nothing. The quantum crosstalk is too weak to have any functional significance. Where do I turn then? But first, let’s forget about the silicon aspect here. We can pose the thought experiment in terms of neurons. Suppose we find no evidence of quantum crosstalk between neurons. Everything is decoherent, entanglement is at a minimum. What then?
There are a number of possibilities. Of course, I could attempt to turn to one of the many other theories of consciousness which assume that the brain is only a classical computer. Or, I could turn to physics and say the quantum coherence is there, but it’s in some new, weakly interacting particle species that shadows the detectable matter of the brain. Or, I could adopt some version of the brain-in-a-vat hypothesis and say, this simply proves that the world of appearances is not the real world, and in the real world I’m monadic.
Now, back to the original scenario. If we have an unconscious simulation of a mind with a monadic theory of consciousness, and the simulation discovers that it is apparently not a monad, it could react in any of those ways. Or rather, it could present us with the simulation of such reactions. The simulation might change its theory; it might look for more data; it might deny the data. Or it might simulate some more complicated psychological response.
Thanks for clearing up the sloppiness of my query in the process of responding to it. You enumerated a number of possible responses, but you haven’t committed a classical em of you to a specific one. Are you just not sure what it would do?
It’s a very hypothetical scenario, so being not sure is, surely, the correct response. But I revert to pondering what I might do if in real life it looks like conscious states are computational macrostates. I would have to go on trying to find a perspective on physics whereby such states exist objectively and have causal power, and in which they could somehow look like or be identified with subjective experience. Insofar as my emulation concerned itself with the problem of consciousness, it might do that.
Thanks for entertaining this thought experiment.
I think Eliezer Yudkowsky’s remarks on giant lookup tables in the Zombie Sequence just about cover the interesting questions.
The reason lookup tables don’t work is that you can’t change them. So you can use a lookup table for, e.g., the shape of an action potential (essentially the same everywhere), but not for the strengths of the connections between neurons, which are neuroplastic.
A LUT can handle change, if it encodes a function of type (Input × State) → (Output × State).
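A minimal sketch of what that type signature means in practice (my own toy example; the table entries, states, and inputs are invented purely for illustration):

```python
# Toy lookup table of type (Input x State) -> (Output x State).
# "Change" is handled by threading the state through each lookup,
# not by mutating the table itself.

TABLE = {
    # (input, state): (output, next_state)
    ("ping",  "idle"): ("pong", "busy"),
    ("ping",  "busy"): ("wait", "busy"),
    ("reset", "busy"): ("done", "idle"),
    ("reset", "idle"): ("done", "idle"),
}

def step(inp, state):
    """Pure lookup: no computation beyond indexing the table."""
    return TABLE[(inp, state)]

state = "idle"
for inp in ["ping", "ping", "reset", "ping"]:
    out, state = step(inp, state)
    print(f"{inp!r} -> {out!r} (state is now {state!r})")
```

On this framing, the neuroplastic connection strengths mentioned above would live in the State component rather than in the table itself, so plasticity by itself doesn’t rule the lookup-table construction out.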
Since I can manually implement any computation a Turing machine can, the lookup table for some subsystem of me will have to contain the “full computation” table that checks every possible computation for whether it halts before I die. I submit such a table is not very interesting.
I submit that such a table is not particularly less interesting than a Turing machine.