Given an extremely-high-resolution em with verified pointwise causal isomorphism (that is, it has been verified that emulated synaptic compartments are behaving like biological synaptic compartments to the limits of detection) and verified surface correspondence (the person emulated says they can’t internally detect any difference) then my probability of consciousness is essentially “top”, i.e. I would not bother to think about alternative hypotheses because the probability would be low enough to fall off the radar of things I should think about. Do you spend a lot of time worrying that maybe a brain made out of gold would be conscious even though your biological brain isn’t?
What if it had only been verified that the em’s overall behavior perfectly corresponds to its biological template (i.e. without corresponding subparts down to your chosen ground level)?
What if e.g. groups of neurons could be perfectly (and more efficiently) simulated, using an algorithm which doesn’t need to retain a “synapse” construct?
Do you feel that some of the biological structural features on some level of granularity need to have clearly identifiable point-to-point counterparts in the algorithm?
If so, why stop at “synaptic compartments”, why not go to some even finer-grained level? You presumably wouldn’t insist on the algorithm explicitly simulating atoms (or elementary particles); groups of those (you’d probably agree) may be abstracted away, using higher-level functionally equivalent subalgorithms.
Since in any case, “verified surface correspondence” is a given (i.e. all em-implementations aren’t differentiable from a black-box view), on what basis would you say which (functionally superfluous) parts may be optimized away, and which must be preserved? Choosing “synaptic compartments” seems like privileging the hypothesis based on what’s en vogue in literature.
This is probably another variant of the hard problem of consciousness, and unless resource requirements play no role at all, ems will likely end up being simulated as efficiently as possible (and synaptic compartments be damned), especially since the ems will profess not to notice a thing (functional equivalence).
What if it had only been verified that the em’s overall behavior perfectly corresponds to its biological template (i.e. without corresponding subparts down to your chosen ground level)?
Since whole brains are not repeatable, verifying behavioral isomorphism with a target would require a small enough target that its internal interactions were repeatable. (Then, having verified the isomorphism, you tile it across the whole brain.)
What if e.g. groups of neurons could be perfectly (and more efficiently) simulated, using an algorithm which doesn’t need to retain a “synapse” construct?
I would believe in this after someone had shown extremely high-fidelity simulation of synaptic compartments, then demonstrated the (computational) proposition that their high-level sim was equivalent.
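To make “demonstrated the (computational) proposition that their high-level sim was equivalent” concrete, here is a minimal sketch of a differential test, assuming toy placeholder models and a made-up tolerance (nothing here is real neuroscience): run the synapse-level reference and the proposed neuron-group abstraction on the same inputs and check that their outputs agree to within the limits of detection.

```python
import random

# Hypothetical differential test (toy sketch, not real neuroscience): accept a
# coarse "group of neurons" model only if it reproduces the fine-grained
# synaptic reference model's outputs on the same inputs, within tolerance.

def fine_grained_sim(inputs):
    """Reference: per-synapse simulation (placeholder arithmetic)."""
    weights = (0.2, 0.5, 0.3)
    return [sum(x * w for x, w in zip(inputs, weights))]

def coarse_grained_sim(inputs):
    """Candidate: group-level abstraction with no explicit 'synapse' construct (placeholder)."""
    return [0.2 * inputs[0] + 0.5 * inputs[1] + 0.3 * inputs[2]]

def equivalent_on(trials, tolerance=1e-9):
    """Sample random inputs and check that both models agree within tolerance."""
    for _ in range(trials):
        inputs = [random.uniform(-1.0, 1.0) for _ in range(3)]
        ref, cand = fine_grained_sim(inputs), coarse_grained_sim(inputs)
        if any(abs(a - b) > tolerance for a, b in zip(ref, cand)):
            return False
    return True

print(equivalent_on(10_000))  # True here by construction; a real claim needs real models and data
```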
Do you feel that some of the biological structural features on some level of granularity need to have clearly identifiable point-to-point counterparts in the algorithm?
No, but it’s sufficient to establish causal isomorphism. At the most extreme level, if you can simulate out a synapse by quantum fields, then you are very confident in your ability to simulate it because you have a laws-of-physics-level understanding of the quantum fields and of the simulation of the quantum fields.
Since in any case, “verified surface correspondence” is a given (i.e. all em-implementations aren’t differentiable from a black-box view)
Only in terms of very high-level abstractions being reproduced, since literal pointwise behavior is unlikely to be reproducible given thermal noise and quantum uncertainty. But it remains true that I expect any disturbance of the referent of “consciousness” to disturb the resulting agent’s tendency to write philosophy papers about “consciousness”. Note the high-level behavioral abstraction.
The combination of verified pointwise causal isomorphism of repeatable small parts, combined with surface behavioral equivalence on mundane levels of abstraction, is sufficient for me to relegate the alternative hypothesis to the world of ‘not bothering to think about it any more’. There are no worlds of reasonable probability in which both tests are simultaneously and accidentally fooled in the process of constructing a technology honestly meant to produce high-fidelity uploads.
The combination of verified pointwise causal isomorphism of repeatable small parts, combined with surface behavioral equivalence on mundane levels of abstraction, is sufficient for me to relegate the alternative hypothesis to the world of ‘not bothering to think about it any more’.
The kind of model which postulates that “a conscious em-algorithm must not only act like its corresponding human, under the hood it must also be structured like that human” would not likely stop at “… at least be structured like that human for, like, 9 orders of magnitude down from a human’s size, to the level that a human can see through an electron microscope, that’s enough, after that it doesn’t matter (much / at all)”. Wouldn’t that be kind of arbitrary and make for an ugly model?
Instead, if structural correspondence allowed for significant additional confidence that the em’s professions of being conscious were true, wouldn’t such a model just not stop, demanding “turtles all the way down”?
I guess I’m not sure what some structural fidelity can contribute (and find too contrived those models which place consciousness somewhere beyond functional equivalence, but still in the upper echelons of the substructures, conveniently not too far from the surface level), compared to “just” overall functional equivalence.
IOW, the big (viable) alternative to functional equivalence, which is structural (includes functional) equivalence, would likely not stop just a few levels down.
The combination of verified pointwise causal isomorphism of repeatable small parts, combined with surface behavioral equivalence on mundane levels of abstraction, is sufficient for me to relegate the alternative hypothesis to the world of ‘not bothering to think about it any more’.
Key word: “Sufficient”. I did not say, “necessary”.
This brings up something that has been on my mind for a long time. What are the necessary and sufficient conditions for two computations to be (homeo?)morphic? This could mean a lot of things, but specifically I’d like to capture the notion of being able to contain a consciousness, so what I’m asking is, what we would have to prove in order to say program A contains a consciousness → program B contains a consciousness. “Pointwise” isomorphism, if you’re saying what I think, seems too strict. On the other hand, allowing any invertible function to be a ___morphism doesn’t seem strict enough. For one thing, we can put any reversible computation in 1-1 correspondence with a program that merely stores a copy of the initial state of the first program and ticks off the natural numbers. Restricting our functions by, say, resource complexity, also seems to lead to both similar and unrelated issues...
Has this been discussed in any other threads?
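To spell out the “ticks off the natural numbers” construction, here is a minimal sketch in Python (a toy reversible update chosen purely for illustration): because a reversible program’s state at step t is recoverable from the initial state plus t, the trivial program that just stores the initial state and counts is in 1-1 correspondence with it, and the fact that all the real work hides in the decoding map is exactly why “any invertible function counts as a morphism” feels too permissive.

```python
# Toy sketch: any reversible computation is in 1-1 correspondence with a
# program that merely stores the initial state and ticks off the naturals.

def step(state):
    """A toy reversible update: an invertible permutation of 8-bit states."""
    return (state * 5 + 3) % 256           # invertible because gcd(5, 256) == 1

def inverse_step(state):
    """Inverse of step, confirming the computation is reversible."""
    return ((state - 3) * 205) % 256        # 205 is the multiplicative inverse of 5 mod 256

def reversible_program(initial, t):
    """State of the 'real' computation after t steps."""
    s = initial
    for _ in range(t):
        s = step(s)
    return s

def counter_program(initial, t):
    """The trivial program: remember the initial state, count ticks."""
    return (initial, t)

def decode(counter_state):
    """Invertible map from counter states onto the reversible program's states."""
    initial, t = counter_state
    return reversible_program(initial, t)   # all the actual computation lives here

# The two state sequences correspond 1-1, yet the counter program 'does' nothing:
for t in range(10):
    assert decode(counter_program(42, t)) == reversible_program(42, t)
    assert inverse_step(step(t)) == t        # reversibility check
```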
The combination of verified pointwise causal isomorphism of repeatable small parts, combined with surface behavioral equivalence on mundane levels of abstraction, is sufficient for me to relegate the alternative hypothesis to the world of ‘not bothering to think about it any more’.
The kind of model which postulates that “a conscious em-algorithm must not only act like its corresponding human, under the hood it must also be structured like that human” would not likely stop at “… at least be structured like that human for, like, 9 orders of magnitude down from a human’s size, to the level that a human can see through an electron microscope, that’s enough, after that it doesn’t matter (much / at all)”. Wouldn’t that be kind of arbitrary and make for an ugly model?
Given that an isomorphism requires checking that the relationship is one-to-one in both directions i.e. human → em, and em → human, I see little reason to worry about recursing to the absolute bottom.
Suppose that it turns out that in some sense, ems are little endian, whilst humans are big endian, yet, all other differences are negligible. Does that throw the isomorphism out the window? Of course not.
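A toy illustration of the endianness point (Python, with an assumed example value): the two byte layouts differ everywhere, yet the map between them is trivially invertible, so nothing structural is lost.

```python
# Toy sketch: little-endian and big-endian layouts of the same value differ
# byte-for-byte, but a trivial invertible map (byte reversal) relates them,
# so the isomorphism survives the change of representation.

value = 0x12345678

little = value.to_bytes(4, byteorder="little")   # b'\x78\x56\x34\x12'
big = value.to_bytes(4, byteorder="big")         # b'\x12\x34\x56\x78'

assert little != big                             # representations differ...
assert little[::-1] == big                       # ...but the map is just byte reversal
assert int.from_bytes(little, "little") == int.from_bytes(big, "big") == value
```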
Instead, if structural correspondence allowed for significant additional confidence that the em’s professions of being conscious were true, wouldn’t such a model just not stop, demanding “turtles all the way down”?
IOW, why assign “top” probability to the synaptic level, when there are further levels.
Not gold, specifically, but if I catch your intended meaning, yes.
Still digesting your other comments in this subthread, will try to further respond to those.
Why not gold specifically?
I meant to apply the “not” to the “specifically,” rather than to the “gold.” Gold isn’t what I normally think of being used as a computing substrate, though I suppose it could get used that way if we use up all the more abundant elements as we convert the solar system into a Dyson sphere (AFAIK, there may be a reason I’m unaware of not to do that).
Is isomorphism enough?
Consider gravity as an analogy.
A person who cares about bending spacetime lots is not equivalent to a person who cares about doing things isomorphic to bending spacetime lots. One will refuse to be replaced by a simulation, and the other will welcome it. One will try to make big compressed piles of things, and the other will daydream about making unfathomably big compressed piles of things.
Telling a person who cares about bending spacetime lots that, within the simulation, they’ll think they’re bending spacetime lots will not motivate them. They don’t care about thinking they’re bending spacetime. They want to actually bend spacetime. P wants X, not S(X), even though S(P) S(wants) S(X).
If isomorphism is enough then the person who cares about bending spacetime a lot, who wants X but not S(X), is somehow fundamentally misguided. A case I can think of where that would be the case is a simulated world where simulated simulations are unwrapped (and hopefully placed within observable distance, so P can find out X = S(X) and react accordingly). In other cases… well, at the moment, I just don’t see how it’s misguided to be P wanting X but not caring about S(P) S(wanting) S(X).
I don’t want to think I’m conscious. I don’t want the effects of what I would do if I were conscious to be computed out in exacting detail. I don’t want people to tell stories about times I was conscious. I want to be conscious. On the other hand, I suppose that’s what most non-simulated evolved things would say...
I spend time worrying about whether random thermal fluctuation in (for example) suns produces sporadic conscious moments simply due to random causal structure alignments. Since I also believe most potential conscious moments are bizarre and painful, that worries me. This worry is not useful when embedded in system one, a worry which the latter was not created to cope with, so I only worry in the system two philosophical curiosity sense.
I find Boltzmann Brains to be more of an unconvincing thought experiment than an actual possibility.
Is this concern altruistic/compassionate?
What does “like” mean, there? The actual biochemistry, so that pieces of Em could be implanted in a real brain, or just accurate virtualisation, like a really good flight simulator?
Flight simulator, compared to instrumentation of and examination of biology. This by itself retains the possibility that something vital was missed, but then it should show up in the surface correspondences of behavior, and in particular, if it eliminates consciousness, I’d expect what was left of the person to notice that.
and in particular, if it eliminates consciousness, I’d expect what was left of the person to notice that.
This is not intended to undermine your position (since I share it) but this seems like a surprising claim to me. From what I understand of experiments done on biological humans with parts of their brains malfunctioning there are times where they are completely incapable of recognising the state of their brain even when it is proved to them convincingly. Since ‘consciousness’ seems at least somewhat related to the parts of the brain with introspective capabilities it does not seem implausible that some of the interventions that eliminate consciousness also eliminate the capacity to notice that lack.
Are you making a claim based off knowledge of human neuropsychology that I am not familiar with or is it claim based on philosophical reasoning. (Since I haven’t spent all that much time analysing the implications of aspects of consciousness there could well be something I’m missing.)
Fair enough, anosognosia would certainly be a possibility if something did eliminate consciousness. But I would expect severe deficits in writing philosophy papers about consciousness to emerge afterward.
Fair enough, anosognosia would certainly be a possibility if something did eliminate consciousness. But I would expect severe deficits in writing philosophy papers about consciousness to emerge afterward.
I’d tend to agree, at least with respect to novel or interesting work.
If you’ll pardon some academic cynicism, it wouldn’t surprise me much if an uploaded, consciousness redacted tenured professor could go ahead producing papers that would be accepted by journals. The task of publishing papers has certain differences to that of making object level progress. In fact, it seems likely that a narrow artificial intelligence specifically competent at literary synthesis could make actual valuable progress on human knowledge of this kind without being in the remote ballpark of conscious.
In fact, it seems likely that a narrow artificial intelligence specifically competent at literary synthesis could make actual valuable progress on human knowledge of this kind without being in the remote ballpark of conscious
How would you know, or even what would make you think, that it was NOT conscious? Even if it said it wasn’t conscious, that would be evidence but not dispositive. After all, there are humans such as James and Ryle who deny consciousness. Perhaps their denial is in a narrow or technical sense, but one would expect a conscious literary synthesis program to be AT LEAST as “odd” as the oddest human being, and so some fairly extensive discussion would need to be carried out with the thing to determine how it is using the terms.
At the simplest level consciousness seems to mean self-consciousness: I know that I exist, you know that you exist. If you were to ask a literary program whether it knew it existed, how could it meaningfully say no? And if it did meaningfully say no, and you loaded it with data about itself (much as you must load it with data about art when you want it to write a book of art criticism or on aesthetics) then it would have to say it knows it exists, as much as it would have to say it knows about “art” when loaded with info to write a book on art.
Ultimately, unless you can tell me how I am wrong, our only evidence of anybody but our own consciousness is by a weak inference that “they are like me, I am conscious deep down, Occam’s razor suggests they are too.” Sure, the literary program is less like me than is my wife, but it is more like me than a clam is like me, and it is more like me in some respects (but not overall) than is a chimpanzee. I think you would have to put your confidence that the literary program is conscious at something in the neighborhood of your confidence that a chimpanzee is conscious.
How would you know, or even what would make you think, that it was NOT conscious?
I’d examine the credentials and evidence of competence of the narrow AI engineer that created it and consult a few other AI experts and philosophers who are familiar with the particular program design.
Then why require causal isomorphism at the synaptic structure in addition to surface correspondence of behaviour?
Because while it’s conceivable that an effort to match surface correspondences alone (make something which talked like it was conscious) would succeed for reasons non-isomorphic to those for which we exhibit those surface behaviors (its cause of talking about consciousness is not isomorphic to our cause), it defies all imagination that an effort to match synaptic-qua-synapse behaviors faithfully would accidentally reproduce talk about consciousness with a different cause. Thus this criterion is entirely sufficient (perhaps not necessary).
We also speak of surface correspondence, in addition to synaptic correspondence, to verify that some tiny little overlooked property of the synapses wasn’t key to high-level surface properties, in which case you’d expect what was left to stop talking about consciousness, or undergo endless epileptic spasms, etc. However it leaves the realm of things that happen in the real world, and enters the realm of elaborate fears that don’t actually happen in real life, to suppose that some tiny overlooked property of the synapses both destroys the original cause of talk about consciousness, and substitutes an entirely new, distinct and non-isomorphic cause which reproduces the behavior of talking about consciousness and thinking you’re conscious to the limits of inspection yet does not produce actual consciousness, etc.
We also speak of surface correspondence, in addition to synaptic correspondence, to verify that some tiny little overlooked property of the synapses wasn’t key to high-level surface properties, in which case you’d expect what was left to stop talking about consciousness, or undergo endless epileptic spasms, etc.
Hmm. I would expect a difference, but … out of interest, how much talk about consciousness do you think is directly caused by it (i.e. non-chat-bot-simulable)?
Because while it’s conceivable that an effort to match surface correspondences alone (make something which talked like it was conscious) would succeed for reasons non-isomorphic to those for which we exhibit those surface behaviors (its cause of talking about consciousness is not isomorphic to our cause), it defies all imagination that an effort to match synaptic-qua-synapse behaviors faithfully would accidentally reproduce talk about consciousness with a different cause.
For some value of “cause”. If you are interested in which synaptic signals cause which reports, then you have guaranteed that the cause will be the same. However, I think what we are interested in is whether reports of experience and self-awareness are caused by experience and self-awareness.
We also speak of surface correspondence, in addition to synaptic correspondence, to verify that some tiny little overlooked property of the synapses wasn’t key to high-level surface properties, in which case you’d expect what was left to stop talking about consciousness, or undergo endless epileptic spasms, etc.
However it leaves the realm of things that happen in the real world, and enters the realm of elaborate fears that don’t actually happen in real life, to suppose that some tiny overlooked property of the synapses both destroys the original cause of talk about consciousness, and substitutes an entirely new distinct and non-isomorphic cause which reproduces the behavior of talking about consciousness and thinking you’re conscious to the limits of inspection yet does not produce actual consciousness, etc.
Maybe. But your stipulation of causal isomorphism at the synaptic level only guarantees that there will only be minor differences at that level. Since you don’t care how the Em’s synapses are implemented, there could be major differences at the subsynaptic level—indeed, if your Em is silicon-based, there will be. And if those differences lead to differences in consciousness (which they could, irrespective of the point made above, since they are major differences), those differences won’t be reported, because the immediate cause of a report is a synaptic firing, which will be guaranteed to be the same!
You have, in short, set up the perfect conditions for zombiehood: a silicon-based Em is different enough from a wetware brain to reasonably have a different form of consciousness, but it can’t report such differences, because it is a functional equivalent: it will say that tomatoes are red, whatever it sees!
http://lesswrong.com/lw/p7/zombies_zombies/
http://lesswrong.com/lw/p9/the_generalized_antizombie_principle/
http://lesswrong.com/lw/f1u/causal_reference/
More generally http://wiki.lesswrong.com/wiki/Zombies_(sequence)
The argument against p-zombies is that there is no physical difference that could explain the difference in consciousness. That does not extend to silicon WBEs or AIs.
The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence there is no reason for a being not otherwise deliberately programmed to reproduce talk about consciousness to do it if it weren’t conscious. It is a corollary of this that a zombie, which is physically identical, and therefore not deliberately programmed to imitate talk of consciousness but must still reproduce it, must talk about consciousness for the same reason we do. That is, the zombies must be conscious.
A faithful synaptic-level silicon WBE, if it independently starts talking about it at all, must be talking about it for the same reason as us (i.e. consciousness), since it hasn’t been deliberately programmed to fake consciousness-talk. Or, something extremely unlikely has happened.
Note that supposing that how the synapses are implemented could matter for consciousness, even while the macro-scale behaviour of the brain is identical, is equivalent to supposing that consciousness doesn’t actually play any role in our consciousness-talk, since David Chalmers would write just as many papers on the Hard Problem regardless of whether we flipped the “consciousness” bit in every synapse in his brain.
But isn’t it still possible that a simulation that lost its consciousness would still retain memories about consciousness that were sufficient, even without access to real consciousness, to generate potentially even ‘novel’ content about consciousness?
That’s possible, although then the consciousness-related utterances would be of the form “oh my, I seem to have suddenly stopped being conscious” or the like (if you believe that consciousness plays a causal role in human utterances such as “yep, I introspected on my consciousness and it’s still there”), implying that such a simulation would not have been a faithful synaptic-level WBE, having clearly differing macro-level behaviour.
The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence there is no reason for a being not otherwise deliberately programmed to reproduce talk about consciousness to do it if it weren’t conscious.
A functional duplicate will talk the same way as whomever it is a duplicate of.
A faithful synaptic-level silicon WBE, if it independently starts talking about it at all, must be talking about it for the same reason as us (i.e. consciousness),
A WBE of a specific person will respond to the same stimuli in the same way as that person. Logically, that will be for the reason that it is a duplicate. Physically, the “reason”, or ultimate cause, could be quite different, since the WBE is physically different.
since it hasn’t been deliberately programmed to fake consciousness-talk.
It has been programmed to be a functional duplicate of a specific individual.
Or, something extremely unlikely has happened.
Something unlikely to happen naturally has happened. A WBE is an artificial construct which is exactly the same as a person in some ways, and radically different in others.
Note that supposing that how the synapses are implemented could matter for consciousness, even while the macro-scale behaviour of the brain is identical, is equivalent to supposing that consciousness doesn’t actually play any role in our consciousness-talk,
Actually it isn’t, for reasons that are widely misunderstood: kidney dialysis machines don’t need nephrons, but that doesn’t mean nephrons are causally idle in kidneys.
http://lesswrong.com/lw/p9/the_generalized_antizombie_principle/
Why? That doesn’t argue any point relevant to this discussion.
Did you read all the way to the dialogue containing this hypothetical?
Albert: “Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules.”
The following discussion seems very relevant indeed.
I don’t see anything very new here.
Charles: “Uh-uh! Your operation certainly did disturb the true cause of my talking about consciousness. It substituted a different cause in its place, the robots. Now, just because that new cause also happens to be conscious—talks about consciousness for the same generalized reason—doesn’t mean it’s the same cause that was originally there.”
Albert: “But I wouldn’t even have to tell you about the robot operation. You wouldn’t notice. If you think, going on introspective evidence, that you are in an important sense “the same person” that you were five minutes ago, and I do something to you that doesn’t change the introspective evidence available to you, then your conclusion that you are the same person that you were five minutes ago should be equally justified. Doesn’t the Generalized Anti-Zombie Principle say that if I do something to you that alters your consciousness, let alone makes you a completely different person, then you ought to notice somehow?”
How does Albert know that Charles’s consciousness hasn’t changed? It could have changed because of the replacement of protoplasm by silicon. And Charles won’t report the change because of the functional equivalence of the change.
Charles: “Introspection isn’t perfect. Lots of stuff goes on inside my brain that I don’t notice.”
If Charles’s qualia have changed, that will be noticeable to Charles—introspection is hardly necessary, since the external world will look different! But Charles won’t report the change. “Introspection” is being used ambiguously here, between what is noticed and what is reported.
Albert: “Yeah, and I can detect the switch flipping! You’re detecting something that doesn’t make a noticeable difference to the true cause of your talk about consciousness and personal identity. And the proof is, you’ll talk just the same way afterward.”
Albert’s comment is a non sequitur. That the same effect occurs does not prove that the same cause occurs. There can be multiple causes of reports like “I see red”. Because the neural substitution preserves functional equivalence, Charles will report the same qualia whether or not he still has them.
Because the neural substitution preserves functional equivalence, Charles will report the same qualia whether or not he still has them
Implying that qualia can be removed from a brain while maintaining all internal processes that sum up to cause talk of qualia, without deliberately replacing them with a substitute. In other words, your “qualia” are causally impotent and I’d go so far as to say, meaningless.
Are you sure you read Eliezer’s critique of Chalmers? This is exactly the error that Chalmers makes.
It may also help you to read making beliefs pay rent and consider what the notion of qualia actually does for you, if you can imagine a person talking of qualia for the same reason as you while not having any.
Implying that qualia can be removed from a brain while maintaining all internal processes that sum up to cause talk of qualia, without deliberately replacing them with a substitute. In other words, your “qualia” are causally impotent and I’d go so far as to say, meaningless.
Doesn’t follow. Qualia aren’t causing Charles’s qualia-talk, but that doesn’t mean they aren’t causing mine.
Kidney dialysis machines don’t need nephrons, but that doesn’t mean nephrons are causally idle in kidneys.
The epiphenomenality argument works for atom-by-atom duplicates, but not in WBE and neural replacement scenarios. If identity theory is true, qualia have the causal powers of whatever physical properties they are identical to. If identity theory is true, changing the physical substrate could remove or change the qualia.
Kidney dialysis machines don’t need nephrons, but that doesn’t mean nephrons are causally idle in kidneys.
You keep bringing up that argument, but kidney dialysis machines are built specifically to replace the functionality of kidneys (“deliberately replacing them with a substitute”). If you built a kidney-dialysis machine by a 1:1 mapping and forgot some cell type that is causally active in kidneys, the machine would not actually work. If it did, you should question if that cell type actually does anything in kidneys.
Changing the physical substrate could remove the qualia, but to claim it could remove the qualia while keeping talk of qualia alive, by sheer coincidence—implying that there’s a separate, unrelated reason why the replacement neurons talk of qualia, that has nothing to do with qualia, that was not deliberately engineered—that stretches belief past the breaking point. You’re saying, essentially: “qualia cause talk of qualia in my meatbrain, but talk of qualia is not any indication of qualia in any differently built brain implementing the same spec”. Then why are you so certain that your talk of qualia is caused by your supposed qualia, and not the neural analogue of what causes talk of qualia in WBE brains? It really does sound like your qualia are either superfluous or bizarre.
[edit] Actually, I’m still not sure I understand you. Are you proposing that it’s impossible to build a straight neuron substitute that talks of qualia, without engineering purposeful qualia-talk-emulation machinery? Is that what you mean by “functional equivalent”? I’m having serious trouble comprehending your position.
[edit] I went back to your original comment, and I think we’re using “functional equivalence” in a very different sense. To you, it seems to indicate “a system that behaves in the same way despite having potentially hugely different internal architecture”. To me, it indicates a 1:1 neuron computational replacement; keeping the computational processes while running them on a different substrate.
I agree that there may conceivably exist functionally equivalent systems that don’t have qualia, even though I have difficulty seeing how they could compute “talk of qualia” without running a sufficient-fidelity qualia simulation internally, which would again correspond to our qualia. However, I find it unlikely that anybody who is not a very very bored deity would ever actually create such a system—the qualia-talk machinery seems completely pointless to its function, as well as probably much more computationally expensive. (This system has to be self-deluding in a way consistent with a simpler system that it is not allowed to emulate) Why not just build a regular qualia engine, by copying the meat-brain processes 1:1? That’s what I’d consider the “natural” functional-equivalence system.
If you built a kidney-dialysis machine by a 1:1 mapping and forgot some cell type that is causally active in kidneys, the machine would not actually work.
I am arguing about cases of WBE and neural replacement, which are stipulated as not being 1:1 atom-for-atom replacements.
Changing the physical substrate could remove the qualia, but to claim it could remove the qualia while keeping talk of qualia alive, by sheer coincidence
Not coincidence: a further stipulation that functional equivalence is preserved in WBEs.
Are you proposing that it’s impossible to build a straight neuron substitute that talks of qualia, without engineering purposeful qualia-talk-emulation machinery?
I am noting that equivalent talk must be included in functional equivalence.
Why not just build a regular qualia engine, by copying the meat-brain processes 1:1?
You mean atom-by-atom? But it has been put to me that you only need synapse-by-synapse copies. That is what I am responding to.
Okay. I don’t think it’s possible to build a functional equivalent of a mind that talks of qualia because it has them, by 1:1 porting at the synapse level, and get something that talks of qualia without having any. You can stipulate that all day but I don’t think it can actually be done. This is contingent on neurons being the computational elements of our minds. If it turns out that most of the computation of mindstates is done by some sort of significantly lower-scale process and synaptic connections are, if not coincidental, then at least not the primary element of the computation going on in our heads, I could imagine a neural-level functional equivalent that talked of qualia while running the sort of elaborate non-emulation described in my previous comment.
But if neurons are the computational basis of our minds, and you did a 1:1 synapse-level identical functional copy, and it talked of qualia, it would strain credulity to say it talked of qualia for a different reason than the original did, while implementing the same computation. If you traced the neural impulses backwards all the way to the sensory input that caused the utterance, and verified that the neurons computed the same function in both systems, then what’s there left to differentiate them? Do you think your talk of qualia is not caused by a computation in your neurons? Qualia are the things that make us talk about qualia, or else the word is meaningless. To say that the equivalent, different-substrate system talked about qualia out of the same computational processes (at neuron level), but for different, incorrect reasons—that, to me, is either Chalmers-style dualism or some perversion of language that carries no practical value.
If Charles’s qualia have changed, that will be noticeable to Charles—introspection is hardly necessary, since the external world will look different! But Charles won’t report the change.
I don’t think I understand what you’re saying here, what kind of change could you notice but not report?
If a change to the way your functionality is implemented alters how your consciousness seems to you, your consciousness will seem different to you. If your functionality is preserved, you won’t be able to report it. You will report tomatoes are red even if they look grue or bleen to you. (You may also not be able to cognitively access—remember or think about—the change, if that is part of the preserved functionality. But if your experience changes, you can’t fail to experience it.)
Hmm, it seems to me that any change that affects your experience but not your reports must have also affected your memory. Otherwise you should be able to say that the color of tomatoes now seems darker or cooler or just different than it did before. Would you agree?
The argument against p-zombies is that there is no physical difference that could explain the difference in consciousness. That does not extend to silicon WBEs or AIs
Two things. 1) That the same electronic functioning produces consciousness if implemented on biological goo but does not if implemented on silicon seems unlikely; what probability would you assign to this being the meaningful difference? 2) If it is biological goo we need to have consciousness, why not build an AI out of biological goo? Why not synthesize neurons and stack and connect them in the appropriate ways, and have understood the whole process well enough that either you assemble it working or you know how to start it? It would still be artificial, but made from materials that can produce consciousness when functioning.
1) What seems (un)likely to an individual depends on their assumptions. If you regard consciousness as a form of information processing, then there is very little inferential gap to a conclusion of functionalism or computationalism.
But there is a Hard Problem of consciousness, precisely because some aspects—subjective experience, qualia—don’t have any theoretical or practical basis in functionalism or computer technology: we can build memory chips and write storage routines, but we can’t even get a start on building emotion chips or writing seeRed().
2) It’s not practical at the moment, and wouldn’t answer the theoretical questions.
This by itself retains the possibility that something vital was missed, but then it should show up in the surface correspondences of behavior, and in particular, if it eliminates consciousness, I’d expect what was left of the person to notice that.
Appears to contradict this comment (EY to Juno_Watt):
Since whole brains are not repeatable, verifying behavioral isomorphism with a target would require a small enough target that its internal interactions were repeatable. (Then, having verified the isomorphism, you tile it across the whole brain.)
Given an extremely-high-resolution em with verified pointwise causal isomorphism (that is, it has been verified that emulated synaptic compartments are behaving like biological synaptic compartments to the limits of detection) and verified surface correspondence (the person emulated says they can’t internally detect any difference) then my probability of consciousness is essentially “top”, i.e. I would not bother to think about alternative hypotheses because the probability would be low enough to fall off the radar of things I should think about. Do you spend a lot of time worrying that maybe a brain made out of gold would be conscious even though your biological brain isn’t?
What if it had only been verified that the em’s overall behavior perfectly corresponds to its biological template (i.e. without corresponding subparts down to your chosen ground level)?
What if e.g. groups of neurons could be perfectly (and more efficiently) simulated, using an algorithm which doesn’t need to retain a “synapse” construct?
Do you feel that some of the biological structural features on some level of granularity need to have clearly identifiable point-to-point counterparts in the algorithm?
If so, why stop at “synaptic compartments”, why not go to some even finer-grained level? You presumably wouldn’t insist on the algorithm explicitly simulating atoms (or elementary particles), groups of those (you’d probably agree) may be abstracted from, using higher-level functionally equivalent subalgorithms.
Since in any case, “verified surface correspondence” is a given (i.e. all em-implementations aren’t differentiable from a black-box view), on what basis would you say which (functionally superfluous) parts may be optimized away, and which must be preserved? Choosing “synaptic compartments” seems like privileging the hypothesis based on what’s en vogue in literature.
This is probably another variant of the hard problem of consciousness, and unless resource requirements do not play any role at all, it’s unlikely that ems won’t end up being simulated as efficiently as possible (and synaptic compartments be damned), especially since the ems will profess not to notice a thing (functional equivalence).
Since whole brains are not repeatable, verifying behavioral isomorphism with a target would require a small enough target that its internal interactions were repeatable. (Then, having verified the isomorpmism, you tile it across the whole brain.)
I would believe in this after someone had shown extremely high-fidelity simulation of synaptic compartments, then demonstrated the (computational) proposition that their high-level sim was equivalent.
No, but it’s sufficient to establish causal isomorphism. At the most extreme level, if you can simulate out a synapse by quantum fields, then you are very confident in your ability to simulate it because you have a laws-of-physics-level understanding of the quantum fields and of the simulation of the quantum fields.
Only in terms of very high-level abstractions being reproduced, since literal pointwise behavior is unlikely to be reproducible given thermal noise and quantum uncertainty. But it remains true that I expect any disturbance of the referent of “consciousness” to disturb the resulting agent’s tendency to write philosophy papers about “consciousness”. Note the high-level behavioral abstraction.
The combination of verified pointwise causal isomorphism of repeatable small parts, combined with surface behavioral equivalence on mundane levels of abstraction, is sufficient for me to relegate the alternative hypothesis to the world of ‘not bothering to think about it any more’. There are no worlds of reasonable probability in which both tests are simultaneously and accidentally fooled in the process of constructing a technology honestly meant to produce high-fidelity uploads.
The kind of model which postulates that “a conscious em-algorithm must not only act like its corresponding human, under the hood it must also be structured like that human” would not likely stop at ”… at least be structured like that human for, like, 9 orders of magnitude down from a human’s size, to the level that you a human can see through an electron microscope, that’s enough after that it doesn’t matter (much / at all)”. Wouldn’t that be kind of arbitrary and make for an ugly model?
Instead, if structural correspondence allowed for significant additional confidence that the em’s professions of being conscious were true, wouldn’t such a model just not stop, demanding “turtles all the way down”?
I guess I’m not sure what some structural fidelity can contribute (and find those models too construed which place consciousness somewhere beyond functional equivalence, but still in the upper echelons of the substructures, conveniently not too far from the surface level), compared to “just” overall functional equivalence.
IOW, the big (viable) alternative to functional equivalence, which is structural (includes functional) equivalence, would likely not stop just a few levels down.
Key word: “Sufficient”. I did not say, “necessary”.
This brings up something that has been on my mind for a long time. What are the necessary and sufficient conditions for two computations to be (homeo?)morphic? This could mean a lot of things, but specifically I’d like to capture the notion of being able to contain a consciousness, so what I’m asking is, what we would have to prove in order to say program A contains a consciousness --> program B contains a consciousness. “pointwise” isomorphism, if you’re saying what I think, seems too strict. On the other hand, allowing any invertible function to be a ___morphism doesn’t seem strict enough. For one thing we can put any reversible computation in 1-1 correspondence with a program that merely stores a copy of the initial state of the first program and ticks off the natural numbers. Restricting our functions by, say, resource complexity, also seems to lead to both similar and unrelated issues...
Has this been discussed in any other threads?
Given that an isomorphism requires checking that the relationship is one-to-one in both directions i.e. human → em, and em → human, I see little reason to worry about recursing to the absolute bottom.
Suppose that it turns out that in some sense, ems are little endian, whilst humans are big endian, yet, all other differences are negligible. Does that throw the isomorphism out the window? Of course not.
IOW, why assign “top” probability to the synaptic level, when there are further levels.
Not gold, specifically, but if I catch your intended meaning, yes.
Still digesting your other comments in this subthread, will try to further respond to those.
Why not gold specifically?
I meant to apply the “not” to the “specifically,” rather than to the “gold.” Gold isn’t what I normally think of being used as a computing substrate, though I suppose it could get used that way if we use up all the more abundant elements as we convert the solar system into a Dyson sphere (AFAIK, there may be a reason I’m unaware of not to do that).
Is isomorphism enough?
Consider gravity as an analogy.
A person who cares about bending spacetime lots is not equivalent to a person who cares about doing things isomorphic to bending spacetime lots. One will refuse to be replaced by a simulation, and the other will welcome it. One will try to make big compressed piles of things, and the other will daydream about making unfathomably big compressed piles of things.
Telling a person who cares about bending spacetime lots that, within the simulation, they’ll think they’re bending spacetime lots will not motivate them. They don’t care about thinking they’re bending spacetime. They want to actually bend spacetime. P wants X, not S(X), even though S(P) S(wants) S(X).
If isomorphism is enough then the person who cares about bending spacetime a lot, who wants X but not S(X), is somehow fundamentally misguided. A case I can think of where that would be the case is a simulated world where simulated simulations are unwrapped (and hopefully placed within observable distance, so P can find out X = S(X) and react accordingly). In other cases.… well, at the moment, I just don’t see how it’s misguided to be P wanting X but not care about S(P) S(wanting) S(X).
I don’t want to think I’m conscious. I don’t want the effects of what I would do if I were conscious to be computed out in exacting detail. I don’t want people to tell stories about times I was conscious. I want to be conscious. On the other hand, I suppose that’s what most non-simulated evolved things would say...
I spend time worrying about whether random thermal fluctuation in (for example) suns produces sporadic conscious moments simply due to random causal structure alignments. Since I also believe most potential conscious moments are bizarre and painful, that worries me. This worry is not useful when embedded in systems one, a worry which the latter was not created to cope with, so I only worry in the system two philosophical curiosity sense.
I find Boltzmann Brains to be more of an unconvincing thought experiment than an actual possibility.
Is this concern altruistic/compassionate?
What does “like” mean, there? The actual biochemistry, so that pieces of Em could be implanted in a real brain, or just accurate virtualisation, like a really good flight simulator?
Flight simulator, compared to instrumentation of and examination of biology. This by itself retains the possibility that something vital was missed, but then it should show up in the surface correspondences of behavior, and in particular, if it eliminates consciousness, I’d expect what was left of the person to notice that.
This is not intended to undermine your position (since I share it) but this seems like a surprising claim to me. From what I understand of experiments done on biological humans with parts of their brains malfunctioning there are times where they are completely incapable of recognising the state of their brain even when it is proved to them convincingly. Since ‘consciousness’ seems at least somewhat related to the parts of the brain with introspective capabilities it does not seem implausible that some of the interventions that eliminate consciousness also eliminate the capacity to notice that lack.
Are you making a claim based off knowledge of human neuropsychology that I am not familiar with or is it claim based on philosophical reasoning. (Since I haven’t spent all that much time analysing the implications of aspects of consciousness there could well be something I’m missing.)
Fair enough, anosognosia would certainly be a possibility if something did eliminate consciousness. But I would expect severe deficits in writing philosophy papers about consciousness to emerge afterward.
I’d tend to agree, at least with respect to novel or interesting work.
If you’ll pardon some academic cynicism, it wouldn’t surprise me much if an uploaded, consciousness redacted tenured professor could go ahead producing papers that would be accepted by journals. The task of publishing papers has certain differences to that of making object level progress. In fact, it seems likely that a narrow artificial intelligence specifically competent at literary synthesis could make actual valuable progress on human knowledge of this kind without being in the remote ballpark of conscious.
How would you know, or even what would make you think, that it was NOT conscious? Even if it said it wasn’t conscious, that would be evidence but not dispositive. After all, there are humans such as James and Ryle who deny consciousness. Perhaps their denial is in a narrow or technical sense, but one would expect a conscious literary synthesis program to be AT LEAST as “odd” as the oddest human being, and so some fairly extensive discussion would need to be carried out with the thing to determine how it is using the terms.
At the simplest level consciousness seems to mean self-consciousness: I know that I exist, you know that you exist. If you were to ask a literary program whether it knew it existed, how could it meaningfully say no? And if it did meaningfully say no, and you loaded it with data about itself (much as you must load it with data about art when you want it to write a book of art criticism or on aesthetics) then it would have to say it knows it exists, as much as it would have to say it knows about “art” when loaded with info to write a book on art.
Ultimately, unless you can tell me how I am wrong, our only evidence of anybody but our own consciuosness is by a weak inference that “they are like me, I am conscious deep down, Occam’s razor suggests they are too.” Sure the literary program is less like me than is my wife, but it is more like me than a clam is like me, and it is more like me in some respects (but not overall) than is a chimpanzee. I think you would have to put your confidence that the literary program is conscious at something in the neighborhood of your confidence that a chimpanzee is conscious.
I’d examine the credentials and evidence of competence of the narrow AI engineer that created it and consult a few other AI experts and philosophers who are familiar with the particular program design.
Then why require causal isomporphism at the synaptic structure in addition to surface correspondence of behaviour?
Because while it’s conceivable that an effort to match surface correspondences alone (make something which talked like it was conscious) would succeed for reasons non-isomorphic to those why we exhibit those surface behaviors (its cause of talking about consciousness is not isomorphic to our cause) it defies all imagination that an effort to match synaptic-qua-synapse behaviors faithfully would accidentally reproduce talk about consciousness with a different cause. Thus this criterion is entirely sufficient (perhaps not necessary).
We also speak of surface correspondence. in addition to synaptic correspondence, to verify that some tiny little overlooked property of the synapses wasn’t key to high-level surface properties, in which case you’d expect what was left to stop talking about consciousness, or undergo endless epileptic spasms, etc. However it leaves the realm of things that happen in the real world, and enters the realm of elaborate fears that don’t actually happen in real life, to suppose that some tiny overlooked property of the synapses both destroys the original cause of talk about consciousness, and substitutes an entirely new distinct and non-isomorphic cause which reproduces the behavior of talking about consciousness and thinking you’re conscious to the limits of inspection yet does not produce actual consciousness, etc.
Hmm. I would expect a difference, but … out of interest, how much talk about consciousness do you think is directly caused by it (ie non-chat-bot-simulable.)
For some value of “cause”. If you are interested in which synaptic signals cause which reports, then you have guaranteed that the cause will be the same. However, I think what we are interested in is whether reports of experience and self-awareness are caused by experience and self-awareness
We also speak of surface correspondence. in addition to synaptic correspondence, to verify that some
Maybe, But your stipulation of causal isomorphism at the synaptic level only guarantees that there will only be minor differences at that level, Since you don’t care how the Ems synapses are implemented there could be major differences at the subsynaptic level .. indeed, if your Em is silicon-based, there will be. And if those differences lead to differences in consciousness (which they could, irrespective of the the point made above, since they are major differences), those differences won’t be reported, because the immediate cause of a report is a synaptic firing, which will be guaranteed to be the same!
You have, in short, set up the perfect conditions for zombiehood: a silicon-based Em is different enough to a wetware brain to reasonably have a different form of consciousness, but it can’t report such differences, because it is a functional equivalent..it will say that tomatoes are red, whatever it sees!
http://lesswrong.com/lw/p7/zombies_zombies/
http://lesswrong.com/lw/p9/the_generalized_antizombie_principle/
http://lesswrong.com/lw/f1u/causal_reference/
More generally http://wiki.lesswrong.com/wiki/Zombies_(sequence)
The argument against p-zombies is that there is no physical difference that could explain the difference in consciousness. That does not extend to silicon WBEs or AIs.
The argument against p-zombies is that the reason for our talk of consciousness is literally our consciousness, and hence there is no reason for a being not otherwise deliberately programmed to reproduce talk about consciousness to do it if it weren’t conscious. It is a corollary of this that a zombie, which is physically identical, and therefore not deliberately programmed to imitate talk of consciousness but must still reproduce it, must talk about consciousness for the same reason we do. That is, the zombies must be conscious.
A faithful synaptic-level silicone WBE, if it independently starts talking about it at all, must be talking about it for the same reason as us (ie. consciousness), since it hasn’t been deliberately programmed to fake consciousness-talk. Or, something extremely unlikely has happened.
Note that supposing that how the synapses are implemented could matter for consciousness, even while the macro-scale behaviour of the brain is identical, is equivalent to supposing that consciousness doesn’t actually play any role in our consciousness-talk, since David Chalmers would write just as many papers on the Hard Problem regardless of whether we flipped the “consciousness” bit in every synapse in his brain.
But isn’t it still possible that a simulation that lost its consciousness would still retain memories about consciousness that were sufficient, even without access to real consciousness, to generate potentially even ‘novel’ content about consciousness?
That’s possible, although then the consciousness-related utterances would be of the form “oh my, I seem to have suddenly stopped being conscious” or the like (if you believe that consciousness plays a causal role in human utterances such as “yep, i introspected on my consciousness and it’s still there”), implying that such a simulation would not have been a faithful synaptic-level WBE, having clearly differing macro-level behaviour.
A functional duplicate will talk the same way as whomever it is a duplicate of.
A WBE of a specific person will respond to the same stimuli in the same way as that person. Logically, that will be for the reason that it is a duplicate, Physically, the “reason” or, ultimate cause, could be quite different, since the WBE is physically different.
It has been programmed to be a functional duplicate of a specific individual.,
Something unlikely to happen naturally has happened. A WBE is an artificial construct which is exactly the same as an person in some ways,a nd radically different in others.
Actually it isn’t, for reasons that are widely misunderstood: kidney dyalisis machines don’t need nephrons, but that doens’t mean nephrons are causally idle in kidneys.
http://lesswrong.com/lw/p9/the_generalized_antizombie_principle/
Why? That doesn’t argue any point relevant to this discussion.
Did you read all the way to the dialogue containing this hypothetical?
The following discussion seems very relevant indeed.
I don’t see anything very new here.
How does Albert know that Charles;s consciousness hasn’t changed? It could have changed becasue of the replacement of protoplasm by silicon. And Charles won’t report the change because of the functional equivalence of the change.
If Charles’s qualia have changed, that will be noticeable to Charles—introspection is hardly necessary, sinc ethe external world wil look different! But Charles won’t report the change. “Introspection” is being used ambiguously here, between what is noticed and what is reported.
Albert’s comment is a non sequitur. That the same effect occurs does not prove that the same cause occurs, There can mutliple causes of reports like “I see red”. Because the neural substitution preserves funcitonal equivlance, Charles will report the same qualia whether or not he still has them,
Implying that qualia can be removed from a brain while maintaining all internal processes that sum up to cause talk of qualia, without deliberately replacing them with a substitute. In other words, your “qualia” are causally impotent and I’d go so far as to say, meaningless.
Are you sure you read Eliezer’s critique of Chalmers? This is exactly the error that Chalmers makes.
It may also help you to read making beliefs pay rent and consider what the notion of qualia actually does for you, if you can imagine a person talking of qualia for the same reason as you while not having any.
Doesn’t follow, Qualia aren’t causing Charles’s qualia-talk, but that doens’t mean thery aren’t causing mine. Kidney dyalisis machines don’t need nephrons, but that doens’t mean nephrons are causally idle in kidneys.
The epiphenomenality argument works for atom-by-atom duplicates, but not in WBE and neural replacement scenarios. If identity theory is true, qualia have the causal powers of whatever physical properties they are identical to, and changing the physical substrate could remove or change the qualia.
You keep bringing up that argument, but kidney dialysis machines are built specifically to replace the functionality of kidneys (“deliberately replacing them with a substitute”). If you built a kidney-dialysis machine by a 1:1 mapping and forgot some cell type that is causally active in kidneys, the machine would not actually work. If it did, you should question if that cell type actually does anything in kidneys.
Changing the physical substrate could remove the qualia, but to claim it could remove the qualia while keeping talk of qualia alive, by sheer coincidence—implying that there’s a separate, unrelated reason why the replacement neurons talk of qualia, that has nothing to do with qualia, that was not deliberately engineered—that stretches belief past the breaking point. You’re saying, essentially: “qualia cause talk of qualia in my meatbrain, but talk of qualia is not any indication of qualia in any differently built brain implementing the same spec”. Then why are you so certain that your talk of qualia is caused by your supposed qualia, and not the neural analogue of what causes talk of qualia in WBE brains? It really does sound like your qualia are either superfluous or bizarre.
[edit] Actually, I’m still not sure I understand you. Are you proposing that it’s impossible to build a straight neuron substitute that talks of qualia, without engineering purposeful qualia-talk-emulation machinery? Is that what you mean by “functional equivalent”? I’m having serious trouble comprehending your position.
[edit] I went back to your original comment, and I think we’re using “functional equivalence” in a very different sense. To you, it seems to indicate “a system that behaves in the same way despite having potentially hugely different internal architecture”. To me, it indicates a 1:1 neuron computational replacement; keeping the computational processes while running them on a different substrate.
I agree that there may conceivably exist functionally equivalent systems that don’t have qualia, even though I have difficulty seeing how they could compute “talk of qualia” without running a sufficient-fidelity qualia simulation internally, which would again correspond to our qualia. However, I find it unlikely that anybody who is not a very very bored deity would ever actually create such a system—the qualia-talk machinery seems completely pointless to its function, as well as probably much more computationally expensive. (This system has to be self-deluding in a way consistent with a simpler system that it is not allowed to emulate) Why not just build a regular qualia engine, by copying the meat-brain processes 1:1? That’s what I’d consider the “natural” functional-equivalence system.
I am arguing about cases of WBE and neural replacement, which are stipulated as not being 1:1 atom-for-atom replacements.
Not coincidence: a further stipulation that functional equivalence is preserved in WBEs.
I am noting that equivalent talk must be included in functional equivalence.
You mean atom-by-atom? But it has been put to me that you only need synapse-by-synapse copies. That is what I am responding to.
Okay. I don’t think it’s possible to build a functional equivalent of a mind that talks of qualia because it has them, by 1:1 porting at the synapse level, and get something that talks of qualia without having any. You can stipulate that all day but I don’t think it can actually be done. This is contingent on neurons being the computational elements of our minds. If it turns out that most of the computation of mindstates is done by some sort of significantly lower-scale process and synaptic connections are, if not coincidental, then at least not the primary element of the computation going on in our heads, I could imagine a neural-level functional equivalent that talked of qualia while running the sort of elaborate non-emulation described in my previous comment.
But if neurons are the computational basis of our minds, and you did a 1:1 synapse-level identical functional copy, and it talked of qualia, it would strain credulity to say it talked of qualia for a different reason than the original did, while implementing the same computation. If you traced the neural impulses backwards all the way to the sensory input that caused the utterance, and verified that the neurons computed the same function in both systems, then what’s there left to differentiate them? Do you think your talk of qualia is not caused by a computation in your neurons? Qualia are the things that make us talk about qualia, or else the word is meaningless. To say that the equivalent, different-substrate system talked about qualia out of the same computational processes (at neuron level), but for different, incorrect reasons: that, to me, is either Chalmers-style dualism or some perversion of language that carries no practical value.
I don’t think I understand what you’re saying here: what kind of change could you notice but not report?
If a change to the way your functionality is implemented alters how your consciousness seems to you, your consciousness will seem different to you. If your functionality is preserved, you won’t be able to report it. You will report that tomatoes are red even if they look grue or bleen to you. (You may also not be able to cognitively access, i.e. remember or think about, the change, if that is part of the preserved functionality. But if your experience changes, you can’t fail to experience it.)
Hmm, it seems to me that any change that affects your experience but not your reports must have also affected your memory. Otherwise you should be able to say that the color of tomatoes now seems darker or cooler or just different than it did before. Would you agree?
Two things. 1) That the same electronic functioning produces consciousness when implemented in biological goo but not when implemented in silicon seems unlikely; what probability would you assign to this being the meaningful difference? 2) If it is biological goo we need for consciousness, why not build an AI out of biological goo? Why not synthesize neurons, stack and connect them in the appropriate ways, and understand the whole process well enough that you either assemble it already working or know how to start it? It would still be artificial, but made from materials that can produce consciousness when functioning.
1) What seems (un)likely to an individual depends on their assumptions. If you regard consciousness as a form of information processing, then there is very little inferential gap to a conclusion of functionalism or computationalism. But there is a Hard Problem of consciousness precisely because some aspects, subjective experience and qualia, don’t have any theoretical or practical basis in functionalism or computer technology: we can build memory chips and write storage routines, but we can’t even get a start on building emotion chips or writing seeRed().
2) It’s not practical at the moment, and wouldn’t answer the theoretical questions.
This comment:
EY to Kawoomba:
Appears to contradict this comment:
EY to Juno_Watt