This “unidentified physical reaction” would also need to be non-Turing-computable to have any relevance. Otherwise, you’re just putting forth another zombie-world argument.
It seems extremely improbable that a zombie-world would have evolved naturally (evolved creatures coincidentally speaking about their consciousness without actually being conscious), but I don’t see why a zombie-world couldn’t be simulated by a programmer who studied how to compute the effects of consciousness, without actually needing to have the phenomenon of consciousness itself.
The same way you don’t need to have an actual solar system inside your computer, in order to compute the orbits of the planets—but it’d be very unlikely to have accidentally computed them correctly if you hadn’t studied the actual solar system.
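For concreteness, a toy sketch of that point (my own illustration, in made-up units where G·M = 1): a few lines of Python compute an orbit, yet nothing inside the machine is a planet.

```python
# Toy two-body orbit: symplectic Euler integration around a unit-mass sun.
# The program manipulates numbers that *describe* a planet; it contains none.

def orbit(x, y, vx, vy, dt=0.001, steps=6283):
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -x / r3, -y / r3            # Newtonian gravity, G*M = 1
        vx, vy = vx + ax * dt, vy + ay * dt  # update velocity first...
        x, y = x + vx * dt, y + vy * dt      # ...then position (symplectic)
    return x, y

# A circular orbit of radius 1 has period 2*pi, so after ~6283 steps of
# 0.001 the computed planet is back near its starting point (1, 0).
print(orbit(1.0, 0.0, 0.0, 1.0))
```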
At this point, we have no empirical reason to think that this unidentified mysterious something has any existence at all, outside of a mere intuitive feeling that it “must” be so.
Do you have any empirical reason to think that consciousness is about computation alone? To claim Occam’s razor on this is far from obvious, as the only examples of consciousness (or talking about consciousness) currently concern a certain species of evolved primate with a complex brain of some tens of billions of neurons, all of which have chemical and electrical effects; they aren’t just doing computations in an abstract mathematical universe sans context.
Unless you assume the whole universe is pure mathematics, so there’s no difference between the simulation of a thing and the thing itself. Which means there’s no difference between the mathematical model of a thing and the thing itself. Which means the map is the territory. Which means Tegmark IV.
And Tegmark IV is likewise just a possibility, not a proven thing.
It seems extremely improbable that a zombie-world would have evolved naturally (evolved creatures coincidentally speaking about their consciousness without actually being conscious), but I don’t see why a zombie-world couldn’t be simulated by a programmer who studied how to compute the effects of consciousness, without actually needing to have the phenomenon of consciousness itself.
This is a “does the tree make a sound if there’s no-one there to hear it?” argument.
That is, it assumes that there is a difference between “effects of consciousness” and “consciousness itself”—in the same way that a connection is implied between “hearing” and “sound”.
That is, the argument hinges on the definition of the word whose definition is being questioned, and is an excellent example of intuitions feeling real.
That is, it assumes that there is a difference between “effects of consciousness” and “consciousness itself”—in the same way that a connection is implied between “hearing” and “sound”.
Not quite. What I’m saying is there might be a difference between the computation of a thing and the thing itself. It’s basically an argument against the inevitability of Tegmark IV.
A Turing machine can certainly compute everything there is to know about lifting rocks and their effects—but it still can’t lift a rock.
Likewise a Turing machine could perhaps compute everything there was to know about consciousness and its effects—but perhaps it still couldn’t actually produce one.
Or at least I’ve not been convinced that it’s a logical impossibility for it to be otherwise; nor that I should consider it my preferred possibility that consciousness is solely computation, nothing else.
Wouldn’t the same reasoning mean that all physical processes have to be solely computation? So it’s not just “a Turing machine can produce consciousness”, but “a Turing machine can produce a new physical universe”, and therefore “Yeah, Turing Machines can lift real rocks, though it’s real rocks in a subordinate real universe, not in ours”.
What I’m saying is there might be a difference between the computation of a thing and the thing itself. It’s basically an argument against the inevitability of Tegmark IV.
I think you mean, it’s the skeleton of an argument you could advance if there turned out to actually be some meaning to the phrase “difference between the computation of a thing and the thing itself”.
Or at least I’ve not been convinced that it’s a logical impossibility for it to be otherwise;
Herein lies the error: it’s not up to anybody else to convince you it’s logically impossible, it’s up to you to show that you’re even describing something coherent in the first place.
Really, this is another LW-solved philosophical problem; you just have to grok the quantum physics sequence, in addition to the meanings-of-words one: when you understand that physics itself is a machine, it dissolves the question of what “simulation” or “computation” mean in this context. That is, you’ll realize that the only reason you can even ask the question is because you’re confusing the labels in your mind with real things.
Really, this is another LW-solved philosophical problem; you just have to grok the quantum physics sequence, in addition to the meanings-of-words one: when you understand that physics itself is a machine, it dissolves the question of what “simulation” or “computation” mean in this context.
Could you point to the concrete articles that supposedly dissolve this question? I find the question of what “computation” means to be still very much open, and the source of a whole lot of confusion. This is best seen when people attempt to define what constitutes “real” computation as opposed to mere table lookups, replays, state machines implemented by random physical processes, etc.
Needless to say, this situation doesn’t give one the license to jump into mysticism triumphantly. However, as I noted in a recent thread, I observe an unpleasant tendency on LW to use the notions of “computation,” “algorithms,” etc. as semantic stop signs, considering how ill-understood they presently are.
Could you point to the concrete articles that supposedly dissolve this question? I find the question of what “computation” means to be still very much open, and the source of a whole lot of confusion.
Please note that I did not say the sequence explains “computation”; merely that it dissolves the intuitive notion of a meaningful distinction between a “computation” or “simulation” and “reality”.
In particular, an intuitive understanding that people are made of interchangeable particles and nothing else, dissolves the question of “what happens if somebody makes a simulation of you?” in the same way that it dissolves “what happens if there are two copies of you… which one’s the real one?”
That is, the intuitive notion that there’s something “special” about the “original” or “un-simulated” you is incoherent, because the identity of entities is an unreal concept existing only in human brains’ representation of reality, rather than in reality itself.
The QM sequence demonstrates this; it does not, AFAIR, attempt to rigorously define “computation”, however.
This is best seen when people attempt to define what constitutes “real” computation as opposed to mere table lookups, replays, state machines implemented by random physical processes, etc.
Those sound like similarly confused notions to me—i.e., tree-sound-hearing questions, rather than meaningful ones. I would therefore refer such questions to the “usage of words” sequence, especially “How an Algorithm Feels From The Inside” (which was my personal source of intuitions about such confusions).
Please note that I did not say the sequence explains “computation”; merely that it dissolves the intuitive notion of a meaningful distinction between a “computation” or “simulation” and “reality”.
Fair enough, though I can’t consider these explanations settled until the notion of “computation” itself is fully clarified. I haven’t read the entire corpus of sequences, though I think I’ve read most of the articles relevant to these questions, and what I’ve seen of the attempts there to deal with the question of what precisely constitutes “computation” is, in my opinion, far from satisfactory. Further non-trivial insight is definitely still needed there.
Fair enough, though I can’t consider these explanations settled until the notion of “computation” itself is fully clarified.
Personally, I would rather ask someone posing that question to show what isn’t “computation”. That is, the word itself seems rather meaningless, outside of its practical utility (i.e. “have you done that computation yet?”). Trying to pin it down in some absolute sense strikes me as a definitional argument… i.e., one where you should first be asking, “Why do I care what computation is?”, and then defining it to suit your purpose, or using an alternate term for greater precision.
You say it has a practical utility, and yet you call it meaningless? If rationality is about winning, how can something with practical utility be meaningless?
Here’s what I mean by computation: The handling of concepts and symbolic representations of concepts and mathematical abstractions in such a way that they return a causally derived result.
What isn’t computation? Pretty much everything else. I don’t call gravity a computation, I call it a phenomenon. Because gravity doesn’t act on symbols and abstractions (like numbers), it acts on real things. A division or a multiplication is a computation, because it acts on numbers. A computation is a map, not a territory, the same way that numbers are a map, not a territory.
What I don’t know is what you mean by “physics is a machine”. For that statement to be meaningful you’d have to explain what it would mean for physics not to be a machine. If you mean that physics is deterministic and causal, then sure. If you mean that physics is a computation, then I’ll say no: you’ve not yet proven to me that the bottom layer of reality is about mathematical concepts playing with themselves.
That’s the Tegmark IV hypothesis, and it’s NOT a solved issue, not by a long shot.
Here’s what I mean by computation: The handling of concepts and symbolic representations of concepts and mathematical abstractions in such a way that they return a causally derived result...I don’t call gravity a computation...Because gravity doesn’t act on symbols and abstractions (like numbers), it acts on real things.
A computer (a real one, like a laptop) also acts on real things. For example, if it has a hard drive, then as it writes to the hard drive it is modifying the surface of the platters. A computer (a real one) can be understood as operating on abstractions. For example, you might spell-check a text—which describes what it is doing as an operation on an abstraction, since the text itself is an abstraction. A text is an abstraction rather than a physical object because you could take the very same text and write it to the hard drive, or hold it in memory, or print it out, thereby realizing the same abstract thing—the text—in three distinct physical ways. In summary, the same computer activity can be described as an operation on an abstraction—such as spell-checking a text—or as an action on a real thing—such as modifying the physical state of the memory.
So the question is whether gravity can be understood as operating on abstractions. Since a computer such as a laptop, which is acting on real, physical things, can also be understood as operating on abstractions, then I see no obvious barrier to understanding gravity as operating on abstractions.
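To make the multiple-realizability point concrete, a small sketch of my own (the “spell check” is a trivial stand-in): one abstract operation applied to one abstract text, realized once in RAM and once on disk.

```python
import tempfile

text = "teh cat sat on teh mat"  # one abstract text...

# ...realized physically in two distinct ways:
in_memory = text  # as charge states in RAM
with tempfile.NamedTemporaryFile("w+", delete=False) as f:
    f.write(text)  # as bits on a storage device
    f.seek(0)
    from_disk = f.read()

def spellcheck(s):
    # Trivial stand-in for a spell checker: correct one known misspelling.
    return s.replace("teh", "the")

# The operation is defined on the abstraction, so it's indifferent to
# which physical realization supplied the characters.
assert spellcheck(in_memory) == spellcheck(from_disk) == "the cat sat on the mat"
```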
A text is an abstraction rather than a physical object because you could take the very same text and write it to the hard drive, or hold it in memory, or print it out, thereby realizing the same abstract thing—the text—in three distinct physical ways. In summary, the same computer activity can be described as an operation on an abstraction—such as spell-checking a text—or as an action on a real thing—such as modifying the physical state of the memory.
This is similar to my point, but the other way around, sort of.
My point is that the “abstraction” exists only in the eye of the observer (mind of the commentator?), rather than having any independent existence.
In reality, there is no computer, just atoms. No “computation”, just movement. It is we as observers who label these things to be happening, or not happening, and argue about what labels we should apply to them.
None of this is a problem, until somebody gets to the question of whether something really is the “right” label to apply, only usually they phrase it in the form of whether something can “really” be something else.
But what’s actually meant is, “is this the right label to apply in our minds?”, and if they’d simply notice that the question is not about reality, but their categorization of arbitrarily-chosen chunks of reality, they’d stop being confused and arguing nonsense.
If computation isn’t the real thing, only the movement is, then a simulation (which is the complete representation of a thing using a different movement which can however be seen as performing the same computation) is not the thing itself, and you have no reason to believe that the phenomenon of consciousness can be internally experienced in a computer simulation, that an algorithm can feel anything from the inside. Because the “inside” and the “outside” are themselves just labels we use.
and if they’d simply notice that the question is not about reality, but their categorization of arbitrarily-chosen chunks of reality, they’d stop being confused and arguing nonsense.
The question of qualia and subjective experience isn’t a mere “confusion”.
If computation isn’t the real thing, only the movement is, then a simulation (which is the complete representation of a thing using a different movement which can however be seen as performing the same computation) is not the thing itself
You keep using that word “is”, but I don’t think it means what you think it means. ;-)
Try making your beliefs pay rent: what differences do you expect to observe in reality, between different states of this “is”?
That is, what different predictions will you make, based on “is” or “is not” in your statement?
Consider that one carefully, before you continue.
The question of qualia and subjective experience isn’t a mere “confusion”.
Really? Would you care to explain what differences you predict to see in the world, as a result of the existence or non-existence of these concepts?
I don’t see that we have need of such convoluted hypotheses, when the simpler explanation is merely that our neural architecture more closely resembles Eliezer’s Network B than Network A… which is a very modest hypothesis indeed, since Network B has many evolutionary advantages compared to Network A.
Try making your beliefs pay rent: what differences do you expect to observe in reality, between different states of this “is”?
.
Would you care to explain what differences you predict to see in the world, as a result of the existence or non-existence of these concepts?
Sure. Here’s two simple ones:
If consciousness isn’t just computation, then I don’t expect to ever observe waking up as a simulation in a computer.
If consciousness isn’t just computation, then I don’t expect to ever see evolved or self-improved (not intentionally designed to be similar to humans) electronic entities discussing among themselves the nature of qualia and subjective inner experience.
Consider that one carefully, before you continue.
You’ve severely underestimated my rationality if all this time you thought I hadn’t even considered the question before I started my participation in this thread.
Try making your beliefs pay rent: what differences do you expect to observe in reality, between different states of this “is”?
.
That doesn’t look like a reply, there.
Sure. Here’s two simple ones:
If consciousness isn’t just computation, then I don’t expect to ever observe waking up as a simulation in a computer.
If consciousness isn’t just computation, then I don’t expect to ever see evolved or self-improved (not intentionally designed to be similar to humans) electronic entities discussing among themselves the nature of qualia and subjective inner experience.
And if consciousness “is” just computation, what would be different? Do you have any particular reason to think you would observe any of those things?
You’ve severely underestimated my rationality if all this time you thought I hadn’t even considered the question before I started my participation in this thread.
You missed the point of that comment entirely, as can be seen by you moving the quotation away from its referent. The question to consider was what the meaning of “is” was, in the other statement you made. (It actually makes a great deal of difference, and it’s that difference that makes the rest of your argument moot.)
Since the reply was just below both of your quotes, then no, the single dot wasn’t one, it was an attempt to distinguish the two quotes.
I have to estimate the probability that you’re purposefully trying to make me look as if I intentionally avoided answering your question, while knowing that I didn’t.
Like your earlier “funny” response about how I supposedly favoured euthanizing paraplegics, you don’t give me the vibe of responding in good faith.
Do you have any particular reason to think you would observe any of those things?
Of course. If consciousness is computation, then I expect that if my mind’s computation is simulated in a Turing machine, half the time my next experience will be of me inside the machine. By repeating the experiment enough times, and never waking up inside, I’d accumulate enough evidence that I’d no longer expect my subjective experience to ever find itself inside an electronic computation.
And if evolution stumbled upon consciousness by accident, and it’s solely dependent on some computational, internal-to-the-algorithm component, then an evolution of mere algorithms in a Turing machine should also eventually be expected to stumble upon consciousness, and to produce similar discussions about consciousness once it reaches the point of simulating minds of sufficient complexity.
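To put a number on the waking-up experiment (a toy Bayes calculation of my own; the 50%-per-trial figure is just the assumption stated above, and the 50% prior is purely for illustration):

```python
# Bayesian update on repeatedly waking up *outside* the machine.
# H = "consciousness is just computation": each trial then gives a 1/2
# chance of waking up inside; given not-H, I always wake up outside.

def posterior_h(prior, n_outside):
    likelihood_h = 0.5 ** n_outside   # all n awakenings outside, given H
    likelihood_not_h = 1.0            # guaranteed outside, given not-H
    joint_h = likelihood_h * prior
    return joint_h / (joint_h + likelihood_not_h * (1.0 - prior))

for n in (1, 5, 10, 20):
    print(n, posterior_h(0.5, n))
# Twenty straight "outside" awakenings push P(H) below one in a million.
```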
The question to consider was what the meaning of “is” was, in the other statement you made.
Can you make a complete question? What exactly are you asking? The statement you quoted had more than one “is” in it. Four or five of them.
I think we’re done here. As far as I can tell, you’re far more interested in how you appear to other people than actually understanding anything, or at any rate questioning anything. I didn’t ask you questions to get information from you, I asked you questions to help you dissolve your confusion.
In any event, you haven’t grokked the “usage of words” sequence sufficiently to have a meaningful discussion on this topic. So, I’m going to stop trying now.
You didn’t expect me to have actual answers to your questions, and you think that my having answers indicates a problem with my side of the discussion, instead of updating your probabilities toward the conclusion that I wasn’t the one confused: perhaps you were.
I certainly am interested in understanding things, and questioning things. That’s why I asked questions to you, which you still haven’t answered:
what do you mean when you say that physics is a machine? (How would the world be different if physics wasn’t a machine?)
what do you mean when you call “computation” a meaningless concept outside its practical utility? (What concept is there that is meaningful outside its practical utility?)
As I answered your questions, I think you should do me the reciprocal courtesy of answering these two.
For a thorough answer to your first question, study the sequences—especially the parts debunking the supernatural, explaining the “merely real”, and the basics of quantum mechanics.
For the second, I mean only that asking whether something “is” a computation or not is a pointless question… as described in “How an Algorithm Feels From The Inside”.
Thanks for the suggestion, but I’ve read them all. It seems to me you are perhaps talking about reductionism, which admittedly is a related issue, but even reductionists don’t need to believe that the simulation of a thing equals the thing simulated.
I do wonder if you’ve read http://lesswrong.com/lw/qr/timeless_causality/ . If Eliezer himself is holding onto the concept of “computation” (and “anticipation” too), what makes you think that any of the other sequences he wrote dissolves that term?
Thanks for the suggestion, but I’ve read them all.
Well, that won’t do any good unless you also apply them to the topic at hand.
even reductionists don’t need to believe that the simulation of a thing equals the thing simulated.
That depends entirely on what you mean by the words… which you haven’t actually defined, as far as I can tell.
You also seem to think I’m arguing some particular position about consciousness or the simulability thereof, but that isn’t actually so. I am only attempting to dispel confusion, and that’s a very different thing.
I’ve been saying only that someone who claims that there is some mysterious thing that prevents consciousness from being simulated is going to have to produce a coherent reduction of both “simulate” and “consciousness” in order to say something that isn’t nonsensical, because both of those notions are tied too strongly to inbuilt biases and intuitions.
That is, anything you try to say about this subject without a proper reduction is almost bound to be confused rubbish, sprinkled with repeated instances of the mind projection fallacy.
If Eliezer himself is holding onto the concept of “computation”
I rather doubt it, since that article says:
Such causal links could be required for “computation” and “consciousness”—whatever those are.
AFAICT, the article is silent on these points, having nothing in particular to say about such vague concepts… in much the same way that Eliezer leaves open the future definition of a “non-person predicate”.
Of course. If consciousness is computation, then I expect that if my mind’s computation is simulated in a Turing machine, half the time my next experience will be of me inside the machine.
Some of Chalmers’ ideas concerning ‘Fading and dancing qualia’ may be relevant here. With a little ingenuity, and as long as we’re prepared to tolerate ridiculously impractical thought experiments, we could think up a scenario where more and more of your brain’s computational activity is delegated to a computer until the computer is doing all of the work. It doesn’t seem plausible that this would somehow cause your conscious experience to progressively fade away without you noticing.
Then we could imagine repeatedly switching the input/output connections of the simulated brain between your actual body and an ‘avatar’ in a simulated world. It doesn’t seem plausible that this would cause your conscious experience to keep switching on and off without you noticing.
The linked essay is a bit long for me to read right now, but I promise to do so over the weekend.
As to your particular example, the problem is that I can also think up an even more ridiculously impractical thought experiment: one in which more and more of that computer’s computational activity is in turn delegated to a group of abacus-using monks—and then it doesn’t seem plausible for my conscious experience to keep on persisting, when the monks end up being the ones doing all the work...
It’s the bullet I’m not yet prepared to bite—but if I do end up doing so, despite all my intuition telling me no, that’ll be the point where I’ll also have to believe Tegmark IV. P(Tegmark IV | consciousness can persist in the manipulations of abaci) ~= 99% for me...
A computer (a real one, like a laptop) also acts on real things.
Of course, which is why the entirety of the existence of a real computer is beyond that of a mere Turing machine: it can, for example, fall and hurt someone’s legs.
For example, if it has a hard drive, then as it writes to the hard drive it is modifying the surface of the platters. A computer (a real one) can be understood as operating on abstractions.
Yes, which is why there’s a difference between the computation (the map) and the physical operation (the territory). A computer has an undisputed physical reality and performs undisputed physical acts (the territory). These can be understood as performing computations (a map). The two are different, and therefore the computation is different from the physical operation.
And yet, pjeby argues that to think the two are different (the computation from the physical operation) is mere “confusion”. It’s not confusion, it’s the frigging difference between map and territory!
So the question is whether gravity can be understood as operating on abstractions. Since a computer such as a laptop, which is acting on real, physical things, can also be understood as operating on abstractions, then
My question is about whether gravity can be fully understood as only operating on abstractions. Since real computers can’t be fully understood that way, gravity faces the same barrier.
Yes, which is why there’s a difference between the computation (the map) and the physical operation (the territory). A computer has an undisputed physical reality and performs undisputed physical acts (the territory). These can be understood as performing computations (a map). The two are different, and therefore the computation is different from the physical operation.
It is possible to have more than one map of a given territory. You can have a street map, but also a topographical map. Similarly, a given physical operation can be understood in more than one way, as performing more than one computation. One class of computation is simulation. The physical world (the whole world) can be understood as performing a simulation of a physical world. Whereas only a small part of the laptop is directly involved in spell-checking a text document, the whole laptop, in fact the whole physical world, is directly involved in the simulation.
The computation “spell checking a text” is different from the physical operation. This is easy to prove. For example, had the text been stored in a different place in physical memory then the same computation (“spell checking a text”) could still be performed. There need not be any difference in the computation—for example, the resulting corrected text might be exactly the same regardless of where in memory it is stored. But what about the simulation? If so much as one molecule were removed from the laptop, then the simulation would be a different simulation. We easily proved that the computation “spell checking a text” is different from the physical operation, but we were unable to extend this to proving that the computation “simulating a physical world” is different from the physical operation.
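To make the “more than one map of a given territory” point concrete, here is a toy illustration of my own: a single physical bit pattern read under two different interpretations.

```python
import struct

bits = b"Hi!!"                         # one physical state: four bytes

as_text = bits.decode("ascii")         # map 1: a piece of text
as_int = struct.unpack("<I", bits)[0]  # map 2: an unsigned 32-bit integer

print(as_text)  # Hi!!
print(as_int)   # 555837768 -- same territory, two different maps
```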
As a sidenote, whenever I try to explain my position I get downvoted some more. Are these downvotes for mere disagreement, or is there something else that the downvoter objects to?
That’s the Tegmark IV hypothesis, and it’s NOT a solved issue, not by a long shot.
Not quite. The Tegmark IV hypothesis is that all possible computations exist as universes. This is considerably more controversial than what pjeby said, which was only that the universe we happen to be in is a computation.
what pjeby said, which was only that the universe we happen to be in is a computation.
Um, no, actually, because I wouldn’t make such a silly statement. (Heck, I don’t even claim to be able to define “computation”!)
All I said was that trying to differentiate “real” and “just a computation” doesn’t make any sense at all. I’m urging the dissolution of that question as nonsensical, rather than trying to answer it.
Basically, it’s the sort of question that only arises because of how the algorithm feels from the inside, not because it has any relationship to the universe outside of human brains.
If a computation can be a universe, and a universe a computation, then you’re 90% of the way to Tegmark IV anyway.
The Tegmark IV hypothesis is a conjunction of “the universe is a computation” and “every computation exists as a universe with some weighting function”. The latter part is much more surprising, so accepting the first part does not get you 90% of the way to proving the conjunction.
The Tegmark IV hypothesis is a conjunction of “the universe is a computation” and “every computation exists as a universe with some weighting function”.
I interpret it more as an (attempted) dissolution of “existing as a universe” to “being a computation”. That is, it should be possible to fully describe the claims made by Tegmark IV without using the words “exist”, “real”, etc., and it should furthermore be possible to take the question “Why does this particular computation I’m in exist as a universe?” and unpack it into cleanly-separated confusion and tautology.
So I wouldn’t take it as saying much more than “there’s nothing you can say about ‘existence’ that isn’t ultimately about some fact about some computation” (or, I’d prefer to say, some fixed structure, about which there could be any number of fixed computations). More concretely, if this universe is as non-magical as it appears to be, then the fact that I think I exist or that the universe exists is causally completely determined by concrete facts about the internal content of this universe; even if this universe didn’t “exist”, then as long as someone in another universe had a fixed description of this universe (e.g. a program sufficient to compute it with arbitrary precision), they could write a program that calculated the answer to the question “Does ata think she exists?” pointed at their description of this universe (and whatever information would be needed to locate this copy of me, etc.), and the answer would be “Yes”, for exactly the same reasons that the answer is in fact “Yes” in this universe.
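As a deliberately toy sketch of that outside calculation (everything here, the three-tick “universe” and its single observer, is a made-up stand-in rather than a serious model):

```python
# A three-tick toy "universe" given purely as a program. Whether it
# "exists" never enters the calculation; the observer's report is fixed
# by the description alone.

def step(state):
    t, reports_existing = state
    return (t + 1, t + 1 >= 2)  # stand-in physics: the observer 'wakes' at t=2

def observer_reports_existence(initial_state, ticks):
    state = initial_state
    for _ in range(ticks):
        state = step(state)
    return state[1]

print(observer_reports_existence((0, False), ticks=3))  # True
```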
So it seems that whether this universe exists has nothing to do with whether or not we think it does, in which case it’s probably purely epiphenomenal. (This reminds me a lot of the GAZP and zombie arguments in general.)
I’m actually having a hard time imagining how that could not be true, so I’m in trouble if it isn’t. I’m also in trouble if it is, being that the ‘weighting function’ aspect is indeed still baffling me.
So it seems that whether this universe exists has nothing to do with whether or not we think it does, in which case it’s probably purely epiphenomenal.
We probably care about things that exist and less about things that don’t, which makes the abstract fact about existence of any given universe relevant for making decisions that determine otherwise morally relevant properties of these universes. For example, if I find out that I don’t exist, I might then need to focus on optimizing properties of other universes that exist, through determining the properties of my universe that would be accessed by those other universes and would positively affect their moral value in predictable ways.
If being in a universe that exists feels so similar to being in a universe that doesn’t exist that we could confuse the two, then where does the moral distinction come from?
(It’s only to be expected that at least some moral facts are hard to discern, so you won’t feel the truth about them intuitively, you’d need to stage and perform the necessary computations.)
You wake up in a magical universe with your left arm replaced by a blue tentacle. A quick assessment would tell you that the measure of that place is probably pretty low, and you shouldn’t even have bothered to have a psychology that would allow you to remain sane upon having to perform an update on an event of such improbability. But let’s say you’re only human, and so you haven’t quite gotten around to optimizing your psychology in a way that would have this effect. What should you do?
One argument is that your measure is motivated exactly by assessing the potential moral influence of your decisions in advance of being restricted to one option by observations. In this sense, low measure shouldn’t matter: if all you have access to is just a little fraction of value, that’s not an argument for doing a sloppy job of optimizing it. If you can affect the universes that simulate your universe, you derive measure from the potential to influence those universes, and so there is no sense in which you can affect universes of greater measure than your own.
On the other hand, if there is a sense in which you can influence more than your measure suggests, so that this measure somehow refers only to the value of the effect in the same universe as you, whatever that means, then you should seek to make that much greater effect in the higher-measure universes, treating your own universe in a purely instrumental sense.
You say it has a practical utility, and yet you call it meaningless?
Actually, what pjeby said was that it was meaningless outside of its practical utility. He didn’t say it was meaningless inside of its practical utility.
My point is that I don’t know what is meant by something being meaningless “outside of its practical utility”. Can you give me an example of a concept that is meaningful outside of its practical utility?
“Electron”. “Potato”. “Euclidean geometry”. These concepts have definitions which are unambiguous even when there is no context specified, unlike, pjeby alleges, “computation”.
Likewise a Turing machine could perhaps compute everything there was to know about consciousness and its effects—but perhaps it still couldn’t actually produce one.
What’s the claim here?
1. That an abstract Turing machine could not be conscious (or better: contain conscious beings).
2. That if a physical Turing machine (let’s just say “computer”) is carrying out a ‘causally closed’ computation, in the sense that once it starts it no longer receives input from outside, then “no minds are created”. (E.g. if it’s simulating a universe containing intelligent observers then none of the simulated observers have minds.)
3. That regardless of how a physical computer is ‘hooked up’ to the world, something about the fact that it’s a computer (rather than a person) prevents it from being conscious.
I suspect the truth of (1) would be a tautology for you (as part of what it means for something to be an abstract entity). And presumably you would agree with the rest of us that (3) is almost certainly false. So really it just comes down to (2).
For me, (2) seems exactly as plausible as the idea that there could be a distant ‘zombie planet’ (perhaps beyond the cosmological horizon) containing Physically Real People who for some reason lack consciousness. After all, it would be just as causally isolated from us as the simulation. And I don’t think simulation is an ‘absolute notion’. I think one can devise smooth spectrums of scenarios ranging from things that you would call ‘clearly a simulation’ to things that you would call ‘clearly not a simulation’.
Here’s what I think. It’s just a “mysterious answer to a mysterious question” but it’s the best I can come up with.
From the perspective of a simulated person, they are conscious. A ‘perspective’ is defined by a mapping of certain properties of the simulated person to abstract, non-uniquely determined ‘mental properties’.
Perspectives and mental properties do not exist (that’s the whole point—they’re subjective!) It’s a category mistake to ask: does this thing have a perspective? Things don’t “have” perspectives the way they have position or mass. All we can ask is: “From this perspective (which might even be the perspective of a thermostat), how does the world look?”
The difference between a person in a simulation and a ‘real person’ is that defining the perspective of a real person is slightly ‘easier’, slightly ‘more natural’. But if the simulated and real versions are ‘functionally isomorphic’ then any perspective we assign to one can be mapped onto the other in a canonical way. (And having pointed these two facts out, we thereby exhaust everything there is to be said about whether simulated people are ‘really conscious’.)
ETA: I’m actually really interested to know what the downvoter thinks. I mean, I know these ideas are absurd but I can’t see any other way to piece it together. To clarify: what I’m trying to do is take the everyday concept of “what-it’s-like-ness” as far as it will go without either (a) committing myself to a bunch of arbitrary extra facts (such as ‘the exact moment when a person first becomes conscious’ and ‘facts of the matter’ about whether ants/lizards/mice/etc are conscious) or (b) ditching it in favour of a wholly ‘third person’ Dennettian notion of consciousness. (If the criticism is simply that I ought to ditch it in favour of Dennett-style consciousness then I have no reply (ultimately I agree!) but you’re kind-of missing the point of the exercise.)
A zombie-world seems extremely improbable to have evolved naturally, (evolved creatures coincidentally speaking about their consciousness without actually being conscious), but I don’t see why a zombie-world couldn’t be simulated by a programmer who studied how to compute the effects of consciousness, without actually needing to have the phenomenon of consciousness itself.
The same way you don’t need to have an actual solar system inside your computer, in order to compute the orbits of the planets—but it’d be very unlikely to have accidentally computed them correctly if you hadn’t studied the actual solar system.
Do you have any empirical reason to think that consciousness is about computation alone? To claim Occam’s razor on this is far from obvious, as the only examples of consciousness (or talking about consciousness) currently concern a certain species of evolved primate with a complex brain, and some trillions of neurons, all of which have have chemical and electrical effects, they aren’t just doing computations on an abstract mathematical universe sans context.
Unless you assume the whole universe is pure mathematics, so there’s no difference between the simulation of a thing and the thing itself. Which means there’s no difference between the mathematical model of a thing and the thing itself. Which means the map is the territory. Which means Tegmark IV.
And Tegmark IV is likewise just a possibility, not a proven thing.
This is a “does the tree make a sound if there’s no-one there to hear it?” argument.
That is, it assumes that there is a difference between “effects of consciousness” and “consciousness itself”—in the same way that a connection is implied between “hearing” and “sound”.
That is, the argument hinges on the definition of the word whose definition is being questioned, and is an excellent example of intuitions feeling real.
Not quite. What I’m saying is there might be a difference between the computation of a thing and the thing itself. It’s basically an argument against the inevitability of Tegmark IV.
A Turing machine can certainly compute everything there is to know about lifting rocks and their effects—but it still can’t lift a rock. Likewise a Turing machine could perhaps compute everything there was to know about consciousness and its effects—but perhaps it still couldn’t actually produce one.
Or at least I’ve not been convinced that it’s a logical impossibility for it to be otherwise; nor that I should consider it my preferred possibility that consciousness is solely computation, nothing else.
Wouldn’t the same reasoning mean that all physical processes have to be solely computation? So it’s not just “a Turing machine can produce consciousness”, but “a Turing machine can produce a new physical universe”, and therefore “Yeah, Turing Machines can lift real rocks, though it’s real rocks in a subordinate real universe, not in ours”.
I think you mean, it’s the skeleton of an argument you could advance if there turned out to actually be some meaning to the phrase “difference between the computation of a thing and the thing itself”.
Herein lies the error: it’s not up to anybody else to convince you it’s logically impossible, it’s up to you to show that you’re even describing something coherent in the first place.
Really, this is another LW-solved philosophical problem; you just have to grok the quantum physics sequence, in addition to the meanings-of-words one: when you understand that physics itself is a machine, it dissolves the question of what “simulation” or “computation” mean in this context. That is, you’ll realize that the only reason you can even ask the question is because you’re confusing the labels in your mind with real things.
Could you point to the concrete articles that supposedly dissolve this question? I find the question of what “computation” means as still very much open, and the source of a whole lot of confusion. This is best seen when people attempt to define what constitutes “real” computation as opposed to mere table lookups, replays, state machines implemented by random physical processes, etc.
Needless to say, this situation doesn’t give one the license to jump into mysticism triumphantly. However, as I noted in a recent thread, I observe an unpleasant tendency on LW to use the notions of “computation,” “algorithms,” etc. as semantic stop signs, considering how ill-understood they presently are.
Please note that I did not say the sequence explains “computation”; merely that it dissolves the illusion the intuitive notion of a meaningful distinction between a “computation” or “simulation” and “reality”.
In particular, an intuitive understanding that people are made of interchangeable particles and nothing else, dissolves the question of “what happens if somebody makes a simulation of you?” in the same way that it dissolves “what happens if there are two copies of you… which one’s the real one?”
That is, the intuitive notion that there’s something “special” about the “original” or “un-simulated” you is incoherent, because the identity of entities is an unreal concept existing only in human brains’ representation of reality, rather than in reality itself.
The QM sequence demonstrates this; it does not, AFAIR, attempt to rigorously define “computation”, however.
Those sound like similarly confused notions to me—i.e., tree-sound-hearing questions, rather than meaningful ones. I would therefore refer such questions to the “usage of words” sequence, especially “How an Algorithm Feels From The Inside” (which was my personal source of intuitions about such confusions).
Fair enough, though I can’t consider these explanations as settled until the notion of “computation” itself is fully clarified. I haven’t read the entire corpus of sequences, though I think I’ve read most of the articles relevant for these questions, and what I’ve seen of the attempts there to deal with the question of what precisely constitutes “computation” is, in my opinion, far from satisfactory. Further non-trivial insight is definitely still needed there.
Personally, I would more look for someone asking that question to show what isn’t “computation”. That is, the word itself seems rather meaningless, outside of its practical utility (i.e. “have you done that computation yet?”). Trying to pin it down in some absolute sense strikes me as a definitional argument… i.e., one where you should first be asking, “Why do I care what computation is?”, and then defining it to suit your purpose, or using an alternate term for greater precision.
You say it has a practical utility, and yet you call it meaningless? If rationality is about winning, how can something with practical utility be meaningless?
Here’s what I mean by computation: The handling of concepts and symbolic representations of concepts and mathematical abstractions in such a way that they return a causally derived result. What isn’t computation? Pretty much everything else. I don’t call gravity a computation, I call it a phenomenon. Because gravity doesn’t act on symbolisms and abstractions (like numbers), it acts on real things. A division or a multiplication is a computation, because it acts on numbers. A computation is a map, not a territory, same way that numbers are a map, not a territory.
What I don’t know is what you mean by “physics is a machine”. For that statement to be meaningful you’d have to explain what would it mean for physics not to be a machine. If you mean that physics is deterministic and causal, then sure. If you mean that physics is a computation, then I’ll say no, you’ve not yet proven to me that the bottom layer of reality is about mathematical concepts playing with themselves.
That’s the Tegmark IV hypothesis, and it’s NOT a solved issue, not by a long shot.
A computer (a real one, like a laptop) also acts on real things. For example if it has a hard drive, then as it writes to the hard drive it is modifying the surface of the platters. A computer (a real one) can be understood as operating on abstractions. For example, you might spell-check a text—which describes what it is doing as an operation on an abstraction, since the text itself is an abstraction. A text is an abstraction rather than a physical object because you could take the very same same text and write it to the hard drive, or hold it in memory, or print it out, thereby realizing the same abstract thing—the text—in three distinct physical ways. In summary the same computer activity can be described as an operation on an abstraction—such as spell-checking a text—or as an action on a real thing—such as modifying the physical state of the memory.
So the question is whether gravity can be understood as operating on abstractions. Since a computer such as a laptop, which is acting on real, physical things, can also be understood as operating on abstractions, then I see no obvious barrier to understanding gravity as operating on abstractions.
This is similar to my point, but the other way around, sort of.
My point is that the “abstraction” exists only in the eye of the observer (mind of the commentator?), rather than having any independent existence.
In reality, there is no computer, just atoms. No “computation”, just movement. It is we as observers who label these things to be happening, or not happening, and argue about what labels we should apply to them.
None of this is a problem, until somebody gets to the question of whether something really is the “right” label to apply, only usually they phrase it in the form of whether something can “really” be something else.
But what’s actually meant is, “is this the right label to apply in our minds?”, and if they’d simply notice that the question is not about reality, but their categorization of arbitrarily-chosen chunks of reality, they’d stop being confused and arguing nonsense.
If computation isn’t the real thing, only the movement is, then a simulation (which is the complete representation of a thing using a different movement which can however be seen as performing the same computation) is not the thing itself, and you have no reason to believe that the phenomenon of consciousness can be internally experienced in a computer simulation, that an algorithm can feel anything from the inside. Because the “inside” and the “outside” are themselves just labels we use.
The question of qualia and subjective experience isn’t a mere “confusion”.
You keep using that word “is”, but I don’t think it means what you think it means. ;-)
Try making your beliefs pay rent: what differences do you expect to observe in reality, between different states of this “is”?
That is, what different predictions will you make, based on “is” or “is not” in your statement?
Consider that one carefully, before you continue.
Really? Would you care to explain what differences you predict to see in the world, as a result of the existence or non-existence of these concepts?
I don’t see that we have need of such convoluted hypotheses, when the simpler explanation is merely that our neural architecture more closely resembles Eliezer’s Network B, than Network A… which is a very modest hypothesis indeed, since Network B has many evolutionary advantages compared to Network A.
.
Sure. Here’s two simple ones:
If consciousness isn’t just computation, then I don’t expect to ever observe waking up as a simulation in a computer.
If consciousness isn’t just computation, then I don’t expect to ever see evolved or self-improved (not intentionally designed to be similar to humans) electronic entities discussing between themselves about the nature of qualia and subjective inner experience.
You’ve severely underestimated my rationality if all this time you thought I hadn’t even considered the question before I started my participation in this thread.
That doesn’t look like a reply, there.
And if consciousness “is” just computation, what would be different? Do you have any particular reason to think you would observe any of those things?
You missed the point of that comment entirely, as can be seen by you moving the quotation away from its referent. The question to consider was what the meaning of “is” was, in the other statement you made. (It actually makes a great deal of difference, and it’s that difference that makes the rest of your argument .)
Since the reply was just below both of your quotes, then no, the single dot wasn’t one, it was an attempt to distinguish the two quotes.
I have to estimate the probability of you purposefully trying to make me look as if I intentionally avoided answering your question, while knowing I didn’t do so.
Like your earlier “funny” response about how I supposedly favoured euthanizing paraplegics, you don’t give me the vibe of responding in good faith.
Of course. If consciousness is computation, then I expect that if my mind’s computation is simulated in a Turing machine, half the times my next experience will be of me inside the machine. By repeating the experiment enough times, I’d accumulate enough evidence that I’d no longer expect my subjective experience to ever find itself inside an electronic computation.
And if evolution stumbled upon consciousness by accident, and it’s solely dependent on some computational internal-to-the-algorithm component, then an evolution of mere algorithms in a Turing machine, should also be eventually expected to stumble upon consciousness and produce similar discussions about consciousness once it reaches the point of simulating minds of sufficient complexity.
Can you make a complete question? What exactly are you asking? The statement you quoted had more than one “is” in it. Four or five of them.
I think we’re done here. As far as I can tell, you’re far more interested in how you appear to other people than actually understanding anything, or at any rate questioning anything. I didn’t ask you questions to get information from you, I asked you questions to help you dissolve your confusion.
In any event, you haven’t grokked the “usage of words” sequence sufficiently to have a meaningful discussion on this topic. So, I’m going to stop trying now.
You didn’t expect me to have actual answers to your questions, and you think that my having answers indicates a problem with my side of the discussion; instead of perhaps updating your probabilities to think that I wasn’t the one confused, perhaps you were.
I certainly am interested in understanding things, and questioning things. That’s why I asked questions to you, which you still haven’t answered:
what do you mean when you say that physics is a machine? (How would the world be different if physics wasn’t a machine?)
what do you mean when you call “computation” a meaningless concept outside its practical utility? (What concept is there that is meaningful outside its practical utility?)
As I answered your questions, I think you should do me the reciprocal courtesy of answering these two.
For a thorough answer to your first question, study the sequences—especially the parts debunking the supernatural, explaining the “merely real”, and the basics of quantum mechanics.
For the second, I mean only that asking whether something “is” a computation or not is a pointless question… as described in “How an Algorithm Feels From The Inside”.
Thanks for the suggestion, but I’ve read them all. It seems to me you are perhaps talking about reductionism, which admittedly is a related issue, but even reductionists don’t need to believe that the simulation of a thing equals the thing simulated.
I do wonder if you’ve read http://lesswrong.com/lw/qr/timeless_causality/ . If Eliezer himself is holding onto the concept of “computation” (and “anticipation” too), what makes you think that any of the other sequences he wrote dissolves that term?
Well, that won’t do any good unless you also apply them to the topic at hand.
That depends entirely on what you mean by the words… which you haven’t actually defined, as far as I can tell.
You also seem to think I’m arguing some particular position about consciousness or the simulability thereof, but that isn’t actually so. I am only attempting to dispel confusion, and that’s a very different thing.
I’ve been saying only that someone who claims that there is some mysterious thing that prevents consciousness from being simulated, is going to have to reduce a coherent definition of both “simulate” and “consciousness” in order to be able to say something that isn’t nonsensical, because both of those notions are tied too strongly to inbuilt biases and intuitions.
That is, anything you try to say about this subject without a proper reduction is almost bound to be confused rubbish, sprinkled with repeated instances of the mind projection fallacy.
I rather doubt it, since that article says:
AFAICT, the article is silent on these points, having nothing in particular to say about such vague concepts… in much the same way that Eliezer leaves open the future definition of a “non-person predicate”.
Some of the Chalmers’ ideas concerning ‘Fading and dancing qualia’ may be relevant here.
With a little ingenuity, and as long we’re prepared to tolerate ridiculously impractical thought experiments, we could think up a scenario where more and more of your brain’s computational activity is delegated to a computer until the computer is doing all of the work. It doesn’t seem plausible that this would somehow cause your conscious experience to progressively fade away without you noticing.
Then we could imagine repeatedly switching the input/output connections of the simulated brain between your actual body and an ‘avatar’ in a simulated world. It doesn’t seem plausible that this would cause your conscious experience to keep switching on and off without you noticing.
The linked essay is a bit long for me to read right now, but I promise to do so within the weekend.
As to your particular example, the problem is I can also think an even more ridiculously impractical thought experiment: one in which more and more of that computer’s computational activity is in turn delegated to a group of abacus-using monks—and then it doesn’t seem plausible for my conscious experience to keep on persisting, when the monks end up being the ones doing all the work...
It’s the bullet I’m not yet prepared to bite—but if do end up doing so, despite all my intuition telling me no, that’ll be the point where I’ll also have to believe Tegmark IV. P(Tegmark IV|consciousness can persist in the manipulations of abacci)~=99% for me...
Of course, which is why the entirety of the existence of a real computer is beyond that of a mere Turing machine. As it can, for example, fall and hurt someone’s legs.
Yes, which is why there’s a difference between the computation (the map) and the physical operation (the territory). A computer has an undisputed physical reality and performed undisputed physical acts (the territory). These can be understood as performing computations. (a map). The two are different, and therefore the computation is different from the phsycail operation.
And yet, pjeby argues that to think the two are different (the computation from the physical operation) is mere “confusion”. It’s not confusion, it’s the frigging difference between map and territory!
My question is about whether gravity can be fully understood as only operating on abstractions. As real computers can’t be fully understood as that, then it’s the same barrier the two have.
It is possible to have more than one map of a given territory. You can have a street map, but also a topographical map. Similarly, a given physical operation can be understood in more than one way, as performing more than one computation. One class of computation is simulation. The physical world (the whole world) can be understood as performing a simulation of a physical world. Whereas only a small part of the laptop is directly involved in spell-checking a text document, the whole laptop, in fact the whole physical world, is directly involved in the simulation.
The computation “spell checking a text” is different from the physical operation. This is easy to prove. For example, had the text been stored in a different place in physical memory then the same computation (“spell checking a text”) could still be performed. There need not be any difference in the computation—for example, the resulting corrected text might be exactly the same regardless of where in memory it is stored. But what about the simulation? If so much as one molecule were removed from the laptop, then the simulation would be a different simulation. We easily proved that the computation “spell checking a text” is different from the physical operation, but we were unable to extend this to proving that the computation “simulating a physical world” is different from the physical operation.
As a sidenote, whenever I try to explain my position I get downvoted some more. Are these downvotes for mere disagreement, or is there something else that the downvoter objects to?
Not quite. The Tegmark IV hypothesis is that all possible computations exist as universes. This is considerably more controversial than what pjeby said, which was only that the universe we happen to be in is a computation.
Um, no, actually, because I wouldn’t make such a silly statement. (Heck, I don’t even claim to be able to define “computation”!)
All I said was that trying to differentiate “real” and “just a computation” doesn’t make any sense at all. I’m urging the dissolution of that question as nonsensical, rather than trying to answer it.
Basically, it’s the sort of question that only arises because of how the algorithm feels from the inside, not because it has any relationship to the universe outside of human brains.
If a computation can be a universe, and a universe a computation, then you’re 90% of the way to Tegmark IV anyway.
The Tegmark IV hypothesis is a conjunction of “the universe is a computation” and “every computation exists as a universe with some weighting function”. The latter part is much more surprising, so accepting the first part does not get you 90% of the way to proving the conjunction.
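To put toy numbers on it (the figures below are made up purely for illustration): writing A for “the universe is a computation” and B for “every computation exists as a universe”, P(Tegmark IV) = P(A and B) = P(A) × P(B|A). Someone who grants P(A) = 0.9 but puts P(B|A) = 0.1 ends up with P(Tegmark IV) = 0.09, so accepting the first conjunct alone needn’t carry you most of the way.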
I interpret it more as an (attempted) dissolution of “existing as a universe” to “being a computation”. That is, it should be possible to fully describe the claims made by Tegmark IV without using the words “exist”, “real”, etc., and it should furthermore be possible to take the question “Why does this particular computation I’m in exist as a universe?” and unpack it into cleanly-separated confusion and tautology.
So I wouldn’t take it as saying much more than “there’s nothing you can say about ‘existence’ that isn’t ultimately about some fact about some computation” (or, I’d prefer to say, some fixed structure, about which there could be any number of fixed computations). More concretely, if this universe is as non-magical as it appears to be, then the fact that I think I exist or that the universe exists is causally completely determined by concrete facts about the internal content of this universe; even if this universe didn’t “exist”, then as long as someone in another universe had a fixed description of this universe (e.g. a program sufficient to compute it with arbitrary precision), they could write a program that calculated the answer to the question “Does ata think she exists?” pointed at their description of this universe (and whatever information would be needed to locate this copy of me, etc.), and the answer would be “Yes”, for exactly the same reasons that the answer is in fact “Yes” in this universe.
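For what it’s worth, the shape of that argument is easy to sketch in code. Everything below is a hypothetical stand-in I made up (the names, the dynamics, the belief-reading rule), just to show the structure:

```python
# Toy sketch: an outside programmer holds a complete description of a
# universe (a deterministic transition rule plus initial conditions) and
# a rule for reading an observer's belief out of a state. They can then
# compute what the observer believes, whether or not that universe
# "exists" anywhere else.

def physical_law(state):
    # hypothetical stand-in for the universe's deterministic dynamics
    return (state * 31 + 7) % 1000

def ata_thinks_she_exists(state):
    # hypothetical stand-in for "locate ata in this state and read off
    # her belief"; in this toy universe the belief is fixed
    return True

state = 42                    # hypothetical initial conditions
for _ in range(1000):         # run the description forward
    state = physical_law(state)

print(ata_thinks_she_exists(state))  # -> True, for the same structural
                                     # reasons the answer is "yes" here
```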
So it seems that whether this universe exists has nothing to do with whether or not we think it does, in which case it’s probably purely epiphenomenal. (This reminds me a lot of the GAZP and zombie arguments in general.)
I’m actually having a hard time imagining how that could not be true, so I’m in trouble if it isn’t. I’m also in trouble if it is, given that the ‘weighting function’ aspect is indeed still baffling me.
We probably care more about things that exist and less about things that don’t, which makes the abstract fact about the existence of any given universe relevant for making decisions that determine otherwise morally relevant properties of these universes. For example, if I find out that I don’t exist, I might then need to focus on optimizing properties of other universes that exist, through determining the properties of my universe that would be accessed by those other universes and would positively affect their moral value in predictable ways.
If being in a universe that exists feels so similar to being in a universe that doesn’t exist that we could confuse the two, then where does the moral distinction come from?
(It’s only to be expected that at least some moral facts are hard to discern, so you won’t feel the truth about them intuitively; you’d need to stage and perform the necessary computations.)
You wake up in a magical universe with your left arm replaced by a blue tentacle. A quick assessment would tell you that the measure of that place is probably pretty low, and you shouldn’t even have bothered to have a psychology that would allow you to remain sane upon having to perform an update on an event of such improbability. But let’s say you’re only human, and so you haven’t quite gotten around to optimizing your psychology in a way that would have this effect. What should you do?
One argument is that your measure is motivated exactly by assessing the potential moral influence of your decisions in advance of being restricted to one option by observations. In this sense, low measure shouldn’t matter, since if all you have access to is just a little fraction of value, that’s not an argument for doing a sloppy job of optimizing it. If you can affect the universes that simulate your universe, you derive measure from the potential to influence those universes, and so there is no sense in which you can affect universes of greater measure than your own.
On the other hand, if there is a sense in which you can influence more than your measure suggests, so that this measure somehow refers only to the value of the effect within your own universe, whatever that means, then you should seek to make that much greater effect in the higher-measure universes, treating your own universe in a purely instrumental sense.
Actually, what pjeby said was that it was meaningless outside of its practical utility. He didn’t say it was meaningless inside of its practical utility.
My point stands: only meaningful concepts have practical utility.
I just explained why your point is a straw man.
My point is that I don’t know what is meant by something being meaningless “outside of its practical utility”. Can you give me an example of a concept that is meaningful outside of its practical utility?
“Electron”. “Potato”. “Euclidean geometry”. These concepts have definitions which are unambiguous even when there is no context specified, unlike, pjeby alleges, “computation”.
What’s the claim here?
1. That an abstract Turing machine could not be conscious (or better: contain conscious beings).
2. That if a physical Turing machine (let’s just say “computer”) is carrying out a ‘causally closed’ computation, in the sense that once it starts it no longer receives input from outside, then “no minds are created”. (E.g. if it’s simulating a universe containing intelligent observers then none of the simulated observers have minds.)
3. That regardless of how a physical computer is ‘hooked up’ to the world, something about the fact that it’s a computer (rather than a person) prevents it from being conscious.
I suspect the truth of (1) would be a tautology for you (as part of what it means for something to be an abstract entity). And presumably you would agree with the rest of us that (3) is almost certainly false. So really it just comes down to (2).
For me, (2) seems exactly as plausible as the idea that there could be a distant ‘zombie planet’ (perhaps beyond the cosmological horizon) containing Physically Real People who for some reason lack consciousness. After all, it would be just as causally isolated from us as the simulation. And I don’t think simulation is an ‘absolute notion’. I think one can devise smooth spectrums of scenarios ranging from things that you would call ‘clearly a simulation’ to things that you would call ‘clearly not a simulation’.
Here’s what I think. It’s just a “mysterious answer to a mysterious question” but it’s the best I can come up with.
From the perspective of a simulated person, they are conscious. A ‘perspective’ is defined by a mapping of certain properties of the simulated person to abstract, non-uniquely determined ‘mental properties’.
Perspectives and mental properties do not exist (that’s the whole point—they’re subjective!) It’s a category mistake to ask: does this thing have a perspective? Things don’t “have” perspectives the way they have position or mass. All we can ask is: “From this perspective (which might even be the perspective of a thermostat), how does the world look?”
The difference between a person in a simulation and a ‘real person’ is that defining the perspective of a real person is slightly ‘easier’, slightly ‘more natural’. But if the simulated and real versions are ‘functionally isomorphic’ then any perspective we assign to one can be mapped onto the other in a canonical way. (And having pointed these two facts out, we thereby exhaust everything there is to be said about whether simulated people are ‘really conscious’.)
ETA: I’m actually really interested to know what the downvoter thinks. I mean, I know these ideas are absurd but I can’t see any other way to piece it together. To clarify: what I’m trying to do is take the everyday concept of ‘what-it’s-likeness’ as far as it will go without either (a) committing myself to a bunch of arbitrary extra facts (such as ‘the exact moment when a person first becomes conscious’ and ‘facts of the matter’ about whether ants/lizards/mice/etc are conscious) or (b) ditching it in favour of a wholly ‘third person’ Dennettian notion of consciousness. (If the criticism is simply that I ought to ditch it in favour of Dennett-style consciousness then I have no reply (ultimately I agree!) but you’re kind of missing the point of the exercise.)