Please note that I did not say the sequence explains “computation”; merely that it dissolves the intuitive notion of a meaningful distinction between a “computation” or “simulation” and “reality”.
Fair enough, though I can’t consider these explanations as settled until the notion of “computation” itself is fully clarified. I haven’t read the entire corpus of sequences, though I think I’ve read most of the articles relevant for these questions, and what I’ve seen of the attempts there to deal with the question of what precisely constitutes “computation” is, in my opinion, far from satisfactory. Further non-trivial insight is definitely still needed there.
Fair enough, though I can’t consider these explanations as settled until the notion of “computation” itself is fully clarified.
Personally, I would rather look for someone asking that question to show what isn’t “computation”. That is, the word itself seems rather meaningless, outside of its practical utility (i.e. “have you done that computation yet?”). Trying to pin it down in some absolute sense strikes me as a definitional argument… i.e., one where you should first be asking, “Why do I care what computation is?”, and then defining it to suit your purpose, or using an alternate term for greater precision.
You say it has a practical utility, and yet you call it meaningless? If rationality is about winning, how can something with practical utility be meaningless?
Here’s what I mean by computation: The handling of concepts and symbolic representations of concepts and mathematical abstractions in such a way that they return a causally derived result.
What isn’t computation? Pretty much everything else. I don’t call gravity a computation, I call it a phenomenon. Because gravity doesn’t act on symbolisms and abstractions (like numbers), it acts on real things. A division or a multiplication is a computation, because it acts on numbers. A computation is a map, not a territory, same way that numbers are a map, not a territory.
What I don’t know is what you mean by “physics is a machine”. For that statement to be meaningful you’d have to explain what it would mean for physics not to be a machine. If you mean that physics is deterministic and causal, then sure. If you mean that physics is a computation, then I’ll say no, you’ve not yet proven to me that the bottom layer of reality is about mathematical concepts playing with themselves.
That’s the Tegmark IV hypothesis, and it’s NOT a solved issue, not by a long shot.
Here’s what I mean by computation: The handling of concepts and symbolic representations of concepts and mathematical abstractions in such a way that they return a causally derived result...I don’t call gravity a computation...Because gravity doesn’t act on symbolisms and abstractions (like numbers), it acts on real things.
A computer (a real one, like a laptop) also acts on real things. For example if it has a hard drive, then as it writes to the hard drive it is modifying the surface of the platters. A computer (a real one) can be understood as operating on abstractions. For example, you might spell-check a text—which describes what it is doing as an operation on an abstraction, since the text itself is an abstraction. A text is an abstraction rather than a physical object because you could take the very same text and write it to the hard drive, or hold it in memory, or print it out, thereby realizing the same abstract thing—the text—in three distinct physical ways. In summary, the same computer activity can be described as an operation on an abstraction—such as spell-checking a text—or as an action on a real thing—such as modifying the physical state of the memory.
So the question is whether gravity can be understood as operating on abstractions. Since a computer such as a laptop, which is acting on real, physical things, can also be understood as operating on abstractions, then I see no obvious barrier to understanding gravity as operating on abstractions.
A text is an abstraction rather than a physical object because you could take the very same text and write it to the hard drive, or hold it in memory, or print it out, thereby realizing the same abstract thing—the text—in three distinct physical ways. In summary, the same computer activity can be described as an operation on an abstraction—such as spell-checking a text—or as an action on a real thing—such as modifying the physical state of the memory.
This is similar to my point, but the other way around, sort of.
My point is that the “abstraction” exists only in the eye of the observer (mind of the commentator?), rather than having any independent existence.
In reality, there is no computer, just atoms. No “computation”, just movement. It is we as observers who label these things to be happening, or not happening, and argue about what labels we should apply to them.
None of this is a problem, until somebody gets to the question of whether something really is the “right” label to apply, only usually they phrase it in the form of whether something can “really” be something else.
But what’s actually meant is, “is this the right label to apply in our minds?”, and if they’d simply notice that the question is not about reality, but their categorization of arbitrarily-chosen chunks of reality, they’d stop being confused and arguing nonsense.
If computation isn’t the real thing, only the movement is, then a simulation (which is the complete representation of a thing using a different movement which can however be seen as performing the same computation) is not the thing itself, and you have no reason to believe that the phenomenon of consciousness can be internally experienced in a computer simulation, that an algorithm can feel anything from the inside. Because the “inside” and the “outside” are themselves just labels we use.
and if they’d simply notice that the question is not about reality, but their categorization of arbitrarily-chosen chunks of reality, they’d stop being confused and arguing nonsense.
The question of qualia and subjective experience isn’t a mere “confusion”.
If computation isn’t the real thing, only the movement is, then a simulation (which is the complete representation of a thing using a different movement which can however be seen as performing the same computation) is not the thing itself
You keep using that word “is”, but I don’t think it means what you think it means. ;-)
Try making your beliefs pay rent: what differences do you expect to observe in reality, between different states of this “is”?
That is, what different predictions will you make, based on “is” or “is not” in your statement?
Consider that one carefully, before you continue.
The question of qualia and subjective experience isn’t a mere “confusion”.
Really? Would you care to explain what differences you predict to see in the world, as a result of the existence or non-existence of these concepts?
I don’t see that we have need of such convoluted hypotheses, when the simpler explanation is merely that our neural architecture more closely resembles Eliezer’s Network B, than Network A… which is a very modest hypothesis indeed, since Network B has many evolutionary advantages compared to Network A.
Try making your beliefs pay rent, what differences do you expect to observe in reality, between different states of this “is”?
.
Would you care to explain what differences you predict to see in the world, as a result of the existence or non-existence of these concepts?
Sure. Here’s two simple ones:
If consciousness isn’t just computation, then I don’t expect to ever observe waking up as a simulation in a computer.
If consciousness isn’t just computation, then I don’t expect to ever see evolved or self-improved (not intentionally designed to be similar to humans) electronic entities discussing between themselves about the nature of qualia and subjective inner experience.
Consider that one carefully, before you continue.
You’ve severely underestimated my rationality if all this time you thought I hadn’t even considered the question before I started my participation in this thread.
Try making your beliefs pay rent, what differences do you expect to observe in reality, between different states of this “is”?
.
That doesn’t look like a reply, there.
Sure. Here’s two simple ones:
If consciousness isn’t just computation, then I don’t expect to ever observe waking up as a simulation in a computer.
If consciousness isn’t just computation, then I don’t expect to ever see evolved or self-improved (not intentionally designed to be similar to humans) electronic entities discussing between themselves about the nature of qualia and subjective inner experience.
And if consciousness “is” just computation, what would be different? Do you have any particular reason to think you would observe any of those things?
You’ve severely underestimated my rationality if all this time you thought I hadn’t even considered the question before I started my participation in this thread.
You missed the point of that comment entirely, as can be seen by you moving the quotation away from its referent. The question to consider was what the meaning of “is” was, in the other statement you made. (It actually makes a great deal of difference, and it’s that difference that makes the rest of your argument .)
Since the reply was just below both of your quotes, then no, the single dot wasn’t a reply; it was an attempt to distinguish the two quotes.
I have to estimate the probability of you purposefully trying to make me look as if I intentionally avoided answering your question, while knowing I didn’t do so.
Like your earlier “funny” response about how I supposedly favoured euthanizing paraplegics, you don’t give me the vibe of responding in good faith.
Do you have any particular reason to think you would observe any of those things?
Of course. If consciousness is computation, then I expect that if my mind’s computation is simulated in a Turing machine, half the time my next experience will be of me inside the machine. By repeating the experiment enough times and never waking up inside, I’d accumulate enough evidence that I’d no longer expect my subjective experience to ever find itself inside an electronic computation.
And if evolution stumbled upon consciousness by accident, and it’s solely dependent on some computational internal-to-the-algorithm component, then an evolution of mere algorithms in a Turing machine should also eventually be expected to stumble upon consciousness and produce similar discussions about consciousness once it reaches the point of simulating minds of sufficient complexity.
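The evidence-accumulation step in the first paragraph above can be made concrete with a toy Bayesian sketch. This is only an illustration, not part of the exchange; the 1/2-per-run chance of waking up inside is the commenter’s own assumption, and the function name is hypothetical.

```python
# Toy sketch: posterior on "consciousness is computation" after n simulation runs
# in which the experimenter never finds their next experience inside the machine.
def posterior_after_no_inside_wakeups(prior: float, n_runs: int) -> float:
    p_data_if_true = 0.5 ** n_runs   # each run: assumed 1/2 chance of waking up inside
    p_data_if_false = 1.0            # never waking up inside is guaranteed otherwise
    evidence = prior * p_data_if_true + (1 - prior) * p_data_if_false
    return prior * p_data_if_true / evidence

print(posterior_after_no_inside_wakeups(prior=0.5, n_runs=20))  # roughly 1e-6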
The question to consider was what the meaning of “is” was, in the other statement you made.
Can you make a complete question? What exactly are you asking? The statement you quoted had more than one “is” in it. Four or five of them.
I think we’re done here. As far as I can tell, you’re far more interested in how you appear to other people than actually understanding anything, or at any rate questioning anything. I didn’t ask you questions to get information from you, I asked you questions to help you dissolve your confusion.
In any event, you haven’t grokked the “usage of words” sequence sufficiently to have a meaningful discussion on this topic. So, I’m going to stop trying now.
You didn’t expect me to have actual answers to your questions, and you think that my having answers indicates a problem with my side of the discussion; instead of perhaps updating your probabilities toward the conclusion that I wasn’t the one confused, and that perhaps you were.
I certainly am interested in understanding things, and questioning things. That’s why I asked questions to you, which you still haven’t answered:
what do you mean when you say that physics is a machine? (How would the world be different if physics wasn’t a machine?)
what do you mean when you call “computation” a meaningless concept outside its practical utility? (What concept is there that is meaningful outside its practical utility?)
As I answered your questions, I think you should do me the reciprocal courtesy of answering these two.
For a thorough answer to your first question, study the sequences—especially the parts debunking the supernatural, explaining the “merely real”, and the basics of quantum mechanics.
For the second, I mean only that asking whether something “is” a computation or not is a pointless question… as described in “How an Algorithm Feels From The Inside”.
Thanks for the suggestion, but I’ve read them all. It seems to me you are perhaps talking about reductionism, which admittedly is a related issue, but even reductionists don’t need to believe that the simulation of a thing equals the thing simulated.
I do wonder if you’ve read http://lesswrong.com/lw/qr/timeless_causality/ . If Eliezer himself is holding onto the concept of “computation” (and “anticipation” too), what makes you think that any of the other sequences he wrote dissolves that term?
Thanks for the suggestion, but I’ve read them all.
Well, that won’t do any good unless you also apply them to the topic at hand.
even reductionists don’t need to believe that the simulation of a thing equals the thing simulated.
That depends entirely on what you mean by the words… which you haven’t actually defined, as far as I can tell.
You also seem to think I’m arguing some particular position about consciousness or the simulability thereof, but that isn’t actually so. I am only attempting to dispel confusion, and that’s a very different thing.
I’ve been saying only that someone who claims that there is some mysterious thing that prevents consciousness from being simulated is going to have to provide a coherent reduction of both “simulate” and “consciousness” in order to be able to say something that isn’t nonsensical, because both of those notions are tied too strongly to inbuilt biases and intuitions.
That is, anything you try to say about this subject without a proper reduction is almost bound to be confused rubbish, sprinkled with repeated instances of the mind projection fallacy.
If Eliezer himself is holding onto the concept of “computation”
I rather doubt it, since that article says:
Such causal links could be required for “computation” and “consciousness”—whatever those are.
AFAICT, the article is silent on these points, having nothing in particular to say about such vague concepts… in much the same way that Eliezer leaves open the future definition of a “non-person predicate”.
Of course. If consciousness is computation, then I expect that if my mind’s computation is simulated in a Turing machine, half the time my next experience will be of me inside the machine.
Some of Chalmers’ ideas concerning ‘Fading and dancing qualia’ may be relevant here.
With a little ingenuity, and as long as we’re prepared to tolerate ridiculously impractical thought experiments, we could think up a scenario where more and more of your brain’s computational activity is delegated to a computer until the computer is doing all of the work. It doesn’t seem plausible that this would somehow cause your conscious experience to progressively fade away without you noticing.
Then we could imagine repeatedly switching the input/output connections of the simulated brain between your actual body and an ‘avatar’ in a simulated world. It doesn’t seem plausible that this would cause your conscious experience to keep switching on and off without you noticing.
The linked essay is a bit long for me to read right now, but I promise to do so within the weekend.
As to your particular example, the problem is I can also think of an even more ridiculously impractical thought experiment: one in which more and more of that computer’s computational activity is in turn delegated to a group of abacus-using monks—and then it doesn’t seem plausible for my conscious experience to keep on persisting, when the monks end up being the ones doing all the work...
It’s the bullet I’m not yet prepared to bite—but if I do end up doing so, despite all my intuition telling me no, that’ll be the point where I’ll also have to believe Tegmark IV. P(Tegmark IV|consciousness can persist in the manipulations of abaci)~=99% for me...
A computer (a real one, like a laptop) also acts on real things.
Of course, which is why the existence of a real computer goes beyond that of a mere Turing machine: it can, for example, fall and hurt someone’s legs.
For example if it has a hard drive, then as it writes to the hard drive it is modifying the surface of the platters. A computer (a real one) can be understood as operating on abstractions.
Yes, which is why there’s a difference between the computation (the map) and the physical operation (the territory). A computer has an undisputed physical reality and performs undisputed physical acts (the territory). These can be understood as performing computations (a map). The two are different, and therefore the computation is different from the physical operation.
And yet, pjeby argues that to think the two are different (the computation from the physical operation) is mere “confusion”. It’s not confusion, it’s the frigging difference between map and territory!
So the question is whether gravity can be understood as operating on abstractions. Since a computer such as a laptop, which is acting on real, physical things, can also be understood as operating on abstractions, then
My question is about whether gravity can be fully understood as only operating on abstractions. Since real computers can’t be fully understood in that way, the two face the same barrier.
Yes, which is why there’s a difference between the computation (the map) and the physical operation (the territory). A computer has an undisputed physical reality and performs undisputed physical acts (the territory). These can be understood as performing computations (a map). The two are different, and therefore the computation is different from the physical operation.
It is possible to have more than one map of a given territory. You can have a street map, but also a topographical map. Similarly, a given physical operation can be understood in more than one way, as performing more than one computation. One class of computation is simulation. The physical world (the whole world) can be understood as performing a simulation of a physical world. Whereas only a small part of the laptop is directly involved in spell-checking a text document, the whole laptop, in fact the whole physical world, is directly involved in the simulation.
The computation “spell checking a text” is different from the physical operation. This is easy to prove. For example, had the text been stored in a different place in physical memory then the same computation (“spell checking a text”) could still be performed. There need not be any difference in the computation—for example, the resulting corrected text might be exactly the same regardless of where in memory it is stored. But what about the simulation? If so much as one molecule were removed from the laptop, then the simulation would be a different simulation. We easily proved that the computation “spell checking a text” is different from the physical operation, but we were unable to extend this to proving that the computation “simulating a physical world” is different from the physical operation.
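The invariance claim above (“easy to prove”) can be illustrated with a small sketch. This is only an illustration, not part of the thread; toy_spell_check is a hypothetical stand-in for a real spell checker, and the point is just that its output does not depend on where the text is physically stored.

```python
# The same abstract computation ("spell-check this text") applied to two different
# physical realizations of the same text: one held in memory, one read back from disk.
import tempfile

def toy_spell_check(text: str) -> str:
    return text.replace("teh", "the")   # stand-in for a real spell checker

original = "teh map is not teh territory"

corrected_from_memory = toy_spell_check(original)

with tempfile.TemporaryFile("w+") as f:   # a different physical location for the same text
    f.write(original)
    f.seek(0)
    corrected_from_disk = toy_spell_check(f.read())

# Same computation, same result, regardless of physical realization.
assert corrected_from_memory == corrected_from_disk == "the map is not the territory"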
As a sidenote, whenever I try to explain my position I get downvoted some more. Are these downvotes for mere disagreement, or is there something else that the downvoter objects to?
That’s the Tegmark IV hypothesis, and it’s NOT a solved issue, not by a long shot.
Not quite. The Tegmark IV hypothesis is that all possible computations exist as universes. This is considerably more controversial than what pjeby said, which was only that the universe we happen to be in is a computation.
what pjeby said, which was only that the universe we happen to be in is a computation.
Um, no, actually, because I wouldn’t make such a silly statement. (Heck, I don’t even claim to be able to define “computation”!)
All I said was that trying to differentiate “real” and “just a computation” doesn’t make any sense at all. I’m urging the dissolution of that question as nonsensical, rather than trying to answer it.
Basically, it’s the sort of question that only arises because of how the algorithm feels from the inside, not because it has any relationship to the universe outside of human brains.
If a computation can be a universe, and a universe a computation, then you’re 90% of the way to Tegmark IV anyway.
The Tegmark IV hypothesis is a conjunction of “the universe is a computation” and “every computation exists as a universe with some weighting function”. The latter part is much more surprising, so accepting the first part does not get you 90% of the way to proving the conjunction.
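To spell that out (my gloss, with symbols that do not appear in the original comment): write A for “the universe is a computation” and B for “every computation exists as a universe with some weighting function”. Then

```latex
P(\text{Tegmark IV}) = P(A \wedge B) = P(A)\,P(B \mid A)
```

so even granting A outright leaves the full weight of the hypothesis on the more surprising conjunct B.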
The Tegmark IV hypothesis is a conjunction of “the universe is a computation” and “every computation exists as a universe with some weighting function”.
I interpret it more as an (attempted) dissolution of “existing as a universe” to “being a computation”. That is, it should be possible to fully describe the claims made by Tegmark IV without using the words “exist”, “real”, etc., and it should furthermore be possible to take the question “Why does this particular computation I’m in exist as a universe?” and unpack it into cleanly-separated confusion and tautology.
So I wouldn’t take it as saying much more than “there’s nothing you can say about ‘existence’ that isn’t ultimately about some fact about some computation” (or, I’d prefer to say, some fixed structure, about which there could be any number of fixed computations). More concretely, if this universe is as non-magical as it appears to be, then the fact that I think I exist or that the universe exists is causally completely determined by concrete facts about the internal content of this universe; even if this universe didn’t “exist”, then as long as someone in another universe had a fixed description of this universe (e.g. a program sufficient to compute it with arbitrary precision), they could write a program that calculated the answer to the question “Does ata think she exists?” pointed at their description of this universe (and whatever information would be needed to locate this copy of me, etc.), and the answer would be “Yes”, for exactly the same reasons that the answer is in fact “Yes” in this universe.
So it seems that whether this universe exists has nothing to do with whether or not we think it does, in which case it’s probably purely epiphenomenal. (This reminds me a lot of the GAZP and zombie arguments in general.)
I’m actually having a hard time imagining how that could not be true, so I’m in trouble if it isn’t. I’m also in trouble if it is, being that the ‘weighting function’ aspect is indeed still baffling me.
So it seems that whether this universe exists has nothing to do with whether or not we think it does, in which case it’s probably purely epiphenomenal.
We probably care about things that exist and less about things that don’t, which makes the abstract fact about existence of any given universe relevant for making decisions that determine otherwise morally relevant properties of these universes. For example, if I find out that I don’t exist, I might then need to focus on optimizing properties of other universes that exist, through determining the properties of my universe that would be accessed by those other universes and would positively affect their moral value in predictable ways.
If being in a universe that exists feels so similar to being in a universe that doesn’t exist that we could confuse the two, then where does the moral distinction come from?
(It’s only to be expected that at least some moral facts are hard to discern, so you won’t feel the truth about them intuitively, you’d need to stage and perform the necessary computations.)
You wake up in a magical universe with your left arm replaced by a blue tentacle. A quick assessment would tell that the measure of that place is probably pretty low, and you shouldn’t have even bothered to have a psychology that would allow you to remain sane upon having to perform an update on an event of such improbability. But let’s say you’re only human, and so you haven’t quite gotten around to optimizing your psychology in a way that would have this effect. What should you do?
One argument is that your measure is motivated exactly by assessing the potential moral influence of your decisions in advance of being restricted to one option by observations. In this sense, low measure shouldn’t matter, since if all you have access to is just a little fraction of value, that’s not an argument for making a sloppy job of optimizing it. If you can affect the universes that simulate your universe, you derive measure from the potential to influence those universes, and so there is no sense in which you can affect universes of greater measure than your own.
On the other hand, if there should be a sense in which you can influence more than your measure suggests, such that this measure only somehow refers to the value of the effect in the same universe as you are, whatever that means, then you should seek to make that much greater effect in the higher-measure universes, treating your own universe in a purely instrumental sense.
You say it has a practical utility, and yet you call it meaningless?
Actually, what pjeby said was that it was meaningless outside of its practical utility. He didn’t say it was meaningless inside of its practical utility.
My point stands: Only meaningful concepts have a practical utility.
I just explained why your point is a straw man.
My point is that I don’t know what is meant by something being meaningless “outside of its practical utility”. Can you give me an example of a concept that is meaningful outside of its practical utility?
“Electron”. “Potato”. “Euclidean geometry”. These concepts have definitions which are unambiguous even when there is no context specified, unlike, pjeby alleges, “computation”.