My comments in this sub-thread brought out more challenges and queries than I expected. I thought that by now everyone would expect me to periodically say a few things out of line regarding identity, consciousness, and so on, and that only the people I was addressing might respond. I want to reply in a way which provides some context for the answers I’m going to give, but which covers old territory as little as possible. So I would first direct interested parties to my articles here, for the big picture according to me. Those articles are flawed in various ways, but much of what I have to say is there.
Just to review some basics: The problems of consciousness and personal identity are even more severe than is generally acknowledged here. Understanding consciousness, for example, is not just a matter of identifying which part of the brain is the conscious part. From the perspective of physics, any such identification looks like property dualism. Here I want to mention a view due to JanetK, which I never answered, according to which the missing ingredient is “biology”: the reason that consciousness looks like a problem from a physical perspective is because one has failed to take into account various biological facts. My reply is that certainly consciousness will not be understood without those facts, but nonetheless, they do nothing to resolve the sort of problems described in my article on consciousness, because they can still be ontologically reduced to elaborate combinations of the same physical basics. Some far more radical ontological novelty will be required if we are going to assemble stuff like “color”, “meaning”, or “the flow of time” out of what physics gives us.
What we have, in our theories of consciousness, is property dualism that wants to be a monism. We say, here is the physical thing—a brain, or maybe a computer or an upload if we are being futuristic—and that is where the mind-stuff resides, or it is the mind-stuff. But for now, the two sides of the alleged identity are qualitatively distinct. That is why it is really a dualism, but a “property dualism” rather than a “substance dualism”. The mind is (some part of) the brain, but the mindlike properties of the mind simply cannot be identified with the physical properties of the brain.
The instinct of people trained in modern science is to collapse the dualism onto the physical side of the equation, because they have been educated to think of reality in those terms. But color, meaning and time are real, so if they are not really present on the physical side of the identity, then a truly monistic solution has to go the other way. The problem now is that it sounds as if we are rejecting the reality of matter. This is why I talked about monads: it is a concept of what is physically elementary which can nonetheless be expanded into something which is actually mindlike in its “interior”. It requires a considerable rethink of how the basic degrees of freedom in physics are grouped into things; and it also requires that what we would now call quantum effects are somewhere functionally relevant to conscious cognition, or else this ontological regrouping would make no difference at the level where the problem of consciousness resides. So yes, there are several big inferential leaps there, and a prediction (that there is a quantum neurobiology) for which there is as yet no support. All I can say is that I didn’t make those leaps lightly, and that all simpler alternatives appear to be fatally compromised in some way.
One consequence of all this is that I can be a realist about the existence of a conscious self in ways which must sound very retrograde to everyone here who has embraced the brave new ideas of copying, patternist theories of identity, the unreality of time on the physical level, and so on. To my way of thinking, I am a “monad”, some subsystem of the brain with many degrees of freedom, which is a genuine ontological unity, and whose state can be directly identified with (and not just associated with) my state in the world as I perceive it subjectively. This is an entity which persists in time, and which interacts with its environment (presumably, simpler monads making up the neighboring subsystems of the brain). If one grants for a moment the possibility of thinking about reality in these terms, clearly it makes these riddles about personal identity a lot simpler. There is a very clear sense in which I am not my copies. At best, they are other monads who start out in the same state. There is no conscious sorites paradox. Whenever you have consciousness, it is because you have a monad big enough to be conscious—it’s that simple.
So having set the stage—and apologies to anyone tired of my screeds on these subjects—now we can turn to cryonics. I said to Roko
I would be rather surprised if the “neurophysical correlate of selfhood” survives the freezing transition.
to which he responded
The neurophysical correlate of selfhood can survive a temperature drop to 0 but it can’t survive a phase change?
I posit that, in terms of current physics, the locus of consciousness is some mesoscopic quantum-coherent subsystem of the brain, whose coherence persists even during unconsciousness (which is just a change of its state) but which would not last through the cryonic freezing of the brain. If this persistent biological quantum coherence exists, it will exist because of, and not in spite of, metabolic activity. When that ceases, something must happen to the “monad” (which is just another name for something like “big irreducible tensor factor in the brain’s wavefunction”) - it comes apart into simpler monads, it sheds degrees of freedom until it becomes just another couple of correlated electrons, I don’t have a fixed idea about it. But this is what death is, in the monadic “theory”. If the frozen brain is restored to life, and a new conscious condensate (or whatever) forms, that will be a new “big tensor factor”, a new “monad”, and a new self. That is the idea.
You could accept my proposed metaphysics for the sake of argument and still say, but can’t you identify with the successor monad? It will have your memories, and so forth. In other words, this ontology of monadic minds should still allow for something like a copy. I don’t really have a fixed opinion about this, largely because how the conscious monad accesses and experiences its memories and identity remains completely untheorized by me. The existence of a monad as a persistent “substance” suggests the possibility that memories in a monad might be somehow internal to it, rather than externally supplied data which pops into its field of consciousness when appropriate. This in turn suggests that a lot of what is written, in futurist speculation about digital minds, transferrable memories, and so forth, would not apply. You might be able to transfer unconscious dispositions but not a certain type of authentic conscious memory; it might be that the only way in which the latter could be induced in a monad would be for it, that particular monad, to “personally” undergo the experience in question. Or, it might really be the case that all forms of memory, knowledge, perception and so forth are externally based and externally induced, so that my recollection of what happened this morning is not ontologically any different from the same “recollection” occurring in a newly created copy which never actually had the experience.
Again, I apologize somewhat for going on at such length with these speculations. But I do think that the philosophies of both mind and matter which are the consensus here—I’m thinking of a sort of blithe computationalism with respect to consciousness, and the splitting multiverse of MWI as a theory of physics—are very likely to be partly or even completely false, and this has to have implications for topics like cryonics, AI, various exotic ethical doctrines based on a future-centric utilitarianism, and so on.
the locus of consciousness is some mesoscopic quantum-coherent subsystem of the brain
Why do people keep trying to posit quantum as the answer to this problem when it has been so soundly refuted?
Based on a calculation of neural decoherence rates, we argue that the degrees of freedom of the human brain that relate to cognitive processes should be thought of as a classical rather than quantum system, i.e., that there is nothing fundamentally wrong with the current classical approach to neural network simulations. We find that the decoherence time scales (∼10^-13–10^-20 s) are typically much shorter than the relevant dynamical time scales (∼10^-3–10^-1 s), both for regular neuron firing and for kinklike polarization excitations in microtubules. This conclusion disagrees with suggestions by Penrose and others that the brain acts as a quantum computer, and that quantum coherence is related to consciousness in a fundamental way.
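To spell out the gap those figures imply (just restating the abstract's own numbers, not an independent calculation):

```latex
\frac{\tau_{\mathrm{dyn}}}{\tau_{\mathrm{dec}}}
  \sim \frac{10^{-3}\,\mathrm{s}}{10^{-13}\,\mathrm{s}}
  \;\text{to}\;
  \frac{10^{-1}\,\mathrm{s}}{10^{-20}\,\mathrm{s}}
  \approx 10^{10}\ \text{to}\ 10^{19}
```

That is, on these estimates any neural superposition would be destroyed some ten to nineteen orders of magnitude faster than the dynamics it is supposed to influence.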
There is a long history of diverse speculation by scientists about quantum mechanics and the mind. There was an early phase when biology hardly figured and it was often a type of dualism inspired by Copenhagen-interpretation emphasis on “observers”. But these days the emphasis is very much on applying quantum mechanics to specific neuromolecular structures. There are papers about superpositions of molecular conformation, transient quantum coherence in ionic complexes, phonons in filamentary structures, and so on. To me, this work still doesn’t look good enough, but it’s a necessary transitional step, in which ambitious simple models of elementary quantum biophysics are being proposed. The field certainly needs a regular dose of quantitative skepticism such as Tegmark provided. But entanglement in condensed-matter systems is a very subtle thing. There are many situations in which long-range quantum order forms despite local disorder. Like it or not, you can’t debunk the idea of a quantum brain in a few pages because we assuredly have not thought of all the ways in which it might work.
As for the philosophical rationale of the thing, that varies a lot. But since we know that most neural computation is not conscious, I find it remarkably natural to suppose that it’s entanglement that makes the difference. Any realistic hypothesis is not going to be fuzzy and just say “the quantum is the answer”. It will be more like, special long-lived clathrins found in the porosome complex of astrocytes associated with glutamate-receptor hotspots in neocortical layer V share quantum excitons in a topologically protected way, forming a giant multifractal cluster state which nonlocally regulates glutamatergic excitation in the cortex—etc. And we’re just not at that level yet.
I mean that sincerely—there ought to be some reason that, say, you have to come up with your monad theory, and I quite frankly don’t know of any that would impel me to do so.
Starting point: consciousness is real. This sequence of conscious experiences is part of reality.
Next: The physical world doesn’t look like that. (That consciousness is a problem for atomism has been known for more than 2000 years.)
So let us suppose that this is how it feels to be some physical thing “from the inside”. Here we face a new problem if we suppose that orthodox computational neuroscience is the whole story. There must then be a mapping from various physical states (e.g. arrangements of elementary particles in space, forming a brain) to the corresponding conscious states. But mappings from physics to causal-functional roles are fuzzy in two ways. We don’t have, and don’t need, an exact criterion as to whether any particular elementary particle is part of the “thing” whose state we are characterizing functionally. Similarly, we don’t have, and don’t need, a dividing line in the space of all possible physical configurations providing an exact demarcation between one computational state and another.
All this is just a way of saying that functional and computational properties are not entirely objective from a physical standpoint. There are always borderline cases but we don’t really care about not having an exact border, because most of the time the components of a functioning computational device are in physical states which are obviously well in correspondence with the abstract computational states they represent. A device whose components are constantly testing the boundaries of the mapping is a device in danger of deviating from its function.
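A toy sketch of what I mean by a fuzzy-but-workable mapping (my own hypothetical example, nothing brain-specific: just a logic gate reading a continuous voltage as an abstract bit):

```python
# Hypothetical toy example: mapping a continuous physical variable onto an
# abstract computational state is many-to-one and has a fuzzy border.

def logical_state(voltage: float) -> str:
    """Map a measured voltage (in volts) onto an abstract bit."""
    if voltage < 0.8:
        return "0"          # this whole region of physical states counts as "0"
    if voltage > 2.0:
        return "1"          # this whole region counts as "1"
    return "undefined"      # borderline cases the mapping was never forced to settle

for v in (0.1, 0.79, 1.4, 2.5):
    print(f"{v} V -> {logical_state(v)}")

# A well-engineered device keeps its components far away from the 0.8-2.0 V gap,
# so the fuzziness of the mapping never matters in practice.
```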
However, when it comes to consciousness, a fuzzy-but-good-enough mapping like this is not good enough, because consciousness (according to our starting point) is an entirely real and “objective” element of reality. It is what it is “exactly”, and therefore its counterpart in physical ontology must also have an exact characterization, both with respect to physical parts and with respect to physical states. A coarse-grained many-to-one mapping which is irresolvably fuzzy at the edges is not an option.
But this is a very hard thing to achieve if we persist in thinking of the physical world as a sort of hurricane of trillions of particles in space, with all that matters cognitively being certain mass movements of particles and things made out of them. Fortunately, as it turns out, quantum mechanics suggests the possibility of a rather different physical ontology, and neuroscience still has plenty of room for quantum effects to be cognitively relevant. Thus one is led to consider quantum ontologies in which there is something which can be the exact physical counterpart of consciousness, and theories of mind in which quantum effects are part of the brain’s machinery.
I think you grant excessive reliability to your impressions of consciousness. A philosophical argument along the lines proposed is an awfully weak thread to hang a theory on.
Doesn’t it mean that consciousness is an epiphenomenon? Any quantum algorithm can be expressed as an equivalent classical algorithm, so we could have an unconscious computer which is functionally equivalent to a human brain.
ETA: I can’t see any reason to associate consciousness with some particular kind of physical object/process, as it undermines the functional significance of consciousness as the brain’s high-level coordination, decision-making and self-representation system.
No, it would just mean that you can have unconscious simulations of consciousness. Think of it like this. We say that the things in the universe which have causal power are “quantum tensor factors”, and consciousness always inhabits a single big tensor factor, but we can simulate it with lots of little ones interacting appropriately. More precisely, consciousness is some sort of structure which is actually present in the big tensor factor, but which is not actually present in any of the small ones. However, its dynamics and interactions can be simulated by the small ones collectively. Also, if you took a small tensor factor and made it individually “big” somehow (evolved it into a big state), it might individually be able to acquire consciousness. But the hypothesis is that consciousness as such is only ever found in one tensor factor, not in sets of them. It’s a slightly abstract conception when so many details are lacking, but it should be possible to understand the idea: the world is made of Xs, an individual X can have property Y, a set of Xs cannot, but a set of Xs can imitate the property.
What would really make consciousness epiphenomenal is if we persisted with property dualism, so we have the Xs, their “physical properties”, and then their correlated “subjective properties”. But the whole point of this exercise is to be able to say that the subjective properties (which we know to exist in ourselves) are the “physical properties” of a “big” X. That way, they can enter directly into cause and effect.
No, it would just mean that you can have unconscious simulations of consciousness.
Doesn’t this undermine the entire philosophical basis of your argument, which rests on the experience of consciousness being real? If your system allows such an unconscious classical simulation, then why believe you are one of the actual conscious entities? This seems P-zombieish.
If your system allows such an unconscious classical simulation, then why believe you are one of the actual conscious entities?
It’s like asking, why do you think you exist, when there are books with fictional characters in them? I don’t know exactly what is happening when I confirm by inspection that some reality exists or that I have consciousness. But I don’t see any reason to doubt the reality or efficacy of such epistemic processes, just because there should also be unconscious state machines that can mimic their causal structure.
I understand you. Your definition is that “real consciousness” is a quantum tensor factor belonging to a particular class of quantum tensor factors, because we can find them in human brains, we know that at least one human brain is conscious, and consciousness must be a physical entity in order to participate in causal chains. All other quantum tensor factors, and all sets of them, are not consciousness by definition.
The questions are:
How do we define said class without fuzziness, when it is not yet known what is not “real consciousness”? Should we include dolphins’ tensor factors, great apes’, and so on?
Is it always necessary for something to exist as a physical entity in order to participate in a causal chain? Does temperature exist as a physical entity? Does the “thermostatousness” of a refrigerator exist as a physical entity?
Of course, temperature and “thermostatousness” are our high-level descriptions of physical systems; they don’t exist in your sense. So it seems that you see a contradiction between the subjectively apparent existence of consciousness and the apparent nonexistence of a physical representation of consciousness as a high-level description of brain functions. Don’t you see a flaw in that contradiction?
Causality for statistical or functional properties mostly reduces to generalizations about the behavior of exact microstates. (“Microstate” means physical state completely specified in its microscopic detail. A purely thermodynamic or macroscopic description is a “macrostate”.) The entropy goes up because most microstate trajectories go from the small phase-space volume into the large phase-space volume. Macroscopic objects have persistent traits because most microstate trajectories for those objects stay in the same approximate region of state space.
So the second question is about the ontology of macrostate causation. I say it is fundamentally statistical. Cause and effect in elemental form only operates locally in the microstate, between and within fundamental entities, whatever they are. Macrostate tendencies are like thermodynamic laws or Zipf’s law: they are really statements about the statistics of very large and complex chains of exact microscopic causal relations.
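A toy sketch of what I mean (the standard Ehrenfest-style urn picture, nothing brain-specific): macrostate “laws” as statistics over exact microstate moves.

```python
# Hypothetical toy model: microstate = which side of a box each particle is on;
# macrostate = how many particles are on the left. Starting from an extreme
# macrostate, almost every sequence of microscopic moves drifts toward the
# macrostate containing the most microstates. That drift is the "macrostate law".

import random

N = 1000        # number of particles
left = N        # start with all particles on the left: a low-entropy macrostate

for step in range(5001):
    # pick a particle uniformly at random and move it to the other side
    if random.random() < left / N:
        left -= 1           # the chosen particle was on the left
    else:
        left += 1           # the chosen particle was on the right
    if step % 1000 == 0:
        print(f"step {step:5d}: {left} particles on the left")

# The count reliably relaxes toward ~N/2, not because any law acts on the
# macrostate directly, but because most microscopic moves point that way.
```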
The usual materialist idea of consciousness is that it is also just a macrostate phenomenon and process. But as I explained, the macrostate definition is a little fuzzy, and this runs against the hypothesis that consciousness exists objectively. I will add that because these “monads” or “tensor factors” containing consciousness are necessarily very complex, there should be a sort of internal statistical dynamics. The laws of folk psychology might just be statistical mechanics of exact conscious states. But it is conceptually incoherent to say that consciousness is purely a high-level description if you think it exists objectively; it is the same fallacy as when some Buddhists say “everything only exists in the mind”, which then implies that the mind only exists in the mind. A “high-level description” is necessarily something which is partly conceptual in nature, and not wholly objectively independent in its existence, and this means it is partly mind-dependent.
The first question is a question about how a theory like this would develop in detail. I can’t say ahead of time. The physical premise is, the world is a web of tensor factors of various sizes, mostly small but a few of them big; and consciousness inhabits one of these big factors which exists during the lifetime of a brain. If a theory fulfilling the premise develops and makes sense, then I think you would expect any big tensor factor in a living organism, and also in any other physical system, to also correspond to some sort of consciousness. In principle, such a physical theory should itself tell you whether these big factors arise dynamically in a particular physical entity, given a specification of the entity.
Does this answer the final remark about contradiction? Each tensor factor exists completely objectively. The individual tensor factor which is complex enough to have consciousness also exists objectively and has its properties objectively, and such properties include all aspects of its subjectivity. The rest of the brain consists of the small tensor factors (which we would normally call uncorrelated or weakly correlated quantum particles), whose dynamics provide unconscious computation to supplement conscious dynamics of the big tensor factor. I think it is a self-consistent ontology in which consciousness exists objectively, fundamentally, and exactly, and I think we need such an ontology because of the paradox of saying otherwise, “the mind only exists in the mind”.
If a theory fulfilling the premise develops and makes sense, then I think you would expect any big tensor factor in a living organism, and also in any other physical system, to also correspond to some sort of consciousness.
What will make the demarcation line between small and big tensor factors less fuzzy than the macrostate definition?
If we feed the internal states of a classical brain simulation into a quantum box (outputs discarded) containing 10^2 or 10^20 entangled particles/quasi-particles, will that make the simulation conscious? How, in principle, can we determine whether it will or will not?
A “high-level description” is necessarily something which is partly conceptual in nature, and not wholly objectively independent in its existence, and this means it is partly mind-dependent.
The interesting thing is that the mind, as a high-level description of brain workings, is mind-dependent on that same mind (it’s not a paradox, but a recursion), not on some other mind. Different observers will agree on the content of the high-level model of brain workings present in the same brain, as that model is unambiguously determined by the structure of the brain. Thus the mind is subjective in the sense that it is a conceptual description of brain workings (including concepts of self, mind and so on), and the mind is objective in the sense that its content can be reconstructed from the structure of the brain.
I think we need such an ontology because of the paradox of saying otherwise, “the mind only exists in the mind”.
It isn’t a paradox, really.
I can’t help imagining the acceptance procedure for works on the philosophy of mind: “Please, show your tensor factor. … Zombies and simulations are not allowed. Next.”
What will make the demarcation line between small and big tensor factors less fuzzy than the macrostate definition?
The difference is between conscious and not conscious. This will translate mathematically into the presence or absence of some particular structure in the “tensor factor”. I can’t tell you what structure, because I don’t have the theory, of course. I’m just sketching how a theory of this kind might work. But the difference between small and big is the number of internal degrees of freedom. It is reasonable to suppose that among the objects containing the consciousness structure, there is a nontrivial lower bound on the number of degrees of freedom. Here is where we can draw a line between small and big, since the small tensor factors by definition can’t contain the special structure and so truly cannot be conscious. However, being above the threshold would be necessary but not sufficient for the presence of consciousness.
How, in principle, can we determine whether [something] will or will not [be conscious]?
If you have a completed theory of consciousness, then you answer this question just as you would answer any other empirical question in a domain where you have a well-tested theory: You evaluate the data using the theory. If the theory tells you all the tensor factors in the box are below the magic threshold, there’s definitely no consciousness there. If there might be some big tensor factors present, it will be more complicated, but it will still be standard reasoning.
If you are still developing the theory, you should focus just on the examples which will help you finish it, e.g. Roko’s example of general anesthesia. That might be an important clue to how biology, phenomenology, and physical reality go together. Eventually you have a total theory and then you can apply it to other organisms, artificial quantum systems like in your thought experiment, and so on.
Different observers will agree on the content of the high-level model of brain workings present in the same brain, as that model is unambiguously determined by the structure of the brain.
Any causal model using macrostates leaves out some micro information. For any complex physical system, there is a hierarchy of increasingly coarse-grained macrostate models. At the bottom of the hierarchy is exact physical fact—one model state for each exact physical microstate. At the top of the hierarchy is a trivial model with no dynamics—the same macrostate for all possible microstates. In between are many possible coarse-grainings, in which microstates are combined into macrostates. (A macrostate is therefore a region in the microscopic state space.)
So there is no single macrostate model of the brain determined by its structure. There is always a choice of which coarse-graining to use. Maybe now you can see the problem: if conscious states are computational macrostates, then they are not objectively grounded, because every macrostate exists in the context of a particular coarse-graining, and other ones are always possible.
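To make the non-uniqueness concrete (a deliberately trivial toy, with integers standing in for exact physical states):

```python
# Toy illustration: one and the same set of microstates admits many coarse-grainings,
# and each coarse-graining defines a different (equally legitimate) macrostate model.

microstates = range(8)   # stand-ins for exact physical states

# Coarse-graining A: group microstates by parity.
graining_A = {m: ("even" if m % 2 == 0 else "odd") for m in microstates}

# Coarse-graining B: group the very same microstates by magnitude.
graining_B = {m: ("low" if m < 4 else "high") for m in microstates}

m = 3
print("exact microstate:   ", m)
print("macrostate under A: ", graining_A[m])   # "odd"
print("macrostate under B: ", graining_B[m])   # "low"

# Nothing in the microstate itself singles out A over B; the choice of
# coarse-graining is supplied from outside the physics.
```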
So there is no single macrostate model of the brain determined by its structure. There is always a choice of which coarse-graining to use. Maybe now you can see the problem: if conscious states are computational macrostates, then they are not objectively grounded, because every macrostate exists in the context of a particular coarse-graining, and other ones are always possible.
Here’s the point of divergence. There is a peculiar coarse-graining. Specifically, it is the conceptual self-model that consciousness uses to operate on (as I wrote earlier, it uses concepts of self, mind, desire, intention, emotion, memory, feeling, etc. When I think “I want to know more”, my consciousness uses the concepts of that model to (crudely) represent the actual state of (part of) the brain, including the parts which represent the model itself). Thus, to find consciousness in a system it is necessary to find a coarse-graining such that the corresponding macrostate of the system is isomorphic to the physical state of a part of the system (it is not sufficient, however). Or, in the map-territory analogy, to find a part of the territory that is isomorphic to a (crude) map of the territory.
Edit: Well, it seems that a lower bound on the information content of the map is necessary for this approach too. However, this approach doesn’t require adding fundamental ontological concepts.
Edit: The isomorphism condition is too limiting; it would require another level of coarse-graining to be true. I’ll try to come up with another definition.
But since we know that most neural computation is not conscious, I find it remarkably natural to suppose that it’s entanglement that makes the difference.
This really sounds to me like a perfect fit for Robin’s grandparent post. If, say, nonlocality is important, why achieve it through quantum means?
This is meant to be ontological nonlocality and not just causal coordination of activities throughout a spatial region. That is, we would be talking about entities which do not reduce to a sum of spatially localized parts possessing localized (encapsulated) states. An entangled EPR pair is a paradigm example of such ontological nonlocality, if you think the global quantum state is the actual state, because the wavefunction cannot be factorized into a tensor product of quantum states possessed by the individual particles in the pair. You are left with the impression of a single entity which interfaces with the rest of the universe in two places. (There are other, more esoteric indications that reality has ontological nonlocality.)
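The standard textbook way to check this (a sketch, nothing brain-specific): a two-qubit pure state factorizes into a tensor product of one-qubit states exactly when its Schmidt rank is 1, i.e. when its 2x2 coefficient matrix has only one nonzero singular value.

```python
# Sketch: test whether a two-qubit state |psi> = sum_ij c[i,j] |i>|j> factorizes,
# by counting the nonzero singular values of the coefficient matrix c (Schmidt rank).

import numpy as np

def schmidt_rank(state, tol=1e-12):
    """Schmidt rank of a two-qubit state given as a length-4 amplitude vector."""
    coeffs = np.asarray(state, dtype=complex).reshape(2, 2)
    singular_values = np.linalg.svd(coeffs, compute_uv=False)
    return int(np.sum(singular_values > tol))

product_state = np.kron([1, 0], [1, 1]) / np.sqrt(2)     # |0> (|0>+|1>)/sqrt(2)
bell_state    = np.array([1, 0, 0, 1]) / np.sqrt(2)      # (|00>+|11>)/sqrt(2): EPR pair

print("product state Schmidt rank:", schmidt_rank(product_state))  # 1 -> factorizes
print("EPR/Bell state Schmidt rank:", schmidt_rank(bell_state))    # 2 -> does not factorize
```

Rank 2 means the global state cannot be written as “particle A’s state” times “particle B’s state”, which is the sense of “one entity, two interfaces” I have in mind.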
These complex unities glued together by quantum entanglement are of interest (to me) as a way to obtain physical entities which are complex and yet have objective boundaries; see my comment to RobinZ.
Not only does this quantum brain idea violate known experimental and theoretical facts about the brain, it also violates what we know about evolution. Why would evolution design a system that maintains coherence during sleep and unconsciousness, if this has no effect on inclusive genetic fitness?
(Mitchell Porter thinks that his “copy” would behave essentially identically to what he would have done had he not “lost his selfhood”, so in terms of reproductive fitness, there’s no difference)
Though I agree that this quantum brain idea is against all evidence, I don’t think the evolutionary criticism applies. Not every adaptation has a direct effect on inclusive genetic fitness; some are just side effects of other adaptations.
Well, it might be that maintaining the system rather than restarting it when full consciousness resumes is an easier path to the adaptation, or has some advantage we don’t understand.
Of course, if the restarted “copy” would seem externally and internally as a continuation, the natural question is why bother positing such a monad in the first place?
If you want something that flies, the simplest way is for it to have wings that still exist even when it’s on the ground. We don’t actually know (big understatement there) the relative difficulty of evolving a “persistent quantum mind” versus a “transient quantum mind” versus a “wholly classical mind”.
There may also be an anthropic aspect. If consciousness can only exist in a quantum ontological unit (e.g. the irreducible tensor factors I mention here), then you cannot find yourself to be an evolved intelligence based solely on classical computation employing many such entities. Such beings might exist in the universe, but by hypothesis there would be nobody home. This isn’t relevant to persistent vs transient, but it’s relevant for quantum vs classical.
You seem to jump to the conclusion that, in the favorable case (that consciousness only exists in quantum computers AND quantum coherence is the fundamental basis of persistent identity), the coherence timescale would obviously be your whole lifetime, surviving hypothermia, anesthetics, etc., but that as soon as you are cryopreserved, it decoheres, so that the physical basis of persistent identity corresponds perfectly to the culturally accepted notion.
But that would be awfully convenient! Why not assign most of your probability to the proposition that evolution accidentally designed a quantum computer with a decoherence timescale of one second? ten seconds? 100 seconds? 1000 seconds? 10,000 seconds? Why not postulate that unconsciousness or sleep destroys the coherence? After all, we know that classical computation is perfectly adequate for evolutionarily adaptive tasks (because we can do them on a classical computer).
This is, first of all, an exercise in taking appearances (“phenomenology”) seriously. Consciousness comes in intervals with internal continuity, one often comes to waking consciousness out of a dream (suggesting that the same stream of consciousness still existed during sleep, but that with mental and physical relaxation and the dimming of the external senses, it was dominated by fantasy and spontaneous imagery), and one should consider the phenomenon of memory to at least be consistent with the idea that there is persistent existence, not just throughout one interval of waking consciousness, but throughout the whole biological lifetime.
So if you’re going to think about yourself as physically actual and as actually persistent, you should think of yourself as existing at least for the duration of the current period of waking consciousness, and you have every reason to think that you are the same “you” who had those experiences in earlier periods that you can remember. The idea that you are flickering in and out of existence during a single day or during a lifetime is somewhat at odds with the phenomenological perspective.
Cryopreservation is far more disruptive than anything which happens during a biological lifetime. Cells full of liquid water freeze over and grow into ice crystals which burst their membranes. Metabolism ceases entirely. Some, maybe even most models of persistent biological quantum coherence have it depending on a metabolically maintained throughput of energy. To survive the freezing transition, it seems like the “bio-qubits” would have to exist in molecular capsules that weren’t penetrated as the ice formed.
But if you’re going to argue phenomenologically, then any form of reanimation that restores the person’s memory in a continuous way will seem (from the inside) to be continuous.
Can I ask: have you ever been under a general anesthetic?
It is a philosophically significant life event, because what you experience is just so incredibly at odds with what actually happens. You lie there waiting for the anesthetic to take effect, and then the next instant, your eyes open and find your arm/leg/whatever in plaster, and a glance at the clock suggests that 3 hours have passed.
I’d personally want to be cryopreserved before I fully lost my marbles so that I can experience that kind of time travel. Imagine closing your eyes, then reopening them and it’s the 23rd century? How cool would that be?
I must have been, at some point, but it was a long time ago and I don’t remember.
Clearly there are situations where extra facts would lead you to conclude that the impression of continuity is an illusion. If you woke up as Sherlock Holmes, remembering your struggle with Moriarty as you fell off a cliff moments before, and were then shown convincingly that Holmes was a fictional character from centuries before, and you were just an artificial person provided with false memories in his image, you would have to conclude that in this case, you had erred somehow in judging reality on the basis of subjective appearances.
It seems unlikely that reliable reconstruction of cryonics patients could occur and yet the problem of consciousness not yet be figured out. Reliable reconstruction would require such a profound knowledge of brain structure and function, that there wouldn’t be room for continuing uncertainty about quantum effects in the brain. By then you would know it was there or not there, so regardless of how the revivee felt, the people(?) doing the reviving should already know the answers regarding identity and the nature of personal existence.
(I add the qualification reliable reconstruction, because there might well be a period in which it’s possible to experiment with reconstructive protocols while not really knowing what you’re doing. Consider the idea of freezing a C. elegans and then simulating it on the basis of micrometer sections. We could just about do this today, except that we would mostly be guessing how to map the preserved ultrastructure to computational elements of a simulation. One would prefer the revival of human beings not to proceed via similar trial and error.)
In the present, the question is whether subjectively continuous but temporally discontinuous experience, such as you report, is evidence for the self only having an intermittent physical existence. Well, the experience is consistent with the idea that you really did cease to exist during those 3 hours, but it is also consistent with the idea that you existed but your time sense shut down along with your usual senses, or that it stagnated in the absence of external and internal input.
that there wouldn’t be room for continuing uncertainty about quantum effects in the brain.
There is no uncertainty. A large amount of evidence points to the lack of quantum effects in the brain. Furthermore, there was never really any evidence in favor of quantum effects, and certainly none has been produced.
I think that most of the problems of consciousness have already been figured out; Gary Drescher, Dan Dennett, and Drew McDermott have done it. They just don’t yet have overwhelming evidence, so you have to be “light like a leaf blown by the winds of evidence” to see their answer as being correct.
It seems unlikely that reliable reconstruction of cryonics patients could occur and yet the problem of consciousness not yet be figured out.
The remaining unsolved problems in this area seem to be related to the philosophy of computations-in-general, such as “what counts as implementing a computation” or anthropic/big world problems.
The remaining unsolved problems in this area seem to be related to the philosophy of computations-in-general, such as “what counts as implementing a computation” or anthropic/big world problems.
Which is to say, decision theory for algorithms, understanding of how an algorithm controls mathematical structures, and how intuitions about the real world and subjective anticipation map to that formal setting.
Well, that’s one possible solution. But not without profound problems, for example the problem of lack of a canonical measure over “all mathematical structures” (even the lack of a clean definition of what “all structures” means).
But it certainly solves some problems, and has the sort of “reductionistic” feel to it that indicates it is likely to be true.
Well, that’s one possible solution. But not without profound problems, for example the problem of lack of a canonical measure over “all mathematical structures” (even the lack of a clean definition of what “all structures” means).
Logics allow us to work with classes of mathematical structures (not necessarily individual structures), which seems to be a good enough notion of working with “all mathematical structures”. A “measure” (if, indeed, it’s a useful concept) is an aspect of preference, and preferences are inherently non-canonical, though I hope to find a relatively “canonical” procedure for defining (“extracting”) preference in terms of an agent-program.
Any given concept is what it is. Truth about any given concept is not a matter of preference.
But in cases where there is no “canonical choice of a concept”, it is a matter of choice which concept to consider. If you want a concept with certain properties, these properties already define a concept of their own, and might determine the mathematical structure that satisfies them, or might leave some freedom in choosing one you prefer for the task.
In the case of the quantum mechanical measure, you want your concept of measure to produce “probabilities” that conform with the concept of subjective anticipation, which is fairly regular and thus creates an illusion of “universality”, because the preferences of most minds like ours (evolved like ours, in our physics) have subjective anticipation as a natural category, a pattern that has significant explanatory (and hence optimization) power. But subjective anticipation is still not a universally interesting concept: one can consider a mind that looks at your theories about it, says “so what?”, and goes on optimizing something else.
The reason I spoke about Mangled Worlds MWI is that the Integral[ ] measure is not dependent upon subjective anticipation.
This is because in mangled worlds QM there is a physically meaningful sense in which some things cease to exist, namely that things (people, computers, any complex or macroscopic phenomenon) get “Mangled” if their Integral[ ] measure gets too low.
That preference is a cause of a given choice doesn’t prohibit physics from also being a cause. There is rarely an ultimate source (a unique dependence). You value thinking about what is real (what accords with physical laws) because you evolved to value real things. There are also concepts not about our physical laws which you value, because evolution isn’t a perfect designer.
This is also a free will argument. I say that there is a decision to be made about which concepts to consider, and you say that the decision is already made by the laws of physics. It’s easier to see how you do have free will for more trivial choices. It’s more difficult to consider acting and thinking as if you live in different physics. In both cases, the counterfactual is physically impossible: you couldn’t have made a different choice. Your thoughts accord with the laws of physics, are caused by physics, are embedded within physics. And in both cases, what is actually true (what action you’ll perform, and what theories you’ll think about) is determined by your decision.
As an agent, you shouldn’t (terminally) care about what laws of physics say, only about what your preference says, so this cause is always more relevant, although currently less accessible to reflection.
Yes, I get that free will is compatible with deterministic physics. That is not the issue. I don’t quite see what about my reply made you think that this was relevant?
The point is that in Mangled Worlds QM there is such a thing as objective probability, even though the world is (relatively) big, and it basically turns out to be defined by just the number of instances of something rather than something else.
I think Vladimir is essentially saying that caring about that objective property of that particular mathematical structure is still your “arbitrary”, subjectively objective preference. I don’t think I understand where the free will argument comes in either.
Sure, it is arbitrary to care about what actually exists and what will actually happen, as opposed to (for example) running your life around trying to optimize the state of Tolkien’s Middle Earth.
But I think that what Big Worlds calls into question is whether there is such a thing as “what actually exists” and “what will actually happen”. That’s the problem. I agree that evolution could (like it did in the case of subjective anticipation and MWI QM) have played a really cruel trick on us.
But I brought up Mangled Worlds because it seems that Mangled worlds is a case where there is such a thing as “what will actually happen” and “what actually exists”, even though the world is relatively big (though mangled worlds is importantly different to MWI with no mangler or world-eater)
The important difference between MWI and Mangled-MWI is that if you say “ah, measure over a big world is part of preference, and my preference is for a ||Psi>|^10 measure”, then you will very quickly end up mangled, i.e. there will be no branches of the wavefunction where your decision algorithm interacts with reality in the intended way for an extended period of time.
The important difference between MWI and Mangled-MWI is that if you say “ah, measure over a big world is part of preference, and my preference is for a ||Psi>|^10 measure”, then you will very quickly end up mangled, i.e. there will be no branches of the wavefunction where your decision algorithm interacts with reality in the intended way for an extended period of time.
So what? Not everyone cares about what happens in this world. Plus, you don’t have to exist in this world to optimize it (though it helps).
If we take as an assumption that Mangled-worlds MWI is the only kind of “Bigness” that the world has, then there is nothing else to care about apart from what happens in one of the branches, and since nothing exists apart from those branches, you have to exist in at least one of them to actually do anything.
Though, of course, acausally speaking, a slim probability that some other world exists is enough for people to (perhaps?) take notice of it.
EDIT: One way to try to salvage objective reality from Big Worlds would be to drive a wedge between “other worlds that we have actual evidence for” (such as MWI) and “other worlds that are in principle incapable of providing positive evidence of their existence” (such as Tegmark’s MUH), and then show that all of the evidentially implied big worlds are not problematic for objectivity, as seems to be the case for Mangled-MWI. However, this would only work if one were willing to part with Kolmogorov/Bayesian reasoning, and say that certain perfectly low-complexity hypotheses are thrown out for being “too big” and “too hypothetical”.
If we take as an assumption that Mangled-worlds MWI is the only kind of “Bigness” that the world has, then there is nothing else to care about apart from what happens in one of the branches, and since nothing exists apart from those branches, you have to exist in at least one of them to actually do anything.
I’m fairly sure at this point it’s conceptual confusion to say that. You can care about mathematical structures, and control mathematical structures, that have nothing to do with the real world. These mathematical structures don’t have to be “worlds” in any usual sense, for example they don’t have to be processes (have time), and they don’t have to contain you in them in any form.
One of the next iterations of ambient decision theory should make it clearer, though the current version should give a hint (but probably isn’t worth the bother in the current form, considering it has known philosophical/mathematical bugs—but I’m studying, improving my mathematical sanity).
Perhaps the distinction I’m interested in is the difference between control and function-ness.
There is an abstract mathematical function, say, the parity function of the number of open eyes I have. It is a function of me, but I wouldn’t say that I am controlling it in the conventional sense, because it is abstract.
I guess if there were an actual light that lit up as a function of the parity, then I would feel comfortable with “control”, and I would say that I am controlling the light.
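Just to pin down the distinction I mean, a trivial sketch (obviously my own toy, not anyone’s theory):

```python
# Toy illustration: the parity of my open eyes is an abstract function of my state;
# the light is a further physical system whose state happens to track that function.

def parity_of_open_eyes(open_eyes: int) -> int:
    """The abstract function: 0 if an even number of eyes is open, 1 if odd."""
    return open_eyes % 2

class Light:
    """A physical indicator wired up to follow the parity."""
    def __init__(self):
        self.on = False
    def update(self, open_eyes: int) -> None:
        self.on = bool(parity_of_open_eyes(open_eyes))

light = Light()
for eyes in (2, 1, 0):
    light.update(eyes)
    print(f"{eyes} eye(s) open -> parity {parity_of_open_eyes(eyes)}, light on: {light.on}")

# The parity "changes" whenever I blink, whether or not anything is wired to it;
# the intuitive sense of "control" only shows up once something like the light exists.
```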
The role of the decision-theoretical notion of control is to present the consequences of your possible decisions for evaluation by preference. Whatever fills that role counts; and if one can value mathematical abstractions, then the notion of control has to describe how to control abstractions. Conveniently, the real world can be seen as just another mathematical structure (or class of structures).
I would say that the conventional usage of the word “control” requires the thing-under-control to be real, but sure, one can use the words how one pleases.
It worries me somewhat that we seem too concerned with what word-set we use here; this indicates that the degree to which we value performing certain actions depends on whether we frame it as
“controlling something that’s no more-or-less real than the laptop in front of you”
versus
“this nonexistent abstraction happens to be a function of you; so what? There are infinitely many abstract functions of you”
This complication is created by the same old ontology problem: if preference talks about the real world, power to you (though that would make physics relevant, which is no good either), but if it doesn’t, we have to deal with that. And we can’t assume a priori what preference talks about.
My previous position (and, it seems, a long-held position of Wei Dai’s) was to assume that preference can be expressed as talking about the behavior of programs (as in UDT), since ultimately it has to determine the behavior of the agent’s program, and seeing the environment as programs fits the pattern and makes it possible to express preferences that hold arbitrary agent strategies as the best option.
Now, since ambient decision theory (ADT) suggests treating the notions of consequences of the agent’s decision as logical theories, it became more natural to see the environment as models of those theories, and so as structures more general than programs. But more importantly, if, as logical theories, the preferred concepts do not refer to programs (even though they can directly influence only the behavior of the agent’s program), there is no easy way of converting them into preference-about-programs equivalents. Getting the info out of those theories may well be undecidable: something to work on during decision-making, not at the preliminary stage of preference-definition.
Also, trying to have preferences about abstractions, especially infinite ones, seems bound to end in tears, i.e. a complete mess of an ontology problem. You’d import all the problems of philosophy of mathematics in and heap them on top of the problems of ethics. Not to mention Godelian problems, large cardinal axiom problems, etc. Just the thought of trying to sort all that out fills me with dread.
Also, trying to have preferences about abstractions, especially infinite ones, seems bound to end in tears, i.e. a complete mess of an ontology problem. You’d import all the problems of philosophy of mathematics in and heap them on top of the problems of ethics. Not to mention Godelian problems, large cardinal axiom problems, etc. Just the thought of trying to sort all that out fills me with dread.
Scary, and I haven’t even finished converting myself into a pure mathematician yet. :-) I was hoping to avoid these issues by somehow limiting preference to programs, but investigation led me back to the harder problem statement. Ultimately, a simpler understanding has to be found, one that sidesteps the monstrosity of set-theoretical infrastructure and the diversity of logics. At this point, though, I expect to benefit from the conceptual clarity brought by standard mathematical tools.
This complication is created by the same old ontology problem: if preference talks about the real world, power to you, but if it doesn’t, we have to deal with that.
I think the problem might be that the distinction between the real world and the hypothetical world might not be logically defensible, in which case we have an ontology problem of awesome proportions on our hands.
I think the problem might be that the distinction between the real world and the hypothetical world might not be logically defensible, in which case we have an ontology problem of awesome proportions on our hands.
I believe as much: for the foundational study of decision-making, the notion of a “real world” is useless, which is why we have to deal with “all mathematical structures”, somehow accessed through more manageable concepts (for which the best fit is logic, though that’s uncomfortable for many reasons).
(I’d still expect that it’s possible to extract some fuzzy outline of the concept of the “real world”, like it’s possible to vaguely define “chairs” or “anger”.)
(I’d still expect that it’s possible to extract some fuzzy outline of the concept of the “real world”, like it’s possible to vaguely define “chairs” or “anger”.)
Maybe. Though my intuition seems to point to a more fundamental role for “reality” in decisionmaking.
Evolution designed our primitive notions of decisionmaking in a context where there was a very clear and unique reality; why should there even be a clear and unique generalization to the new contexts, i.e. the set of all mathematical structures?
I predict that we’ll end up with a plethora of different kinds of decision theory, which lead to a whole random assortment of different practical recommendations, and the very finest of framing differences could push a person to act in completely different ways, with one exception being a decision theory that cashes out the notion of reality, which will be relatively unique because of its relative similarity to our pretheoretic notions.
Evolution designed our primitive notions of decisionmaking in a context where there was a very clear and unique reality; why should there even be a clear and unique generalization to the new contexts, i.e. the set of all mathematical structures?
Generalization comes from the expressive power of a mind: you can think about all sorts of concepts besides the real world. That evolution would fail to delineate the real world perfectly in this concept space seems obvious: all sorts of good-fit approximations would do for its purposes, but when we are talking about FAI, we have to deal with what was actually chosen, not what “was supposed to be chosen” by evolution. This argument applies to other evolutionary drives more easily.
I think you misunderstood me: I meant why should there even be a clear and unique generalization of human goals and decisionmaking to the case of preferences over the set of mathematical possibilities.
I did not mean why should there even be a clear and unique generalization of the human concept of reality—for the time being I was assuming that there wouldn’t be one.
I think that this is a different sense of the word “control” than controlling physical things.
UDT is about control in the same sense. See this comment for a point in that direction (and my last comment on the “Ambient decision theory go-through” thread on the SIAI DT list). I believe this to be a conceptual clarification of the usual notion of control, having the usual notion (“explicit control”) as a special case (almost, modulo explicit dependence bias—it allows you to get better results than if you only consider the explicit dependence as stated).
they don’t have to contain you in them in any form.
Can you elaborate on this?
See “ambient dependence” on DT list, but the current notion (involving mathematical structures more general than programs) is not written up. I believe “logical control”, as used by Wei/Eliezer, refers to basically the same idea. In a two-player game, you can control the other player’s decisions despite not literally sitting inside their head.
I’m not on that list. Do you know who the list owner is?
Just as a note, my current gut feeling is that it is perfectly plausible that the right way to go is to do something like UDT but with a notion of what worlds are real (as in Mangled worlds QM).
However, I shall read your theory of controlling that which is unreal and see what I make of it!
Sure, it is arbitrary to care about what actually exists and what will actually happen, as opposed to (for example) running your life around trying to optimize the state of Tolkien’s Middle Earth.
But you do care about optimizing Middle Earth (let it be Middle Earth with Halting Oracles to be sure), to some tiny extent, even though it doesn’t exist at all.
Free will is about dependencies: one has to say that the outcome depends on your decision. At the same time, the outcome depends on other things. Here, considering the quantum mechanical measure depends on what’s true about the world, but at the same time it depends on what you prefer to consider. Thus, saying that there are objective facts dictated by the laws of physics is analogous to saying that all your decisions are already determined by the physical laws.
My argument was that, as in the case of the naive free will argument, here too we can (indeed, should, once we get to the point of being able to tell the difference) see physical laws as (subjectively) chosen. Of course, as you can’t change your own preference, you can’t change the implied physical laws seen as an aspect of that preference (to make them nicer for some purpose, say).
Yes, I get that free will is compatible with deterministic physics. That is not the issue. I don’t quite see what about my reply made you think that this was relevant?
It is relevant, but I ran out of expectation to communicate this quickly, so let’s all hope I figure out and write up in detail my philosophical framework for decision theory sometime soon.
It seems unlikely that reliable reconstruction of cryonics patients could occur and yet the problem of consciousness not yet be figured out.
I don’t agree with this claim. One would simply need an understanding of what brain systems are necessary for consciousness and how to restore those systems to a close approximation of their pre-existing state (presumably using nanotech). This doesn’t take much in the way of actually understanding how those systems function. Once one had well-developed nanotech one could learn this sort of thing simply by trial and error on animals (seeing what was necessary for survival, and what was necessary for training to stay intact) and then move on to progressively larger-brained creatures. This doesn’t require a deep understanding of intelligence or consciousness, simply an understanding of what parts of the brain are being used and how to restore them.
We don’t actually know (big understatement there) the relative difficulty of evolving a “persistent quantum mind” versus a “transient quantum mind” versus a “wholly classical mind”.
Actually, we do. We’ve been trying for decades to build viable quantum computers, and it turns out to be excruciatingly hard.
So having set the stage—and apologies to anyone tired of my screeds on these subjects—now we can turn to cryonics. I said to Roko
to which he responded
I posit that, in terms of current physics, the locus of consciousness is some mesoscopic quantum-coherent subsystem of the brain, whose coherence persists even during unconsciousness (which is just a change of its state) but which would not last through the cryonic freezing of the brain. If this persistent biological quantum coherence exists, it will exist because of, and not in spite of, metabolic activity. When that ceases, something must happen to the “monad” (which is just another name for something like “big irreducible tensor factor in the brain’s wavefunction”) - it comes apart into simpler monads, it sheds degrees of freedom until it becomes just another couple of correlated electrons, I don’t have a fixed idea about it. But this is what death is, in the monadic “theory”. If the frozen brain is restored to life, and a new conscious condensate (or whatever) forms, that will be a new “big tensor factor”, a new “monad”, and a new self. That is the idea.
You could accept my proposed metaphysics for the sake of argument and still say, but can’t you identify with the successor monad? It will have your memories, and so forth. In other words, this ontology of monadic minds should still allow for something like a copy. I don’t really have a fixed opinion about this, largely because how the conscious monad accesses and experiences its memories and identity remains completely untheorized by me. The existence of a monad as a persistent “substance” suggests the possibility that memories in a monad might be somehow internal to it, rather than externally supplied data which pops into its field of consciousness when appropriate. This in turn suggests that a lot of what is written, in futurist speculation about digital minds, transferrable memories, and so forth, would not apply. You might be able to transfer unconscious dispositions but not a certain type of authentic conscious memory; it might be that the only way in which the latter could be induced in a monad would be for it, that particular monad, to “personally” undergo the experience in question. Or, it might really be the case that all forms of memory, knowledge, perception and so forth are externally based and externally induced, so that my recollection of what happened this morning is not ontologically any different from the same “recollection” occurring in a newly created copy which never actually had the experience.
Again, I apologize somewhat for going on at such length with these speculations. But I do think that the philosophies of both mind and matter which are the consensus here—I’m thinking of a sort of blithe computationalism with respect to consciousness, and the splitting multiverse of MWI as a theory of physics—are very likely to be partly or even completely false, and this has to have implications for topics like cryonics, AI, various exotic ethical doctrines based on a future-centric utilitarianism, and so on.
Why do people keep trying to posit quantum as the answer to this problem when it has been so soundly refuted?
My current leading hypotheses:
“Quantum mechanics” feels like a mysterious-enough big rock to crack the equally mysterious phenomenon of “consciousness”.
Free will feels like it requires indeterminism, and quantum mechanics is often described as indeterministic.
There is a long history of diverse speculation by scientists about quantum mechanics and the mind. There was an early phase when biology hardly figured and it was often a type of dualism inspired by Copenhagen-interpretation emphasis on “observers”. But these days the emphasis is very much on applying quantum mechanics to specific neuromolecular structures. There are papers about superpositions of molecular conformation, transient quantum coherence in ionic complexes, phonons in filamentary structures, and so on. To me, this work still doesn’t look good enough, but it’s a necessary transitional step, in which ambitious simple models of elementary quantum biophysics are being proposed. The field certainly needs a regular dose of quantitative skepticism such as Tegmark provided. But entanglement in condensed-matter systems is a very subtle thing. There are many situations in which long-range quantum order forms despite local disorder. Like it or not, you can’t debunk the idea of a quantum brain in a few pages because we assuredly have not thought of all the ways in which it might work.
As for the philosophical rationale of the thing, that varies a lot. But since we know that most neural computation is not conscious, I find it remarkably natural to suppose that it’s entanglement that makes the difference. Any realistic hypothesis is not going to be fuzzy and just say “the quantum is the answer”. It will be more like, special long-lived clathrins found in the porosome complex of astrocytes associated with glutamate-receptor hotspots in neocortical layer V share quantum excitons in a topologically protected way, forming a giant multifractal cluster state which nonlocally regulates glutamatergic excitation in the cortex—etc. And we’re just not at that level yet.
What evidence is there that would promote any given quantum-mechanical theory of consciousness to attention?
I mean that sincerely—there ought to be some reason that, say, you have to come up with your monad theory, and I quite frankly don’t know of any that would impel me to do so.
How I got here:
Starting point: consciousness is real. This sequence of conscious experiences is part of reality.
Next: The physical world doesn’t look like that. (That consciousness is a problem for atomism has been known for more than 2000 years.)
So let us suppose that this is how it feels to be some physical thing “from the inside”. Here we face a new problem if we suppose that orthodox computational neuroscience is the whole story. There must then be a mapping from various physical states (e.g. arrangements of elementary particles in space, forming a brain) to the corresponding conscious states. But mappings from physics to causal-functional roles are fuzzy in two ways. We don’t have, and don’t need, an exact criterion as to whether any particular elementary particle is part of the “thing” whose state we are characterizing functionally. Similarly, we don’t have, and don’t need, a dividing line in the space of all possible physical configurations providing an exact demarcation between one computational state and another.
All this is just a way of saying that functional and computational properties are not entirely objective from a physical standpoint. There are always borderline cases but we don’t really care about not having an exact border, because most of the time the components of a functioning computational device are in physical states which are obviously well in correspondence with the abstract computational states they represent. A device whose components are constantly testing the boundaries of the mapping is a device in danger of deviating from its function.
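A toy of the kind of many-to-one, fuzzy-at-the-edges mapping being described might help; everything below (the voltages, the thresholds, the guard band) is invented purely for illustration and has nothing to do with neuroscience.

```python
# Toy many-to-one mapping from physical microstates (a continuous voltage)
# to computational states. The thresholds are conventions we choose, not
# facts about the physics, and the borderline band is simply left undefined.

def logical_state(voltage_v: float) -> str:
    if voltage_v < 0.8:
        return "0"
    if voltage_v > 2.0:
        return "1"
    return "undefined"   # borderline microstates: no exact answer, by design

if __name__ == "__main__":
    for v in (0.05, 0.79, 1.4, 2.5):
        print(f"{v:4.2f} V -> {logical_state(v)}")
    # Infinitely many distinct microstates map to the same computational
    # state, and the border between states is a matter of convention.
```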
However, when it comes to consciousness, a fuzzy-but-good-enough mapping like this is not good enough, because consciousness (according to our starting point) is an entirely real and “objective” element of reality. It is what it is “exactly”, and therefore its counterpart in physical ontology must also have an exact characterization, both with respect to physical parts and with respect to physical states. A coarse-grained many-to-one mapping which is irresolvably fuzzy at the edges is not an option.
But this is a very hard thing to achieve if we persist in thinking of the physical world as a sort of hurricane of trillions of particles in space, with all that matters cognitively being certain mass movements of particles and things made out of them. Fortunately, as it turns out, quantum mechanics suggests the possibility of a rather different physical ontology, and neuroscience still has plenty of room for quantum effects to be cognitively relevant. Thus one is led to consider quantum ontologies in which there is something which can be the exact physical counterpart of consciousness, and theories of mind in which quantum effects are part of the brain’s machinery.
I think you grant excessive reliability to your impressions of consciousness. A philosophical argument along the lines proposed is an awfully weak thread to hang a theory on.
Doesn’t it mean that consciousness is an epiphenomenon? All quantum algorithms can be expressed as equivalent classical algorithms, so we could have an unconscious computer which is functionally equivalent to a human brain.
ETA: I can’t see any reason to associate consciousness with some particular kind of physical object/process, as it undermines the functional significance of consciousness as the brain’s high-level coordination, decision-making, and self-representation system.
No, it would just mean that you can have unconscious simulations of consciousness. Think of it like this. We say that the things in the universe which have causal power are “quantum tensor factors”, and consciousness always inhabits a single big tensor factor, but we can simulate it with lots of little ones interacting appropriately. More precisely, consciousness is some sort of structure which is actually present in the big tensor factor, but which is not actually present in any of the small ones. However, its dynamics and interactions can be simulated by the small ones collectively. Also, if you took a small tensor factor and made it individually “big” somehow (evolved it into a big state), it might individually be able to acquire consciousness. But the hypothesis is that consciousness as such is only ever found in one tensor factor, not in sets of them. It’s a slightly abstract conception when so many details are lacking, but it should be possible to understand the idea: the world is made of Xs, an individual X can have property Y, a set of Xs cannot, but a set of Xs can imitate the property.
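As a hedged toy of “a set of Xs can imitate the property” (the state, the basis, and the shot count below are arbitrary choices): a classical program made of many small, unentangled degrees of freedom can track the joint amplitudes of an entangled pair and reproduce its statistics, without the program itself being one big tensor factor.

```python
# Classical simulation of a maximally entangled two-qubit state. The
# simulator reproduces the statistics of the "big" tensor factor while
# itself consisting only of ordinary, separate classical variables.

import numpy as np

rng = np.random.default_rng(0)

# Bell state (|00> + |11>) / sqrt(2), stored as 4 complex amplitudes.
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def sample_joint_outcomes(state: np.ndarray, shots: int) -> dict:
    """Sample joint measurement outcomes in the computational basis."""
    probs = np.abs(state) ** 2
    outcomes = rng.choice(4, size=shots, p=probs)
    labels = ["00", "01", "10", "11"]
    return {labels[k]: int((outcomes == k).sum()) for k in range(4)}

if __name__ == "__main__":
    print(sample_joint_outcomes(bell, shots=10_000))
    # Roughly half "00" and half "11": perfectly correlated outcomes,
    # imitated here by a process containing no entanglement at all.
```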
What would really make consciousness epiphenomenal is if we persisted with property dualism, so we have the Xs, their “physical properties”, and then their correlated “subjective properties”. But the whole point of this exercise is to be able to say that the subjective properties (which we know to exist in ourselves) are the “physical properties” of a “big” X. That way, they can enter directly into cause and effect.
Doesn’t this undermine the entire philosophical basis of your argument, which rests on the experience of consciousness being real? If your system allows such an unconscious classical simulation, then why believe you are one of the actual conscious entities? This seems P-zombie-ish.
It’s like asking, why do you think you exist, when there are books with fictional characters in them? I don’t know exactly what is happening when I confirm by inspection that some reality exists or that I have consciousness. But I don’t see any reason to doubt the reality or efficacy of such epistemic processes, just because there should also be unconscious state machines that can mimic their causal structure.
I understand you. Your definition is: “real consciousness” is a quantum tensor factor belonging to a particular class of quantum tensor factors, because we can find them in human brains, and
we know that at least one human brain is conscious, and
consciousness must be a physical entity in order to participate in a causal chain.
All other quantum tensor factors, and sets of them, are not consciousness by definition.
The questions are:
How do we define that class without fuzziness, when it is not yet known what is not “real consciousness”? Should we include dolphins’ tensor factors, great apes’, and so on?
Is it always necessary for something to exist as a physical entity in order to participate in a causal chain? Does temperature exist as a physical entity? Does the “thermostatousness” of a refrigerator exist as a physical entity?
Of course, temperature and “thermostatousness” are our high-level descriptions of physical systems; they don’t exist in your sense. So it seems that you see a contradiction between the subjectively apparent existence of consciousness and the apparent nonexistence (as a physical entity) of consciousness as a high-level description of brain functions. Don’t you see a flaw in that contradiction?
Causality for statistical or functional properties mostly reduces to generalizations about the behavior of exact microstates. (“Microstate” means physical state completely specified in its microscopic detail. A purely thermodynamic or macroscopic description is a “macrostate”.) The entropy goes up because most microstate trajectories go from the small phase-space volume into the large phase-space volume. Macroscopic objects have persistent traits because most microstate trajectories for those objects stay in the same approximate region of state space.
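A minimal counting example of that statistical picture (the particle number, the two-halves setup, and the sampled values of k are all arbitrary): for N particles distributed over two halves of a box, the macrostate “k particles on the left” contains C(N, k) microstates, and the overwhelming majority of microstates sit near k = N/2.

```python
# Count how many microstates belong to each macrostate "k particles on the
# left" for N particles in a two-halved box. Each microstate is one exact
# assignment of particles to halves; a macrostate is a set of such assignments.

from math import comb

N = 50
total_microstates = 2 ** N

for k in (0, 5, 15, 25):
    fraction = comb(N, k) / total_microstates
    print(f"k = {k:2d}: {comb(N, k):>18d} microstates "
          f"({fraction:.2e} of the total)")

# Nearly all microstates cluster around k = N/2, so a trajectory wandering
# among microstates without prejudice spends nearly all its time in the
# high-entropy macrostate. That is a statement of statistics, not a new law.
```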
So the second question is about the ontology of macrostate causation. I say it is fundamentally statistical. Cause and effect in elemental form only operates locally in the microstate, between and within fundamental entities, whatever they are. Macrostate tendencies are like thermodynamic laws or Zipf’s law: they are really statements about the statistics of very large and complex chains of exact microscopic causal relations.
The usual materialist idea of consciousness is that it is also just a macrostate phenomenon and process. But as I explained, the macrostate definition is a little fuzzy, and this runs against the hypothesis that consciousness exists objectively. I will add that because these “monads” or “tensor factors” containing consciousness are necessarily very complex, there should be a sort of internal statistical dynamics. The laws of folk psychology might just be statistical mechanics of exact conscious states. But it is conceptually incoherent to say that consciousness is purely a high-level description if you think it exists objectively; it is the same fallacy as when some Buddhists say “everything only exists in the mind”, which then implies that the mind only exists in the mind. A “high-level description” is necessarily something which is partly conceptual in nature, and not wholly objectively independent in its existence, and this means it is partly mind-dependent.
The first question is a question about how a theory like this would develop in detail. I can’t say ahead of time. The physical premise is, the world is a web of tensor factors of various sizes, mostly small but a few of them big; and consciousness inhabits one of these big factors which exists during the lifetime of a brain. If a theory fulfilling the premise develops and makes sense, then I think you would expect any big tensor factor in a living organism, and also in any other physical system, to also correspond to some sort of consciousness. In principle, such a physical theory should itself tell you whether these big factors arise dynamically in a particular physical entity, given a specification of the entity.
Does this answer the final remark about contradiction? Each tensor factor exists completely objectively. The individual tensor factor which is complex enough to have consciousness also exists objectively and has its properties objectively, and such properties include all aspects of its subjectivity. The rest of the brain consists of the small tensor factors (which we would normally call uncorrelated or weakly correlated quantum particles), whose dynamics provide unconscious computation to supplement conscious dynamics of the big tensor factor. I think it is a self-consistent ontology in which consciousness exists objectively, fundamentally, and exactly, and I think we need such an ontology because of the paradox of saying otherwise, “the mind only exists in the mind”.
What will make the demarcation line between small and big tensor factors less fuzzy than the macrostate definition? If we feed the internal states of a classical brain simulation into a quantum box (outputs discarded) containing 10^2 or 10^20 entangled particles/quasi-particles, will that make the simulation conscious? How, in principle, can we determine whether it will or will not?
The interesting thing is that the mind, as a high-level description of brain workings, is mind-dependent on that same mind (it’s not a paradox, but a recursion), not on some other mind. Different observers will agree on the content of the high-level model of brain workings present in the same brain, as that model is unambiguously determined by the structure of the brain. Thus the mind is subjective in the sense that it is a conceptual description of brain workings (including concepts of self, mind, and so on), and objective in the sense that its content can be reconstructed from the structure of the brain.
It isn’t a paradox, really.
I can’t help imagining the acceptance procedure for works on the philosophy of mind: “Please show your tensor factor. … Zombies and simulations are not allowed. Next.”
The difference is between conscious and not conscious. This will translate mathematically into the presence or absence of some particular structure in the “tensor factor”. I can’t tell you what structure, because I don’t have the theory, of course. I’m just sketching how a theory of this kind might work. But the difference between small and big is the number of internal degrees of freedom. It is reasonable to suppose that among the objects containing the consciousness structure, there is a nontrivial lower bound on the number of degrees of freedom. Here is where we can draw a line between small and big, since the small tensor factors by definition can’t contain the special structure and so truly cannot be conscious. However, being above the threshold would be necessary but not sufficient for the presence of consciousness.
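Schematically, a criterion of that shape might be written as follows; every number and field here is a pure placeholder, since the theory in question does not exist.

```python
# Placeholder rendering of the proposed criterion: "big enough" is
# necessary but not sufficient; the (unknown) special structure must
# also be present. Nothing here is a real physical test.

from dataclasses import dataclass

@dataclass
class TensorFactor:
    degrees_of_freedom: int
    has_special_structure: bool = False   # stand-in for the unknown structure test

DOF_THRESHOLD = 10 ** 6   # hypothetical lower bound on degrees of freedom

def is_conscious(factor: TensorFactor) -> bool:
    big_enough = factor.degrees_of_freedom >= DOF_THRESHOLD
    return big_enough and factor.has_special_structure

print(is_conscious(TensorFactor(10 ** 3, True)))    # False: too small
print(is_conscious(TensorFactor(10 ** 9, False)))   # False: big, but no structure
print(is_conscious(TensorFactor(10 ** 9, True)))    # True, under this sketch
```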
If you have a completed theory of consciousness, then you answer this question just as you would answer any other empirical question in a domain where you have a well-tested theory: You evaluate the data using the theory. If the theory tells you all the tensor factors in the box are below the magic threshold, there’s definitely no consciousness there. If there might be some big tensor factors present, it will be more complicated, but it will still be standard reasoning.
If you are still developing the theory, you should focus just on the examples which will help you finish it, e.g. Roko’s example of general anesthesia. That might be an important clue to how biology, phenomenology, and physical reality go together. Eventually you have a total theory and then you can apply it to other organisms, artificial quantum systems like in your thought experiment, and so on.
Any causal model using macrostates leaves out some micro information. For any complex physical system, there is a hierarchy of increasingly coarse-grained macrostate models. At the bottom of the hierarchy is exact physical fact—one model state for each exact physical microstate. At the top of the hierarchy is a trivial model with no dynamics—the same macrostate for all possible microstates. In between are many possible coarse-grainings, in which microstates are combined into macrostates. (A macrostate is therefore a region in the microscopic state space.)
So there is no single macrostate model of the brain determined by its structure. There is always a choice of which coarse-graining to use. Maybe now you can see the problem: if conscious states are computational macrostates, then they are not objectively grounded, because every macrostate exists in the context of a particular coarse-graining, and other ones are always possible.
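A small sketch of that non-uniqueness (the microstate space and both partitions are invented): the same exact microstate carries different macrostate labels under different coarse-grainings, and nothing in the microphysics privileges one partition over the other.

```python
# Two different coarse-grainings of the same tiny microstate space.
# A macrostate is just a region (a set) of microstates; which regions
# you draw is a modelling choice, not a fact about the microstates.

microstates = range(16)   # stand-ins for exact physical configurations

def coarse_graining_a(m: int) -> str:
    return "low" if m < 8 else "high"        # partition by magnitude

def coarse_graining_b(m: int) -> str:
    return "even" if m % 2 == 0 else "odd"   # partition by parity

if __name__ == "__main__":
    for m in (3, 8, 13):
        print(f"microstate {m:2d}: A says {coarse_graining_a(m)}, "
              f"B says {coarse_graining_b(m)}")
    # The same microstate gets different macrostate labels under A and B;
    # neither labelling is "the" objective high-level description.
```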
Here’s the point of divergence. There is one peculiar coarse-graining. Specifically, it is the conceptual self-model that consciousness uses to operate on (as I wrote earlier, it uses concepts of self, mind, desire, intention, emotion, memory, feeling, etc. When I think “I want to know more”, my consciousness uses concepts of that model to (crudely) represent the actual state of (part of) the brain, including the parts which represent the model itself). Thus, to find consciousness in a system it is necessary to find a coarse-graining such that the corresponding macrostate of the system is isomorphic to the physical state of a part of the system (it is not sufficient, however). Or, in the map-territory analogy: to find a part of the territory that is isomorphic to a (crude) map of the territory.
Edit: Well, it seems that a lower bound on the information content of the map is necessary for this approach too. However, this approach doesn’t require adding fundamental ontological concepts.
Edit: The isomorphism condition is too limiting; it will require another level of coarse-graining to be true. I’ll try to come up with another definition.
This really sounds to me like a perfect fit for Robin’s grandparent post. If, say, nonlocality is important, why achieve it through quantum means?
This is meant to be ontological nonlocality and not just causal coordination of activities throughout a spatial region. That is, we would be talking about entities which do not reduce to a sum of spatially localized parts possessing localized (encapsulated) states. An entangled EPR pair is a paradigm example of such ontological nonlocality, if you think the global quantum state is the actual state, because the wavefunction cannot be factorized into a tensor product of quantum states possessed by the individual particles in the pair. You are left with the impression of a single entity which interfaces with the rest of the universe in two places. (There are other, more esoteric indications that reality has ontological nonlocality.)
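The non-factorizability claim can be checked mechanically. Here is a hedged sketch (the example states and the tolerance are chosen for illustration) using the Schmidt rank, i.e. the number of non-negligible singular values of the reshaped amplitude matrix: rank 1 means the two-particle state is a tensor product of single-particle states, rank 2 means no such factorization exists.

```python
# Decide whether a two-qubit pure state factorizes into a tensor product of
# single-qubit states, via the Schmidt rank of its 2x2 amplitude matrix.

import numpy as np

def schmidt_rank(state: np.ndarray, tol: float = 1e-12) -> int:
    amplitudes = state.reshape(2, 2)   # amplitudes[i, j] = amplitude of |i>|j>
    singular_values = np.linalg.svd(amplitudes, compute_uv=False)
    return int(np.sum(singular_values > tol))

product_state = np.array([1, 0, 0, 0], dtype=complex)           # |0>|0>
epr_pair = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)   # (|00>+|11>)/sqrt(2)

print(schmidt_rank(product_state))   # 1: two localized single-particle states
print(schmidt_rank(epr_pair))        # 2: one entity with two interfaces
```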
These complex unities glued together by quantum entanglement are of interest (to me) as a way to obtain physical entities which are complex and yet have objective boundaries; see my comment to RobinZ.
Not only does this quantum brain idea violate known experimental and theoretical facts about the brain, it also violates what we know about evolution. Why would evolution design a system that maintains coherence during sleep and unconsciousness, if this has no effect on inclusive genetic fitness?
(Mitchell Porter thinks that his “copy” would behave essentially identically to what he would have done had he not “lost his selfhood”, so in terms of reproductive fitness, there’s no difference)
Though I agree that this quantum brain idea is against all evidence, I don’t think the evolutionary criticism applies. Not every adaptation has a direct effect on inclusive genetic fitness; some are just side effects of other adaptations.
Sure, but the empirical difficulty of maintaining a quantum coherent state would imply that it isn’t the kind of thing that would happen by accident.
Well, it might be that maintaining the system rather than restarting it when full consciousness resumes is an easier path to the adaptation, or has some advantage we don’t understand.
Of course, if the restarted “copy” would seem externally and internally as a continuation, the natural question is why bother positing such a monad in the first place?
If you want something that flies, the simplest way is for it to have wings that still exist even when it’s on the ground. We don’t actually know (big understatement there) the relative difficulty of evolving a “persistent quantum mind” versus a “transient quantum mind” versus a “wholly classical mind”.
There may also be an anthropic aspect. If consciousness can only exist in a quantum ontological unit (e.g. the irreducible tensor factors I mention here), then you cannot find yourself to be an evolved intelligence based solely on classical computation employing many such entities. Such beings might exist in the universe, but by hypothesis there would be nobody home. This isn’t relevant to persistent vs transient, but it’s relevant for quantum vs classical.
You seem to jump to the conclusion that, in the favorable case (that consciousness only exists in quantum computers AND quantum coherence is the fundamental basis of persistent identity), the coherence timescale would obviously be your whole lifetime, even if hypothermia, anesthetics, etc. happen, but as soon as you are cryopreserved, it decoheres, so that the physical basis of persistent identity corresponds perfectly to the culturally accepted notion.
But that would be awfully convenient! Why not assign most of your probability to the proposition that evolution accidentally designed a quantum computer with a decoherence timescale of one second? ten seconds? 100 seconds? 1000 seconds? 10,000 seconds? Why not postulate that unconsciousness or sleep destroys the coherence? After all, we know that classical computation is perfectly adequate for evolutionarily adaptive tasks (because we can do them on a classical computer).
This is, first of all, an exercise in taking appearances (“phenomenology”) seriously. Consciousness comes in intervals with internal continuity, one often comes to waking consciousness out of a dream (suggesting that the same stream of consciousness still existed during sleep, but that with mental and physical relaxation and the dimming of the external senses, it was dominated by fantasy and spontaneous imagery), and one should consider the phenomenon of memory to at least be consistent with the idea that there is persistent existence, not just throughout one interval of waking consciousness, but throughout the whole biological lifetime.
So if you’re going to think about yourself as physically actual and as actually persistent, you should think of yourself as existing at least for the duration of the current period of waking consciousness, and you have every reason to think that you are the same “you” who had those experiences in earlier periods that you can remember. The idea that you are flickering in and out of existence during a single day or during a lifetime is somewhat at odds with the phenomenological perspective.
Cryopreservation is far more disruptive than anything which happens during a biological lifetime. Cells full of liquid water freeze, and the growing ice crystals burst their membranes. Metabolism ceases entirely. Some, maybe even most models of persistent biological quantum coherence have it depending on a metabolically maintained throughput of energy. To survive the freezing transition, it seems like the “bio-qubits” would have to exist in molecular capsules that weren’t penetrated as the ice formed.
But if you’re going to argue phenomenologically, then any form of reanimation that restores the person’s memory in a continuous way will seem (from the inside) to be continuous.
Can I ask: have you ever been under a general anesthetic?
It is a philosophically significant life event, because what you experience is just so incredibly at odds with what actually happens. You lie there waiting for the anesthetic to take effect, and then the next instant, your eyes open and find your arm/leg/whatever in plaster, and a glance at the clock suggests that 3 hours have passed.
I’d personally want to be cryopreserved before I fully lost my marbles so that I can experience that kind of time travel. Imagine closing your eyes, then reopening them and it’s the 23rd century? How cool would that be?
I must have been, at some point, but it was a long time ago and I don’t remember.
Clearly there are situations where extra facts would lead you to conclude that the impression of continuity is an illusion. If you woke up as Sherlock Holmes, remembering your struggle with Moriarty as you fell off a cliff moments before, and were then shown convincingly that Holmes was a fictional character from centuries before, and you were just an artificial person provided with false memories in his image, you would have to conclude that in this case, you had erred somehow in judging reality on the basis of subjective appearances.
It seems unlikely that reliable reconstruction of cryonics patients could occur and yet the problem of consciousness not yet be figured out. Reliable reconstruction would require such a profound knowledge of brain structure and function, that there wouldn’t be room for continuing uncertainty about quantum effects in the brain. By then you would know it was there or not there, so regardless of how the revivee felt, the people(?) doing the reviving should already know the answers regarding identity and the nature of personal existence.
(I add the qualification reliable reconstruction, because there might well be a period in which it’s possible to experiment with reconstructive protocols while not really knowing what you’re doing. Consider the idea of freezing a C. elegans and then simulating it on the basis of micrometer sections. We could just about do this today, except that we would mostly be guessing how to map the preserved ultrastructure to computational elements of a simulation. One would prefer the revival of human beings not to proceed via similar trial and error.)
In the present, the question is whether subjectively continuous but temporally discontinuous experience, such as you report, is evidence for the self only having an intermittent physical existence. Well, the experience is consistent with the idea that you really did cease to exist during those 3 hours, but it is also consistent with the idea that you existed but your time sense shut down along with your usual senses, or that it stagnated in the absence of external and internal input.
There is no uncertainty. A large amount of evidence points to the lack of quantum effects in the brain. Furthermore, there was never really any evidence in favor of quantum effects, and certainly none has been produced.
I think that most of the problems of consciousness have already been figured out; Gary Drescher, Dan Dennett, and Drew McDermott have done it. They just don’t yet have overwhelming evidence, so you have to be “light like a leaf blown by the winds of evidence” to see their answer as being correct.
The remaining unsolved problems in this area seem to be related to the philosophy of computations-in-general, such as “what counts as implementing a computation” or anthropic/big world problems.
Which is to say, decision theory for algorithms, understanding of how an algorithm controls mathematical structures, and how intuitions about the real world and subjective anticipation map to that formal setting.
Well, that’s one possible solution. But not without profound problems, for example the problem of lack of a canonical measure over “all mathematical structures” (even the lack of a clean definition of what “all structures” means).
But it certainly solves some problems, and has the sort of “reductionistic” feel to it that indicates it is likely to be true.
Logics allow one to work with classes of mathematical structures (not necessarily individual structures), which seems to be a good enough notion of working with “all mathematical structures”. A “measure” (if, indeed, it’s a useful concept) is an aspect of preference, and preferences are inherently non-canonical, though I hope to find a relatively “canonical” procedure for defining (“extracting”) preference in terms of an agent-program.
In the case of MWI quantum, the measure is Integral[ ||Psi>|^2 ], and if Robin’s Mangled Worlds is true, there’s no doubt that this measure is not “preference”.
What is the difference between the MWI/Mangled Big World and other Big Worlds such that measure is preference in others but not in MWI/Mangled?
Any given concept is what it is. Truth about any given concept is not a matter of preference.
But in cases where there is no “canonical choice of a concept”, it is a matter of choice which concept to consider. If you want a concept with certain properties, these properties already define a concept of their own, and might determine the mathematical structure that satisfies them, or might leave some freedom in choosing one you prefer for the task.
In the case of the quantum mechanical measure, you want your concept of measure to produce “probabilities” that conform with the concept of subjective anticipation, which is fairly regular and thus creates an illusion of “universality”, because the preferences of most minds like ours (evolved like ours, in our physics) have subjective anticipation as a natural category, a pattern with significant explanatory (and hence optimization) power. But subjective anticipation is still not a universally interesting concept: one can consider a mind that looks at your theories about it, says “so what?”, and goes on optimizing something else.
The reason I spoke about Mangled Worlds MWI is that the Integral[ ||Psi>|^2 ] measure is not dependent upon subjective anticipation.
This is because in mangled worlds QM there is a physically meaningful sense in which some things cease to exist, namely that things (people, computers, any complex or macroscopic phenomenon) get “Mangled” if their Integral[ ||Psi>|^2 ] measure gets too low.
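A toy of that cutoff bookkeeping (the branch amplitudes, the depth, and the numeric threshold below are all invented; Hanson’s actual proposal ties mangling to interference from much larger worlds rather than to a fixed cutoff):

```python
# Toy "mangled worlds" bookkeeping: accumulate each branch's squared-amplitude
# measure over repeated branching events, and flag branches whose measure
# falls below a cutoff as mangled (no longer viable worlds).

from itertools import product

branch_amplitudes = (0.9 ** 0.5, 0.1 ** 0.5)   # unequal split at each event
depth = 6                                      # number of branching events
cutoff = 1e-5                                  # made-up mangling threshold

surviving, mangled = 0, 0
for history in product(branch_amplitudes, repeat=depth):
    measure = 1.0
    for amplitude in history:
        measure *= amplitude ** 2   # squared-amplitude weight accumulates
    if measure < cutoff:
        mangled += 1
    else:
        surviving += 1

print(f"surviving branch histories: {surviving}, mangled: {mangled}")
# Whether a branch survives is settled by an objective property of the
# wavefunction (its measure), not by anyone's choice of what to care about.
```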
That preference is a cause of a given choice doesn’t prohibit physics from also being a cause. There is rarely an ultimate source (unique dependence). You value thinking about what is real (what accords with physical laws) because you evolved to value real things. There are also concepts not about our physical laws which you value, because evolution isn’t a perfect designer.
This is also a free will argument. I say that there is a decision to be made about which concepts to consider, and you say that the decision is already made by the laws of physics. It’s easier to see how you do have free will for more trivial choices. It’s more difficult to consider acting and thinking as if you live in different physics. In both cases, the counterfactual is physically impossible: you couldn’t have made a different choice. Your thoughts accord with the laws of physics, are caused by physics, are embedded within physics. And in both cases, what is actually true (what action you’ll perform, and what theories you’ll think about) is determined by your decision.
As an agent, you shouldn’t (terminally) care about what laws of physics say, only about what your preference says, so this cause is always more relevant, although currently less accessible to reflection.
Yes, I get that free will is compatible with deterministic physics. That is not the issue. I don’t quite see what about my reply made you think that this was relevant?
The point is that in Mangled Worlds QM there is such a thing as objective probability, even though the world is (relatively) big, and it basically turns out to be defined by just the number of instances of something rather than something else.
I think Vladimir is essentially saying that caring about that objective property of that particular mathematical structure is still your “arbitrary”, subjectively objective preference. I don’t think I understand where the free will argument comes in either.
Sure, it is arbitrary to care about what actually exists and what will actually happen, as opposed to (for example) running your life around trying to optimize the state of Tolkien’s Middle Earth.
But I think that what Big Worlds calls into question is whether there is such a thing as “what actually exists” and “what will actually happen”. That’s the problem. I agree that evolution could (like it did in the case of subjective anticipation and MWI QM) have played a really cruel trick on us.
But I brought up Mangled Worlds because it seems that Mangled worlds is a case where there is such a thing as “what will actually happen” and “what actually exists”, even though the world is relatively big (though mangled worlds is importantly different to MWI with no mangler or world-eater)
The important difference between MWI and Mangled-MWI is that if you say “ah, measure over a big world is part of preference, and my preference is for a ||Psi>|^10 measure”, then you will very quickly end up mangled, i.e. there will be no branches of the wavefunction where your decision algorithm interacts with reality in the intended way for an extended period of time.
So what? Not everyone cares about what happens in this world. Plus, you don’t have to exist in this world to optimize it (though it helps).
If we take as an assumption that Mangled-worlds MWI is the only kind of “Bigness” that the world has, then there is nothing else to care about apart from what happens in one of the branches, and since nothing exists apart from those branches, you have to exist in at least one of them to actually do anything.
Though, of course, acausally speaking, a slim probability that some other world exists is enough for people to (perhaps?) take notice of it.
EDIT: One way to try to salvage objective reality from Big Worlds would be to drive a wedge between “other worlds that we have actual evidence for” (such as MWI) and “other worlds that are in-principle incapable of providing positive evidence of their existence” (such as Tegmark’s MUH), then show that all of the evidentially implied big worlds are not problematic for objectivity, as seems to be the case for Mangled-MWI. However, this would only work if one were willing to part with Kolmogorov/Bayesian reasoning, and say that certain perfectly low-complexity hypotheses are thrown out for being “too big” and “too hypothetical”.
I’m fairly sure at this point it’s conceptual confusion to say that. You can care about mathematical structures, and control mathematical structures, that have nothing to do with the real world. These mathematical structures don’t have to be “worlds” in any usual sense, for example they don’t have to be processes (have time), and they don’t have to contain you in them in any form.
One of the next iterations of ambient decision theory should make it clearer, though the current version should give a hint (but probably isn’t worth the bother in the current form, considering it has known philosophical/mathematical bugs—but I’m studying, improving my mathematical sanity).
Perhaps the distinction I’m interested in is the difference between control and function-ness.
There is an abstract mathematical function, say, the parity function of the number of open eyes I have. It is a function of me, but I wouldn’t say that I am controlling it in the conventional sense, because it is abstract.
More abstract than whether your eyes are open? They’re about the same distance from the underlying physics.
I guess if there were an actual light that lit up as a function of the parity, then I would feel comfortable with “control”, and I would say that I am controlling the light.
… Whether the light is on is also pretty abstract, no?
The role of the decision-theoretic notion of control is to present the consequences of your possible decisions for evaluation by preference. Whatever fills that role counts; but if one can value mathematical abstractions, then the notion of control has to describe how to control abstractions. Conveniently, the real world can be seen as just another mathematical structure (or class of structures).
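A minimal sketch of how that role could be filled when the thing being evaluated is an abstraction (the function, the options, and the preference below are invented for illustration): the decision procedure enumerates its possible outputs, presents the consequence of each to a preference, and returns the one the preference ranks highest; the abstract function’s value then depends on the agent in exactly that sense, whether or not anything physical instantiates it.

```python
# Toy rendering of the decision-theoretic notion of control: consequences
# of each possible decision are presented for evaluation by a preference.
# The "controlled" object here is an abstract function, not a physical system.

def abstract_structure(decision: int) -> int:
    """A purely mathematical function of the agent's output (made up)."""
    return decision ** 2 - 6 * decision + 10

def preference(consequence: int) -> float:
    """Made-up preference: smaller values of the structure are better."""
    return -consequence

def decide(options: range) -> int:
    # Evaluate the consequence of each option, output the best-ranked one.
    return max(options, key=lambda d: preference(abstract_structure(d)))

if __name__ == "__main__":
    choice = decide(range(0, 7))
    print(choice, abstract_structure(choice))   # 3 1
```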
I would say that the conventional usage of the word “control” requires the thing-under-control to be real, but sure, one can use the words how one pleases.
It worries me somewhat that we seem so concerned with what word-set we use here; this indicates that the degree to which we value performing certain actions depends on whether we frame it as
“controlling something that’s no more-or-less real than the laptop in front of you”
versus
“this nonexistent abstraction happens to be a function of you; so what? There are infinitely many abstract functions of you”
Is there some actual substance here?
This complication is created by the same old ontology problem: if preference talks about the real world, power to you (though that would make physics relevant, which is no good too), but if it doesn’t, we have to deal with that. And we can’t assume a priori what preference talks about.
My previous position (and, it seems, a long-held position of Wei Dai’s) was to assume that preference can be expressed as talking about the behavior of programs (as in UDT), since ultimately it has to determine the behavior of the agent’s program, and seeing the environment as programs fits the pattern and allows one to express preferences that hold arbitrary strategies of the agent as the best option.
Now, since ambient decision theory (ADT) suggests treating the notions of consequences of the agent’s decision as logical theories, it became more natural to see the environment as models of those theories, and so as structures more general than programs. But more importantly, if, as logical theories, the preferred concepts do not refer to programs (even though they can directly influence only the behavior of the agent’s program), there is no easy way of converting them into preference-about-programs equivalents. Getting the info out of those theories may well be undecidable, something to work on during decision-making and not at the preliminary stage of preference-definition.
Also, trying to have preferences about abstractions, especially infinite ones, seems bound to end in tears, i.e. a complete mess of an ontology problem. You’d import all the problems of philosophy of mathematics in and heap them on top of the problems of ethics. Not to mention Godelian problems, large cardinal axiom problems, etc. Just the thought of trying to sort all that out fills me with dread.
Scary, and I haven’t even finished converting myself into a pure mathematician yet. :-) I was hoping to avoid these issues by somehow limiting preference to programs, but investigation led me back to the harder problem statement. Ultimately, a simpler understanding has to be found, that sidesteps the monstrosity of set-theoretical infrastructure and diversity of logics. At this point though, I expect to benefit from conceptual clarity brought by standard mathematical tools.
I think the problem might be that the distinction between the real world and the hypothetical world might not be logically defensible, in which case we have an ontology problem of awesome proportions on our hands.
I believe as much: for foundational study of decision-making, the notions of “real world” are useless, which is why we have to deal with “all mathematical structures”, somehow accessed through more manageable concepts (for which the best fit is logic, though that’s uncomfortable for many reasons).
(I’d still expect that it’s possible to extract some fuzzy outline of the concept of the “real world”, like it’s possible to vaguely define “chairs” or “anger”.)
Maybe. Though my intuition seems to point to a more fundamental role for “reality” in decisionmaking.
Evolution designed our primitive notions of decisionmaking in a context where there was a very clear and unique reality; why should there even be a clear and unique generalization to the new contexts, i.e. the set of all mathematical structures?
I predict that we’ll end up with a plethora of different kinds of decision theory, which lead to a whole random assortment of different practical recommendations, and the very finest of framing differences could push a person to act in completely different ways, with one exception being a decision theory that cashes out the notion of reality, which will be relatively unique because of its relative similarity to our pretheoretic notions.
But I am willing to be proven wrong.
Generalization comes from the expressive power of a mind: you can think about all sorts of concepts beside the real world. That evolution would fail to delineate the real world in this concept space perfectly seems obvious: all sorts of good-fit approximations would do for its purposes, but when we are talking FAI, we have to deal with what was actually chosen, not what “was supposed to be chosen” by evolution. This argument applies to other evolutionary drives more easily.
I think you misunderstood me: I meant why should there even be a clear and unique generalization of human goals and decisionmaking to the case of preferences over the set of mathematical possibilities.
I did not mean why should there even be a clear and unique generalization of the human concept of reality—for the time being I was assuming that there wouldn’t be one.
You don’t try to generalize, or extrapolate human goals. You try to figure out what they already are.
I think that this is a different sense of the word “control” than controlling physical things.
Can you elaborate on this?
UDT is about control in the same sense. See this comment for a point in that direction (and my last comment on the “Ambient decision theory go-through” thread on the SIAI DT list). I believe this to be a conceptual clarification of the usual notion of control, having the usual notion (“explicit control”) as a special case (almost, modulo explicit dependence bias—it allows you to get better results than if you only consider the explicit dependence as stated).
See “ambient dependence” on DT list, but the current notion (involving mathematical structures more general than programs) is not written up. I believe “logical control”, as used by Wei/Eliezer, refers to basically the same idea. In a two-player game, you can control the other player’s decisions despite not literally sitting inside their head.
I just accidentally found this other decision theory google group and thought LWers might find it of interest.
I’m not on that list. Do you know who the list owner is?
Just as a note, my current gut feeling is that it is perfectly plausible that the right way to go is to do something like UDT but with a notion of what worlds are real (as in Mangled worlds QM).
However, I shall read your theory of controlling that which is unreal and see what I make of it!
Yes you are (via r****c at googlemail.com). IIRC, you got there after I sent you an invitation. Try logging in on the list page.
Oh, thanks. Obviously I accepted and forgot about it.
But you do care about optimizing Middle Earth (let it be Middle Earth with Halting Oracles to be sure), to some tiny extent, even though it doesn’t exist at all.
Free will is about dependencies: one has got to say that the outcome depends on your decision. At the same time, the outcome depends on other things. Here, the quantum mechanical measure you consider depends on what’s true about the world, but at the same time it depends on what you prefer to consider. Thus, saying that there are objective facts dictated by the laws of physics is analogous to saying that all your decisions are already determined by the physical laws.
My argument was that, as in the case of the naive free will argument, here too we can (indeed, should, once we get to the point of being able to tell the difference) see physical laws as (subjectively) chosen. Of course, just as you can’t change your own preference, you can’t change the implied physical laws seen as an aspect of that preference (to make them nicer for some purpose, say).
It is relevant, but I no longer expect to be able to communicate this quickly, so let’s all hope I figure out and write up my philosophical framework for decision theory in detail sometime soon.
I don’t agree with this claim. One would simply need an understanding of which brain systems are necessary for consciousness and of how to restore those systems to a close approximation of their pre-existing state (presumably using nanotech). This doesn’t take much in the way of actually understanding how those systems function. Once one had well-developed nanotech, one could learn this sort of thing simply by trial and error on animals (seeing what was necessary for survival, and what was necessary for training to stay intact) and then move on to progressively larger-brained creatures. This doesn’t require a deep understanding of intelligence or consciousness, simply an understanding of what parts of the brain are being used and how to restore them.
Actually, we do. We’ve been trying for decades to build viable quantum computers, and it turns out to be excruciatingly hard.