“There are souls” is not an explanation of how they work.
But of course that is an explanation, in the same sense that Maxwell’s equations explain the behavior of electric and magnetic fields: “We have this here thing, and it generates this here other thing, which is an observation we describe with this here equation.” No different (disregarding the complexity penalty) from saying “souls generate experience”; if all you miss is the math-speak, then insert some Greek letter for “soul” and another one for “consciousness”. Of course, there is no reason to posit that ‘souls’ exist in the first place, given the commonly accepted definition. However, the concept of souls doesn’t get discarded because it fails to explain consciousness; it does explain it. It gets discarded because it adds complexity without paying for that complexity by making predictions, or by simplifying the description of the available data and experiments.
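To make the dressing-up concrete, here is a purely illustrative rendering (the symbols Σ for “soul”, Ψ for “consciousness”, and the map f are mine, invented for this comment): one of Maxwell’s actual equations next to a “soul equation” of the same grammatical shape.

```latex
% Faraday's law: a changing magnetic field generates an electric field.
\nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}
% The 'soul theory', dressed in the same costume: a soul generates experience.
\Psi = f(\Sigma)
```

Both have the form “this here thing generates this here other thing”; the difference lies in predictive content and complexity cost, not in their status as explanations.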
As for myself, I simply note the evidence above, the problem it leaves unsolved, and my lack of any idea for a solution.
A common error regarding the (to me) best candidate explanation, panpsychism, is conflating it with “everything is conscious, so everything thinks / has a mind / is agenty in some sense / can suffer”. Obviously matter has the potential to generate qualia, at least in certain configurations. It seems, just on complexity grounds, a simpler model to posit that consciousness-generation is just something that matter does, rather than something which happens exclusively in brains or other algorithm-instantiators (not unlike Tegmark’s thought process leading up to “Our Mathematical Universe”). Brains merely have the means to process, consolidate, and report it. Consider: if it were so, then evolutionary selection pressures would work on that property as well, leading to, e.g., the synchronizing of individual “atoms” of consciousness into larger assemblies.
Of course, the distinction between the uncontroversial “matter has the potential to generate consciousness” (which the process-folk would also agree with) and “all matter generates some proto-form of consciousness, and the brain evolved to synchronize and shape these building blocks” may be merely a difference in phrasing. Nevertheless, I lean towards the latter purely because it seems simpler in an algorithmic-complexity sense. (These are abridged thoughts, the in-a-nutshell version. There are weak points, such as model-building and consciousness being linked in some sense, otherwise there would be no selection pressure to consolidate consciousness. Still, I feel the ‘it’s an emergent property’ answer to be much more flawed. We forget that ‘emergent’ is just code for ‘can’t grasp it on a more basic level’, a shortcut our computationally limited models are forced to make. A computationally unlimited model-builder could do away with the whole ‘emergent’ concept in the first place, and describe a chair—or a wave in the sea—on the most basic level. Concluding that something is emergent, in the sense of saying it only exists on a certain level of granularity upwards, is, to me, confusing our very useful model-building hacks with the reality they aim to describe. There is no ‘emergent’ in reality. There is only the base level, everything else is a computational hack used by model-builders.)
“There are souls” is not an explanation of how they work.
But of course that is an explanation, in the same sense that Maxwell’s equations explain the behavior of electric and magnetic fields: “We have this here thing, and it generates this here other thing, which is an observation we describe with this here equation.”
The difference is that the explanation by souls contains no equations, no mechanisms, nothing but the word. Consciousness extends before life and after death because “we have immortal souls”. It’s like saying that things fall “because of gravity”. That someone speaks a foreign language well “by being fluent”. That a person learns to ride a bicycle by “getting the knack of it”. (The first of those three examples is Feynman’s; the other two are things I have actually heard someone say.)
No different (disregarding the complexity penalty) from saying “souls generate experience”; if all you miss is the math-speak, then insert some Greek letter for “soul” and another one for “consciousness”.
That would be cargo-cult mathematics: imitating superficial details of the external form (Greek letters) while failing to understand what mathematics is. (cough Procrastination Equation cough)
Still, I feel the ‘it’s an emergent property’ answer to be much more flawed.
Indeed, “emergence” is no more of an explanation. However, I don’t think that
A computationally unlimited model-builder could do away with the whole ‘emergent’ concept in the first place,
Describing things in terms of “tables”, “chairs”, “mountains”, “rivers” and so on is a great deal shorter than describing them in terms of quarks (and how do we know that quarks are the bottom level?). A model-builder so computationally unlimited as to make any finite computation in epsilon time is too magical to make a useful thought experiment. Such an entity would not be making models at all.
There is only the base level, everything else is a computational hack used by model-builders.
How does this claim cash out in terms of experience? If someone tried to take it seriously, why wouldn’t they go on to think, “‘I’ is just a computational hack used by model-builders. I don’t exist! You don’t exist! We’re just patterns of neural firings. No, there are no neurons, there’s just atoms! No, atoms don’t exist either, just quarks! But how do I know they’re the base level? No, I don’t exist! There is no ‘I’ to know things! There are no things, no knowing of things! These words don’t exist! They’re just meaningless vibrations and neural firings! No, there are no vibrations and no neurons! It’s all quarks! Quarkquarkquark...” and spend the rest of their days in a padded cell?
It’s like saying that things fall “because of gravity”.
But that’s precisely what we say. Things fall “because this equation describes how they fall” (math just allows for a more precise description than natural languages). All we do is find good (first priority: accurate, second priority: short) descriptions, which is just “this does that”. Fundamentally, a law of gravity and a law of “souls do consciousness” are the same kind of thing, except the first is actually useful and can be “cashed out” better. Suppose F=ma were a base-level description. How is “because F=ma” any more of an explanation than “because souls do consciousness” (disregarding, of course, practicalities such as predictive value; I’m only concerned with “status as an explanation”)?
A model-builder so computationally unlimited as to make any finite computation in epsilon time is too magical to make a useful thought experiment.
Well, you can reject Omega-type thought experiments with the same reasoning. Also, Turing Machines.
I’m surprised that “There is only the base level, everything else is a computational hack used by model-builders.” is considered to be controversial. I don’t mean it as “the referents of the abstractions we model-builders use don’t exist”, just as “‘the wave’ is just a label, the referent of which isn’t some self-evident basic unit; the concept is just a short-hand which is good enough for our daily-life purposes”. Think of it this way: Would two supremely powerful model-builders come up with “chair”, independently? If there’s reason to answer “no”, then chair is just a label useful to some model-builders, as opposed to something fundamental to the territory.
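A minimal sketch of the sense in which I mean “label = computational hack”, treating a label as a dictionary entry in a compressor (the byte-string encoding of a ‘chair’ is invented purely for illustration):

```python
# Toy illustration: a scene containing many repeats of one structure
# ("chair") has a far shorter description than its raw size suggests.
# The label buys brevity in the map; it adds nothing to the territory.
import zlib

chair = b"legs=4;seat=1;back=1;material=wood;"  # hypothetical 'chair' structure
scene = chair * 1000                            # a room full of identical chairs

raw_len = len(scene)                            # 35000 bytes
compressed_len = len(zlib.compress(scene, level=9))  # a tiny fraction of that

print(f"raw description:        {raw_len} bytes")
print(f"compressed description: {compressed_len} bytes")
```

The compressed form is tiny precisely because “chair” repeats; a model-builder short on resources profits from coining the label, while one that can afford the raw description never needs it.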
It’s like saying that things fall “because of gravity”.
But that’s precisely what we say. Things fall “because this equation describes how they fall” (math just allows for a more precise description than natural languages).
“This equation describes how they fall” is a sensible thing to say. “Because of gravity” is only sensible if it refers to that mathematics. The usage I intended to refer to is that of someone who doesn’t know the mathematics and is therefore not referring to it: a member of the general public doing nothing but repeating a word he has learned.
A model-builder so computationally unlimited as to make any finite computation in epsilon time is too magical to make a useful thought experiment.
Well, you can reject Omega-type thought experiments with the same reasoning.
I do reject some of them, and have done so here in the past. Not all of these thought experiments make any sense. Omega works great for formulating Newcomb’s problem. After that it’s all downhill.
Also, Turing Machines.
Turing machines do not perform arbitrary computations instantly.
Think of it this way: Would two supremely powerful model-builders come up with “chair”, independently? If there’s reason to answer “no”, then chair is just a label useful to some model-builders, as opposed to something fundamental to the territory.
I think there is reason to answer “yes”. (I assume these model-builders are looking at human civilisation, and not at intelligent octopuses swimming in the methane lakes of Titan.) Less parochially, they will come up with integers, real numbers, calculus, atoms, molecules, fluid dynamics, and so on. Is “group” (in the mathematical sense) merely a computational hack over the “base level” of ZF (or some other foundation for mathematics)?
What does it actually mean to claim that something is “just a computational hack”, in contrast to being “fundamental to the territory”? What would you be discovering, when you discovered that something belonged to one class rather than the other? Nobody has seen a quark, not within any reasonable reading of “to see”. Were atoms just a computational hack before we discovered they were made of parts? Were protons and neutrons just a computational hack before quarks were thought of? How can we tell whether quarks are just a computational hack? Only in hindsight, after someone comes up with another theory of particle physics?
That’s rather a barrage of questions, but they are intended to be one question, expressed in different ways. I am basically not getting the distinction you are drawing here between “base-level things” and “computational hacks”, and what you get from that distinction.
“Because of gravity” is only sensible if it refers to that mathematics.
Well, that’s where we disagree (I’d agree with “useful” instead of “sensible”). The mathematical description is just a more precise way of describing what we see, of describing what a thing does. It is not providing any “justification”. The experimental result needs no justification; it just is. And we describe that result, the conditions, the intermediate steps. No matter how precise that description, no matter what language we clothe it in, the “mechanism” always remains “because that’s what gravity does”. We have no evidence to assume that there are “souls” which generate consciousness. However, that is an explanation for consciousness. Just not one surviving the Razor.
To preempt possible misunderstandings, I’m pointing out the distinction between “we have no reason to assume this explanation is the shortest (explains observations in the most compact way) while also being accurate (not contradicting the observations)” and “this is not an explanation, it just says ‘souls do consciousness’ without providing a mechanism”. The first I strongly agree with. The second I strongly disagree with. All our explanations boil down to “because x does y”, be they Maxwell’s, or his demon’s soul’s, or his silver hammer’s.
Turing machines do not perform arbitrary computations instantly.
I wasn’t previously aware that you drew distinctions between “concepts which cannot exist which are useful to ponder” and “concepts which cannot exist which are over the top”. ;-) Both could be called magical. While I do see your point, the Computer Science-y way of thinking (with which you’re obviously familiar) trains one to look at extreme cases and limits, to test the boundary conditions and check whether some property holds in general, even if those boundary conditions aren’t achievable. Hence the usefulness of TMs.
But even considering a model-builder that is reasonably powerful but not wholly unconstrained, it seems sensible to assume that fewer intermediate layers of abstraction would be needed as resources grow. There is no need for as many separate concepts for the macroscopic and the microscopic if you have no difficulty making the computations from a few levels down (which need not be ‘the base level’). Each abstracted layer creates new potential inaccuracies and errors, unless we assume nothing is lost in translation. Usually we don’t need to concern ourselves with atoms when we put down a chair, but eventually it will happen that we put down a chair and something unexpected happens, because of an atomic butterfly effect which was invisible from the macroscopic layer of abstraction.
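Here is a toy sketch of that butterfly-effect point (a chaotic map standing in for physics; the starting value and the six-decimal cutoff are arbitrary choices of mine, not a physical model):

```python
# Toy illustration: an 'abstracted' description that rounds away
# microscopic detail each step tracks the full description for a
# while, then diverges completely under chaotic dynamics.
def logistic(x: float) -> float:
    """One step of the logistic map with r = 4 (chaotic regime)."""
    return 4.0 * x * (1.0 - x)

exact = 0.123456789            # 'base level': full float precision
coarse = round(exact, 6)       # 'macroscopic layer': six decimals per step

for step in range(1, 61):
    exact = logistic(exact)
    coarse = round(logistic(coarse), 6)
    if step % 10 == 0:
        print(f"step {step:2d}: exact={exact:.6f}  coarse={coarse:.6f}  "
              f"gap={abs(exact - coarse):.6f}")
# The gap starts around 1e-6 and grows roughly exponentially; within a
# few dozen steps the two descriptions disagree about everything.
```

The rounding is the abstraction: most of the time it is harmless, and then one day the discarded digits are exactly what mattered.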
That’s rather a barrage of questions, but they are intended to be one question, expressed in different ways. I am basically not getting the distinction you are drawing here between “base-level things” and “computational hacks”, and what you get from that distinction.
Let me try one way of explaining what I mean, and one way of explaining why I think it’s an important distinction. Consider two model-builders which are both unconstrained to the maximum degree you’d allow without dismissing them as useless fantasies. Consider two perfectly accurate models of reality (or as accurate as you’d allow them to be). Presumably, they would by necessity be isomorphic, and their shortest representations identical. However, since those shortest representations are uncomputable (let’s retreat to more realism when it suits us), let’s just assume we’re dealing with two non-optimally compressed but perfectly accurate models of reality: one which the uFAI exterminating the humans came up with, and one which the uFAI terminating the Alpha-Centaurians came up with. So they meet over a game of cards, and compare models of reality. Maybe the Alpha-Centaurians—floating sentient gas bags, as opposed to blood bags—never sat down (before being exterminated), so its model doesn’t contain anything easily corresponding to a chair. Would that make its model of physics less powerful, or less accurate? Maybe, once exchanging notes, the Alpha-Centauri AI notes that humans (before being exterminated) liked to use ‘chairs’, so it includes some representation of ‘chair’ in its databanks. Maybe the AIs don’t rely on such token concepts in the first place, and just describe the various conglomerates of atoms as atoms. It’s not that they couldn’t store the ‘chair’ concept; there would just be no necessity to do so. No added accuracy, no added model fidelity, no added predictive power. Only if they lacked the oomph to describe everything as atoms-only would they start using labels like “chairs” and “flowers” and “human meat energy conversion facilities”.
What I get from that distinction is recognizing pseudo-answers such as “consciousness is an emergent phenomenon and only exists at a certain macroscopic level” as mostly being a confusion: thinking of macroscopic layers as somehow self-contained, independent of the lower levels, instead of as computationally friendly abstractions and approximations of those lower levels. When we say “chairs are real, and atoms are real, and quarks are real, and (whichever base level we get down to) is real”, and hold all of those as true at the same time, there is a danger of forgetting that chairs are only real because atoms are real, which are only real because elementary particles are real, and so on; there is a dependency chain going all the way down to who knows where. All the way down to the “everything which can be described by math exists” swirling color vortex. “Consciousness is an actual physical phenomenon which can only be described as a macroscopic process, which only emerges at a higher level of abstraction, yet it exists and creates conscious experience” is self-contradictory, to me. It confuses a layer of abstraction which helps us process the world with a self-contained “emergent” world which is capable of creating conscious experience all on its own. Consciousness must be expressible purely on a base level (whatever that may be), or it cannot be.
Of course it’s not feasible to talk about consciousness or chairs on a quark level (unless you’re Penrose), and “emergent” used as “we talk about it on this level because it seems most accessible to us” is perfectly fine. However, because of the computational-hack vs. territory confusion, “emergent” is used all too often as if it were an answer to some riddle, instead of an admission of insufficient resources.
That’s rather a barrage of text with only a single pass of proof-reading; if you have enough time to go through it, please point out where I’ve been unclear, or what doesn’t make sense to you.
We have no evidence to assume that there are “souls” which generate consciousness. However, that is an explanation for consciousness.
I stick to the view that giving a phenomenon a name is not an explanation. It may be useful to have a name, but it doesn’t tell you anything about the phenomenon. If you are looking at an unfamiliar bird, and I tell you that it is a European shadwell, I have told you nothing about the bird. At the most, I have given you a pointer with which you can look up what other people know about it, but in the case of “souls”, nobody knows anything. (1)
But even considering a model-builder that is reasonably powerful but not wholly unconstrained, it seems sensible to assume that fewer intermediate layers of abstraction would be needed as resources grow.
I would expect more abstractions to be used, not fewer. As a practical example of this, look at the history of programming environments. More and more layers of abstraction, as more computational resources have become available to implement them, because it’s more efficient to work that way. Efficiency is always a concern, however much your computational resources grow. Wishing that problem away is beyond the limit of what I consider a useful thought-experiment.
Extending the reality-based fantasy in the direction of Solomonoff induction: if you find “chair” showing up in some Solomonoff-like induction method, what does it mean to say chairs don’t exist? Or “hydrogen”? If these are concepts that a fundamental method of thinking produces, whoever executes it, then the distinction between “computational hack” and “really exists” becomes obscure. What work is it doing?
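For concreteness, the standard statement of the idea (my gloss; U is a universal prefix machine, and the sum runs over programs p whose output begins with x):

```latex
% Solomonoff's universal prior: shorter programs dominate, so any
% concept (a 'chair' subroutine, say) that shortens the programs
% predicting our observations thereby earns probability mass.
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\lvert p \rvert}
```

If a “chair” subroutine keeps appearing inside the short programs, that is exactly the sense in which the concept is doing work, whoever runs the induction.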
There is a sort of naive realism which holds that a chair exists because it partakes of a really existing essence of chairness, but however seriously that was taken in ancient Greece, I don’t think it’s worth air-time today. Naive unrealism, that says that nothing exists except for fundamental particles, I take no more seriously. Working things out from these supposed fundamentals is not possible, regardless of the supply of reality-based fantasy resources. We can’t see quarks. We can only barely see atoms, and only a tiny number of them. What we actually get from processing whatever signals come to us is ideas of macroscopic things, not quarks. There is no real computation that these perceptions are computational approximations to, that we could make if only we were better at seeing and computing. As we have discovered lower and lower levels, they have explained in general terms what is going on at the higher levels, but they aren’t actually much help in specific computations.
This is quite an old chestnut in the philosophy of science: the more fundamental the entity, the more remote it is from perception.
Maybe the Alpha-Centaurians—floating sentient gas bags, as opposed to blood bags—never sat down (before being exterminated), so its model doesn’t contain anything easily corresponding to a chair. Would that make its model of physics less powerful, or less accurate?
The possibilities of the universe are too vast for the human concept of “chair” to have ever been raised to the attention of the Centauran AI. Not having the concept will not have impaired it in any way, because it has no use for it. (Those who believe that zero is not a number, feel free to replace the implied zeroes there by epsilon.) When the Human AI communicates to it something of our history, then it will have that concept.
(1) Neither do they know anything about European shadwells, which is a name I just made up.