“Because of gravity” is only sensible if it refers to that mathematics.
Well, that’s where we disagree (I’d agree with “useful” instead of “sensible”). The mathematical description is just a more precise way of describing what we see, of describing what a thing does. It is not providing any “justification”. The experimental result needs no justification; it just is. And we describe that result, the conditions, the intermediate steps. No matter how precise that description, no matter what language we clothe it in, the “mechanism” always remains “because that’s what gravity does”. We have no evidence to assume that there are “souls” which generate consciousness. However, that is an explanation for consciousness. Just not one surviving the Razor.
To preempt possible misunderstandings, I’m pointing out the distinction between “we have no reason to assume this explanation is the shortest (explains observations in the most compact way) while also being accurate (not contradicting the observations)” and “this is not an explanation; it just says ‘souls do consciousness’ without providing a mechanism”. The first I strongly agree with. The second I strongly disagree with. All our explanations boil down to “because x does y”, be they Maxwell’s, or his demon’s soul’s, or his silver hammer’s.
Turing machines do not perform arbitrary computations instantly.
I wasn’t previously aware that you draw distinctions between “concepts which cannot exist which are useful to ponder” and “concepts which cannot exist which are over the top”. ;-) Both of which could be called magical. While I do see your point, the Computer-Science-y way of thinking (with which you’re obviously familiar) kind of trains one to look at extreme cases / the limits, to test the boundary conditions to check whether some property holds in general, even if those boundary conditions aren’t achievable. Hence the usefulness of TMs.
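(A trivial illustration of that habit, mine rather than anything from the parent comment: when checking whether a property holds in general, the reflex is to probe the edges of the input space first, including degenerate cases you may never meet in practice.)

```python
def running_max(xs):
    """Return a list where element i is the maximum of xs[:i+1]."""
    out, best = [], float("-inf")
    for x in xs:
        best = max(best, x)
        out.append(best)
    return out

# Probe the boundary cases, not just the typical ones.
assert running_max([]) == []                                 # empty input
assert running_max([7]) == [7]                               # single element
assert running_max([3, 1, 4, 1, 5]) == [3, 3, 4, 4, 5]       # typical case
assert running_max(list(range(10**5)))[-1] == 10**5 - 1      # large input
```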
But even considering a reasonably but not wholly unconstrained model-builder, it seems sensible to assume there would be fewer intermediate layers of abstraction needed as resources grow. No need to have as many separate concepts for the macroscopic and the microscopic if you have no difficulty running the computations from a few levels down (which need not be ‘the base level’). Each layer of abstraction introduces new potential inaccuracies and errors, unless we assume nothing is lost in translation. Usually we don’t need to concern ourselves with atoms when we put down a chair, but eventually we will put down a chair and something unexpected will happen, because of an atomic butterfly effect that was invisible from the macroscopic layer of abstraction.
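(A toy sketch of that butterfly-effect point, my own illustration rather than anything from the thread: in a chaotic system, a coarse-grained description that rounds away “microscopic” detail can track the fine-grained one for a while and then diverge completely.)

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n); chaotic for r = 4.
def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

fine = trajectory(0.123456789012, 60)                 # microscopic detail kept
coarse = trajectory(round(0.123456789012, 6), 60)     # detail rounded away at a "higher layer"

for n in (0, 10, 30, 60):
    # Tiny at first; later the trajectories decorrelate completely.
    print(n, abs(fine[n] - coarse[n]))
```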
That’s rather a barrage of questions, but they are intended to be one question, expressed in different ways. I am basically not getting the distinction you are drawing here between “base-level things” and “computational hacks”, and what you get from that distinction.
Let me try one way of explaining what I mean, and one way of explaining why I think it’s an important distinction. Consider two model-builders which are both unconstrained to the maximum degree you’d allow without dismissing them as useless fantasies. Consider two perfectly accurate models of reality (or as accurate as you’d allow them to be). Presumably, they would by necessity be isomorphic, and their shortest representations identical. However, since those shortest representations are uncomputable (let’s retreat to more realism when it suits us), let’s just assume we’re dealing with two non-optimally compressed but perfectly accurate models of reality: one which the uFAI exterminating the humans came up with, and one which the uFAI terminating the Alpha-Centaurians came up with. So they meet over a game of cards and compare models of reality. Maybe the Alpha-Centaurians (floating sentient gas bags, as opposed to blood bags) never sat down (before being exterminated), so the Centauran AI’s model doesn’t contain anything easily corresponding to a chair. Would that make its model of physics less powerful, or less accurate? Maybe, once notes are exchanged, the Alpha-Centauri AI learns that humans (before being exterminated) liked to use ‘chairs’, so it includes some representation of ‘chair’ in its databanks. Maybe the AIs don’t rely on such token concepts in the first place, and just describe different conglomerates of atoms as atoms. It’s not that they couldn’t store the ‘chair’ concept; there would just be no necessity to do so. No added accuracy, no added model fidelity, no added predictive power. Only if they lacked the oomph to describe everything as atoms-only would they start using labels like “chairs” and “flowers” and “human meat energy conversion facilities”.
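(For anyone who wants the “shortest representations are uncomputable” claim spelled out: I have the Kolmogorov-complexity sense in mind; the notation below is my gloss, not something established in the thread.)

\[ K_U(x) = \min\{\, |p| : U(p) = x \,\} \]

for a fixed universal machine U: the length of the shortest program that outputs x. K_U is well defined but not computable by any algorithm, which is why the argument retreats to merely non-optimally compressed, accurate models.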
What I get from that distinction is recognizing pseudo-answers such as “consciousness is an emergent phenomenon and only exists at a certain macroscopic level” as mostly resting on a confusion: thinking of macroscopic layers as somehow self-contained, independent of the lower levels, instead of as computationally friendly abstractions and approximations of those lower levels. When we say “chairs are real, and atoms are real, and quarks are real, and (whichever base level we get down to) is real”, and hold all of those as true at the same time, there is a danger of forgetting that chairs are only real because atoms are real, which are only real because elementary particles are real, which … and so on, a dependency chain going all the way down to who knows where. All the way down to the “everything which can be described by math exists” swirling color vortex. “Consciousness is an actual physical phenomenon which can only be described as a macroscopic process, which only emerges at a higher level of abstraction, yet it exists and creates conscious experience” is self-contradictory, to me. It confuses a layer of abstraction which helps us process the world with a self-contained “emergent” world which is capable of creating conscious experience all on its own. Consciousness must be expressible purely at the base level (whatever it may be), or it cannot be.
Of course it’s not feasible to talk about consciousness or chairs at the quark level (unless you’re Penrose), and “emergent” used as “we talk about it at this level because it seems most accessible to us” is perfectly fine. However, because of the computational-hack vs. territory confusion, “emergent” is all too often used as if it were an answer to some riddle, instead of an admission of insufficient resources.
That’s rather a barrage of text with only a single pass of proof-reading. If you have enough time to go through it, please point out where I’ve been unclear, or what doesn’t make sense to you.
We have no evidence to assume that there are “souls” which generate consciousness. However, that is an explanation for consciousness.
I stick to the view that giving a phenomenon a name is not an explanation. It may be useful to have a name, but it doesn’t tell you anything about the phenomenon. If you are looking at an unfamiliar bird and I tell you that it is a European shadwell, I have told you nothing about the bird. At most, I have given you a pointer with which you can look up what other people know about it, but in the case of “souls”, nobody knows anything. (1)
But even considering a reasonably but not wholly unconstrained model-builder, it seems sensible to assume there would be fewer intermediate layers of abstraction needed as resources grow.
I would expect more abstractions to be used, not fewer. As a practical example of this, look at the history of programming environments. More and more layers of abstraction, as more computational resources have become available to implement them, because it’s more efficient to work that way. Efficiency is always a concern, however much your computational resources grow. Wishing that problem away is beyond the limit of what I consider a useful thought-experiment.
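(A toy illustration of that point, mine and not part of the original comment: the same computation written with and without a higher-level abstraction. The abstraction costs machinery underneath, but once the resources to implement it exist, it is the more efficient way to work.)

```python
from statistics import mean

# Lower-level style: manage the accumulator and the iteration by hand.
def mean_lowlevel(xs):
    total = 0.0
    count = 0
    for x in xs:
        total += x
        count += 1
    return total / count

# Higher-level style: lean on a library abstraction built from the same primitives.
data = [2.0, 3.0, 5.0, 7.0]
assert mean_lowlevel(data) == mean(data)  # same result, reached through different layers
```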
Extending the reality-based fantasy in the direction of Solomonoff induction: if you find “chair” showing up in some Solomonoff-like induction method, what does it mean to say that chairs don’t exist? Or “hydrogen”? If these are concepts that a fundamental method of thinking produces, whoever executes it, well, the distinction between “computational hack” and “really exists” becomes obscure. What work is it doing?
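(For concreteness, and as my own gloss rather than anything stated above: the Solomonoff prior weights a string of observations x by all the programs that reproduce it,)

\[ M(x) = \sum_{p \,:\, U(p) \text{ starts with } x} 2^{-|p|} \]

where U is a universal prefix machine. Short programs dominate the sum, so whatever concepts compress the observations best are exactly what the dominant programs encode; if “hydrogen” or “chair” earns its keep as compression, it shows up there.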
There is a sort of naive realism which holds that a chair exists because it partakes of a really existing essence of chairness, but however seriously that was taken in ancient Greece, I don’t think it’s worth air-time today. Naive unrealism, which says that nothing exists except fundamental particles, I take no more seriously. Working things out from these supposed fundamentals is not possible, regardless of the supply of reality-based fantasy resources. We can’t see quarks. We can only barely see atoms, and only a tiny number of them. What we actually get from processing whatever signals come to us is ideas of macroscopic things, not quarks. There is no real computation that these perceptions are computational approximations to, one that we could make if only we were better at seeing and computing. As we have discovered lower and lower levels, they have explained in general terms what is going on at the higher levels, but they aren’t actually much help in specific computations.
This is quite an old chestnut in the philosophy of science: the more fundamental the entity, the more remote it is from perception.
Maybe the Alpha-Centaurians (floating sentient gas bags, as opposed to blood bags) never sat down (before being exterminated), so the Centauran AI’s model doesn’t contain anything easily corresponding to a chair. Would that make its model of physics less powerful, or less accurate?
The possibilities of the universe are too vast for the human concept of “chair” to have ever been raised to the attention of the Centauran AI. Not having the concept will not have impaired it in any way, because it has no use for it. (Those who believe that zero is not a number, feel free to replace the implied zeroes there by epsilon.) When the Human AI communicates to it something of our history, then it will have that concept.
(1) Neither do they know anything about European shadwells, which is a name I just made up.