there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.
This strikes me as probably true but unproven.
It seems possible for an AI to engage in a process of search within the ontological Hilbert space. It may not be efficient, but a random search should make all parts of any particular space accessible, and a random search across a Hilbert space of ontological spaces should make other types of ontological spaces accessible, and a random search across a Hilbert space containing Hilbert spaces of ontological spaces should… and on up the meta-chain. It isn’t clear why such a system wouldn’t have access to any ontology that is accessible by the human mind.
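As a toy illustration of the accessibility claim (everything here is invented: the "spaces" are just nested sets of labels, and the function name is hypothetical), a random search that also samples across meta-levels eventually reaches any element that exists somewhere in the hierarchy:

```python
import random

# Toy stand-in for a meta-chain of spaces: level 0 holds "ontologies",
# level 1 holds spaces of ontologies. All names are invented for illustration.
SPACES = {
    0: ["physicalism", "platonism", "phenomenology", "dualism"],
    1: [["physicalism", "platonism"], ["phenomenology", "dualism"]],
}

def random_meta_search(target, max_steps=10_000, seed=0):
    """Randomly sample points, jumping between meta-levels at random.

    Because every element of every level has nonzero sampling probability,
    any reachable element is eventually visited.
    """
    rng = random.Random(seed)
    for step in range(max_steps):
        level = rng.choice(list(SPACES))   # pick a meta-level at random
        point = rng.choice(SPACES[level])  # pick a point within that level
        if point == target:
            return step
    return None  # never sampled within the step budget

# The target sits somewhere in the hierarchy, so the search finds it.
assert random_meta_search("phenomenology") is not None
```

Nothing here settles whether such a search can reach ontologies of a genuinely different *kind*; it only illustrates that random sampling leaves no reachable element permanently inaccessible.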
My original formulation is that AI = state-machine materialism = computational epistemology = a closed circle. However, it’s true that you could have an AI which axiomatically imputes a particular phenomenology to the physical states, and such an AI could even reason about the mental life associated with transhumanly complex physical states, all while having no mental life of its own. It might be able to tell us that a certain type of state machine is required in order to feel meta-meta-pain, meta-meta-pain being something that no human being has ever felt or imagined, but which can be defined combinatorically as a certain sort of higher-order intentionality.
However, an AI cannot go from just an ontology of physical causality, to an ontology which includes something like pain, employing only computational epistemology. It would have to be told that state X is “pain”. And even then it doesn’t really know that to be in state X is to feel pain. (I am assuming that the AI doesn’t possess consciousness; if it does, then it may be capable of feeling pain itself, which I take to be a prerequisite for knowing what pain is.)
It appears to me that you are looking for an ontology that provides a natural explanation for things like “qualia” and “consciousness” (perhaps by way of phenomenology). You would refer to this ontology as the “true ontology”. You reject Platonism (“an ontology which reifies mathematical or computational abstractions”) because things like “qualia” are absent from it.
From my perspective, your search for the “true ontology”—which privileges the phenomenological perspective of “consciousness”—is indistinguishable from the scientific realism that you reject under the name “Platonism”—which (by some accounts) privileges a materialistic or mathematical perspective of everything.
For example, using a form of your argument I could reject both of these approaches to realism because they fail to directly account for the phenomenological existence of SpongeBob SquarePants, and his wacky antics.
Much of what you have written roughly matches my perspective, so to be clear I am objecting to the following concepts and many of the conclusions you have drawn from them:
“true ontology”
“true epistemology”
“Consciousness objectively exists”
I claim that variants of antirealism have more to offer than realism. References to “true” and “objective” have implied contexts from which they must be considered, and without those contexts they hold no meaning. There is nothing that we can claim to be universally true or objective that does not have this dependency (including this very claim (meta-recursively...)). Sometimes this concept is stated as “we have no direct access to reality”.
So from what basis can we evaluate “reality” (whatever that is)? We clearly are evaluating reality from within our dynamic existence, some of which we refer to as consciousness. But consciousness can’t be fundamental, because its identification appears to depend upon itself performing the identification; and a description of consciousness appears to be incomplete in that it does not actually generate the consciousness it describes.
Extending this concept a bit: when we go looking for the “reality” that underpins our consciousness, we have to model it in terms of our experience, which depends on… well, it depends on our consciousness and its dynamic dependence on “reality”. Also, these models don’t appear to generate the phenomena they describe, and so it appears that circular reasoning and incompleteness are fundamental to our experience.
Because of this I suggest that we adopt an epistemology that is based on the meta-recursive dependence of descriptions on dynamic contexts. Using an existing dynamic context (such as our consciousness) we can explore reality in the terms that are accessible from within that context. We may not have complete objective access to that context, but we can explore it and form models to describe it, from inside of it.
We can also form new dynamic contexts that operate in terms of the existing context, and these newly formed inner contexts can interact with each other through dynamic patterns of the terms of the existing context. From our perspective we can only interact with our child contexts in the terms of the existing context, but the inner contexts may be generating internal experiences that are very different from those existing outside of them, based on the interaction of the dynamic patterns we have defined for them.
Inverting this perspective: perhaps our consciousness is formed from the experiences generated by the dynamic patterns formed within an exterior context, and that context is itself generated from yet another set of interacting dynamic patterns… and so on. We could attempt to identify this nested set of relationships as its own ontology… only it may not actually be so well structured. It may actually be organized more like a network of partially overlapping contexts, where some parts interact strongly and other parts interact very weakly. In any case, our ability to describe this system will depend heavily on the dynamic perspective from which we observe the related phenomena; and our perspective is of course embedded within the system we are attempting to describe.
I am not attempting to confuse the issues by pointing out how complex this can be. I am attempting to show a few things:
There is no absolute basis, no universal truth, no center, no bottom layer… from our perspective which is embedded in the “stuff of reality”. I make no claims about anything I don’t have access to.
Any ontology or epistemology will inherently be incomplete and circularly self-dependent, from some perspective.
The generation of meaning and existence is dependent on dynamic contexts of evaluation. When considering meaning or existence it is best to consider them in the terms of the context that is generating them.
Some models/ontologies/epistemologies are better than others, but the label “better” is dependent on the context of evaluation and is not fundamental.
The joints that we are attempting to carve the universe at are dependent upon the context of evaluation, and are not fundamental.
Meaning and existence are dynamic, not static. A seemingly static model is being dynamically generated, and stops existing when that modeling stops.
Using a model of dynamic patterns, expressed in terms of other dynamic patterns, we might be able to explain how consciousness emerges from non-conscious stuff, but this model will not be fundamental or complete; it will simply be one way to look at the Whole Sort of General Mish Mash of “reality”.
To apply this to your “principle of non-vagueness”: there is no reason to expect that a mapping between a pair of arbitrary perspectives—between physical and phenomenological states in this case—is necessarily precise (or even meaningful). The very fact that they are two different ways of describing arbitrary slices of “reality” means that they may refer to not-entirely-overlapping parts of “reality”. Certainly physical and phenomenological states are modeled and measured in very different ways, so a great deal of uncertainty/vagueness caused by this non-overlap should be expected.
And this claim:
But as I have argued, not only must the true ontology be deeper than state-machine materialism, there is no way for an AI employing computational epistemology to bootstrap to a deeper ontology.
Current software is rarely programmed to directly model state machines. It may be possible to map the behavior of existing systems to state machines, but that is not generally the perspective held by the programmers, or by the dynamically running software. The same is true for current AI, so from that perspective your claim seems a bit odd to me. That an AI can be mapped to a state machine is one particular perspective on the AI involved, and that mapping does not change the fact that the AI is implemented within the same “reality” that we are. If our physical configuration (from some perspective) allows us to generate consciousness, then there is no general barrier that should prevent AI systems from achieving a similar form of consciousness.
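The point that the state-machine description is an imposed perspective can be made concrete with a toy sketch (the function and names here are invented for illustration): an ordinary program whose source never mentions states can be re-described, after the fact, as an explicit state machine with identical behavior.

```python
# An ordinary program: nothing in the source mentions "states".
def collatz_length(n):
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

# The same behavior re-described as an explicit state machine, where the
# "state" is the pair (n, steps) and `transition` is the step function.
# The mapping is a perspective we impose; the first program never used it.
def collatz_length_fsm(n):
    def transition(state):
        n, steps = state
        return (3 * n + 1 if n % 2 else n // 2, steps + 1)
    state = (n, 0)
    while state[0] != 1:
        state = transition(state)
    return state[1]

# Both perspectives agree on every input.
assert collatz_length(27) == collatz_length_fsm(27)
```

The equivalence holds for any input, which is exactly the sense in which the state-machine view is a valid but optional re-description rather than a property the software itself holds.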
I recognize that these descriptions may not bridge our inference gap; in fact they may not even properly encode my intended meaning. I can see that you are searching for an epistemology that better encodes your understanding of the universe; I’m just tossing in my thoughts to see if we can generate some new perspectives.
People have noticed circular dependencies among subdisciplines of philosophy before. A really well-known one is the cycle connecting ontology and epistemology: your epistemology should imply your ontology, and your ontology must permit your epistemology. More arcane is the interplay between phenomenology, epistemology, and methodology.
Your approach to ontology seems to combine these two cycles, with the p/e/m cycle being more fundamental. All ontological claims are said to be dependent on a cognitive context, and this justifies ontological relativism.
That’s not my philosophy; I see the possibility of reaching foundations, and also the possibility of countering the relativistic influence of the p/e/m perspective, simply by having a good ontological account of what the p/e/m cycle is about. From this perspective, the cycle isn’t an endless merry-go-round, it’s a process that you iterate in order to perfect your thinking. You chase down the implications of one ology for another, and you keep that up until you have something that is complete and consistent.
Or until you discover the phenomenological counterpart of Gödel’s theorem. In what you write I don’t see a proof that foundations don’t exist or can’t be reached. Perhaps they can’t, but in the absence of a proof, I see no reason to abandon cognitive optimism.
A really well-known one is the cycle connecting ontology and epistemology: your epistemology should imply your ontology, and your ontology must permit your epistemology. More arcane is the interplay between phenomenology, epistemology, and methodology.
I have read many of your comments and I am uncertain how to model your meanings for ‘ontology’, ‘epistemology’ and ‘methodology’, especially in relation to each other.
Do you have links to sources that describe these types of cycles, or are you willing to describe the cycles you are referring to—in the process establishing the relationship between these terms?
Your approach to ontology seems to combine these two cycles, with the p/e/m cycle being more fundamental. All ontological claims are said to be dependent on a cognitive context, and this justifies ontological relativism.
The term “cycles” doesn’t really capture my sense of the situation. Perhaps the sense of recurrent hypergraphs is closer.
Also, I do not limit my argument to things we describe as cognitive contexts. My argument allows for any type of context of evaluation. For example, an antenna interacting with a photon creates a context of evaluation that generates meaning in terms of the described system.
...and this justifies ontological relativism.
I think that this epistemology actually justifies something more like an ontological perspectivism, but it generalizes the context of evaluation beyond the human centric concepts found in relativism and perspectivism. Essentially it stops privileging human consciousness as the only context of evaluation that can generate meaning. It is this core idea that separates my epistemology from most of the related work I have found in epistemology, philosophy, linguistics and semiotics.
In what you write I don’t see a proof that foundations don’t exist or can’t be reached.
I’m glad you don’t see those proofs because I can’t claim either point from the implied perspective of your statement. Your statement assumes that there exists an objective perspective from which a foundation can be described. The problem with this concept is that we don’t have access to any such objective perspective. We can only identify the perspective as “objective” from some perspective… which means that the identified “objective” perspective depends upon the perspective that generated the label, rendering the label subjective.
You do provide an algorithm for finding an objective description:
I see the possibility of reaching foundations, and also the possibility of countering the relativistic influence of the p/e/m perspective, simply by having a good ontological account of what the p/e/m cycle is about. From this perspective, the cycle isn’t an endless merry-go-round, it’s a process that you iterate in order to perfect your thinking. You chase down the implications of one ology for another, and you keep that up until you have something that is complete and consistent.
Again from this it seems that while you reject some current conclusions of science, you actually embrace scientific realism—that there is an external reality that can be completely and consistently described.
As long as you are dealing in terms of maps (descriptions), it isn’t clear to me that you ever escape the language hierarchy, and therefore you are never free of Gödel’s theorems. To achieve the level of completeness and consistency you strive for, it seems that you need to describe reality in terms equivalent to those it uses… which means you aren’t describing it so much as generating it. If this description of a reality is complete, then it is rendered in terms of itself, and only itself, which would make it a reality independent of ours, and so we would have no access to it (otherwise it would simply be a part of our reality and therefore not complete). Descriptions of reality that generate reality aren’t directly accessible by the human mind; any translation of these descriptions into human-accessible terms would render the description subject to Gödel’s theorems.
I see no reason to abandon cognitive optimism.
I don’t want anybody to abandon the search for new and better perspectives on reality just because we don’t have access to an objective perspective. But by realizing that there are no objective perspectives we can stop arguing about the “right” way of viewing all of reality and spend that time finding “good” or “useful” ways to view parts of it.
Do you have links to sources that describe these types of cycles, or are you willing to describe the cycles you are referring to—in the process establishing the relationship between these terms?
Let’s say that ontology is the study of that which exists, epistemology the study of knowledge, phenomenology the study of appearances, and methodology the study of technique. There’s naturally an interplay between these disciplines. Each discipline has methods, the methods might be employed before you’re clear on how they work, so you might perform a phenomenological study of the methods in order to establish what it is that you’re doing. Reflection is supposed to be a source of knowledge about consciousness, so it’s an epistemological methodology for constructing a phenomenological ontology… I don’t have a formula for how it all fits together (but if you do an image search on “hermeneutic circle” you can find various crude flowcharts). If I did, I would be much more advanced.
For example, an antenna interacting with a photon creates a context of evaluation that generates meaning in terms of the described system.
I wouldn’t call that meaning, unless you’re going to explicitly say that there are meaning-qualia in your antenna-photon system. Otherwise it’s just cause and effect. True meaning is an aspect of consciousness. Functionalist “meaning” is based on an analogy with meaning-driven behavior in a conscious being.
it stops privileging human consciousness as the only context of evaluation that can generate meaning. It is this core idea that separates my epistemology from most of the related work
Does your philosophy have a name? Like “functionalist perspectivism”?
Let’s say that ontology is the study of that which exists, epistemology the study of knowledge, phenomenology the study of appearances, and methodology the study of technique.
Thanks for the description. That would place the core of my claims as an ontology, with implications for how to approach epistemology and phenomenology.
I wouldn’t call that meaning, unless you’re going to explicitly say that there are meaning-qualia in your antenna-photon system. Otherwise it’s just cause and effect. True meaning is an aspect of consciousness. Functionalist “meaning” is based on an analogy with meaning-driven behavior in a conscious being.
I recognize that my use of “meaning” is not the standard one. I won’t defend this use because my model for it is still sloppy, but I will attempt to explain it.
The antenna-photon interaction that you refer to as cause and effect I would refer to as a change in the dynamics of the system, as described from a particular perspective.
To refer to this interaction as cause and effect requires that some aspect of the system be considered the baseline; the effect is then how the state of the system is modified by the influencing entity. Such a perspective can be adopted and might even be useful. But the perspective that I am holding is that the antenna and the photon are interacting. This is a process that modifies both systems. The “meaning” that is formed is unique to the system; it depends on the particulars of the systems and their interactions. Within the system, that “meaning” exists in terms of the dynamics allowed by the nature of the system. When we describe that “meaning” we do so in terms generated from an external perspective, but that description will only capture certain aspects of the “meaning” actually generated within the system.
How does this description compare with your concept of “meaning-qualia”?
Does your philosophy have a name? Like “functionalist perspectivism”?
I think that both functionalism and perspectivism are poor labels for what I’m attempting to describe: both philosophies pay too much attention to human consciousness, and neither is set up to explain the nature of existence in general.
For now I’m calling my philosophy the interpretive context hypothesis (ICH), at least until I discover a better name or a better model.
The contexts from which you identify “state-machine materialism” and “pain” appear to be very different from each other, so it is no surprise that you find no room for “pain” within your model of “state-machine materialism”.
You appear to identify this issue directly in this comment:
My position is that a world described in terms of purely physical properties or purely computational properties does not contain qualia. Such a description itself would contain no reference to qualia.
Looking for the qualia of “pain” in a state-machine model of a computer is like trying to find out what my favorite color is by using a hammer to examine the contents of my head. You are simply using the wrong interface to the system.
If you examine the compressed and encrypted bit sequence stored on a DVD as a series of 0 and 1 characters, you will not be watching the movie.
If you don’t understand the Russian language, then for a novel written in Russian you will not find the subtle twists of plot compelling.
If you choose some perspectives on Searle’s Chinese room thought experiment you will not see the Chinese speaker, you will only see the mechanism that generates Chinese symbols.
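The DVD analogy above can be shown literally in a few lines (a hypothetical toy example, not any particular codec): the same bytes yield an opaque bit string through one interface and a meaningful word through another.

```python
data = "pain".encode("utf-8")

# Interface 1: inspect the raw bits — structure is visible, meaning is not.
bits = "".join(f"{byte:08b}" for byte in data)
print(bits)  # '01110000...' — nothing here "looks like" pain

# Interface 2: decode with the right codec — the content appears.
print(data.decode("utf-8"))  # prints "pain"
```

The bits and the decoded string are the same physical data; which one “exists” for you depends entirely on the interface through which you examine it.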
So stuff like “qualia”, “pain”, “consciousness”, and “electrons” only exist (hold meaning) from perspectives that are capable of identifying them. From other perspectives they are non-existent (have no meaning).
If you choose a perspective on “conscious experience” that requires a specific sort of physical entity to be present, then a computer without that entity will never qualify as “conscious”, for you. Others may disagree, perhaps pointing out aspects of its responses to them, or how some aspects of the system are functionally equivalent to the physical entity you require. So, which is the right way to identify consciousness? To figure that out you need to create a perspective from which you can identify one as right, and the other as wrong.
It seems possible for an AI to engage in a process of search within the ontological Hilbert space. It may not be efficient, but a random search should make all parts of any particular space accessible, and a random search across a Hilbert space of ontological spaces should make other types of ontological spaces accessible, and a random search across a Hilbert space containing Hilbert spaces of ontological spaces should… and on up the meta-chain. It isn’t clear why such a system wouldn’t have access to any ontology that is accessible by the human mind.
My original formulation is that AI = state-machine materialism = computational epistemology = a closed circle. However, it’s true that you could have an AI which axiomatically imputes a particular phenomenology to the physical states, and such an AI could even reason about the mental life associated with transhumanly complex physical states, all while having no mental life of its own. It might be able to tell us that a certain type of state machine is required in order to feel meta-meta-pain, meta-meta-pain being something that no human being has ever felt or imagined, but which can be defined combinatorically as a certain sort of higher-order intentionality.
However, an AI cannot go from just an ontology of physical causality, to an ontology which includes something like pain, employing only computational epistemology. It would have to be told that state X is “pain”. And even then it doesn’t really know that to be in state X is to feel pain. (I am assuming that the AI doesn’t possess consciousness; if it does, then it may be capable of feeling pain itself, which I take to be a prerequisite for knowing what pain is.)
Continuing my argument.
It appears to me that you are looking for an ontology that provides a natural explanation for things like “qualia” and “consciousness” (perhaps by way of phenomenology). You would refer to this ontology as the “true ontology”. You reject Platonism “an ontology which reifies mathematical or computational abstractions”, because things like “qualia” are absent.
From my perspective, your search for the “true ontology”—which privileges the phenomenological perspective of “consciousness”—is indistinguishable from the scientific realism that you reject under the name “Platonism”—which (by some accounts) privileges a materialistic or mathematical perspective of everything.
For example, using a form of your argument I could reject both of these approaches to realism because they fail to directly account for the phenomenological existence of SpongeBob SquarePants, and his wacky antics.
Much of what you have written roughly matches my perspective, so to be clear I am objecting to the following concepts and many of the conclusions you have drawn from them:
“true ontology”
“true epistemology”
“Consciousness objectively exists”
I claim that variants of antirealism have more to offer than realism. References to “true” and “objective” have implied contexts from which they must be considered, and without those contexts they hold no meaning. There is nothing that we can claim to be universally true or objective that does not have this dependency (including this very claim (meta-recursively...)). Sometimes this concept is stated as “we have no direct access to reality”.
So from what basis can we evaluate “reality” (whatever that is)? We clearly are evaluating reality from within our dynamic existence, some of which we refer to as consciousness. But consciousness can’t be fundamental, because its identification appears to depend upon itself performing the identification; and a description of consciousness appears to be incomplete in that it does not actually generate the consciousness it describes.
Extending this concept a bit, when we go looking for the “reality” that underpins our consciousness, we have to model that it based in terms of our experience which is dependent upon… well it depends on our consciousness and its dynamic dependence on “reality”. Also, these models don’t appear to generate the phenomenon they describe, and so it appears that circular reasoning and incompleteness are fundamental to our experience.
Because of this I suggest that we adopt an epistemology that is based on the meta-recursive dependence of descriptions on dynamic contexts. Using an existing dynamic context (such as our consciousness) we can explore reality in the terms that are accessible from within that context. We may not have complete objective access to that context, but we can explore it and form models to describe it, from inside of it.
We can also form new dynamic contexts that operate in terms of the existing context, and these newly formed inner contexts can interact with each in terms of dynamic patterns of the terms of the existing context. From our perspective we can only interact with our child contexts in the terms of the existing context, but the inner contexts may be generating internal experiences that are very different than those existing outside of it, based on the interaction of the dynamic patterns we have defined for them.
Inverting this perspective, then perhaps our consciousness is formed from the experiences generated from the dynamic patterns formed within an exterior context, and that context is itself generated from yet another set of interacting dynamic patterns… and so on. We could attempt to identify this nested set of relationships as its own ontology… only it may not actually be so well structured. It may actually be organized more like a network of partially overlapping contexts, where some parts interact strongly and other parts interact very weakly. In any case, our ability to describe this system will depend heavily on the dynamic perspective from which we observe the related phenomenon; and our perspective is of course embedded within the system we are attempting to describe.
I am not attempting to confuse the issues by pointing out how complex this can be. I am attempting to show a few things:
There is no absolute basis, no universal truth, no center, no bottom layer… from our perspective which is embedded in the “stuff of reality”. I make no claims about anything I don’t have access to.
Any ontology or epistemology will inherently be incomplete and circularly self-dependent, from some perspective.
The generation of meaning and existence is dependent on dynamic contexts of evaluation. When considering meaning or existence it is best to consider them in the terms of the context that is generating them.
Some models/ontologies/epistemologies are better than others, but the label “better” is dependent on the context of evaluation and is not fundamental.
The joints that we are attempting to carve the universe at are dependent upon the context of evaluation, and are not fundamental.
Meaning and existence are dynamic, not static. A seemingly static model is being dynamically generated, and stops existing when that modeling stops.
Using a model of dynamic patterns, based in terms of dynamic patterns we might be able to explain how consciousness emerges from non-conscious stuff, but this model will not be fundamental or complete, it will simply be one way to look at the Whole Sort of General Mish Mash of “reality”.
To apply this to your “principle of non-vagueness”. There is no reason to expect that mapping between pairs of arbitrary perspectives—between physical and phenomenological states in this case—is necessarily precise (or even meaningful). Simply because they are two different ways of describing arbitrary slices of “reality” means that they may refer to not-entirely overlapping parts of “reality”. Certainly physical and phenomenological states are modeled and measured in very different ways, so a great deal of non-overlap caused uncertainty/vagueness should be expected.
And this claim:
Current software is rarely programmed to directly model state-machines. It may be possible to map the behavior of existing systems to state machines, but it is not generally the perspective generally held by the programmers, or by the dynamically running software. The same is true for current AI, so from that perspective your claim seems a bit odd to me. The perspective that an AI can be mapped to a state-machine is based on a particular perspective on the AI involved, but in fact that mapping does not discount that the AI is implemented within the same “reality” that we are. If our physical configuration (from some perspective) allows us to generate consciousness then there is no general barrier that should prevent AI systems from achieving a similar form of consciousness.
I recognize that these descriptions that may not bridge our inference gap; in fact they may not even properly encode my intended meaning. I can see that you are searching for an epistemology that better encodes for your understanding of the universe; I’m just tossing in my thoughts to see if we can generate some new perspectives.
People have noticed circular dependencies among subdisciplines of philosophy before. A really well-known one is the cycle connecting ontology and epistemology: your epistemology should imply your ontology, and your ontology must permit your epistemology. More arcane is the interplay between phenomenology, epistemology, and methodology.
Your approach to ontology seems to combine these two cycles, with the p/e/m cycle being more fundamental. All ontological claims are said to be dependent on a cognitive context, and this justifies ontological relativism.
That’s not my philosophy; I see the possibility of reaching foundations, and also the possibility of countering the relativistic influence of the p/e/m perspective, simply by having a good ontological account of what the p/e/m cycle is about. From this perspective, the cycle isn’t an endless merry-go-round, it’s a process that you iterate in order to perfect your thinking. You chase down the implications of one ology for another, and you keep that up until you have something that is complete and consistent.
Or until you discover the phenomenological counterpart of Gödel’s theorem. In what you write I don’t see a proof that foundations don’t exist or can’t be reached. Perhaps they can’t, but in the absence of a proof, I see no reason to abandon cognitive optimism.
I have read many of your comments and I am uncertain how to model your meanings for ‘ontology’, ‘epistemology’ and ‘methodology’, especially in relation to each other.
Do you have links to sources that describe these types of cycles, or are you willing to describe the cycles you are referring to—in the process establishing the relationship between these terms?
The term “cycles” doesn’t really capture my sense of the situation. Perhaps the sense of recurrent hypergraphs is closer.
Also, I do not limit my argument only to things we describe as cognitive contexts. My argument allows for any type of context of evaluation. For example an antennae interacting with a photon creates a context of evaluation that generates meaning in terms of the described system.
I think that this epistemology actually justifies something more like an ontological perspectivism, but it generalizes the context of evaluation beyond the human centric concepts found in relativism and perspectivism. Essentially it stops privileging human consciousness as the only context of evaluation that can generate meaning. It is this core idea that separates my epistemology from most of the related work I have found in epistemology, philosophy, linguistics and semiotics.
I’m glad you don’t see those proofs because I can’t claim either point from the implied perspective of your statement. Your statement assumes that there exists an objective perspective from which a foundation can be described. The problem with this concept is that we don’t have access to any such objective perspective. We can only identify the perspective as “objective” from some perspective… which means that the identified “objective” perspective depends upon the perspective that generated the label, rendering the label subjective.
You do provide an algorithm for finding an objective description:
Again from this it seems that while you reject some current conclusions of science, you actually embrace scientific realism—that there is an external reality that can be completely and consistently described.
As long as you are dealing in terms of maps (descriptions), it isn’t clear to me that you ever escape the language hierarchy, and therefore you are never free of Gödel’s theorems. To achieve the level of completeness and consistency you strive for, it seems that you need to describe reality in terms equivalent to those it uses… which means you aren’t describing it so much as generating it. If this description of a reality is complete, then it is rendered in terms of itself, and only itself, which would make it a reality independent of ours, and so we would have no access to it (otherwise it would simply be a part of our reality and therefore not complete). Descriptions of reality that generate reality aren’t directly accessible by the human mind; any translation of these descriptions into human-accessible terms would render the description subject to Gödel’s theorems.
I don’t want anybody to abandon the search for new and better perspectives on reality just because we don’t have access to an objective perspective. But by realizing that there are no objective perspectives, we can stop arguing about the “right” way of viewing all of reality and spend that time finding “good” or “useful” ways to view parts of it.
Let’s say that ontology is the study of that which exists, epistemology the study of knowledge, phenomenology the study of appearances, and methodology the study of technique. There’s naturally an interplay between these disciplines. Each discipline has methods, the methods might be employed before you’re clear on how they work, so you might perform a phenomenological study of the methods in order to establish what it is that you’re doing. Reflection is supposed to be a source of knowledge about consciousness, so it’s an epistemological methodology for constructing a phenomenological ontology… I don’t have a formula for how it all fits together (but if you do an image search on “hermeneutic circle” you can find various crude flowcharts). If I did, I would be much more advanced.
I wouldn’t call that meaning, unless you’re going to explicitly say that there are meaning-qualia in your antenna-photon system. Otherwise it’s just cause and effect. True meaning is an aspect of consciousness. Functionalist “meaning” is based on an analogy with meaning-driven behavior in a conscious being.
Does your philosophy have a name? Like “functionalist perspectivism”?
Thanks for the description. That would place the core of my claims as an ontology, with implications for how to approach epistemology and phenomenology.
I recognize that my use of “meaning” is not the normative one. I won’t defend this usage, because my model for it is still sloppy, but I will attempt to explain it.
The antenna-photon interaction that you refer to as cause and effect I would refer to as a change in the dynamics of the system, as described from a particular perspective.
To refer to this interaction as cause and effect requires that some aspect of the system be considered the baseline; the effect, then, is how the state of the system is modified by the influencing entity. Such a perspective can be adopted and might even be useful. But the perspective that I am holding is that the antenna and the photon are interacting. This is a process that modifies both systems. The “meaning” that is formed is unique to the system; it depends on the particulars of the systems and their interactions. Within the system, that “meaning” exists in terms of the dynamics allowed by the nature of the system. When we describe that “meaning”, we do so in terms generated from an external perspective, but that description will only capture certain aspects of the “meaning” actually generated within the system.
How does this description compare with your concept of “meaning-qualia”?
I think that both functionalism and perspectivism are poor labels for what I’m attempting to describe, because both philosophies pay too much attention to human consciousness, and neither is set up to explain the nature of existence generally.
For now I’m calling my philosophy the interpretive context hypothesis (ICH), at least until I discover a better name or a better model.
The contexts from which you identify “state-machine materialism” and “pain” appear to be very different from each other, so it is no surprise that you find no room for “pain” within your model of “state-machine materialism”.
You appear to identify this issue directly in this comment:
Looking for the qualia of “pain” in a state-machine model of a computer is like trying to find out what my favorite color is by using a hammer to examine the contents of my head. You are simply using the wrong interface to the system.
If you examine the compressed and encrypted bit sequence stored on a DVD as a series of 0 and 1 characters, you will not be watching the movie.
If you don’t understand Russian, you will not find the subtle plot twists of a novel written in Russian compelling.
If you choose some perspectives on Searle’s Chinese room thought experiment, you will not see a Chinese speaker; you will only see the mechanism that generates Chinese symbols.
So things like “qualia”, “pain”, “consciousness”, and “electrons” only exist (hold meaning) from perspectives that are capable of identifying them. From other perspectives they are non-existent (have no meaning).
If you choose a perspective on “conscious experience” that requires a specific sort of physical entity to be present, then a computer without that entity will never qualify as “conscious” for you. Others may disagree, perhaps pointing out aspects of its responses to them, or how some aspects of the system are functionally equivalent to the physical entity you require. So, which is the right way to identify consciousness? To figure that out, you need to create a perspective from which you can identify one as right and the other as wrong.