I understand the subjective idealist perspective very well, as I used to hold it myself. I’d recommend paying attention to my usage of the word “fundamental”: right now you use it interchangeably with “real”, and so you are missing the point I’m trying to make.
Again, try reasoning in terms of map and territory; I suspect it can be as enlightening for you as it was for me. Mental things are part of a map. When you first see a map, you are more certain of its existence than of the territory it describes. And yet the map itself can still be part of the territory: drawn on the wall of a building, or on a piece of paper produced in a factory that is shown on this exact map.
I begin my exposition by describing what “explanation” means in physicalism.
To explain something is to show how it reduces to something you already understand, so that the mysterious is no longer mysterious.
But what about ChatGPT? How helpful are the “plans” of the machine for assessing consciousness? What additional scientific knowledge do you need? You have the generative model, the most complete set of plans and physical relations you could dream of. With regard to ChatGPT you are almost like Laplace’s demon. Still, you don’t know how sentient it is.
We have a good understanding of how to make one. We still lack an understanding of how exactly it works. Interpretability research is ongoing, and we are slowly learning more and more, filling the blanks in our understanding. I’m not sure why you call an opaque box “the most complete set of plans and physical relations you could dream of”: that’s clearly not the case.
“The physicalist-reductionist explanation (no matter how perfect) is simply not enough for the assessment of sentience.”
It’s not as if you have experienced the most perfect physicalist-reductionist explanation. So how can you know you are not giving up too early? Epiphenomenalism doesn’t try to solve the mystery, so it won’t be able to solve it. Had we simply assumed a priori that we would never learn anything more about GPT, we wouldn’t be doing all the interpretability research, and the assumption would become a self-fulfilling prophecy, leaving us less knowledgeable than we are now.
“If it “considers”, “chooses” and “has goals”, it is conscious.”
I notice a contradiction here. Previously you said that we can’t know whether something other than us is conscious. Now you say that we can, if this something considers, chooses and has goals. Or are you claiming that we can’t possibly know whether something possesses such abilities via materialistic science? If so, would you agree that epiphenomenalism is wrong if I showed you a way to do it?
“Mental things are part of a map. Initially when you see a map you are more certain of its existence than the territory that this map describes. And yet the map itself can still be just a part of territory”
Well, this is quite metaphorical for a hard materialist … :-)
Additionally, I cannot compare “map” and territory, because I (the subject) can only deal with different maps (I cannot access reality, whatever that means, directly), at most different internal representations of something I believe is “out there”.
“Now you say that we can, if this something considers, chooses and has goals”
I am only sure of the fact that I “consider”, “choose” and “have goals”. For the rest, it is only a hypothesis (quite persuasive for very similar organisms that also speak; the more different something is from me, the harder it is to use analogy to assess sentience).
“We have good understanding how to make one. We still lack in understanding how exactly it works. Interpretability research are ongoing and we are slowly learning more and more about it, filling the blanks in our understanding “
I agree with this. In the particular case of “biology”, we have perfect knowledge of biochemistry [= the generative model], but as humans we also want the intermediate layers (like cytology). Still, in my view it does not matter whether you go top-down [classical biology] or bottom-up [AI interpretability]: what you have is phenomenal knowledge.
The “neural correlates of consciousness” people are working with axiomatic formulations (Integrated Information Theory), because they can only trust human experience (where analogy and language are available). For the rest of beings we have no Rosetta Stone that allows comparison of sentience. Even in the case of humans, everything depends on “trusting” others’ experience, because the only consciousness I can measure is my own.
“It’s not that you had the pleasure to experience the most perfect physicalist-reductionist explanation. So how can you know that you are not giving up too early? Epiphenomenalism doesn’t try to solve the mystery so it won’t be able to do it.”
Epiphenomenalism suggests that there is no mystery to solve. Of course, epiphenomenalists are as interested in “understanding” both AI and biological systems phenomenologically as anybody else. But after explaining everything, whether by having the generative model of the system or the intermediate layers of reduction, sentience can only be assessed by the conscious entity itself.
Every conscious being in the universe is epistemologically alone, owning the knowledge of their sentience as a metaphysical absolute certainty that cannot be transferred to any other subject. Splendid loneliness...
“Well, this is quite metaphorical for a hard materialist … :-)”
Were you under the misconception that materialists can’t use metaphors because they aren’t “sciency” enough? :-)
“Additionally, I cannot compare “map” and territory, because I (the subject) can only deal with different maps (I cannot access reality, whatever that means, directly), at most different internal representations of something I believe is “out there”.”
True, the initial model is imperfect. But it’s a start: you can improve on it once you understand the initial framework. Consider a model where you draw your own map based on other maps, or even where you are locked in an infinite recursion of maps referencing other maps, unable to access the territory in any way other than through a map. And still, despite all that, the maps can be made from trees that grow in the territory.
“I am only sure of the fact that I “consider”, “choose” and “have goals”. For the rest, it is only a hypothesis (quite persuasive for very similar organisms that also speak; the more different something is from me, the harder it is to use analogy to assess sentience).”
I think I understand your reasoning very well. And it seems obvious to me that if I managed to show you the counterexample I’m talking about, an entity of which I can be quite certain that it can “consider”, “choose” and “have goals” despite all the reasons you brought up, you would have to accept that you made a mistake somewhere. But I want you to acknowledge this explicitly. Sorry for the annoyance; I promise I’m not doing it just for my own amusement, and I expect it to be more helpful for you this way. So, please, say: “Yes, if you show me such an example, it will falsify my theory”.
“What if there were an entity that could consider possible futures conditional on its actions and choose one of them, the one suiting its goals best, and yet this entity completely lacked conscious experience? What properties of free will would such an entity lack? Or do you believe that such an entity is impossible in our universe (or even in any universe), thus leaving the possibility for the falsification of your theory?”
I don’t think I am making a “theory”, but more an “interpretation”.
My position is that freedom is a property related to conscious experience, so as long as there is no consciousness, my definition of freedom cannot be applied. Conscious choice is the basis of freedom. In fact, this is especially clear because my definition of freedom opens the door to moral responsibility, which obviously cannot exist without consciousness! What am I missing here?
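As an aside, the entity described in the quoted hypothetical (one that considers possible futures conditional on its actions and chooses the one best suiting its goals) can be written down mechanically, with no appeal to experience at all. A minimal illustrative sketch, with all names hypothetical and the “goal” reduced to a toy utility function:

```python
# Hypothetical sketch of the quoted entity: it "considers" possible futures
# conditional on its actions and "chooses" the one that best suits its goals,
# while being nothing but a utility-maximizing procedure.

def choose_action(state, actions, transition, utility):
    """Return the action whose resulting future scores highest on the goals."""
    best_action, best_value = None, float("-inf")
    for action in actions:
        future = transition(state, action)   # "consider" a possible future
        value = utility(future)              # score it against the goals
        if value > best_value:
            best_action, best_value = action, value
    return best_action                       # "choose" the best one

# Toy usage: the goal is to end up as close to 10 as possible.
pick = choose_action(
    state=7,
    actions=[-1, 0, 1, 2],
    transition=lambda s, a: s + a,
    utility=lambda s: -abs(10 - s),
)
# pick == 2, since 7 + 2 lands closest to the goal
```

Whether such a procedure thereby has free will, or conscious experience, is exactly the question under dispute; the sketch only shows that the behavioral description itself is realizable without either assumption.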