I don’t know about excruciating detail, but I think the general idea is this:
One would not predict the existence of evil in a universe created by a benevolent God.
One would not predict the existence of intrinsically subjective qualities in an entirely physical, and therefore entirely objective, universe.
Disagree.
Let’s look at the actual observations. I see red. It has some atomic “redness” that is different from the atomic “blueness” of blue, the atomic pleasure of orgasm, and the atomic feeling of cold.
Each of these atomic “qualia” is subjectively irreducible. There are no smaller parts that my subjective experience of “red” is made up of.
Is this roughly the qualia problem? That’s my understanding of it.
Here’s a simple computer program that reports on whether or not it has atomic subjective experience:
qualia = {"red", "blue", "cold", "pleasure"}
memory_associations = {red = {"anger", "hot"}, blue = {"cold", "calm"},
pleasure = {"hot", "good"}}
function experience_qualia(input)
for _, q in ipairs(qualia) do
if input == q then
print("my experience of", input, "is the same as", q)
else
print(q, "and", input, "feel different")
end
end
print("furthermore, the feeling of", input, "seems connected to")
print(table.concat(memory_associations[input], " and "))
print("I have no way of reducing these experiences, therefore I exist outside physics")
end
experience_qualia"red"
experience_qualia"blue"
From the inside, the program experiences no mechanisms of reduction of these atomic qualia, but from the outside, we can see that they are strings, made up of bytes, and compared by hash value. While I don’t know the details of the neuroscience of qualia, I expect the findings to be roughly similar. Something will be an irreducible symbol with various associations and uniqueness from within the system, but from outside, we will be able to see “oh look, redness is this particular pattern of neurons firing”.
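(To make the outside view concrete: the interpreter itself will happily decompose an “atomic” quale that the program’s own reports never break down. A minimal sketch, reusing the strings from the program above.)

    -- from outside, the "irreducible" atom is visibly an array of bytes:
    print(("red"):byte(1, -1))    --> 114   101   100
    -- while from inside the program, all that is ever done with it is an
    -- opaque equality test:
    print("red" == "blue")        --> false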
EDIT: LW killed my program formatting. It should still run (lua, by the way)
I could not find an online Lua-bin, but pasting it into a Lua Demo and clicking Run does the trick.
did it work?
Having never seen any Lua, I’m surprised by how much it looks like Python. Any idea whether Python stole its set literals from Lua?
ETA: Python port (with output)
Also, lots of syntax differences (end, then, do, function, whitespace, elseif, etc.). They are similar in that they are both dynamic languages. I don’t think anything was particularly inspired by anything else.
Ah, ok, in Python {‘x’, ‘y’} would denote an unordered set containing ‘x’ and ‘y’; I assumed a correspondence.
thanks for the port. lua unordered sets are a bit more verbose:
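Something like the usual keys-as-elements idiom (a sketch):

    -- a Lua "set": elements stored as table keys, membership is a lookup
    local qualia_set = {red = true, blue = true, cold = true, pleasure = true}
    print(qualia_set["red"])        --> true
    print(qualia_set["sdfg66df"])   --> nil (not a member)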
Next up we should extend it with free will and true knowledge (causal entanglement).
And I think someone asked about not demonstrating qualia sameness in the absence of truthful reporting.
(I’m not going to waste more time on any of this, but it could be done)
If you mean this… to be clear, I didn’t complain about it not demonstrating “qualia sameness”. I complained (implicitly) that the claim that it demonstrated all the properties that some people claim demonstrate qualia in real-world systems (like people) was demonstrably false.
(In particular, that it didn’t demonstrate anything persistent across different reporting, whereas my own experience does demonstrate something persistent across different reporting.)
I agree that actually recoding it to demonstrate such persistence is a waste of time; far simpler is to not make such over-reaching claims.
I removed “complained”.
Point taken. As I tried to explain somewhere, it was all the properties that I thought of at the moment, with the implicit assertion that the rest of the properties could be demonstrated as required.
Reported.
oh. thank you very much. I should learn to do that.
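(As for the “it could be done” point a few comments up: a hypothetical sketch of what “persistence across different reporting” might look like in the toy program. None of this is the original code; the two reporting functions are made up for illustration.)

    -- hypothetical extension: one persisting quale, two reporting channels
    current_quale = "red"

    function report_verbally()
        print("I see " .. current_quale)
    end

    function report_by_button()
        if current_quale == "red" then print("*presses the red button*") end
    end

    report_verbally()    -- both reports are driven by
    report_by_button()   -- the same persisting atom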
“From the inside, the program experiences no mechanisms of reduction of these atomic qualia”
Materialism predicts that algorithms have an “inside”?
As a further note, I’ll have to say that if all the blue and all the red in my visual experience were switched around, my hunch tells me that I’d be experiencing something different; not just in the sense of different memory associations, but that the visual experience itself would be different. It would not just be that “red” is associated with hot and that “blue” is associated with cold… the qualia of the visual experience itself would be different.
Materialism predicts that algorithms have an “inside”?
Yes. The scene from within a formal system (like algebra) has certain qualities (equations, variables, functions, etc.) that are different from the scene outside it (markings on paper, the equals sign, BEDMAS, variable names, brackets for function application).
That’s not really a materialism thing, it’s a math thing.
As a further note, I’ll have to say that if all the blue and all the red in my visual experience were switched around, my hunch tells me that I’d be experiencing something different; not just in the sense of different memory associations, but that the visual experience itself would be different. It would not just be that “red” is associated with hot and that “blue” is associated with cold… the qualia of the visual experience itself would be different.
Hence the part where they are compared to other qualia. Maybe that’s not enough, but imagining getting “blue” or “sdfg66df” instead of “red” (which is the evidence you are using) is of course going to return “they are different”, because they don’t compare equal, even if the output of the computation ends up being the same.
That’s not really a materialism thing, it’s a math thing.
I’m under the impression that what you describe falls under computationalism, not materialism, but my reading on these ideas is shallow and I may be confusing some of these terms...
I must say I can’t tell the difference between materialism (“the mind is built of stuff”) and computationalism (“the mind is built of algorithms, running on stuff”).
If I get them confused in some way, sorry.
That thought experiment doesn’t make much sense. If the experiences were somehow switched, but everything else kept the same (i.e. all your memories and associations of red are still connected to each other and to everything else in the same way), you wouldn’t notice the difference; everything would still match your memories exactly. If there even is such a thing as raw qualia, there is no reason to suppose they are stable from one moment to the next; as long as the correct network of associations is triggered, there is no evolutionary advantage either way.
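(A sketch of that, in terms of the toy program above: swap the atoms consistently, labels and associations together, and every report keeps exactly the same shape, so nothing inside registers the switch. The inversion helpers here are hypothetical additions, not part of the original program, and this assumes the definitions from the program above are in scope.)

    -- hypothetical extension: invert the program's qualia consistently
    local swap = {red = "blue", blue = "red"}
    local function invert(s) return swap[s] or s end

    -- rebuild both tables with "red" and "blue" exchanged everywhere
    local inverted_qualia = {}
    for i, q in ipairs(qualia) do inverted_qualia[i] = invert(q) end

    local inverted_associations = {}
    for q, assoc in pairs(memory_associations) do
        local a = {}
        for i, v in ipairs(assoc) do a[i] = invert(v) end
        inverted_associations[invert(q)] = a
    end

    -- swap the program's "experience" wholesale and ask again:
    qualia, memory_associations = inverted_qualia, inverted_associations
    experience_qualia("blue")  -- same report shape that "red" produced before the swap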
There’s no evidence that your programme experiences anything from the inside. Which is one way in which your claim is surreptitiously eliminativist. Another is that, examined from the outside, we can tell what the programme’s qualia are: they are nothing. They have no qualities other than being different from one another. But qualia don’t seem like that from the inside! You say your programme’s qualia are subjective because it can’t examine their internal structure...but there isn’t any. They are not subjective somethings, they are just nothings.
There’s no evidence that your programme experiences anything from the inside.
then neither is there evidence that I do, or you do.
they are nothing. They have no qualities other than being different from one another.
I can’t think of qualities that my subjective experience of “red” has that the atom “red” does not have in my program.
But qualia don’t seem like that from the inside!
Sure they do. Redness has this unique redness to it, the same way “red” has this uniqueness.
your programme’s qualia are subjective because
I was using “subjective” as a perspective, not a quality.
can’t examine their internal structure...but there isn’t any.
Sure there is. Go look in the Lua source code: there is the global string memo-table, GC metadata, string contents (an array of bytes), type annotations, etc.
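(For what it’s worth, the memo-table even shows through at the Lua level: in stock Lua, short strings are interned in that global table, so string equality is, under the hood, a pointer comparison.)

    local a = "re" .. "d"    -- built at runtime
    local b = "red"          -- a literal
    -- both end up as the same interned object in the global string table,
    -- so this == is a pointer comparison in the C source:
    print(a == b)            --> true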
then neither is there evidence that I do, or you do.
I have plenty of evidence of my own experiences. Were you restricting “evidence” to third-person, objective evidence?
I can’t think of qualities that my subjective experience of “red” has that the atom “red” does not have in my program.
I can. I think that if I experienced nothing but an even expanse of red, that would be different from experiencing nothing but a salty taste, or nothing but middle C.
Sure they do. Redness has this unique redness to it, the same way “red” has this uniqueness.
Redness isn’t expressible. “Object at 0x8cf643” is.
Your programme’s qualia are subjective because it can’t examine their internal structure...but there isn’t any.
Sure there is. Go look in the Lua source code: there is the global string memo-table, GC metadata, string contents (an array of bytes), type annotations, etc.
If that’s accessible to them, it’s objective and expressible. If not, it’s just a nothing. Either way, you do not have a “something” that is subjective.
I wouldn’t predict the existence of self-replicating molecules either. In fact, I’m not sure I’m in a position to predict anything at all about physical phenomena without appealing to empirical knowledge I’ve gathered from this particular physical world.
It’s a pickle, all right.
OK: “does not predict” was not strong enough. In each case, the opposite is predicted.