[Epistemic status: napkin]
My current-favourite frame on “qualia” is that it refers to the class of objects we can think about (eg, they’re part of what generates what I say rn) for which behaviour is invariant under structure-preserving transformations of those objects.
(There’s probably some cool way to say that with category theory or transformations, and it may or may not give clarity, but idk.)
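Here’s one napkin-grade attempt at a formal version (symbols are made up; no promises this is the category-theoretic statement): write encode : World → Tokens for perception and decode : Tokens → Behaviour for action, so observable behaviour is B = decode ∘ encode. Then for any bijection σ on Tokens, relabelling both sides at once changes nothing:

$$B' = (\mathrm{decode} \circ \sigma^{-1}) \circ (\sigma \circ \mathrm{encode}) = \mathrm{decode} \circ \mathrm{encode} = B$$

Qualia-like objects would be the ones you can push a σ through without B noticing.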
Eg, my “yellow” could map to blue and my “blue” to yellow, and we could still talk together without noticing anything amiss, even if your “yellow” maps to yellow for you.
Both “yellow” and “blue” are representational objects: the things we use to represent/refer to other things, like memory addresses in a machine. For externally observable behaviour, all that matters is what they dereference to, not where in memory they happen to live. If you swap two representational objects, while ensuring you don’t change anything about how your neurons link up to causal nodes outside the system, your behaviour stays the same.
Note that this isn’t the case for most objects. I can’t swap hand⇄tomato without obvious glitches, like me saying “what a tasty-looking tomato!” and trying to eat my hand. Hands and tomatoes do not commute.
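A minimal sketch of both cases in code (all the names here, behave/encode/decode and the toy agents, are made up for illustration; this is a cartoon of the argument, not a model of brains):

```python
# Toy model: behaviour factors through internal tokens via a grounding map.
def behave(percept, agent):
    """World -> internal token -> utterance."""
    token = agent["encode"][percept]          # perception: world -> token
    return f"what a tasty-looking {agent['decode'][token]}!"

# Agent A stores 'tomato' at token t1 and 'hand' at token t2.
agent_a = {
    "encode": {"tomato": "t1", "hand": "t2"},
    "decode": {"t1": "tomato", "t2": "hand"},
}

# Agent B swaps t1 <-> t2 on *both* sides at once, i.e. without changing
# how anything links up to causal nodes outside the system.
agent_b = {
    "encode": {"tomato": "t2", "hand": "t1"},
    "decode": {"t2": "tomato", "t1": "hand"},
}

# The token swap cancels out: externally identical behaviour.
for percept in ("tomato", "hand"):
    assert behave(percept, agent_a) == behave(percept, agent_b)

# Swapping the referents instead (hand <=> tomato on one side only)
# breaks the composition, and the glitch is observable:
agent_glitch = {
    "encode": {"tomato": "t1", "hand": "t2"},
    "decode": {"t1": "hand", "t2": "tomato"},  # decode swapped, encode not
}
assert behave("hand", agent_glitch) == "what a tasty-looking tomato!"
```

The token swap cancels because it hits both sides of the composition; the hand⇄tomato swap only hits one side, so it shows up in behaviour.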
This invariance is what allows us to (try to) talk about “tomato” as opposed to just tomato, and it explains why we get so confused when we try to ground out (in terms of agreed-upon observables) what we’re talking about when we talk about “tomato”.
But how/why do we have representations for our representational objects in the first place? It’s like declaring a var (address₁↦value), then declaring a var for that var (address₂↦address₁), and then being confused about why the second one dereferences to something ‘arbitrary’.
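In the same toy terms (again, made-up names, a sketch only):

```python
# First-order representation: a token that dereferences to something worldly.
memory = {}
memory["address1"] = "tomato-in-the-world"   # address1 ↦ value

# Second-order representation: a token whose referent is the first token.
memory["address2"] = "address1"              # address2 ↦ address1

print(memory["address1"])  # 'tomato-in-the-world': grounds out in the world
print(memory["address2"])  # 'address1': just another internal label

# Dereferencing address2 once never reaches the world; it bottoms out in an
# address whose particular value was arbitrary. Asking "but what is 'yellow',
# really?" has the same shape: the referent of the second-order token is
# itself a token, and which token it is was never externally constrained.
```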
Maybe it starts when somebody asks you “what do you mean by ‘X’?”, and now you have to map the internal generators of [you saying “X”] in order to satisfy their question. Or not. Probably not. Napkin out.