I think this is very real. It's also important to note that non-specific joy exists and can be reliably triggered by certain chemicals.
My inference from this is that preferences are a useful but leaky reification, and if we want to get to ‘ground truth’ about comfort and discomfort, we need a frame that emerges cleanly from the brain’s implementation level.
This is the founding insight behind QRI; see here for a brief summary: https://opentheory.net/2021/07/a-primer-on-the-symmetry-theory-of-valence/
This is all very suspicious. Let’s say I write a program for a robot that will gather apples and avoid tigers. Most of its hardware and software complexity will then be taken up by circuitry to recognize apples, recognize tigers, move legs, and so on. There seems to be no reason why any measure of “symmetry” of the mental state, taken from outside, would correlate much with whether the robot is currently picking an apple or running from a tiger, i.e., with pleasure or pain.
Maybe we differ from such robots in some basic way, but I’d bet that we’re not that different. Most of our brain is workaday machinery. If it makes “waves”, those waves are probably about workaday functioning. If you’re measuring anything real, it’s probably not a correlate of consciousness at all, but more likely a correlate of how busy the brain is being at any moment. No?
Here’s @lsusr describing the rationale for using harmonics in computation — my research is focused on the brain, but I believe he has a series of LW posts describing how he’s using this frame for implementing an AI system: https://www.lesswrong.com/posts/zcYJBTGYtcftxefz9/neural-annealing-toward-a-neural-theory-of-everything?commentId=oaSQapNfBueNnt5pS&fbclid=IwAR0dpMyxz8rEnunCbLLYUh1l2CrjxRhNsQT1h_qdSgmOLDiVx5-G-auThTc
Symmetry is a Schelling point (if not the central one) if one is in fact using harmonics for computation. That is, I believe that if one actually implemented a robot built around the computational principles the brain uses, one that gathered apples and avoided tigers, it would tacitly follow a symmetry gradient.
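To make that a bit more concrete, here is a minimal toy sketch. This is my own illustration, not QRI’s measure or @lsusr’s implementation: it assumes the system’s internal state is a signal on a small graph, takes the graph-Laplacian eigenvectors as the “harmonics”, and scores “symmetry” as how strongly the state’s energy concentrates in a few modes (one minus normalized spectral entropy). The ring-shaped toy graph, the function names, and the spectral-entropy proxy are all assumptions introduced purely for illustration.

```python
# Illustrative sketch only -- not QRI's actual measure, just one concrete reading of
# "symmetry of a harmonic decomposition". Assumptions: the state lives on a small
# graph, the "harmonics" are graph-Laplacian eigenvectors, and "symmetry" is proxied
# by spectral energy concentration (1 - normalized spectral entropy).
import numpy as np

def laplacian_harmonics(adjacency: np.ndarray):
    """Eigenvalues and eigenvectors of the graph Laplacian (the 'harmonic modes')."""
    laplacian = np.diag(adjacency.sum(axis=1)) - adjacency
    return np.linalg.eigh(laplacian)  # symmetric matrix, so the spectrum is real

def symmetry_score(state: np.ndarray, modes: np.ndarray) -> float:
    """Proxy 'symmetry': 1.0 means all energy sits in a single harmonic mode."""
    coeffs = modes.T @ state                       # project the state onto the harmonics
    power = coeffs**2 / np.sum(coeffs**2)          # normalized spectral power
    entropy = -np.sum(power * np.log(power + 1e-12))
    return 1.0 - entropy / np.log(len(power))      # 1 - normalized spectral entropy

# Toy usage: a ring-shaped graph with a smooth vs. a noisy activation pattern.
n = 16
ring = np.zeros((n, n))
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1.0

_, modes = laplacian_harmonics(ring)
smooth = np.cos(2 * np.pi * np.arange(n) / n)        # energy in at most two modes
noisy = np.random.default_rng(0).normal(size=n)      # energy spread across all modes
print(symmetry_score(smooth, modes))  # high score
print(symmetry_score(noisy, modes))   # much lower score
```

The point of the toy is only that, once you commit to harmonics as the computational basis, a scalar like this falls out naturally as a gradient the system can climb or descend, which is the sense in which symmetry would be a Schelling point rather than an arbitrary external measure.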