The mystery of pain and pleasure
Some arrangements of particles feel better than others. Why?
We have no general theories about what produces pain and pleasure, only descriptive observations made within the context of the vertebrate brain. It seems like there’s a mystery here, a general principle to uncover.
Let’s try to chart the mystery. I think we should, in theory, be able to answer the following questions:
(1) What are the necessary and sufficient properties for a thought to be pleasurable?
(2) What are the characteristic mathematics of a painful thought?
(3) If we wanted to create an artificial neural network-based mind (i.e., using neurons, but not slavishly patterned after a mammalian brain) that could experience bliss, what would the important design parameters be?
(4) If we wanted to create an AGI whose nominal reward signal coincided with visceral happiness—how would we do that?
(5) If we wanted to ensure an uploaded mind could feel visceral pleasure of the same kind a non-uploaded mind can, how could we check that?
(6) If we wanted to fill the universe with computronium and maximize hedons, what algorithm would we run on it?
(7) If we met an alien life-form, how could we tell if it was suffering?
It seems to me these are all empirical questions that should have empirical answers. But we don’t seem to have many handholds to give us a starting point.
Where would *you* start on answering these questions? Which ones are good questions, and which ones aren’t? And if you think certain questions aren’t good, could you offer some you think are?
As suggested by shminux, here’s some research I believe is indicative of the state of the literature (though this falls quite short of a full literature review):
Tononi’s IIT seems relevant, though it only addresses consciousness and explicitly avoids valence. Max Tegmark has a formal generalization of IIT which he claims should apply to non-neural substrates. And although Tegmark doesn’t address valence either, he posted a recent paper on arxiv noting that there *is* a mystery here, and that it seems topical for FAI research.
Current models of emotion based on brain architecture and neurochemicals (e.g., EMOCON) are somewhat relevant, though ultimately correlative or merely descriptive, and seem to have little universalization potential.
There’s also a great deal of quality literature about specific correlates of pain and happiness, e.g., Building a neuroscience of pleasure and well-being and An fMRI-Based Neurologic Signature of Physical Pain. Luke covers Berridge’s research in his post, The Neuroscience of Pleasure. Short version: ‘liking’, ‘wanting’, and ‘learning’ are all handled by different systems in the brain. Opioids within very small regions of the brain seem to induce the ‘liking’ response; elsewhere in the brain, opioids only produce ‘wanting’. We don’t know how or why yet. This sort of research constrains a general principle, but doesn’t really hint toward one.
In short, there’s plenty of research around the topic, but it’s focused exclusively on humans/mammals/vertebrates: our evolved adaptations, our emotional systems, and our architectural quirks. Nothing on general or universal principles that would address any of (1)-(7). There is interesting information-theoretic / patternist work being done, but it’s highly concentrated around consciousness research.
---
Bottom line: there seems to be a critically important general principle as to what makes certain arrangements of particles innately preferable to others, and we don’t know what it is. Exciting!
These questions seem confused, but I’m having trouble articulating exactly why I think that. Something like “you are trying to take concepts that are appropriate when you model the world at one level of detail and applying them to a model of the world at a more detailed level, and this is a type error.”
I understand the type of criticism generally, but could you say more about this specific case?
I’m curious if the objection stems from some mismatch of abstraction layers, or just the habit of not speaking about certain topics in certain terms.
Pleasure is not a static “arrangement of particles”. Pleasure is a neurological process.
You can’t find a “pleasure pattern” that’s fully generalized. The information is always contextual.
This isn’t a perfect articulation of my objections, but this is a difficult subject.
Surely neurological processes are “arrangements of particles” too, though.
I think your question gets to the heart of the matter: is there a general principle to be found with regard to which patterns within conscious systems innately feel good, or isn’t there? It would seem very surprising to me if there wasn’t.
Processes are not “arrangements”, it’s a dynamic vs static difference.
Right. It might be a little bit more correct to speak of ‘temporal arrangements of arrangements of particles’, for which ‘processes’ is a much less awkward shorthand.
But saying “pleasure is a neurological process” seems consistent with saying “it all boils down to physical stuff, e.g., particles, eventually”, and doesn’t seem to necessarily imply that “you can’t find a ‘pleasure pattern’ that’s fully generalized. The information is always contextual.”
Good is a complex concept, not an irreducible basic constituent of the universe. It’s deeply rooted in human stuff like metabolism (food is good), reproduction (sex is good), social environment (having allies is good), etc. We can generalize from this and say that the general pattern of “good” things is that they tend to reinforce themselves. If you feel good, you’ll strive to achieve the same later. If you feel bad, you’ll strive to avoid feeling that in the future. So if an experience tends to make more of itself, it’s good; otherwise it’s bad.
Note that we could also ask: “Is there a general principle to be found with regard to which patterns within conscious systems innately feel like smelling a rose, or isn’t there?” We could build rose smell detecting machines in various ways. How can you say that one is really having the experience of smelling it while another isn’t?
It seems like you’re making two very distinct assertions here: first, that valence is not a ‘natural kind’, that it doesn’t ‘carve reality at the joints’, and is impossible to form a crisp, physical definition of; and second, that valence is highly connected to drives that have been evolutionarily advantageous to have. The second is clearly correct; the first just seems to be an assertion (one that I understand, and I think reasonable people can hold at this point, but that I disagree with).
I don’t like the expression “carve reality at the joints”, I think it’s very vague and hard to verify if a concept carves it there or not. The best way I can imagine this is that you have lots of events or ‘things’ in some description space and you can notice some clusterings, and you pick those clusters as concepts. But a lot depends on which subspace you choose and on what scale you’re working… ‘Good’ may form a cluster or may not, I just don’t even know how you could give evidence either way. It’s unclear how you could formalize this in practice.
My thought on pleasure and the concept of good is that you’re trying to discover the sharp edges of these categories, whereas concepts don’t work like that. Take a look at this LW post and this one from Slatestarcodex. From the second one: the concept of a behemah/dag exists because fishing and hunting exist.
Try to make it clearer what you’re trying to ask. “What is pleasure really?” is a useless question. You may ask “what is going on in my body when I feel pleasure?” or “how could I induce that state again?”
You seem to be looking for some mathematical description of the pattern of pleasure that would unify pleasure in humans and aliens with totally unknown properties (that may be based on fundamentally different chemistry, or maybe instead of electromagnetism-based chemistry their processes work over the strong nuclear force or whatever). What do you really have in mind here? A formula, like a part of space giving off pulses at the rate of X and another part of space at 1 cm distance pulsating with rate Y?
You may just as well ask how we would detect alien life at all. And then I’d say “life” is a human concept, not a divine platonic object out there that you can go to and see what it really is. We even have edge cases here on Earth, like viruses or prions. But the importance of these sorts of questions disappears if you think about what you’d do with the answer. If it’s “I just want to know how it really is, I can’t imagine doing anything practical with the answer” then it’s too vague to be answered.
I think we’re still not seeing eye-to-eye on the possibility that valence, i.e., whatever pattern within conscious systems innately feels good, can be described crisply.
If it’s clear a priori that it can’t, then yes, this whole question is necessarily confused. But I see no argument to that effect, just an assertion. From your perspective, my question takes the form: “what’s the thing that all dogs have in common?”, and you’re trying to tell me it’s misguided to look for some platonic ‘essence of dogness’. Concepts don’t work like that. I do get that, and I agree that most concepts are like that. But from my perspective, your assertion sounds like, “all concepts pertaining to this topic are necessarily vague, so it’s no use trying to even hypothesize that a crisp mathematical relationship could exist.” I.e., you’re assuming your conclusion. Now, we can point to other contexts where rather crisp mathematical models do exist: electromagnetism, for instance. How do you know the concept of valence is more like ‘dogness’ than electromagnetism?
Ultimately, the details, or mathematics, behind any ‘universal’ or ‘rigorous’ theory of valence would depend on having a well-supported, formal theory of consciousness to start from. It’s no use talking about patterns within conscious systems when we don’t have a clear idea of what constitutes a conscious system. A quantitative approach to valence needs a clear ontology, which we don’t have yet (Tononi’s IIT is a good start, but hardly a final answer). But let’s not mistake the difficulty in answering these questions with them being inherently unanswerable.
We can imagine someone making similar critiques a few centuries ago regarding whether electromagnetism was a sharply-defined concept, or whether understanding it matters. It turned out electromagnetism was a relatively sharply-defined concept: there was something to get, and getting it did matter. I suspect a similar relationship holds with valence in conscious systems. I’m not sure it does, but I think it’s more reasonable to accept the possibility than not at this point.
Life, sin, disease, redness, maleness and indeed dogness “may” also be like electromagnetism. The English language may also be a fundamental part of the universe and maybe you could tell if “irregardless” or “wanna” are real English words by looking into a microscope or turning your telescope to certain parts of the sky, or maybe by looking at chicken intestines, who knows. I know some people think like this. Stuart Hameroff says that morality may be encoded into the universe at the Planck scale. So maybe that’s where you should look for “good”, maybe “pleasure” is there as well.
But anyway, research into electromagnetism was done using the scientific method, which means that the hypothesis had to produce predictions that were tested and replicated numerous times. What sort of experiment would you envision for testing something about “inherently pleasurable” arrangements of atoms? Would the atoms make you feel warm and fuzzy inside when you look at them? Or would you try to put that pattern into different living creatures and see if they react with their normal joyful reactions?
Although life, sin, disease, redness, maleness, and dogness are (I believe) inherently ‘leaky’ / ‘fuzzy’ abstractions that don’t belong with electromagnetism, this is a good comment. If a hypothesis is scientific, it will make falsifiable predictions. I hope to have something more to share on this soon.
Asking “how do qualia systematically relate to physics” is not a useless question, since answering it would make physicalism knowledge with no element of commitment.
Thanks, that’s exactly what I was trying to say!
It seems to me that good and bad are actually easy to define. Minusdash gives a definition: good is a state an entity strives to obtain (again). This is a functional definition, and that should be enough. How such states are physically represented in other beings is unknown and, in my opinion, irrelevant.
A possible answer:
There are many different kinds of pain and pleasure, and trying to categorize all of them together loses information.
For starters, the difference between physical and mental pain and pleasure.
To get more nuanced: the stinging pain of a slap, the thudding pain of a punch, the searing pain of fire, and the pain from electricity are all very distinct feelings, which could have very different circuitry.
I’m not as sure on the last paragraph, I would place that at 60% probability.
On the first point—what you say is clearly right, but is also consistent with the notion that there are certain mathematical commonalities which hold across the various ‘flavors’ of pleasure, and different mathematical commonalities in pain states.
Squashing the richness of human emotion into a continuum of positive and negative valence sounds like a horribly lossy transform, but I’m okay with that in this context. I expect that experiences at the ‘pleasure’ end of the continuum will have important commonalities ‘under the hood’ with others at that same end. And those commonalities will vanish, and very possibly invert, when we look at the ‘agony’ end.
On the second point, the evidence points to physical and emotional pain sharing many of the same circuits, and indeed, drugs which reduce physical pain also reduce emotional pain. On the other hand, as you might expect, there are some differences in the precise circuitry each type of pain activates. But by and large, the differences are subtle.
Yes, and the point seems to go double for pleasure. There are many varieties, and most are associated with a particular sensation. The pleasures of sex are very different from the pleasures of ice cream, for example. Admittedly, there is such a thing as just feeling good—but maybe that’s a whole-body sensation. And now I’d like to move on from falenas108’s point, to make one of my own.
Where I’m going with this is: I’m not sure it’s even possible to instantiate the pleasures as we know them without duplicating our circuitry. So if your AGI in question 4 is not supposed to be built on the brain’s patterns, you might want to rephrase the question: you can certainly provide reward signals, but calling them “pleasures” might be misleading. And in question 5, I have dire doubts about the experiences of an upload, unless the upload is onto a computer that is explicitly designed with many of the detailed features of mammalian brains. As you point out, much of the research you’ve encountered is “not applicable outside of the human brain.” I suspect there’s no way around that: investigating the brains of humans (and other animals we are reasonably confident feel pains and pleasures) is the only way to understand these phenomena.
Tononi’s theory supports my cautions, I believe. On Tononi’s account of qualia, it is extremely unlikely that a system built on radically different principles from a human brain would experience the same qualia we do. You can probably see why, but if not, I’ll sketch my reasoning upon request.
This all seems to be about the “qualia” problem. Take another example. How would you know if an alien was having the experience of seeing the color red? Well, you could show it red and see what changes. You could infer it from its behavior (for example if you trained it that red means food—if indeed the alien eats food).
Similarly you could tell that it’s suffering when it does something to avoid an ongoing situation, and if later on it would very much prefer not to go under the same conditions ever again.
I don’t think there is anything special about the actual mechanism and neural pattern that expresses pain or suffering in our brains. It’s that pattern’s relation to memories, sensory inputs and motor outputs that’s important.
Probably you could even retrain the brain to consider a certain fixed brain stimulus to be pleasure even though it was previously associated with pain. It’s like putting on those corrective glasses that rotate the visual input by 180°: the brain can adapt to that situation, and the person feels normal after some time.
I see the argument, but I’ll note that your comments seem to run contrary to the literature on this: see, e.g., Berridge on “Dissecting components of reward: ‘liking’, ‘wanting’, and learning”, as summed up by Luke in The Neuroscience of Pleasure. In short, behavior, memory, and enjoyment (‘wanting’, ‘learning’, and ‘liking’ in the literature) all seem to be fairly distinct systems in the brain. If we consider a being with a substantially different cognitive architecture, whether through divergent evolution or design, it seems problematic to view behavior as the gold standard of whether it’s experiencing pleasure or suffering. At this point it may be the most practical approach, but it’s inherently imperfect.
My strong belief is that although there is substantial plasticity in how we interpret experiences as positive or negative, this plasticity isn’t limitless. Some things will always feel painful; others will always feel pleasurable, given a not-too-highly-modified human brain. But really, I think this line of thinking is a red herring: it’s not about the stimulus, it’s about what’s happening inside the brain, and any crisp/rigorous/universal principles will be found there.
Is valence a ‘natural kind’? Does it ‘carve reality at the joints’? Intuitions on this differ (here’s a neat article about the lack of consensus about emotions). I don’t think anger, or excitement, or grief carve reality at the joints; I think they’re pretty idiosyncratic to the human emotional-cognitive architecture. But if anything about our emotions is fundamental/universal, I think it’d have to be their valence.
Yes, this is the qualia problem, and no, it isn’t easy to imagine pain and pleasure being inverted. Spectrum inversion isn’t a necessary criterion for something being a quale. You seem to have landed on the easy end of the hard problem.
I don’t know how limited plasticity is. Speculation: maybe if we put on color-filter glasses that swap red and green, or somehow mix up the colors, then maybe even after a long time we’d still have the experience of the original red, even when looking at green material outside. Okay, let’s say it’s not plastic enough and we’d still feel an internal red quale. But in what sense?
What if the brain truly rewired to recognize plants and moldy fruit etc. in the presence of the “red” perception, and the original “green” pattern fed into visceral avoidance of “green” liquids (blood) and wired into the speech areas in such a way that the nominal “green” sensation is strongly linked to the word “red” (as measured, for example, by Stroop-style experiments where color words are printed in mismatched colors, e.g. the word “blue” written in yellow)? In that case, how could we say that the person is still “seeing green” when presented with objectively red things? What would be our anticipation under this hypothesis?
Now, I think emotions are the same. Of course, it could be that the brain architecture cannot rewire itself to start sweating and shouting and producing adrenaline in the presence of the previously pleasure-associated pattern. Maybe the two modules are too far apart, or there is some other physical limitation. Then the question is pointless; it’s about an impossible scenario. If the brain can’t rewire itself, then it still produces the old kind of behavior, which is inconsistent with reality and so is observable (e.g., smiling when we would expect a normal person to shout in pain).
I don’t think we can view pleasure as simply existing inside the brain without considering the environment. Similarly, the motor cortex doesn’t contain the actual information of what the limbs look like. It’s a relay station. It only works because the muscles are where they are. You can’t tell what a motor neuron controls unless you follow its axon and look at what muscle it is attached to. The neuron by itself isn’t a representation of the muscle or of that muscle’s movement. An emotional neural pattern is also only associated with that emotion to the extent that it results in certain responses and is triggered by certain stimuli. Things are not labeled up in the universe. Does the elephant feel like it’s using its nose when it’s lifting things up? Or does it rather feel like an arm? It isn’t a productive line of thinking. If it quacks… It’s like asking whether abortion is really a sin, whether “irregardless” is really an English word, whether a submarine can really swim.
When you replace the pattern but keep all behavior and physiological responses normal, then I’d say the person is having the usual emotions that we associate with the responses and behavior that we can observe. The problem isn’t about what we anticipate but the fact that we are at edge cases that we haven’t encountered yet and we don’t have an intuitive idea of how we should interpret such a situation.
I think you should start smaller and slower. Try thinking about animals with simpler brains like worms, and what it means that it is having a certain sensation.
That would be an interesting experiment to do. We already know that people can adapt to wearing lenses that invert the picture or shift it laterally. Changing the colours while maintaining differences would be a little more complicated but quite feasible. You would need something similar to a VR headset, with a front-facing camera in front of each eye. The camera sensors would be connected, via some electronics to process the colours in any desired way, to the screen that each eye would see. This would be doable by a hobbyist with the necessary technical know-how. It might be as simple as cannibalising a couple of pocket cameras and switching some of the connections to the screen on the back.
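For the color-swapping part, the image transform itself is trivial in software; the hard parts of the experiment are the optics, latency, and wearing the rig long enough for adaptation. A minimal sketch in Python (my own illustration, assuming frames arrive from the cameras as H×W×3 RGB numpy arrays; the function name is mine):

```python
import numpy as np

def swap_red_green(frame):
    """Return a copy of an RGB frame with the red and green channels swapped.

    frame: numpy array of shape (H, W, 3), channel order R, G, B.
    Luminance differences are preserved; only hue is permuted.
    """
    swapped = frame.copy()
    swapped[..., 0] = frame[..., 1]  # red output <- green input
    swapped[..., 1] = frame[..., 0]  # green output <- red input
    return swapped
```

Each captured frame would pass through this (or any other color permutation that maintains differences) before being shown on the per-eye screens.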
Note that all worthwhile original research starts with a literature review. What have you found so far?
Tononi’s Phi theory seems somewhat relevant, though it only addresses consciousness and explicitly avoids valence. It does seem like something that could be adapted toward answering questions like this (somehow).
Current models of emotion based on brain architecture and neurochemicals (e.g., EMOCON) are relevant, though ultimately correlative and thus not applicable outside of the human brain.
There’s also a great deal of quality literature about specific correlates of pain and happiness, e.g., Building a neuroscience of pleasure and well-being and An fMRI-Based Neurologic Signature of Physical Pain.
In short, I’ve found plenty of research around the topic but nothing that’s particularly predictive outside of very constrained contexts. No generalized theories. There’s some interesting work happening around panpsychism (e.g., see these two pieces by Chalmers), but it focuses on consciousness, not valence.
My intuition is that valence will be encoded within frequency dynamics in a way that will be very amenable to mathematical analysis, but right now I’m seeking clarity about how to speak about the problem.
Edit: I’ll add this to the bottom of the post
Off-topic, but I notice that this post, according to the time-stamp, was apparently posted on March 1, 2015. There are comments attached to it, however, dating from 2013. Does anyone know why this is?
I had posted the original in 2013, and did a major revision today, before promoting it (leaving the structure of the questions intact, to preserve previous discussion referents).
I hope I haven’t committed any faux pas in doing this.
First recommendation is to get to the bottom of what question you are actually asking. What are you actually trying to do? Do the right thing? Learn how to manipulate people? Learn how to torture? Become a pleasure delivery professional?
See disguised queries
It feels good? Some pretty heavy neuroscience to say anything beyond that. Again, what are you going to do with the answer to this question. Ask that question instead.
Also note that “necessary and sufficient” is an obsolete model of concepts. See the human’s guide to words.
What does this mean? How do I calculate exactly how much pain someone will experience if I punch them? Again, ask the real question.
Um. Why would you want to do that? Is this simply a hypothetical to see if we understand the concept?
It really depends on what aspect you are interested in; you could create “pleasure” and “pain” by hacking up some kind of simple reinforcement learner, and I suppose you could shoehorn that into a neural network if you really wanted to. But why?
Note that a simple reinforcement learner “experiences” “pain” and “pleasure” in some sense, but not in the morally relevant sense. You will find that the moral aspect is much more anthropomorphic and much more complex, I think.
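To make that concrete, here is roughly what “hacking up some kind of simple reinforcement learner” amounts to — a two-armed bandit sketch in Python (names and parameters are mine, purely illustrative). Its whole “hedonic life” is a scalar reward nudging a value estimate; nothing here plausibly has morally relevant experience:

```python
import random

class BanditLearner:
    """Epsilon-greedy learner over two actions.

    'Pleasure' here is nothing but a reward number that pulls a value
    estimate upward; 'pain' would be a negative reward pulling it down.
    """
    def __init__(self, epsilon=0.1, lr=0.5):
        self.values = [0.0, 0.0]  # estimated value of each action
        self.epsilon = epsilon    # exploration rate
        self.lr = lr              # learning rate

    def act(self):
        if random.random() < self.epsilon:
            return random.randrange(2)  # explore at random
        return max((0, 1), key=lambda a: self.values[a])  # exploit best estimate

    def learn(self, action, reward):
        # Nudge the value estimate toward the observed reward.
        self.values[action] += self.lr * (reward - self.values[action])
```

After a handful of rewarded trials it reliably “seeks” the rewarded action — and that seeking behavior is the entire sense in which it “experiences” pleasure.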
I guess you could have a little “visceral happiness” meter that gets filled up in the right conditions, but this would be a profound waste of AGI capability, and probably doesn’t do what you actually wanted. What is it you actually want?
Ask them? The same way we think we know for non-uploaded minds.
If I wanted to turn the universe into paperclips and meaningless crap, how would I do it? Why is your question interesting? Is this simply an exercise in learning how to fill the universe with X? You could pick a less confusing X.
I feel like you might be importing a few mistaken assumptions into this whole line of questioning. I recommend that you lurk more and read some of the stuff I linked.
Good question:
How would a potentially powerful optimizing process have to be constructed to be provably capable of steering towards some coherent objective(s) over the long run and through self-modifications?
Downvote preventers get downvoted.
I think you’re right that the OP doesn’t quite hit the mark, but you got carried away and started almost wilfully misinterpreting. Especially your answers to 4, 5 and 6.
We seem to be talking past each other, to some degree. To clarify, my seven questions were chosen to illustrate how much we don’t know about the mathematics and science behind psychological valence. I tried to have all of them point at this concept, each from a slightly different angle. Perhaps you interpret them as ‘disguised queries’ because you thought my intent was other than to seek clarity about how to speak about this general topic of valence, particularly outside the narrow context of the human brain?
I am not trying to “Learn how to manipulate people? Learn how to torture? Become a pleasure delivery professional?”—my focus is entirely on speaking about psychological valence in clear terms, illustrating that there’s much we don’t know, and making the case that there are empirical questions about the topic that don’t seem to have empirical answers. Also, in very tentative terms, to express the personal belief that a clear theory on exactly what states of affairs are necessary and sufficient for creating pain and pleasure may have some applicability to FAI/AGI topics (e.g., under what conditions can simulated people feel pain?).
I did not find ‘necessary and sufficient’, or any permutation thereof, in the human’s guide to words. Perhaps you’d care to explicate why you didn’t care for my usage?
Re: (3) and (4), I’m certain we’re not speaking of the same things. I recall Eliezer writing about how creating pleasure isn’t as simple as defining a ‘pleasure variable’ and incrementing it:
I can do that on my macbook pro; it does not create pleasure.
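The trivial version in question is, presumably, literally something like this (a deliberately absurd sketch of my own):

```python
# Defining a 'pleasure variable' and incrementing it -- runnable on any
# laptop, and presumably creating no pleasure anywhere in the process.
pleasure = 0
pleasure += 1
```

The open question is what, beyond this, an architecture needs before incrementing the register corresponds to anything viscerally felt.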
There exist AGIs in design space that have the capacity to (viscerally) feel pleasure, much like humans do. There exist AGIs in design space with a well-defined reward channel. I’m asking: what principles can we use to construct an AGI which feels visceral pleasure when (and only when) its reward channel is activated? If you believe this is trivial, we are not communicating successfully.
I’m afraid we may not share common understandings (or vocabulary) on many important concepts, and I’m picking up a rather aggressive and patronizing vibe, but a genuine thanks for taking the time to type out your comment, and especially the intent in linking that which you linked. I will try not to violate too many community norms here.
I’m not nyan_sandwich, but here is what I believe to be his point about asking for necessary and sufficient conditions.
Part of your question (maybe not all) appears to be: how should we define “pleasure”?
Aside from precise technical definitions (“an abelian group is a set A together with a function from AxA to A, such that …”), the meaning of a word is hardly ever accurately given by any necessary-and-sufficient conditions that can be stated explicitly in a reasonable amount of space, because that just isn’t the way human minds work.
We learn the meaning of a word by observing how it’s used. We see, and hear, a word like “pleasure” or “pain” applied to various things, and not to others. What our brains do with this is approximately to consider something an instance of “pleasure” in so far as it resembles other things that are called “pleasure”. There’s no reason why any manageable set of necessary and sufficient conditions should be equivalent to that.
Further, different people are exposed to different sets of uses of the word, and evaluate resemblance in different ways. So your idea of “pleasure” may not be the same as mine, and there’s no reason why there need be any definite answer to the question of whose is better.
Typically, lots of different things will contribute to our considering something sufficiently like other instances of “pleasure” to deserve that name itself. In some particular contexts, some will be more important than others. So if you’re trying to pin down a precise definition for “pleasure”, the features you should concentrate on will depend on what that definition is going to be used for.
Does any of that help?
It does, and thank you for the reply.
How should we define “pleasure”? -- A difficult question. As you mention, it is a cloud of concepts, not a single one. It’s even more difficult because there appears to be precious little driving the standardization of the word—e.g., if I use the word ‘chair’ differently than others, it’s obvious, people will correct me, and our usages will converge. If I use the word ‘pleasure’ differently than others, that won’t be as obvious because it’s a subjective experience, and there’ll be much less convergence toward a common usage.
But I’d say that in practice, these problems tend to work themselves out, at least enough for my purposes. E.g., if I say “think of pure, unadulterated agony” to a room of 10000 people, I think the vast majority would arrive at fairly similar thoughts. Likewise, if I asked 10000 people to think of “pure, unadulterated bliss… the happiest moment in your life”, I think most would arrive at thoughts which share certain attributes, and none (<.01%) would invert answers to these two questions.
I find this “we know it when we see it” definitional approach completely philosophically unsatisfying, but it seems to work well enough for my purposes, which is to find mathematical commonalities across brain-states people identify as ‘pleasurable’, and different mathematical commonalities across brain-states people identify as ‘painful’.
I see what you mean by “the meaning of a word is hardly ever accurately given by any necessary-and-sufficient conditions that can be stated explicitly in a reasonable amount of space, because that just isn’t the way human minds work.” On the other hand, all words are imperfect and we need to talk about this somehow. How about this: (1) what are the characteristic mathematics of (i.e., found disproportionally in) self-identified pleasurable brain states?
“what are the characteristic mathematics of (i.e., found disproportionally in) self-identified pleasurable brain states?”
Certain areas of the brain get more active and certain hormones get into the bloodstream. How does this help you out?
Even if it turns out that there is no rigorously definable one-dimensional measure of valence, we still need to search for physical correlates of pleasure and pain and find approximate measures to use when resolving moral dilemmas.
Regarding the response to (6), why don’t you want to maximise hedons? Having a rigorous definition of what you are trying to maximise needn’t make it arbitrary to you, and the fact that pleasure is complex (or perhaps simple but not yet understood) doesn’t imply that we don’t want it.
I am very surprised and pleased to find thinking that so closely parallels my own.
I typed this into Google: “what makes a particular arrangement of matter in the brain pain or pleasure”
This is what I have been thinking. The universe contains consciousness. Matter and energy (at least arranged in the form of a brain) are conscious. There are states of consciousness that feel good and states of consciousness that feel bad. What is the difference between the arrangements of matter (or the processes) in the brain that make some feel good and some feel bad? I think it would be extremely helpful and lead to answers if we actually knew exactly what these arrangements are and studied them, studied the physics of what is going on. Do we know yet? How much do we know?
Reasons I want to know:
1 - curiosity
2 - increasing happiness and decreasing suffering in biological beings
3 - creating synthetic intelligence that is happy
4 - minimizing suffering and maximizing pleasure in the universe

It just occurred to me a few days ago that all the matter in the universe could be converted to whatever state is the most pleasurable: convert the universe into bliss. Hmm... many minds or one giant mind?
Right, absolutely. These are all things that we don’t know, but should.
Are you familiar with David Pearce’s Hedonistic Imperative movement? He makes a lot of the same points and arguments, basically outlining that it doesn’t seem impossible that we could (and should) radically reduce, and eventually eliminate, suffering via technology.
But the problem is, we don’t know what suffering is. So we have to figure that out before we can make much radical progress on this sort of work. I.e., I think a rigorous definition of suffering will be an information-theoretic one—that it’s a certain sort of pattern within conscious systems—but we know basically nothing about what sort of pattern it is.
(I like the word “valence” instead of pain/pleasure, joy/suffering, eudaimonia, hedonic tone, etc. It’s a term from psychology that just means ‘the pleasantness or unpleasantness attached to any experience’ and seems to involve less baggage than these other terms.)
I hope to have a formal paper on this out by this winter. In the meantime, if you’re in the Bay Area, feel free to ping me and I can share some thoughts. You may also enjoy a recent blog post: Effective Altruism, and building a better QALY.
This is part of the Hard Problem of Consciousness: why is there any such thing and how does it work? It is Hard because we cannot even see what a solution would be. Even if we discovered patterns of neural activity or anything else that reliably and in great detail matched up with the experience, it seems that that still wouldn’t tell us why there is such a thing as that experience, and would not suggest any test we could apply to a synthetic imitation of the patterns.
The world is already full of alien life-forms—that is, life-forms radically different from yourself. How do you decide, and how should you decide, which of the following suffers? A human being with toothache; a dog that has been hit by a car; a mouse bred to grow cancers; a wasp infected by a fungus that is eating up its whole body and sprouting from its surface; a caterpillar paralysed and being eaten alive by the larvae of that wasp; a jellyfish stranded on the beach that a playful child has thrust its spade into; a fish dying from the sting of a jellyfish; a tree with the sort of burr that wood carvers prize for its ornamental patterns; parched grass in a drought. And, for that matter, a cliff face that has collapsed in a great storm; tectonic plates grinding together; a meteor burning up in the atmosphere.
Right, good questions.
First, I think getting a rigorous answer to this ‘mystery of pain and pleasure’ is contingent upon having a good theory of consciousness. It’s really hard to say anything about which patterns in conscious systems lead to pleasure without a clear definition of what our basic ontology is.
Second, I’ve been calling this “The Important Problem of Consciousness”, a riff off Chalmers’ distinction between the Easy and Hard problems. I.e., if someone switched my red and green qualia in some fundamental sense it wouldn’t matter; if someone switched pain and pleasure, it would.
Third, it seems to me that patternist accounts of consciousness can answer some of your questions, to some degree, just by ruling out consciousness (things can only experience suffering insofar as they’re conscious). How to rank each of your examples in severity, however, is… very difficult.
How can we prove hedons exist at all?
I “feel” stuff happening to me, but that’s hardly evidence. I can point to my patterns of choosing pleasure over pain corresponding to electrical activities in brain structures for sensing and planning, but those aren’t hedons.
One answer is to assume that all brain structures that look like they could correspond to hedons actually do correspond to hedons. This seems like a big assumption, and begs the question of why.
I don’t understand your question. Do you actually dispute that pleasure could serve as the foundation for a consistent set of preferences? Or are you picturing “hedons” as much more concrete than I am?
I assume hedons, a type of qualia, exist.
For the sake of argument, I’ll argue the opposing view:
I don’t believe anyone “feels” anything. People act as if they have preferences and talk about subjective experiences because that is what their brain structures do, not because subjective experiences actually exist. It is perfectly normal for evolved organisms to talk about “the meaning of life”, but these organisms are only patterns in a formal system. In other words, “consciousness”, “self-awareness”, and “meaning” are only patterns in physical brains. There is no “mind” having experiences anywhere. A “ghost”—a mind without a corresponding physical structure—is nonsensical because a mind is merely its pattern in matter.
It does not matter if the patterns representing a brain are computed using pencil-and-paper (see the short story “A Conversation With Einstein’s Brain” by Douglas Hofstadter, in which a choose-your-own-adventure book tries to argue that it has conscious experience).
Hedons—conscious experiences of enjoyment—do not exist. I am aware that I am just a pattern of a brain in a formal system, and not a “person” in the sense of actually having experiences.
In other words, people are all philosophical zombies. Prove me wrong, if you can...
It’s a sceptical hypothesis. As such, it neither admits disproof nor persuades.
These are great questions. I’m not sure they have answers. But they seem extremely pertinent to making a good AGI.
Tegmark’s paper here: http://arxiv.org/pdf/1409.0813.pdf seems to be poking in the same direction.
Neglecting these questions is, IMO, tantamount to moral relativism or nihilism.
Thank you- that paper is extremely relevant and I appreciate the link.
To reiterate, mostly for my own benefit: as Tegmark says, whether we’re talking about a foundation for ethics, or a “final goal”, or we simply want to not be confused about what’s worth wanting, we need to figure out what makes one brain-state innately preferable to another, and ultimately this boils down to arrangements of particles. But what makes one arrangement of particles superior to another? (This is not to give credence to moral relativism; I do believe this has a crisp answer.)
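For what it’s worth, the kind of quantity Tegmark- and Tononi-style theories compute can be made concrete with a toy example. The sketch below is not Tononi’s actual Φ (which involves minimizing over all partitions of a system); it just computes the mutual information between two halves of a tiny system, which is the basic information-theoretic building block of “integration” measures. The function name and the two example distributions are my own illustrative choices.

```python
from math import log2

def mutual_information(joint):
    """Mutual information (in bits) between two variables, given their
    joint distribution as a dict {(x, y): probability}.

    A crude proxy for 'integration': 0 means the two halves of the
    system are informationally independent of each other.
    """
    # Marginal distributions of each half.
    px, py = {}, {}
    for (x, y), p in joint.items():
        px[x] = px.get(x, 0.0) + p
        py[y] = py.get(y, 0.0) + p
    # I(X;Y) = sum over (x, y) of p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    return sum(p * log2(p / (px[x] * py[y]))
               for (x, y), p in joint.items() if p > 0)

# Two perfectly correlated fair bits share 1 bit of information...
correlated = {(0, 0): 0.5, (1, 1): 0.5}
# ...while two independent fair bits share none.
independent = {(0, 0): 0.25, (0, 1): 0.25, (1, 0): 0.25, (1, 1): 0.25}
```

Of course, nothing here touches valence; it only illustrates that “how integrated is this system?” is a crisply computable question, which is what makes the absence of any analogous formalism for “how pleasant is this state?” feel like a genuine gap.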