I think he’s just pointing out that all you have to do is change the scenario slightly and then my objection doesn’t work.
Still, I’m a little curious about how someone’s ability to state a large number succinctly makes a difference. I mean, suppose the biggest number the mugger knew how to say was 12, and they didn’t know about multiplication, exponents, up arrow notation, etc. They just chose 12 because it was the biggest number they could think of or knew how to express (whether they were bluffing totally or were actually going to torture 3^^^3 people). Should I take a mugger more seriously just because they know how to communicate big numbers to me?
The point of stating the large number succinctly is that it overwhelms the small likelihood of the mugger’s story being true, at least if you have something resembling a Solomonoff prior. Note also that the mugger isn’t really necessary for the scenario; he’s merely there to supply a hypothesis that you could have come up with on your own.
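To sketch that arithmetic (the description length K(H) and per-person utility u below are illustrative placeholders, not figures from the thread): under a Solomonoff-style prior the hypothesis is penalized roughly exponentially in its description length, but the payoff it names grows far faster than that penalty shrinks.

```latex
% Rough sketch only; K(H) and u are assumed placeholders.
% Prior weight of the mugger's hypothesis H with description length K(H) bits:
%   P(H) \approx 2^{-K(H)}.
% Expected gain from paying, if each saved life is worth u > 0 utilons:
\[
  \mathbb{E}[\Delta U] \;\approx\; 2^{-K(H)} \cdot \bigl(3\uparrow\uparrow\uparrow 3\bigr) \cdot u ,
\]
% and since 3^^^3 utterly dwarfs 2^{K(H)} for any hypothesis short enough to
% state aloud, the product still towers over the utility of keeping $5.
```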
Good point. I guess the only way to counter these odd scenarios is to point out that everyone’s utility function is different, and then the question is simply whether the responder wants to self-modify (or would be happier in the long run doing so) even after hearing some rationalist arguments to clarify their intuitions. The question of self-modification is a little hard to grasp, but at least it avoids all these far-fetched situations.
For the Pascal’s mugging problem, I don’t think that will help.

Isn’t Pascal’s mugging just this?

“Give me five dollars, or I’ll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people.”
I’d just walk away. Why should I care? If I thought about it for so long that I had some lingering qualms, and I got mugged like that a lot, I’d self-modify just to enjoy the rest of my life more.
As an aside, I don’t think people really care that much about other people dying unless they have some way to connect to it. Someone probably was murdered while you were reading this comment. Is it going to keep you up? On the other hand, people can cry all night about a video game character dying. It’s all subjective.
There’s a difference between mental distress and action-motivating desire. If I were asked to pay $5 to prevent someone from being murdered with near-certainty, I would. On the other hand, I would not pay $5 more for a video game where a character does not die, though I can’t be sure of this self-simulation because I play video games rather infrequently. If I only had $5, I would definitely spend it on the former option.
I do not allow my mental distress to respond to the same things that motivate my actions; intuitively grasping the magnitude of existential risks is impossible and even thinking about a fraction of that tragedy could prevent action, such as by causing depression. However, existential risks still motivate my decisions.
I thought of a way that I could be mugged Pascal-style: if the alternative to paying $5 were watching even one simulated person being tortured, in gratuitous, holodeck-level realistic detail, for even a minute, I’d pay. I also wouldn’t self-modify to make myself not care about seeing simulated humans tortured in such a way, because I’m afraid that would make my interactions with people I know and care about very strange. I wouldn’t want to be callous about witnessing people tortured, because I think it would take away part of my enjoyment of life. (And there are ways to amp up such scenarios to make them far worse, like if I were forced to torture to death 100 simulations of the people I most care about in a holodeck in order to save those actual people... that would probably have very bad consequences for me, and self-modifying so that I wouldn’t care would just make it worse.)
But let’s face it, the vast majority of people are indeed pretty callous about the actual deaths happening today, all the pain experienced by livestock as they’re slaughtered, and all the pain felt by chronic pain sufferers. People decry such things loudly, but few of those who aren’t directly connected to the victims are losing sleep over such suffering, even though there are actions they could conceivably take to mitigate it. It is uncomfortable to acknowledge, but it seems undeniable.
I thought of a way that I could be mugged Pascal-style
It’s not Pascal’s mugging unless it works with ridiculously low probabilities. Would you pay $5 to avoid a 10^-30 chance of watching 3^^^3 people being tortured?
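For concreteness, a small sketch of that expected-value arithmetic in log space (the 3^^4 stand-in is an assumption of mine and already wildly understates 3^^^3, which cannot be represented directly):

```python
import math

# Rough illustration (probability and stand-in payoff are assumptions, not
# figures from the thread): work in log10 because the payoff itself will not
# fit in any numeric type.
# 3^^4 = 3^(3^^3) = 3^7625597484987 is used as a stand-in; 3^^^3 is
# incomprehensibly larger, so this drastically understates the effect.

log10_probability = -30                      # the 10^-30 chance in question
log10_victims = (3 ** 27) * math.log10(3)    # log10(3^^4), about 3.6 * 10^12

# Expected number of people tortured, in log10 terms:
log10_expected_victims = log10_probability + log10_victims
print(f"log10(expected victims) ~ {log10_expected_victims:.3e}")
# Still about 3.6e12, i.e. the expected count remains astronomically large,
# which is why a "ridiculously low probability" alone does not settle the
# question for an unbounded utility function.
```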
People decry such things loudly, but few of those who aren’t directly connected to the victims are losing sleep over such suffering, even though there are actions they could conceivably take to mitigate it. It is uncomfortable to acknowledge, but it seems undeniable.
Are you including yourself in “the vast majority of people”? Are you including most of LW? If your utility is bounded, you are probably not vulnerable to Pascal’s mugging. If your utility is not bounded, it is irrelevant whether other people act like their utilities are bounded. Note that even egoists can have unbounded utility functions.
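A hedged way to see the bounded/unbounded distinction (the notation below is mine, not the commenter’s): with a bounded utility function, the expected swing from any threat is capped by its probability times the total utility range, so driving the probability low enough always prices the threat below $5; with an unbounded one, no such cap exists.

```latex
% Sketch under the assumption that U takes values in [U_min, U_max]:
\[
  \left| \, p \cdot \Delta U \, \right| \;\le\; p \,\bigl(U_{\max} - U_{\min}\bigr)
  \;\to\; 0 \quad \text{as } p \to 0 .
\]
% With unbounded U, \Delta U can be chosen to grow faster than p shrinks,
% so the product p * \Delta U can be made arbitrarily large.
```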
Are you losing sleep over the daily deaths in Iraq? Are most LWers? That’s all I’m saying. I consider myself pretty far above-average empathy-wise, to the extent that if I saw someone be tortured and die I’d probably be completely changed as a person. If I spent more time thinking about the war I probably would not be able to sleep at all, and eventually, if I steeped myself in the reality of the situation, I’d probably go insane or die of grief. The same would probably happen if I spent all my time watching slaughterhouse videos. So I’m not pretending to be callous. I’m just trying to inject some reality into the discussion. If we cared as much as we signal we do, no one would be able to go to work, or post on LW. We’d all be too grief-stricken.
So although it depends on what exactly you mean by “unbounded utility function,” it seems that no one’s utility function is really unbounded. And it also isn’t immediately clear that anyone would really want their utility function to be unbounded (unless I’m misinterpreting the term).
Also, point taken about my scenario not being a Pascal’s mugging situation.
Are you losing sleep over the daily deaths in Iraq? Are most LWers? . . . If we cared as much as we signal we do, no one would be able to go to work, or post on LW. We’d all be too grief-stricken.
That is exactly what I was talking about when I said “There’s a difference between mental distress and action-motivating desire.” Utility functions are about choices, not feelings, so I assumed that, in a discussion about utility, we would be using the word ‘care’ (as in “If we cared as much as we signal we do”) to refer to motives for action, not mental distress. If this isn’t clear, I’m trying to refer to the same ideas discussed here.
And it also isn’t immediately clear that anyone would really want their utility function to be unbounded (unless I’m misinterpreting the term).
It does not make sense to speak of what someone wants their utility function to be; utility functions just describe actual preferences. Someone’s utility function is unbounded if and only if there are consequences with arbitrarily high utility differences: for every consequence, you can identify one that is over twice as good, relative to some zero point, which can be chosen arbitrarily. (This doesn’t really matter if you’re not familiar with the topic; it just corresponds to the fact that if every consequence were 1 utilon better, you would make the same choices, because relative utilities would not have changed.) Whether a utility function has this property is important in many circumstances, and I consider it an open problem whether humans’ utility functions are unbounded, though some would probably disagree, and I don’t know what science doesn’t know.
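A tentative formalization of the two claims in that comment (the symbols are my own shorthand, not the commenter’s):

```latex
% Unboundedness: utility differences between consequences can exceed any bound.
\[
  U \text{ is unbounded} \;\iff\; \forall M > 0 \;\, \exists A, B : \; U(A) - U(B) > M .
\]
% The arbitrary-zero-point remark: any positive affine rescaling U' = aU + b
% (a > 0) satisfies
\[
  \mathbb{E}\bigl[U'(X)\bigr] \;=\; a \, \mathbb{E}\bigl[U(X)\bigr] + b ,
\]
% so it ranks all gambles identically; in particular, adding 1 utilon to every
% consequence (a = 1, b = 1) changes no choices.
```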
Is this basically saying that you can tell someone else’s utility function by demonstrated preference? It sounds a lot like that.

No, because people are not completely rational. What I ‘really’ want to do is what I would do if I were fully informed, rational, etc. Morality is difficult because our brains do not just tell us what we want. Demonstrated preference would only work with ideal agents, and even then it could only tell you what they want most among the possible options.
BTW, you realize we’re talking about torture vs. dust specks and not Pascal’s mugging here?