Why should all sentient, self-aware minds or persons have a weight of 1 in one's personal moral calculation? If one of two people has to die, me or a random human, I think I’ll pick me every time.
Right, we have to draw a distinction here. I’m talking about how we define what’s more ethical. That doesn’t mean you’re going to live up to that perfect ethical standard. You can say that, in general, people’s lives are equally valuable, and that, knowing nothing about the two groups, you’d prefer two people died instead of three. Of course, in reality, we’re not perfectly ethical, so we’re always going to choose to save whichever group we’re in, even if it’s the smaller one. That doesn’t change our definition, though.
Does that make me unethical? Dunno. Maybe. But do I have any reason to care about an XML tag that reads “unethical” floating above my head? As you imply in a later paragraph, not really.
“Unethical” isn’t a binary tag, though. Personally, I think in terms of a self-interest multiplier. It’s more ethical to save 10 people instead of 1, but if I wouldn’t do that when I’m the one, then my self-interest multiplier is at least 10x.
So just what is my self-interest multiplier? Well, I don’t know exactly how great a bastard I am. But I do try to keep it a bit consistent. For instance, if I’m deciding whether to buy bacon, I try to remember that causing pain to pigs is as bad as causing pain to humans, all else being equal, and that I’m being fooled by a lack of emotional connection to them. So that means buying factory-farmed bacon implies a far, far greater self-interest multiplier than I’m comfortable with. I’d really rather not be that much of a bastard, so I don’t buy it.
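To make that concrete, here’s a rough sketch with made-up numbers of how I’d read a self-interest multiplier off a choice (the function is just my illustration, nothing standard):

    # Toy illustration: if I choose the option that is impartially worth less,
    # my selfish weighting must be at least big enough to flip the comparison.
    def implied_multiplier(impartial_value_of_my_choice, impartial_value_passed_up):
        # Lower bound on how much extra weight "me" must be getting.
        return impartial_value_passed_up / impartial_value_of_my_choice

    # Saving myself (1 life) instead of 10 strangers implies a multiplier of at least 10.
    print(implied_multiplier(1, 10))  # 10.0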
So if this “ethics” thing doesn’t describe our preferences properly… uh, what’s it for, then?
I think in terms of an information-theoretic definition of self. My self-interest multiplier doesn’t rely on me being a single meat-bag body. It works for my ems or my perfect copies too, and for the more imperfect copies, and even many of the botched copies (with an additional modifier that’s somewhat below 1), and… do you see where I’m going with this?
Yeah, I do, but what I don’t see is how this is ethics, and not mere self-interest.
If you don’t draw any distinction between what you personally want and what counts as a better world in a more universalised way, I don’t see how the concept of “ethics” comes in at all.
Can we taboo the words ethics and self-interest?
Okay. “Morality” is banned too, as I use it as a synonym for ethics.
As a sub-component of my total preferences, which are predictors of my actions, I consider a kind of “averaged preferences” where I get no more stake in deciding what constitutes a better world than any other mind. The result of this calculation then feeds into my personal preferences, such that I have a weak but not inconsiderable desire to maximise this second measure, which I weigh against other things I want.
It seems to me that you don’t do this second loop through. You have your own desires, which are empathically sensitive to some more than others, and you maximise those.
I think our positions may not be that different.
Oh, I do that too. The difference is that I apply an appropriately reduced selfish factor for how much I weigh minds that are similar to or dissimilar from my own in various ways.
You can implement the same thing in your total preferences algorithm by using an extended definition of “me” for finding the value of “my personal preferences”.
Edit: I’m not quite sure why this is getting downvoted. But I’ll add three clarifications:
I obviously somewhat care about minds that are completely alien to my own too.
When I said “not that different” I meant it; I didn’t mean identical, just that the output may not be that different. It really depends on which definition of self one uses when running the algorithm, and on what our “selfish constant” is (it is unlikely we have the same one).
By the “extended LW-ish definition of ‘me’”, I meant the attitude where, if you make a perfect copy of you, they are both obviously you; and while they do diverge, neither can meaningfully call itself the “original”.
To me, that second loop through only has value to the extent that I can buy into the idea that it’s non-partisan—that it’s “objective” in that weaker sense of not being me-specific.
This is why I was confused. I assumed that the problem was, when you talked about “making the world a better place”, “better” was synonymous with your own preferences (the ones which are predictors of your actions). In other words, you’re making the kind of world you want. In this sense, “making the world a better place” might mean you being global dictator in a palace of gold, well stocked harems, etc.
To me, putting that similarity factor into your better-world definition is just a lesser version of this same problem. Your definition of “better world” is coloured by the fact that you’re doing the defining. You’ve given yourself most of the stake in the definition, by saying minds count to the extent that they are similar to your own.
The components that don’t carry any additional weight for being “me” are still there in my implementation. If you feel like calling something objective, you may as well call that part of the function objective. When I said “somewhat under 1”, that was in the context of the modifier I give to people/entities that are “part-me” when applying the selfish multiplier.
Konkvistador:
1 Me * selfish multiplier + 0.5 Me * 0.5 * selfish multiplier + … + 0 Me + 0 Me + 0 Me
syllogism:
1 Me * selfish multiplier + 0.5 Me + 0.3 Me + … + 0 Me + 0 Me + 0 Me
As you can probably see there are trivial ways to make these two equivalent.
Hmm either I don’t understand or you don’t.
Define P[i][j] as the preference-weight for some outcome j of some mind i. P[me][j] is my preference weight for j.
To decide my top-level preference for j (i.e., in practice, whether I want to do it), I consider
S * P[me][j] + E * sum(P[i][j] for i in minds)
Where S is the selfishness constant, E is the ethics constant, and S+E=1 for the convenience of having my preferences normalised to [0,1].
In other words, I try to estimate the result of an unweighted sum of every mind’s preferences, and call the result of that what a disinterested observer would decide I should do. I take that into account, but not absolutely. Note that this makes my preference function recursive, but I don’t see that this matters.
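A rough sketch of that calculation, with invented preference numbers purely to show the shape of it (the recursion is ignored here):

    # Toy rendering of: S * P[me][j] + E * sum(P[i][j] for i in minds)
    # P maps mind -> outcome -> preference weight; all numbers are invented.
    P = {
        "me":    {"eat_bacon": 0.9, "skip_bacon": 0.1},
        "pig":   {"eat_bacon": -1.0, "skip_bacon": 1.0},
        "other": {"eat_bacon": 0.0, "skip_bacon": 0.0},
    }

    def top_level_preference(j, S):
        E = 1.0 - S  # selfishness constant S, ethics constant E, S + E = 1
        ethical_term = sum(prefs[j] for prefs in P.values())  # unweighted sum over minds
        return S * P["me"][j] + E * ethical_term

    # With a high enough S the selfish option wins; with a low S it doesn't.
    for S in (0.7, 0.3):
        print(S, {j: round(top_level_preference(j, S), 2) for j in P["me"]})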
I don’t think your calculation is equivalent, because you don’t estimate sum(P[i][j] for i in minds). To me this means you’re not really thinking about what would be preferable to a disinterested observer, and so it feels like the playing of a different game.
PS In terms of FAI, P[i][j] is the hard part: getting some accurate anticipation of what minds actually prefer. I unapologetically wave my hands on this issue. I have this belief that a pig really, really, really doesn’t enjoy its life in a factory farm, and that I get much less out of eating bacon than it’s losing. I’m pretty confident I’m correct on that, but I have no idea how to formalise it into anything implementable.
I don’t think I’m misunderstanding, since we just used different notation to describe this:
S * P[me][j] + E * sum(P[i][j] for i in minds)
1 Me * selfish multiplier + 0.5 Me + 0.3 Me + … + 0 Me + 0 Me + 0 Me
We both described your preferences the same way. Though I neglected to explicitly normalize mine. To demonstrate I’m going to change the notation of my formulation to match yours.
1 Me * selfish multiplier + 0.5 Me + 0.3 Me + … + 0 Me + 0 Me + 0 Me
P[me][j] * S + P[1][j] + P[2][j] + … + P[i-2][j] + P[i-1][j] + P[i][j]
S * P[me][j] + E * sum(P[i][j] for i in minds)
My notation may have been misleading in this regard: “0.5 Me” isn’t 0.5 * Me, it is just the mark I’d use for a mind that is… well, 0.5 Me. In your model the “me content” doesn’t matter when tallying minds, except when it hits 1 in your own, so there is no need to mark it; but the reason I still used the fraction-of-me notation to describe certain minds was to give an intuition of what your described algorithm and my described algorithm would do with the same data set.
So if syllogism and Konkvistador were using the same selfish multiplier (let us call it S for short, as you do), the difference between their systems would be the following:
0.5 Me * 0.5 * (S-1) + 0.3 Me * 0.3 * (S-1) + … + really small fraction of Me * really tiny number * (S-1)
This may be a lot or it may not be very much; it really depends on how big it is compared to:
1 Me * S + 0 Me + 0 Me + 0 Me + … + 0 Me
In other words, if “Me” is very concentrated in a universe, say you drop me into a completely alien one, my algorithm wouldn’t produce an output measurably different from your algorithm. Your algorithm can also consistently give the same result if your S and Me embrace an extended self-identity, rather than just your local feeling of self. Now, this of course boils down to the S factor and Me being different for the same person when using this algorithm (we are, after all, talking about how something is or isn’t implemented rather than doing silly sloppy math for fun), but I think people really do have a different S factor when thinking about such issues.
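To make the comparison concrete, here’s a toy calculation with invented me-fractions and preferences, reading the difference terms above as giving each partly-me mind a weight of 1 + (me-fraction) * (S - 1):

    # Toy comparison of the two weightings; all numbers invented.
    # me_frac[i]: how much of "me" is in mind i; pref[i]: that mind's
    # preference for some fixed outcome.
    me_frac = [1.0, 0.5, 0.3, 0.0, 0.0]
    pref    = [0.9, 0.6, 0.4, -0.2, 0.1]
    S = 5.0  # shared selfish multiplier

    # syllogism: only the fully-me mind gets the selfish boost; everyone else counts as 1.
    syllogism_total = sum((S if f == 1.0 else 1.0) * p for f, p in zip(me_frac, pref))

    # Konkvistador: every mind's boost scales with its me-fraction.
    konkvistador_total = sum((1.0 + f * (S - 1.0)) * p for f, p in zip(me_frac, pref))

    # The gap is sum(f * (S - 1) * pref) over the partly-me minds, so it vanishes
    # as "me" becomes concentrated in one mind or as S approaches 1.
    print(syllogism_total, konkvistador_total, konkvistador_total - syllogism_total)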
In other words, the “me” you use in S * P[me][j] doesn’t have to be restricted to a single mind with a me-value of exactly one. To help make that concrete, imagine there is a universe that you can arrange to your pleasure, and it contains P[you]; but not just any P[you]: it contains P[you] minus the last two weeks of memory. Does he still deserve the S factor boost? Or at least part of it?
Readers may be wondering: if the two things can be made mathematically equivalent, why do I prefer my implementation to his (which is probably more standard among utilitarians who don’t embrace an extended self)? Why not just adopt the same model but use a different value of Me or a different S to capture your preferences? Because in practice I think it makes for the better heuristic for me:
The more similar a mind is to mine, the less harm is done by my human tendency towards anthropomorphizing (the mind projection fallacy is less of an issue when the slime monster really does want our women). In other words, I can be more sure of my estimation of their interests, goals and desires: it is less likely to be distorted by my subconsciously rigging “their” preferences in my favour, because their preferences really are now explicitly partially determined by the algorithm in my brain that presumably wants to really find the best option for an individual (the one that runs when I say “What do I want?”). Most rationalist corrections made for a 0.5 Me * 0.5 term also have to be made for the Me term, and vice versa.
I find it easier to help most people, because most people are pretty darn similar to me compared with non-human or non-living processes. And it doesn’t feel like a grand act of selflessness, or something that changes my self-image, signals anything, or burns “willpower”; it feels more like common sense.
It captures my intuition that I don’t just care about my own preferences plus some averaged thing; I care about specific people’s preferences, independently of “my own personal desires”, more than about others’. This puts me in the right frame of mind when interacting with people I care about.
Edit: Down-voted already? Ok, can someone tell me what I’m doing wrong here?
Can you see how mathematically the two algorithms could create the same output?
How does you personally buying bacon hurt pigs? Is it because you wouldn’t eat a factory-farmed non-person human (and if so, why not?), or is it an object-level calculation of the bastardliness of buying factory-farmed bacon (presumably via your impact on pigs dying)?
I ask because I personally can’t see the chain from me buying pig to pigs dying, and I like having an easy source of protein. My brain tells me zero extra animals die as a result of my eating meat.
I say “my brain” rather than “I” because I suspect this may be rationalisation: I don’t think I’d react the same way to the idea of eating babies, or decide eating meat is fine because no extra animals die, if I wasn’t already eating meat.
It’s starting to look like rationalisation to me. But I still don’t see any object-level cost to eating meat.
Edit: TL;DR: eating meat is proof of my unethicalness but not actually unethical. Oh, and for the record, I reserve the right to be unethical.
When pig farmers decide how many pigs to slaughter for bacon, they do so based on (among other things) current sales figures for bacon. When I buy bacon, I change those figures in such a way as to trigger a (negligibly) higher number of pigs being slaughtered. So, yeah, my current purchases of bacon contribute to the future death of pigs.
Of course, when pig farmers decide how many sows to impregnate, they do so based on (among other things) current sales figures for bacon. So my current purchases of bacon contribute to the future birth of pigs as well.
So if I want to judge the ethical costs of my purchasing bacon, I need to decide if I value pigs’ lives (in which case purchasing bacon might be a good thing, since it might lead to more pig-lives), as well as decide if I negatively value pigs’ deaths (in which case purchasing bacon might be a bad thing, since it leads to more pig-deaths). If it turns out that both are true, things get complicated, but basically I need to decide how much I value each of those things and run an expected value calculation on “eat bacon” and “don’t eat bacon” (as well as “eat more bacon than I did last month,” which might encourage pig farmers to always create more pigs than they kill, which I might want if I value pig lives more than I negatively value pig deaths).
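(For what it’s worth, a toy version of that expected-value comparison, with entirely invented value assignments and marginal-impact guesses:)

    # All numbers invented, purely to show the shape of the calculation.
    value_per_pig_life   = 0.01   # how much I value an extra pig coming into existence
    value_per_pig_death  = -0.05  # how much I (negatively) value an extra pig death
    value_of_bacon_to_me = 1.00   # flavour, convenience, etc.

    # Rough expected marginal effect of one purchase on future births and deaths.
    extra_births_per_purchase = 0.01
    extra_deaths_per_purchase = 0.01

    def expected_value(buy_bacon):
        if not buy_bacon:
            return 0.0
        return (value_of_bacon_to_me
                + extra_births_per_purchase * value_per_pig_life
                + extra_deaths_per_purchase * value_per_pig_death)

    print(expected_value(True), expected_value(False))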
Personally, I don’t seem to value pig lives in any particularly additive way… that is, I value there being some pigs rather than no pigs, but beyond some hazy threshold number of “some” (significantly fewer than the actual number of pigs in the world), I don’t seem to care how many there are. I don’t seem to negatively value pig deaths very much either, and again I don’t do so in any particularly additive way. (This is sometimes called “scope insensitivity” around here and labeled a sign of irrational thinking, though I’m not really clear what’s wrong with it in this case.)
You don’t need to value pig lives as such to conclude that eating pigs would be against your values. You just need to value (negatively) certain mental states that the pigs can experience, such as the state of being in agony.
I agree that I can conclude that eating pigs is against my values in various different ways, not all of which require that I value pig lives. (For example, I could value keeping kosher.)
But negatively valuing pig agony, period-full-stop, doesn’t get me there. All that does is lead me to conclude that if I’m going to eat pigs, I should do so in ways that don’t result in pigs experiencing agony. (It also leads me to conclude that if I’m going to refuse to eat pigs, I should do that in a way that doesn’t result in pigs experiencing agony.)
If I’m at all efficient, it probably leads me to painlessly exterminate pigs… after all, that guarantees there won’t be any pigs-in-agony mental states. And, heck, now that there are all these dead pigs lying around, why not eat them?
More generally, valuing only one thing would lead me to behave in inhuman ways.
You claimed that you didn’t value pig lives, presumably as a justification for your decision to eat pigs. You then acknowledged that, if you valued the absence of agony, this would provide you with a reason to abstain from eating pigs not raised humanely. Do you value the absence of agony? If so, what animal products do you eat?
First of all, I didn’t claim I don’t value pig lives. I claimed that the way I value pig lives doesn’t seem to be additive… that I don’t seem to value a thousand pig lives more than a hundred pig lives, for example. Second of all, the extent to which I value pig lives is almost completely unrelated to my decision to eat pigs. I didn’t eat pigs for the first fifteen years of my life or so, and then I started eating pigs, and the extent to which I value pig lives did not significantly change between those two periods of my life.
All of that said… I value the absence of agony. I value other things as well, including my own convenience and the flavor of yummy meat. Judging from my behavior, I seem to value those things more than I negatively value a few dozen suffering cows or a few thousand suffering chickens. (Or pigs, in principle, though I’m not sure I’ve actually eaten a whole pig in my life thus far… I don’t much care for pork.)
Anyway, to answer your question: I eat pretty much all the animal products that are conveniently available in my area. I also wear some, and use some for decoration.
If anyone is inclined to explain their downvotes here, either publicly or privately, I’d be appreciative… I’m not sure what I’m being asked to provide less of.
Hmm this business of valuing pig lives doesn’t sit right with me.
My idea of utilitarianism is that everybody gets an equal vote. So you can feel free to include your weak preferences for more pigs in your self-interested vote, the same way you can vote for the near super-stimulus of crisp, flavoursome bacon. But each individual pig, when casting their vote, is completely apathetic about the continuation of their line.
So if you follow a utilitarian definition of what’s ethical, you can’t use “it’s good that there are pigs” as an argument for eating them being ethical. It’s what you want to happen, not what everyone on average wants to happen. I want to be king of the world, but I can’t claim that everyone else is unethical for not crowning me.
Leaving label definitions aside, I agree with you that IF there’s a uniquely ethical choice that can somehow be derived by aggregating the preferences of some group of preference-havers, then I can’t derive that choice from what I happen to prefer, so in that case if I want to judge the ethical costs of purchasing bacon I need to identify what everybody else prefers as part of that judgment. (I also, in that case, need to know who “everybody else” is before I can make that determination.)
Can you say more about why you find that premise compelling?
I find that premise compelling because I have a psychological need to believe I’m motivated by more than self-interest, and my powers of self-deception are limited by my ability to check my beliefs for self-consistency.
What this amounts to is the need to ask not just what I want, but how to make the world “better” in some more impartial way. The most self-convincing way I’ve found to define “better” is that it improves the net lived experience of other minds.
In other words, if I maximise that measure, I very comfortably feel that I’m doing good.
Fair enough.
Personally I reject that premise, though in some contexts I endorse behaving as though it were true for pragmatic social reasons. But I have no problem with you continuing to believe it if that makes you feel good… it seems like a relatively harmless form of self-gratification, and it probably won’t grow hair on your utility function.
Accidentally hit the comment button with a line of text written. Hit the retract button so I could start again. AND IT FUCKING JUST PUT LINES THROUGH IT WHAT THE FUCK.
That is to say, how do I unretract?
“When I buy bacon, I change those figures in such a way as to trigger a (negligibly) higher number of pigs being slaughtered.”
But is my buying bacon actually recorded? It’s possible that those calculations are done on sufficiently large scales that my personally eating meat causes no pig suffering. As in, were I to stop, would any fewer pigs suffer?
It’s not lives and deaths I’m particularly concerned with. The trade-off I’m currently thinking about is pig suffering vs. bacon. And if pig suffering is the same whether or not I eat already-dead pigs, I’ll probably feel better about it.
Scope insensitivity would be not taking into account my personal impact on pig suffering, I suppose. Seeing as there’s already a vast amount of it, I’m tempted to label anything I could do about it “pointless” or similar.
Which bypasses the actual utility calculation (otherwise known as actual thinking: shutting up and multiplying).
The point is I need to have a think about the expected consequences of buying meat, eating meat someone else has bought, eating animals I find by the road, etc. Do supermarkets record how much meat is bought? How important is eating meat for my nutrition (and/or convenience)? Etc.
You can edit the comment. Conversely, you can simply create a new comment.
It’s hard to visualise, yeah.
Let’s say that demand for bacon fell by 50%. It seems obvious that the market would soon respond and supply of bacon (in number of pigs raised for slaughter) would also fall by 50%, right? Okay, so re-visualise for other values -- 40%, 90%, 10%, 5%, etc.
You should now be convinced that there’s a linear relationship between bacon supply and bacon demand. At a fine enough granularity, it’s probably actually going to be a step-function, because individual businesses will succeed or fail based on pricing, or individual farmers might switch to a different crop. But every non-consumer of meat is just as responsible for that.
In other words, let’s say 5% of people are vegetarian, causing 5% less meat production. We’re all equally responsible for that decline, so we all get to say that, on average, we caused fewer pigs to die.
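Or, as a back-of-the-envelope calculation with invented totals: if the demand-to-supply response is roughly linear, each consumer’s expected marginal impact is just their per-capita share of production, even though actual production adjusts in coarse steps.

    # Invented totals, just to illustrate the expected-impact argument.
    pigs_raised_per_year = 100_000_000
    bacon_consumers      = 200_000_000

    # Under a roughly linear demand-supply response, each consumer's expected
    # marginal contribution is their per-capita share, step function or not.
    expected_pigs_per_consumer = pigs_raised_per_year / bacon_consumers
    print(expected_pigs_per_consumer)  # 0.5 pigs per consumer per year, in expectation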