I have a strongly egalitarian utility function: among scenarios with equal net utility Σ (i.e. the sum of the normalised personal utilities of individual people), I attach a large penalty to those where the utilities are unevenly distributed (at least in the absence of reward and punishment motivations). So my utility is not a function of Σ alone, but of the whole distribution. In practice this means that X and N will be pretty high.
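To make that concrete, here is a toy sketch of my own (the particular penalty form, a spread term weighted by an arbitrary penalty_weight, is just an illustration, not something I'd defend): two worlds with the same Σ come out differently once the distribution is taken into account.

```python
# Toy, purely illustrative aggregator: Sigma minus a penalty for uneven
# distribution. The spread-based penalty and its weight are arbitrary choices
# made up for this example.
from statistics import pstdev

def egalitarian_utility(personal_utilities, penalty_weight=1.0):
    """Return Sigma minus a penalty proportional to how unevenly it is spread."""
    sigma = sum(personal_utilities)
    spread = pstdev(personal_utilities) * len(personal_utilities)
    return sigma - penalty_weight * spread

# Two worlds with the same Sigma = 10:
print(egalitarian_utility([5, 5]))   # even split   -> 10.0
print(egalitarian_utility([10, 0]))  # uneven split ->  0.0
```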
Setting a specific value would be difficult, mainly because I apply a set of deontological constraints while evaluating my utility. For instance, I tend to discard all positive contributions to Σ which are in sharp conflict with my values: pleasure gained by torturing puppies doesn't pass the filter. These deontological constraints are useful in the way ethical injunctions are.
So even if 7^^^7 zoosadists could experience orgasm watching just one puppy tortured, the deontological filter accepts only the negative contribution to Σ from the puppy's utility function (*). Admittedly this is not very reasonable, but the problem is that I can neither switch off my deontological checks easily, nor do I particularly want to learn how.
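A rough sketch of what the filter does to the sum, again purely illustrative (the (value, legitimate) encoding and the numbers are made up for the example): no matter how large the crowd of zoosadists, their positive contributions are dropped and only the puppy's disutility survives.

```python
# Toy "deontological filter": positive contributions flagged as illegitimate
# (e.g. pleasure gained from the puppy's torture) are discarded; negative
# contributions always count.
def filtered_sigma(contributions):
    total = 0.0
    for value, legitimate in contributions:
        if value > 0 and not legitimate:
            continue  # illegitimate pleasure does not pass the filter
        total += value
    return total

# Stand-in for an absurdly large crowd of zoosadists, each gaining +1.0
# illegitimately; only the puppy's -100.0 passes the filter.
zoosadists = [(1.0, False)] * 10**6
print(filtered_sigma(zoosadists + [(-100.0, True)]))  # -> -100.0
```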
So, to answer 1), I have to guess how I would reason in the absence of the deontological filters, and I am not very confident about the results. But say that for a month of really horrible torture (assuming the victim is not a masochist and hasn't given consent) with no long-term consequences, I suppose N would be of order 10^8 with X = a week of a very pleasant vacation. (I could have written 10^10 or 10^6 instead of 10^8; it doesn't actually feel any different.)
As for 2), the largeness of N doesn't necessarily reflect the presence of deontology. It may result either from the far larger disutility of torture compared to the utility of pleasure, or from some discount for an uneven utility distribution. In my case it is both.
(*) Even if I did get rid of rigid deontology, I would like to preserve some discount for illegitimately gained utilities. So the value of Σ, taken as the argument of my utility function, would be higher if the zoosadists got exactly the same amount of pleasure by other means and the puppy was meanwhile tortured only by accident.