N=1, X = -(T + epsilon), where T is the amount of antifun that a month of torture is.
But I really don’t like that answer.
Some thinking out loud follows:
It’s clear to me that I don’t want to answer this question, for more or less the same reasons that I don’t want to explain how much suffering I would be willing to impose upon you in exchange for X amount of fun for me. Which, of course, doesn’t necessarily stop me from doing it, as long as I don’t have to admit to it.
It’s relatively clear to me that intuitively I’d give a different answer if I started from a small X and worked my way up by increments asking “Is this enough?” than if I started from a large X and worked my way down by increments asking “Is this too little?” From which I conclude that my intuitions on this subject are not reliable. Which I knew already from earlier conversations about “utilon-trade” scenarios.
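If I wanted that unreliability laid out mechanically, the two procedures look something like the following toy Python sketch. To be clear, anchored_judge, its thresholds, and the step size are all invented stand-ins for my intuition, not anything measured:

```python
# A toy model of the two elicitation procedures: start low and raise X asking
# "is this enough?", or start high and lower X asking "is this too little?".
# anchored_judge stands in for my intuition; its shifting threshold is the
# anchoring effect, and every number here is arbitrary.

def ascending(judge, step=1.0, limit=1000.0):
    # Work upward from X = 0 until the judge first says "enough".
    x = 0.0
    while x < limit and not judge(x, anchor="low"):
        x += step
    return x

def descending(judge, step=1.0, start=1000.0):
    # Work downward from a large X while the judge still says "enough".
    x = start
    while x > 0 and judge(x - step, anchor="high"):
        x -= step
    return x

def anchored_judge(x, anchor):
    # The acceptance threshold itself drifts with the starting anchor.
    threshold = 400.0 if anchor == "low" else 600.0
    return x >= threshold

print(ascending(anchored_judge))   # settles near 400
print(descending(anchored_judge))  # settles near 600: same question, different answer
```

Any judge whose threshold moves with the anchor will make the two runs disagree, which is exactly the symptom I'm describing.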
It’s clear to me that, while I have a fairly concrete understanding of torture, I have a very fuzzy understanding of fun. So when I say “X = -(T + epsilon)”, I’m comparing concrete-torture-apples to fuzzily-imagined-fun-oranges, and I once again have no reliable intuitions.
If I screen off all of my real-world understanding of torture, so I can do a fuzzily-imagined-apples-to-fuzzily-imagined-apples comparison, it’s a simple optimization question: is 1·(T + epsilon) − 1·T positive? Why, yes it is. Great, do it! But the minute I unscreen that real-world understanding, I’m back to not wanting to answer that question.
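Spelled out in the same screened-off terms, a minimal sketch, with N, T, and epsilon as arbitrary made-up utilon quantities:

```python
# The screened-off comparison in made-up "utilon" units: N trades, each
# yielding (T + epsilon) utilons of fun against T utilons of antifun.
def net_value(N, T, epsilon):
    return N * (T + epsilon) - N * T  # algebraically, N * epsilon

# Positive for any positive epsilon, no matter how large T is:
assert net_value(N=1, T=10**6, epsilon=0.01) > 0
```

Under the screen, the answer doesn’t depend on T at all; only epsilon survives the subtraction.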
What I’d prefer to do is unpack “-T amount of fun” into something equally concrete and do the comparison that way, but I don’t seem to know how to do that.