Knowing your own suffering is on pretty solid footing. But when taking into account how we impact others, we have no direct perception. Essentially I deploy a theory of mind: that blob over there probably corresponds to the same kind of activity that I am. But this does not come anywhere near the self-evident bar. Openness or closedness has no import here. Even if I am that guy over there, if I don’t know whether he is a masochist, I don’t know whether causing him to experience pain is a good action or not.
The other reason we have to be cautious when following valence utilitarianism is that there’s no way to measure conscious experience. You know it when you have it, but that’s it.
Does this take imply that employing numbers in your application of utilitarianism is a misapplication? How can we establish that a utility monster does not arise if we are not allowed to compare experiences?
The repugnancy avoidance has an issue with representation levels. If you have a repugnant analysis, agreeing with its assumptions while disagreeing with its conclusions is inconsistent. That is, once you write down a number (which I know the post systematically distanced itself from) to represent suffering, the symbol manipulations do not ask permission to pass an “intuition filter”. Sure, you can say after reflecting for a long time on a particular formula that it is incongruent and “not the true formula”. But in order to get the analysis started you have to take some stance (even if it uses some unusual and fancy maths or whatever). And the soundness of that particular stance is not saved by the fact that we could have chosen another. “If what I said is wrong, then I didn’t mean it” is a way to be “always right”, but it forfeits meaning anything. If you just use your intuitive feelings on whether a repugnant conclusion should be accepted and do not refer at all to the analysis itself, the analysis is not a gear in your decision procedure.
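To make concrete the point that the symbol manipulations do not ask permission: here is a minimal toy sketch (my own illustration, not from the post; the populations and welfare numbers are invented) of how a straightforwardly additive welfare formula mechanically ranks a huge barely-worth-living population above a small thriving one, with no intuition filter in the loop:

```python
# Toy additive aggregation: total welfare = population size * welfare per person.
# All numbers below are made up purely for illustration.

def total_welfare(population_size, welfare_per_person):
    return population_size * welfare_per_person

world_a = total_welfare(1_000, 100.0)       # small, very happy population
world_z = total_welfare(10_000_000, 0.02)   # huge, barely-worth-living population

# The formula ranks Z above A without consulting any intuition filter.
print(world_z > world_a)  # True
```

Rejecting the conclusion after the fact means rejecting the stance that started the analysis, not patching the arithmetic.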
I could not really follow how open individualism bypasses the population size problem. We still face the problem of generating different experiential viewpoints. Would it not still follow that it is better to have a world like Game of Thrones, with lots of characters in constantly struggling conditions, than a book where a single protagonist is the only character? Sure, both being “books” gives us a ground to compare them on, but if comparability preserves addition it would seem that more points of view lead to more experience. That is, take some world state with some humans etc. and an area of flat space, and contrast it with a state where, instead of being flat, there is some kind of experiencer there (say a human). Even if we disregard borders, this seems to be a strict improvement in experience. Is it better to be one unified brain, or an equal number of neurons split into separate “mini-experiencers”? Do persons with multiple-personality conditions contribute more experience weight to the world? Do unconscious persons contribute less? Does each ant contribute as much as a human? Do artists count more? The repugnant steps can still be taken.
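The additivity worry above can be sketched as a toy model (again my own illustration, assuming experience aggregates by summation over viewpoints; the weights are invented):

```python
# Toy model: the world's experience total is a sum over viewpoint weights.
# All weights below are invented for illustration.

def world_experience(viewpoints):
    return sum(viewpoints)

base_world = [5.0, 3.0, 2.0]   # some humans etc., plus flat space (contributes 0)
new_experiencer = [1.0]        # put any positive experiencer in the flat region

# Adding any positive viewpoint is a strict improvement under addition,
# regardless of where borders between experiencers are drawn.
print(world_experience(base_world + new_experiencer) > world_experience(base_world))  # True

# Splitting one unified experiencer into "mini-experiencers" leaves the total
# unchanged -- unless the split itself creates or destroys experience, which
# is exactly the open question about brains, ants, and dissociated minds.
unified = [4.0]
split = [1.0, 1.0, 1.0, 1.0]
print(world_experience(unified) == world_experience(split))  # True
```

So as long as comparability preserves addition, the repugnant staircase is still there to climb.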