The question you’re asking is, in its broadest terms, whether it’s actually a good idea for actions to have potentially negative consequences for people other than the agent. (A related question is whether it’s even a good idea for actions to have potentially negative consequences for the agent.)
In my experience, all answers to this question are uncomfortable if I think them through enough. Rejecting one answer because I’ve recently seen it illustrated, and choosing the opposite answer by negating it, just causes me to flip-flop. (Or, in local terms, subjects me to Dutch Booking.)
My question is not as broad. I can accept a slight cost to external people for a great gain to one agent, under some circumstances. I don’t mind paying (reasonable) taxes to care for people with diseases, even diseases I know I’ll never get (like genetic diseases); I’m even in favor of it. What I was pointing to is more a scope problem: the gain to Ishtar, in terms of a slightly higher thrill for her quest, seems far too low to compensate for a loss that will affect all present and future humans, for some of them to a comparable degree (just hearing the news that the Stradivarius was destroyed can hurt a music fan more than the additional thrill Ishtar gained).
Interesting. So if Ishtar’s quest were modified so that the gain to her was much, much higher, it would be OK?
If the gain to her was high enough, and it wouldn’t be possible to get that gain (or something close enough to it) in a way less harmful to others, it could be OK. It would be a complicated question with no easy answer, a case in which I don’t trust myself to use raw consequentialism, because I don’t know how to evaluate the real harm done by destroying a Stradivarius: both because I’m not enough of a music fan and because integrating the loss over all current and future humans is beyond my skill. So for high enough values of her gain I’d be like “are you really sure you can’t give her the gain without that destruction? And if so, well, I don’t know.”