That sort of answer is indeed appropriate, but only contingent on this notion of “a version of me who has studied cosmology, etc., for a long time, and both has opinions on certain moral quandaries and also encounters such in practice”. If we set aside this notion, then I am free to have opinions about the thought experiment right now.
Sure, but bgaesop’s “I don’t believe” is disregarding the thought experiment, which is the part I’m responding to. (I’m somewhat confused right now how much you’re speaking for yourself, and how much you’re speaking on behalf of your model of bgaesop or people like him)
(I’m somewhat confused right now how much you’re speaking for yourself, and how much you’re speaking on behalf of your model of bgaesop or people like him)
The two are close enough for the present purposes.
Meanwhile, the point of the thought experiment is not for us to figure out the answer with any kind of definitiveness, but to tease out whether the thought experiment is exploring factors that should even be part of our model at all. (the answer to which may be no)
At the very least, you can have some sense of whether you value things that you are unlikely to directly interact with (and/or how confused you are about that, or how confused you are about how reliably you can tell when you might interact with something).