[Note: this comment comes with small amounts of attempted mindreading, which I think one should be careful with when arguing online. If this doesn’t feel like a fair stab at what you felt your underlying reasoning was, apologies]
If I (Raemon) had said the sentence you just said, my motivation would have been more likely to be a defense against clever, manipulative arguers (a quite valuable thing to have a defense against) than an attempt to have a robust moral framework that can handle new information I might learn about weird edge cases.
Say that rather than a person coming to you and giving you a hypothetical example, the person reflecting upon the hypothetical elephants is you, after having existed for a long enough time that you’ve achieved all your most pressing goals, and you’ve actually studied cosmology yourself, and come to the conclusion that the hypothetical elephants most likely exist.
I think it makes sense for people not to worry about moral quandaries that aren’t relevant to them when they have more pressing things to worry about. I think it’s important not to over-apply the results of thought experiments (e.g. in real life there’s no way you could possibly know that pushing a fat man off a bridge will stop a trolley and save five lives).
But insofar as we’re stepping into the domain of “figure out ethics for real, in a robust fashion”, it seems useful to be able to seriously entertain thought experiments, so long as they come properly caveated with “As long as I’m running on human hardware I shouldn’t make serious choices about hypothetical elephants.”
I’d be somewhat surprised if the bgaesop-who’s-studied-cosmology, who had decided that ironing out their moral edge cases was their top priority, wanted to account for moral uncertainty, and so actually did their own research to figure out whether elephants-outside-the-lightcone existed or mattered… would end up saying that the reason they don’t matter is the same reason fictional elephants don’t matter. (Although I can more easily imagine hypothetical future you deciding they didn’t matter for other reasons.)
It seems to me that figuring out the answers to questions that will, and can, only be faced by me-who-has-studied-cosmology-for-a-century (or similar), can, and should, be left to me-who-has-studied-cosmology-for-a-century to figure out. Why should I, who exist now, need to have those answers?
I think that’s totally fair, but in that case I think it makes more sense to say upfront “this conversation doesn’t seem to be meaningful right now” or “for the time being I only base my moral system on things I’m quite confident of” or some such, rather than expressing particular opinions about the thought experiment.
Either you have opinions about the thought experiment, in which case you’re offering your best guess and/or your preferred meta-strategy for reaching reflective equilibrium or some such, or you’re not, in which case why are you discussing it at all?
That sort of answer is indeed appropriate, but only contingent on this notion of “a version of me who has studied cosmology, etc., for a long time, and both has opinions on certain moral quandaries and also encounters such in practice”. If we set aside this notion, then I am free to have opinions about the thought experiment right now.
Sure, but bgaesop’s “I don’t believe” is disregarding the thought experiment, which is the part I’m responding to. (I’m somewhat confused right now how much you’re speaking for yourself, and how much you’re speaking on behalf of your model of bgaesop or people like him)
(I’m somewhat confused right now how much you’re speaking for yourself, and how much you’re speaking on behalf of your model of bgaesop or people like him)
The two are close enough for the present purposes.
Meanwhile, the point of the thought experiment is not for us to figure out the answer with any kind of definitiveness, but to tease out whether the factors it explores should even be part of our model at all (the answer to which might be no).
At the very least, you can have some sense of whether you value things that you are unlikely to directly interact with (and/or how confused you are about that, or how confused you are about how reliably you can tell when you might interact with something).