Does someone really believe that all unrealistic hypotheticals are useless? It seems likely to me that the people you’re talking about have very specific issues with very specific hypotheticals, or that they have past experiences that lead them to believe that people who create unrealistic hypotheticals tend to be full of shit, and you’re just strawmanning them.
> philosophical problems often assume “no-one will ever know” so that you can discuss moral principles without having to have a detailed understanding of human psychology and sociology.
This one bothers me. What if I believe that there is literally nothing to morality besides human psychology and sociology?
In fact, some hypotheticals are interesting _mostly_ in the way that they contradict reality. Pointing out that intuitions differ between “nobody will ever know” and “you’ll know, and others probably won’t automatically know, but there’s a possibility with sufficient advances in forensics” is interesting, and I’d be happy to play. Simply taking a result from a “nobody will ever know, and you know that with certainty” setup and trying to apply it to the real world is something I’d object to.
You can be interested in whatever you want, but that’s different from that thing being important or useful (except for its use in entertaining you).
I think we could have an interesting and/or useful discussion about the distinction between interesting and useful for known-to-be-impossible (which is what “unrealistic” means, right?) topics.
My claim: For purposes of this conversation (trying to determine whether all, some, or no abstract hypotheticals can be validly rejected by showing that they’re unreal), we’re ALREADY not talking about reality. And for unreal things, “interesting” and “useful” are really hard to distinguish.
“Useful” is a prediction. “X is useful” means that you expect to derive some utility from X in the future (or right now). And if you can explain exactly how X is going to give you some non-trivial utility, you may be able to convince someone that X really is useful.
“Interesting” is the feeling of curiosity, it is entirely subjective and requires no explanations. You are free to be interested in whatever you want. You can even be interested, exclusively, in useless things (to the extent that anything is really useless).
My internal evaluation of internet discussions is much less binary than this. My curiosity tends toward things that I may get some utility from, and much of the utility is in the form of enjoyment of exploration and discussion.
There are some cases where there’s more direct utility in terms of behavioral changes I can apply, but almost never on the topic at hand (abstract unrealistic hypotheticals).
“Does someone really believe that all unrealistic hypotheticals are useless?”—I don’t claim an explicit belief, just that many people have a strong bias in this direction and that this often causes them to miss things that would have been obvious if they’d spent even a small amount of time thinking about it.
“This one bothers me. What if I believe that there is literally nothing to morality besides human psychology and sociology?”—Well, you can talk about psychology and sociology as they relate to the nature of morality; the point is to avoid complicating the discussion by forcing people to model the effect people finding out about a particular event would have on society and how they act.
And I’m suggesting that the bias might be justified. Though it’s hard to talk about that without specific examples.
> the point is to avoid complicating the discussion by forcing people to model the effect people finding out about a particular event would have on society and how they act.
What if this modeling explains 99% of moral choices, and when you remove it you’re left with nothing but noise? Or, what if this modeling is hard coded into my brain, and is literally impossible to turn off? I’m not trying to start an argument about whether this is true. I’m trying to show that even the simplest and most innocent-looking unrealistic problems could be hiding faulty assumptions.
“What if this modeling explains 99% of moral choices, and when you remove it you’re left with nothing but noise?”—Even if it only applies to 1% of situations, it shouldn’t be rounded off to zero. After all, there’s a decent chance you’ll encounter at least one of these situations within your lifetime. But more importantly, this is addressed by the section on Practise Exercises Don’t Need to Be Real.
“Or, what if this modeling is hard coded into my brain, and is literally impossible to turn off?”—I view this similarly to showing someone a really complicated maths proof and them saying, “Given my brain, it’s literally impossible for me to understand a proof this complicated”. In that case you’ll just have to trust other people. However, if experts disagree, as they do in philosophy, then I suppose you’ll just have to figure out which experts to trust. But that said, I’m skeptical that this is the kind of thing that’s hardcoded into anyone’s brain.
“I’m trying to show that even the simplest and most innocent-looking unrealistic problems could be hiding faulty assumptions.”—The floating abstract model doesn’t contain these assumptions. You’ve made the assumption that the model is supposed to be directly applied, which is unwarranted.
The core issue, I think, is that for you “usefulness” is an extremely low bar. Indeed, it might be possible to take any question and show that the utility of having an answer to that question is > 0 (it would be quite hard to find an answer of negative utility, and it would be even harder to show that the utility is exactly 0). So, if you believe that all questions are useful, then there is no way I’ll convince you that some hypotheticals are useless.
And if you don’t believe this at all, then please give a few examples of useless questions, because, clearly, I don’t understand your metric of usefulness/importance.
By the way, why do you use “” quotes instead of the > blockquotes? The latter are much more readable.
“So, if you believe that all questions are useful, then there is no way I’ll convince you that some hypotheticals are useless”—that’s purely a function of proving a negative being difficult in general. Why do you expect this to be easy?
There are two distinct claims:
1. That discussing unrealistic hypotheticals is usually a valuable way to spend my time (or that I tend to underestimate the value of discussing them).
2. That discussing unrealistic hypotheticals usually, eventually, produces some non-zero value.
(1) is what we disagree on, but (2) is what you seem to be proving. If I wanted to convince you that (2) is false, then I would really have to prove negatives. But it’s ok, I don’t actually disagree with (2), that claim is trivially true. If (2) is how you understand “usefulness”, then your post is correct, but also basically void of meaning.
(1) is the claim that some real, living, non-straw humans disagree with, and it is not a claim that you defend well. And to disagree with (1) I don’t need to prove negatives; I only need to pick one hypothetical I find rather useless and ask you to show me that it is really useful. And then, if you’re successful in convincing me, you will have proven that I do sometimes underestimate the value of such hypotheticals.
I tried to do this with the “no-one will ever know” hypotheticals, and I found your replies unconvincing. For example, you said:
> Even if it only applies to 1% of situations, it shouldn’t be rounded off to zero.
When you say that something is not zero, you are talking about (2). If you wanted to talk about (1), you could try to explain why this 1% is either very important, or a reasonable starting point, but then I could change the initial assumption to 0.1% and so on (in fact I initially wanted to say that it applies to 0% of situations, but hesitated). At some point you have to agree that my beliefs about brains make the whole “no-one will ever know” class of hypotheticals nearly useless to me, which sort of contradicts your initial point.