I don’t think that’s a consequentialist thought experiment, though? Could you give examples of how it’s illustrated in trolley problems, ticking time bomb scenarios, even forced-organ-donation-style “for the greater good” arguments? If it’s not too much trouble—I realize you’re probably not anticipating huge amounts of expected value here.
(I think most LW-style utilitarian consequentialists would agree there is probably an optimal one, but unilaterally deciding that yourself might lead to additional consequences—better to avoid selfish infighting and, most importantly, perceived unfairness, especially when you may be too uncertain about the outcomes anyway. So that’s a data point for you.)
I don’t think that’s a consequentialist thought experiment, though?
What do you mean by “consequentialist thought experiment”?
I think most LW-style utilitarian consequentialists would agree there is probably an optimal one, but unilaterally deciding that yourself might lead to additional consequences—better to avoid selfish infighting and, most importantly, perceived unfairness, especially when you may be too uncertain about the outcomes anyway. So that’s a data point for you.
Yes, you can always argue that any behavior is instrumental, replacing it with the reason it came to be thought of as moral, but if you go down that route, you’ll end up concluding the purpose of life is to maximize inclusive genetic fitness.
What do you mean by “consequentialist thought experiment”?
One of the standard thought experiments used to demonstrate and/or explain consequentialism. I’m really just trying to see what your model of consequentialism is based on.
Yes, you can always argue that any behavior is instrumental, replacing it with the reason it came to be thought of as moral, but if you go down that route, you’ll end up concluding the purpose of life is to maximize inclusive genetic fitness.
Well, we’re adaptation-executors, not fitness-maximizers—the environment has changed. But yeah, there’s a very real danger in coming up with grandiose rationalizations for how all your moral intuitions are really consequences of your beautifully simple unified theory.
there’s a very real danger in coming up with grandiose rationalizations for how all your moral intuitions are really consequences of your beautifully simple unified theory.
And there’s a very real danger of this being a fully general counterargument against any sufficiently simple moral theory.
You’re absolutely right about that. In fact, there’s a danger that it can be a fully general counterargument against any moral theory at all! After all, a theory’s proponents might simply be rationalizing away the flaws...
I wouldn’t endorse using it as a counterargument at all, honestly. If you can point out actual rationalizations, that’s one thing, but merely calling someone a sophisticated arguer is absolutely a Bad Idea.
I think that’s one of the areas where Eliezer got it completely wrong. Value isn’t that complex, and it’s a mistake to take people’s apparent values at face value as he seems to.
Our values are psychological drives from a time in our evolutionary history before we could possibly be consequentialist enough to translate a simple underlying value into all the actions required to satisfy it. Which means that evolution had to bake in the “break this down into subgoals” operation, leaving us with the subgoals as our actual values. Lots of different things are useful for reproduction, so we value lots of different things. I would not have found that wiki article convincing either back when I believed as you believe, but have you read “Thou Art Godshatter”?
People have drives to value different things, but a drive to value is not the same thing as a value. For example, people have an in-group bias (tribalism), but that doesn’t mean that it’s an actual value.
If values are not drives (note I am saying “values are drives”, not “drives are values”, “drives to value are values”, or anything else besides “values are drives”), what functional role do they play in the brain? What selection pressure built them into us? Or are they spandrels? If this role is not “things that motivate us to choose one action over another,” why are they motivating you to choose one action over another? If that is their role, you are using a weird definition of “drive”, so define “Fhqwhgads” as “things that motivate us to choose one action over another”, and substitute that in place of “value” in my last argument.
If values are drives, but not all drives are values, then…
(a) if a value is a drive you reflectively endorse and a drive you reflectively endorse is a value, then why would we evolve to reflectively endorse only one of our evolved values?
(b) otherwise, why would either you or I care about what our “values” are?
I agree that values are drives, but not all drives are values. I dispute that we would reflectively endorse more than one of our evolved drives as values. Most people aren’t in a reflective equilibrium, so they appear to have multiple terminal values—but that is only because they aren’t in a reflective equilibrium.
What manner of reflection process is it that eliminates terminal values until you only have one left? Not the one that I use (At least, not anymore, since I have reflected on my reflection process). A linear combination (or even a nonlinear combination) of terminal values can fit in exactly the same spot that a single value could in a utility function. You could even give that combination a name, like “goodness”, and call it a single value (though it would be a complex one). So there is nothing inconsistent about having several separate values.
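The point that a combination of terminal values fits in the same spot a single value could can be made concrete. A minimal sketch, not from the thread itself; the value names and weights are invented for illustration:

```python
# A weighted combination of several terminal values occupies the same
# structural slot in a utility function that a single value would.
# The value names and weights below are invented for illustration.

def happiness(world):
    return world["happiness"]

def fairness(world):
    return world["fairness"]

def goodness(world, weights=(0.7, 0.3)):
    # The combination, given a name, looks like one (complex) value.
    return weights[0] * happiness(world) + weights[1] * fairness(world)

# An agent maximizing `goodness` behaves exactly like an agent with a
# single terminal value, even though two values are in play.
a = {"happiness": 10, "fairness": 0}
b = {"happiness": 6, "fairness": 8}
chosen = max([a, b], key=goodness)  # picks `a`: 7.0 > 6.6
```

Nothing about maximization forces the weights to collapse to a single nonzero term.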
Let me hazard a guess, based on my own previous reflection process, now abandoned due to meta-reflection. First, I would find a pair of thought experiments where I had strong feelings for an object-level choice in each, and I felt I was being inconsistent between them. Of course, object-level choices in two different scenarios can’t be inconsistent. There is a computation that returns both of those answers, namely, whatever was going on in your pre-reflection brain.
For example, “throw the lever, redirect the trolley to kill 1 instead of 5” and “don’t butcher the healthy patient and steal their organs to save five.”
The inconsistency is in the two principles I would have automatically come up with to explain two different object-level choices. Or, if my reasons for one emotional reaction are too complicated for me to realize, then it’s between one principle and the emotional reaction. Of course, the force behind the principle comes from the emotional reaction to the thought experiment which motivated it.
Then, I would let the two emotions clash against each other, letting my mind flip between the two scenarios back and forth until one started to weaken. The winner would become stronger, because it survived a clash. And so did the principle my mind coughed up to explain it.
What are the problems with this?
1. It favors simple principles for the sole reason that they are easier to guess by my conscious mind, which of course doesn’t really have access to the underlying reasons. It just thinks it does. This means it depends on my ignorance of other more complicated principles. This part can be destroyed by the truth.
2. The strength of the emotion for the object-level choice is often lent to the principle by something besides what you think it is. Yvain covered this in an essay that you, being a hedonistic utilitarian, would probably like: Wirehead Gods on Lotus Thrones. His example is that being inactive and incredibly happy without interruption forever sounds good to him if he thinks of Buddhists sitting on lotuses and being happy, but bad if he thinks of junkies sticking needles in their arms and being happy. With this kind of reflection, you consciously think something like: “Of course, sitting on the lotus isn’t inherently valuable, and needles in arms aren’t inherently disvaluable either,” but unconsciously, your emotional reaction is what’s determining which explicit principles, like “wireheading is good” or “wireheading is bad”, you consciously endorse.
3. All of your standard biases are at play in generating the emotional reactions in the first place. Scope insensitivity, status quo bias, commitment bias, etc.
4. This reflection process can go down different paths depending on the order in which thought experiments are encountered. If you get the “throw switch, redirect trolley” one first, and then are told you are a consequentialist, and that there are other people who don’t throw the switch because then they are personally killing someone, and you think about their thought process and reject it as a bad principle, and then you see the “push the fat man off the bridge” one, and you think “wow, this really feels like I shouldn’t push him off the bridge, but I have this principle established where I act to save the most lives, not to keep my hands clean”, then slowly your instinct (like mine did) starts to become “push the fat man off the bridge.” And then you hear the transplant version, and you become a little more consequentialist. And so on. It would be completely different if most people heard the transplant one first (or an even more deontology-skewed thought experiment). I am glad, of course, that I have gone down this path as far as I have. Being a consequentialist has good consequences, and I like that! But my past self might not have agreed, and likewise I probably won’t agree with most possible changes to my values. Each version of me judges differences between the versions under its own standards.
5. There’s the so-called sacred vs. secular value divide (I actually think it’s more of a hierarchy, with several layers of increasing sacredness, each of which feels like it should lexically override the last), where pitting a secular value against a sacred value makes the secular value weaker and the sacred one stronger. But which values are secular or sacred is largely a function of what your peers value.
And whether a value becomes stronger or weaker through this process depends largely on which pairs of thought experiments you happen to think of. Is a particular value, say “artistic expression”, being compared to the value of life, and therefore growing weaker, or is it being compared to the value of not being offended, and therefore growing stronger?
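The “layers of sacredness, each lexically overriding the last” structure described above can be sketched with lexicographic comparison. A hypothetical two-tier example; the options and scores are invented:

```python
# Lexicographic ("sacred overrides secular") comparison: any advantage in a
# higher tier outweighs every advantage in a lower tier. Python tuples
# compare lexicographically, so (sacred, secular) scores model this directly.
# The options and scores are invented for illustration.

save_a_life = (1, 0)    # scores: (sacred tier, secular tier)
great_art   = (0, 999)  # huge secular value, no sacred value

preferred = max(save_a_life, great_art)  # (1, 0): sacred tier dominates
```

This is exactly what makes sacred values feel incomparable: no finite amount of the lower tier ever tips the choice.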
So that you don’t ignore my question like you did the one in the last post, I’ll reiterate it. (And I’ll add some other questions).
What process of reflection are you using that you think leads people toward a single value?
Does it avoid the problems with my old one that I described?
Is this a process of reflection most people would meta-reflectively endorse over alternative ones that don’t shrink them down to one value? (If you are saying that people who have several values are out of reflective equilibrium, then you’d better argue for this point.)
I endorse the process you rejected. I don’t think the problems you describe are inevitable. Given that, if people’s values cause them conflict in object-level choices, they should decide what matters more, until they’re at a reflective equilibrium and have only one value.
But how do you avoid those problems? Also, why should contemplating tradeoffs between how much we can get values force us to pick one? I bet you can imagine tradeoffs between bald people being happy, and people with hair being happy, but that doesn’t mean you should change your value from “happiness” to one of the two. Which way you choose in each situation depends on how many bald people there are, and how many non-bald people there are. Similarly, with the right linear combination, these are just tradeoffs, and there is no reason to stop caring about one term because you care about the other more. And you didn’t answer my last question. Why would most people meta-reflectively endorse this method of reflection?
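The bald/non-bald point can be sketched: with one fixed linear combination, the object-level choice flips with the population counts, while both terms stay in the utility function. A hedged illustration with invented numbers:

```python
# With a fixed linear combination of two terms, which way you choose in a
# given situation depends on the numbers involved; neither term is ever
# dropped from the utility function. All numbers here are invented.

def total_happiness(n_bald, n_haired, share_to_bald):
    # share_to_bald: fraction of a fixed resource allocated to bald people
    return n_bald * share_to_bald + n_haired * (1 - share_to_bald)

# 100 bald, 10 haired: giving everything to the bald maximizes the total.
assert max([0.0, 1.0], key=lambda s: total_happiness(100, 10, s)) == 1.0
# Reverse the counts and the choice flips, with no change of values.
assert max([0.0, 1.0], key=lambda s: total_happiness(10, 100, s)) == 0.0
```

Different choices across scenarios reflect different tradeoff situations, not a change (or elimination) of values.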
1, as you said, can be destroyed by the truth (if they’re actually wrong), so it’s part of a learning process. 2 isn’t a problem once you isolate the principle by itself, outside of various emotional factors. 3 is a counterargument against any kind of decision-making; it means that we should be careful, not that we shouldn’t engage in this sort of reflection. 4 is the most significant of these problems, but again it’s just something to be careful about, same as in 3. As for 5, that’s to be solved by realizing that there are no sacred values.
why should contemplating tradeoffs between how much we can get values force us to pick one?
It doesn’t, you’re right. At least, contemplating tradeoffs doesn’t by itself guarantee that people would choose only one value. But it can force people to endorse conclusions that would seem absurd to them—preserving one apparent value at the expense of another. Once confronted, these tensions lead to the reduction to one value.
As for why people would meta-reflectively endorse this method of reflection—simply, because it makes sense.
So it seems to follow that if I offer someone the choice of murdering their child in exchange for greater pleasure, and they turn me down, we can confidently infer that they simply don’t believe I’ll follow through on the offer, because if they did, they would accept. Yes?
Believing that there is no such thing as greater pleasure than the loss from having your child murdered is a subset of “not believing you’ll follow through on your offer”.
I don’t think you’re following that to the logical conclusion, though. You were implicitly arguing that most people’s refusal would not be based on “doesn’t believe I’ll follow through”. It is entirely plausible that most people would give the reason which I described, and as you have admitted, the reason which I described is a type of “doesn’t believe I’ll follow through”. Therefore, your argument fails, because contrary to what you claimed, most people’s refusal would (or at least plausibly could) be based on “doesn’t believe I’ll follow through”.
I agree that most people’s refusal would be based on some version of “doesn’t believe I’ll follow through.” I’m not clear on where I claimed otherwise, though… can you point me at that claim?
So it seems to follow that if I offer someone the choice of murdering their child in exchange for greater pleasure, and they turn me down, we can confidently infer that they simply don’t believe I’ll follow through on the offer, because if they did, they would accept. Yes?
It’s true that you didn’t explicitly claim that people wouldn’t do that, but in context, you did implicitly claim it. You were responding to something you disagreed with, which must mean that you thought they would not in fact do that, and that you were presenting the claim that they would not do that to support your argument.
Someone recently suggested that there should be a list of 5 geek linguistic fallacies and I wonder if something like this should go in the list.
Your response seems very strange, because either you meant to imply what you implied (in which case you thought you could misrepresent yourself as not implying anything), or you didn’t (in which case you said a complete non-sequitur that by pure coincidence sounded exactly like an argument you might have made for real).
My original question was directed to blacktrance, in an attempt to clarify my understanding of their position. They answered my question, clarifying the point I wanted to clarify; as far as I’m concerned it was an entirely successful exchange.
You’ve made a series of assertions about my question, and the argument you inferred from it, and various fallacies in that argument. You are of course welcome to do so, and I appreciate you answering my questions about your inferences, but none of that requires any particular response on my part as far as I can tell. You’ve shared your view of what I’m saying, and I’ve listened and learned from it. As far as I’m concerned that was an entirely successful exchange.
I infer that you find it unsatisfying, though. Well, OK. Can you state what it is you’re trying to achieve in this exchange, and how I can help you achieve it?
I infer that you find it unsatisfying, though. Well, OK. Can you state what it is you’re trying to achieve in this exchange, and how I can help you achieve it?
It appeared that you were either willfully deceptive or incapable of communicating clearly, in a way that looks willfully deceptive. I was hoping you’d offer another alternative besides those.
The other alternative I offer is that you’ve been mistaken about my goals from the beginning.
As I said a while back: I asked blacktrance a question about their working model, which got me the information I wanted about their model, which made it clear where our actual point of disagreement was (specifically, that blacktrance uses “values” to refer to what people like and not what they want). I echoed my understanding of that point, they agreed that I’d understood it correctly, at which point I thanked them and was done.
My goal was to more clearly understand blacktrance’s model and where it diverged from mine; it wasn’t to challenge it or argue a position. Meanwhile, you started from the false assumption that I was covertly making an argument, and that has informed our exchange since.
If you’re genuinely looking for another alternative, I recommend you back up and examine your reasons for believing that.
That said, I assume from your other comments that you don’t believe me and that you’ll see this response as more deception. More generally, I suspect I can’t give you what you want in a form you’ll find acceptable.
If I’m right, then perhaps we should leave it at that?
No, for a few reasons. First, they may not believe that what you’re offering is possible—they believe that the loss of a child would outweigh the pleasure that you’d give them. They think that you’d kill the child and give them something they’d otherwise enjoy, but that doesn’t make up for losing a child. Though this may count as not believing that you’ll follow through on your offer. Second, people’s action-guiding preferences and enjoyment-governing preferences aren’t always in agreement. Most people don’t want to be wireheaded, and would reject it even if it were offered for free, but they’d still like it once subjected to it. Most people have an action-guiding preference of not letting their children die, regardless of what their enjoyment-governing preference is. Third, there’s a sort-of Newcomblike expected-value decision at work: deriving enjoyment from one’s children requires valuing them in such a way that you’d reject offers of greater pleasure—it’s similar to one-boxing.
This begs the question of whether the word “pleasure” names a real entity. How do you give someone “pleasure”? As opposed to providing them with specific things or experiences that they might enjoy? When they do enjoy something, saying that they enjoy it because of the “pleasure” it gives them is like saying that opium causes sleep by virtue of its dormitive principle.
That’s one way to do it, but not the only way, and it may not even be conclusive, because people’s wants and likes aren’t always in agreement. The test is to see whether they’d like it, not whether they’d want it.
And there’s a very real danger of this being a fully general counterargument against any sufficiently simple moral theory.
Establishing a lower bound on the complexity of a moral theory that has all the features we want seems like a reasonable thing to do. I don’t think the connotations of “fully general counterargument” are appropriate here. “Fully general” means you can apply it against a theory without really looking at the details of the theory. If you have to establish that the theory is sufficiently simple before applying the counterargument, you are referencing the details of the theory in a way that differentiates it from other theories, and the counterargument is not “fully general”.
Well, as Eliezer explained here, simple moral systems are in fact likely to be wrong.
So what, on your view, is the simple thing that humans actually value?
Pleasure: when humans have enough of it (as with wireheading), they like it more than anything else.
(nods) Well, that’s certainly simple.
So it seems to follow that if I offer someone the choice of murdering their child in exchange for greater pleasure, and they turn me down, we can confidently infer that they simply don’t believe I’ll follow through on the offer, because if they did, they would accept. Yes?
Believing that no pleasure could be great enough to outweigh the loss of having your child murdered is a subset of "not believing you'll follow through on your offer."
Yes, that’s true. If you believe what I’m offering doesn’t exist, it follows that you ought not believe I’ll follow through on that offer.
I don’t think you’re following that to the logical conclusion, though. You were implicitly arguing that most people’s refusal would not be based on “doesn’t believe I’ll follow through”. It is entirely plausible that most people would give the reason which I described, and as you have admitted, the reason which I described is a type of “doesn’t believe I’ll follow through”. Therefore, your argument fails, because contrary to what you claimed, most people’s refusal would (or at least plausibly could) be based on “doesn’t believe I’ll follow through”.
I agree that most people’s refusal would be based on some version of “doesn’t believe I’ll follow through.”
I’m not clear on where I claimed otherwise, though… can you point me at that claim?
It's true that you didn't explicitly claim people wouldn't do that, but in context you did implicitly claim it. You were responding to something you disagreed with, which means you must have thought they would not in fact do that, and that you were presenting the claim that they would not do that to support your argument.
https://en.wikipedia.org/wiki/Implicature https://en.wikipedia.org/wiki/Cooperative_principle
I see.
OK. Thanks for clearing that up.
Someone recently suggested that there should be a list of 5 geek linguistic fallacies and I wonder if something like this should go in the list.
Your response seems very strange, because either you meant to imply what you implied (in which case you thought you could misrepresent yourself as not implying anything), or you didn't (in which case you said a complete non sequitur that by pure coincidence sounded exactly like an argument you might have made for real).
What response were you expecting?
My original question was directed to blacktrance, in an attempt to clarify my understanding of their position. They answered my question, clarifying the point I wanted to clarify; as far as I'm concerned it was an entirely successful exchange.
You’ve made a series of assertions about my question, and the argument you inferred from it, and various fallacies in that argument. You are of course welcome to do so, and I appreciate you answering my questions about your inferences, but none of that requires any particular response on my part as far as I can tell. You’ve shared your view of what I’m saying, and I’ve listened and learned from it. As far as I’m concerned that was an entirely successful exchange.
I infer that you find it unsatisfying, though. Well, OK. Can you state what it is you’re trying to achieve in this exchange, and how I can help you achieve it?
It appeared that you were either being willfully deceptive, or so incapable of communicating clearly that it looked willfully deceptive. I was hoping you'd offer an alternative other than those two.
The other alternative I offer is that you’ve been mistaken about my goals from the beginning.
As I said a while back: I asked blacktrance a question about their working model, which got me the information I wanted about their model, which made it clear where our actual point of disagreement was (specifically, that blacktrance uses "values" to refer to what people like and not what they want). I echoed my understanding of that point, they agreed that I'd understood it correctly, at which point I thanked them and was done.
My goal was to more clearly understand blacktrance’s model and where it diverged from mine; it wasn’t to challenge it or argue a position. Meanwhile, you started from the false assumption that I was covertly making an argument, and that has informed our exchange since.
If you’re genuinely looking for another alternative, I recommend you back up and examine your reasons for believing that.
That said, I assume from your other comments that you don't believe me and that you'll see this response as more deception. More generally, I suspect I can't give you what you want in a form you'll find acceptable.
If I’m right, then perhaps we should leave it at that?
No, for a few reasons. First, they may not believe that what you're offering is possible: they believe that the loss of a child would outweigh the pleasure that you'd give them. They think that you'd kill the child and give them something they'd enjoy otherwise, but that doesn't make up for losing a child. Though this may count as not believing that you'll follow through on your offer. Second, people's action-guiding preferences and enjoyment-governing preferences aren't always in agreement. Most people don't want to be wireheaded, and would reject it even if it were offered for free, but they'd still like it once subjected to it. Most people have an action-guiding preference of not letting their children die, regardless of what their enjoyment-governing preference is. Third, there's a sort-of Newcomb-like expected value decision at work, which is that deriving enjoyment from one's children requires valuing them in such a way that you'd reject offers of greater pleasure: it's similar to one-boxing.
Ah, OK. And when you talk about “values”, you mean exclusively the things that control what we like, and not the things that control what we want.
Have I got that right?
That is correct. As I see it, wants aren’t important in themselves, only as far as they’re correlated with and indicate likes.
OK. Thanks for clarifying your position.
How would you test this theory?
Give people pleasure, and see whether they say they like it more than other things they do.
This begs the question of whether the word “pleasure” names a real entity. How do you give someone “pleasure”? As opposed to providing them with specific things or experiences that they might enjoy? When they do enjoy something, saying that they enjoy it because of the “pleasure” it gives them is like saying that opium causes sleep by virtue of its dormitive principle.
Do you mean “forcibly wirehead people and see if they decide to remove the pleasure feedback”? Also, see this post.
That’s one way to do it, but not the only way, and it may not even be conclusive, because people’s wants and likes aren’t always in agreement. The test is to see whether they’d like it, not whether they’d want it.
Establishing a lower bound on the complexity of a moral theory that has all the features we want seems like a reasonable thing to do. I don't think the connotations of "fully general counterargument" are appropriate here. "Fully general" means you can apply it against a theory without really looking at the details of the theory. If you have to establish that the theory is sufficiently simple before applying the counterargument, you are referencing the details of the theory in a way that differentiates it from other theories, and the counterargument is not "fully general".
“This theory is too simple” is something that can be argued against almost any theory you disagree with. That’s why it’s fully general.
No, it isn't: anyone familiar with the linguistic havoc that the sociological theory of systems deigns to inflict on its victims will assure you of that!