Overall I think my views are pretty orthodox for LW/OB. But (and this is just my own impression) it seems like the LW/OB community generally considers utilitarian values to be fundamentally rational. My own view is that our goal values are truly subjective, so there isn’t a set of objectively rational goal values, although I personally prefer utilitarianism.
There probably is for each individual, but none that are universal.
True, there are rational goals for each individual, but those depend on their own personal values. My point was there doesn’t seem to be one set of objective goal values that every mind can agree on.
Not all minds can have common goals, but every human—and every mind we choose to give life to—can.
Values aren’t objective, but can well be said to be subjectively objective.
Um, the referenced article, The Psychological Unity of Humankind, isn’t right. Humans vary considerably—from total vegetables up to Einstein. There are many ways for the human brain to malfunction as a result of developmental problems or pathologies.
Similarly, humans have many different goals—from Catholic priests to suicide bombers. That is partly a result of the influence of memetic brain infections. Humans may share similar genes, but their memes vary considerably—and both contribute a lot to the adult phenotype.
That brings me to a LessWrong problem. Sure, this is Eliezer’s blog—but there seems to be much more uncritical parroting of his views among the commentators than is healthy.
And also many ways for human brains to develop differently, says the autistic woman who seems to be doing about as well at handling life as most people do.
Didn’t we even have a post about this recently? Really, once you get past “maintain homeostasis”, I’m pretty sure there’s not a lot that can be said to be universal among all humans, if we each did what we personally most wanted to do. It just looks like there’s more agreement than there is because of societal pressure on a large scale, and selection bias on an individual scale.
AdeleneDawner, I’m being off-topic for this thread, but have you posted on the intro thread?
I have now...
You don’t take into account that people can be wrong about their own values, with randomness in their activities not reflecting the unity of their real values.
Are you suggesting that you still think that the cited material is correct?!?
The supporting genetic argument is wrong as well. I explain in more detail here:
http://alife.co.uk/essays/species_unity/
As far as I can tell, it is based on a whole bunch of wishful thinking intended to make the idea of Extrapolated Volition seem more plausible, by minimising claims that there will be goal conflicts between living humans. With a healthy dose of “everyone’s equal” political correctness mixed in for the associated warm fuzzy feelings.
All fun stuff—but marketing, not science.
I recommend making this a top-level post, but expand a little more on the implications of your view versus Eliezer’s and C&T’s. This could be done in a follow-up post.
Simply stating your opinion is of little value; only a good argument turns it into useful knowledge (making authority cease to matter in the same movement).
You are not making your case, Tim. You’ve been here for a long time, but persist in not understanding certain ideas, while at the same time arguing unconvincingly for your own views.
You should either work on better presentation of your views, if you are convinced they have some merit, or on trying to understand the standard position; but repeating your position indignantly, over and over, is not constructive behavior. It’s called trolling.
I cited a detailed argument explaining one of the problems. You offer no counter-argument, and instead just rubbish my position, saying I am trolling. You then advise me to clean up my presentation. Such unsolicited advice simply seems patronising and insulting. I recommend either making proper counter-arguments—or remaining silent.
Remaining silent if you don’t have an argument that’s likely to convince, educate or at least interest your opponent is generally a good policy. I’m not arguing with you, because I don’t think I’ll be able to change your mind (without extraordinary effort that I’m not inclined to make).
Trolling consists in writing text that falls on the deaf ears of its intended audience. Expounding advanced calculus on a cooking forum, or to 6-year-olds, is trolling, even though you are not wrong. When people don’t want to hear you, are incapable of understanding you, or can’t stand the way you present your material, that’s trolling on your part.
OK, then. Regarding trolling, see: http://en.wikipedia.org/wiki/Internet_troll
It does not say that trolling consists in writing text that falls on the deaf ears of its intended audience. What it says is that trolls have the primary intent of provoking other users into an emotional response, or of generally disrupting normal on-topic discussion.
This is a whole thread where we are supposed to be expressing “dissenting views”. I do have some dissenting views—what better place for them than here?
I deny trolling activities. I am here to learn, to debate, to make friends, to help others, to get feedback—and so on—my motives are probably not terribly different from those of most other participants.
One thing that I am is critical. However, critics are an amazingly valuable and under-appreciated section of the population! About the only people I have met who seem to understand that are cryptographers.
Yes, but why expect unity? Clearly there is psychological variation amongst humans, and I should think it a vastly improbable coincidence that none of it has anything to do with real values.
Well, of course I don’t mean literal unity, but the examples that immediately jump to mind of different things about which people care (what Tim said) are not representative of their real values.
As for the thesis above, its motivation can be stated thusly: If you can’t be wrong, you can never get better.
How do you know what their real values are? Even after everyone’s professed values get destroyed by the truth, it’s not at all clear to me that we end up in roughly the same place. Intellectuals like you or me might aspire to growing up to be a superintelligence, while others seem to care more about pleasure. By what standard are we right and they wrong? Configuration space is vast: however much humans might agree with each other on questions of value compared to an arbitrary mind (clustered as we are into a tiny dot of the space of all possible minds), we still disagree widely on all sorts of narrower questions (if you zoom in on the tiny dot, it becomes a vast globe, throughout which we are widely dispersed). And this applies on multiple scales: I might agree with you or Eliezer far more than I would with an arbitrary human (clustered as we are into a tiny dot of the space of human beliefs and values), but ask a still narrower question, and you’ll see disagreement again. I just don’t see how the granting of veridical knowledge is going to wipe away all this difference into triviality. Some might argue that while we can want all sorts of different things for ourselves, we might be able to agree on some meta-level principles about what we want to do: we could agree to have a diverse society. But this doesn’t seem likely to me either; that kind of type distinction doesn’t seem to be built into human values. What could possibly force that kind of convergence?
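The “tiny dot that becomes a vast globe” picture can be made concrete with a toy numerical sketch. Everything below—the dimensionality, the spread of the “human” cluster, the uniform sampling—is an invented assumption purely for illustration, not anything claimed in the discussion:

```python
import random

random.seed(0)

DIM = 50  # dimensionality of a toy "value space" (arbitrary choice)


def arbitrary_mind():
    """A point drawn from the whole value space."""
    return [random.uniform(-1.0, 1.0) for _ in range(DIM)]


def human(center, spread=0.05):
    """A human: a small perturbation around a shared species-typical center."""
    return [c + random.uniform(-spread, spread) for c in center]


def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5


center = arbitrary_mind()
humans = [human(center) for _ in range(100)]
minds = [arbitrary_mind() for _ in range(100)]

human_pairs = [distance(humans[i], humans[j])
               for i in range(len(humans)) for j in range(i + 1, len(humans))]
cross_pairs = [distance(h, m) for h in humans for m in minds]

print("mean human-human distance:    ", sum(human_pairs) / len(human_pairs))
print("mean human-arbitrary distance:", sum(cross_pairs) / len(cross_pairs))
# At the scale of the whole space, the humans form one tiny dot; but their
# pairwise distances are not zero, and once attention is restricted to the
# dot, that residual disagreement is all there is left to argue about.
```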
Okay, I’m writing this one down.
Your conclusion may be right, but the HedWeb isn’t strong evidence—as far as I recall, David Pearce holds a philosophically flawed belief called “psychological hedonism”, which says that all that motivates humans is pleasure and pain, and that therefore nothing else matters, or some such. So I would say that his moral system has not yet had to withstand a razing attempt from all the truth hordes that are out there roaming the Steppes of Fact.
If “the thesis above” is the unity of values, this is not an argument. (I agree with ZM.)
It’s an argument for its being possible that behavior isn’t representative of the actual values. That actual values are more unified than behaviors are is a separate issue.
It seems to me that it’s an appeal to the good consequences of believing that you can be wrong.
Well, obviously. So I’m now curious: what do you read in the discussion that makes this remark seem worth making?
That the discussion was originally about whether the unity of values is true; that you moved from this to whether we should believe in it without clearly marking the change; that this is very surprising to me, since you seem elsewhere to favor epistemic over instrumental rationality.
I’m uncertain how to parse this—a little redundancy, please! My best guess is that you are saying that I moved the discussion from the question of the fact of the ethical unity of humankind to the question of whether we should adopt a belief in the ethical unity of humankind.
Let’s review the structure of the argument. First, there is the psychological unity of humankind, an inborn similarity of preferences. Second, there is behavioral diversity, with people apparently caring about very different things. I state that the ethical diversity is less than the currently observed behavioral diversity. Next, I anticipate the common position of people who don’t trust in the possibility of being morally wrong; simplifying:
If a person likes watching TV, and spends much time watching TV, he must really care about TV, and saying that he’s wrong and actually watching TV is a mistake is just meaningless.
To this I reply with “If you can’t be wrong, you can never get better.” This is not an endorsement of self-deceivingly “believing” that you can be wrong, but an argument that it is a mistake to believe that you can never be morally wrong, if it’s possible to get better.
Correct.
I agree, and agree that the argument form you paraphrase is fallacious.
Are you saying you were using modus tollens – you can get better (presumed to be accepted by all involved), therefore you can be wrong? This wasn’t clear, especially since you agreed that it’s an appeal to consequences.
Right. Since I consider epistemic rationality, like any other tool, to be an arrangement that brings about what I prefer, either in itself or instrumentally, I didn’t see an “appeal to consequences” of a belief as sufficiently distinct from a desire to ensure the truth of that belief.
Human values are frequently in conflict with each other—which is the main explanation for all the fighting and wars in human history.
The explanation for this is pretty obvious: humans are close relatives of animals whose main role in life has typically been ensuring the survival and reproduction of their genes.
Unfortunately, everyone behaves as though they want to maximise the representation of their own genome—and such values conflict with the values of practically every other human on the planet, except perhaps for a few close relatives—which explains cooperation within families.
This doesn’t seem particularly complicated to me. What exactly is the problem?
It would be great if you could expand on this.
You may be right. If so, fixing it requires greater specificity. If you have time to write top-level posts that would be great. Regardless, I value the contributions you make in the comments.
Some people tend to value things that people happen to have in common; others are more likely to value things that people have less in common.
I contend otherwise. The utilitarian model comes down to a subjective utility calculation which is impossible (I use the word impossible realizing the extremity of the word) to do currently. This can be further explicated somewhere else, but without an unbiased consciousness—one which does not fall prey to random changes of desire, misinterpretations, or miscalculations (in other words, the AI we wish to build)—there cannot be a reasonable calculation of utility that accurately models a basket of preferences. As a result it is neither a reasonable nor a reliable method for determining outcomes or understanding individual goals.
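To make concrete what kind of calculation is at issue, here is a minimal sketch of a utility calculation over a “basket” of preferences. Every weight, score, and name below is invented purely for illustration; those numbers are precisely the subjective, shifting inputs that the argument says we cannot currently pin down:

```python
# Hypothetical preference weights for a single agent (invented numbers).
preferences = {
    "health":  0.5,
    "leisure": 0.3,
    "status":  0.2,
}

# Hypothetical scores for how well each action serves each preference.
outcomes = {
    "exercise": {"health": 0.9, "leisure": 0.2, "status": 0.4},
    "watch_tv": {"health": 0.1, "leisure": 0.8, "status": 0.1},
}


def expected_utility(action, weights):
    """Weighted sum of how well an action serves each preference."""
    scores = outcomes[action]
    return sum(weights[p] * scores[p] for p in weights)


for action in outcomes:
    print(action, round(expected_utility(action, preferences), 3))
# exercise 0.59, watch_tv 0.31 -- but only given the made-up weights above;
# change the weights (or mis-estimate the scores) and the ranking can flip.
```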
True, there may be instances in which a crude utilitarian metric can be devised that accurately represents reality at one point in time; however, the consequentialist argument seems to presume that the accumulated outcome of any specific action taken through consequentialist thought will align reasonably, if not perfectly, with the predicted outcome. This is where utilitarianism fails epistemologically—the outcomes are impossible to predict. Exogeny, anyone?
In fact, what seems to hold truest to form in setting long-term goals and short-term actions is the virtue ethics which Aristotle so eloquently explicated. This is how, in my view, people come to their correct conclusions while falsely attributing their positive outcomes to other frameworks such as utilitarianism—e.g. someone thinking, “I think the outcome of this particular decision will be to my net benefit in the long run, because this will lead to that, etc.” To be sure, it is possible that a utilitarian calculation could agree with the virtue of the decision if the known variables are finite and the exogenous variables are by and large irrelevant; however, it seems to me that when the variables are complicated past currently available calculations, understanding the virtue behind an action or behavior—or the virtues indigenous to the actor—will yield better long-term results.
It is odd, because objective Bayesian probability is rooted in Aristotelian logic, which is predicated on virtue ethics; and since Eliezer seems to be very focused on Bayesian probability, that would seem to conflict with consequentialist utilitarianism.
However, I may have read the whole thing wrong.
Edit: If there is significant disagreement, please explicate so I can see how my reasoning is unclear or believed to be flawed.
Whether a given process is computationally feasible or not has no bearing on whether it’s morally right. If you can’t do the right thing (whether due to computational constraints or any other reason), that’s no excuse to go pursue a completely different goal instead. Rather, you just have to find the closest approximation of right that you can.
If it turns out that e.g. virtue ethics produce consistently better consequences than direct attempts at expected utility maximization, then that very fact is a consequentialist reason to use virtue ethics for your object-level decisions. But a consequentialist would do so knowing that it’s just an approximation, and be willing to switch if a superior heuristic ever shows up.
See Two-Tier Rationalism for more discussion, and Ethical Injunctions for why you might want to do a little of this even if you can directly compute expected utility.
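As a rough illustration of that two-tier move—choosing an object-level rule by its consequences, while staying ready to switch—here is a toy simulation. The “honesty rule”, the biased case-by-case estimator, and all the payoff distributions are invented assumptions, not anyone’s actual proposal:

```python
import random

random.seed(1)


def honesty_rule(situation):
    """A virtue-style rule: always tell the truth, no calculation."""
    return "tell_truth"


def case_by_case(situation):
    """Direct expected-utility estimation, with an invented self-serving bias:
    the payoff of lying is systematically overestimated, plus noise."""
    biased_lie_estimate = situation["lie_payoff"] + 4.0 + random.gauss(0, 2.0)
    return "lie" if biased_lie_estimate > situation["truth_payoff"] else "tell_truth"


def actual_payoff(situation, action):
    return situation["truth_payoff"] if action == "tell_truth" else situation["lie_payoff"]


def average_payoff(heuristic, trials=10000):
    total = 0.0
    for _ in range(trials):
        situation = {"truth_payoff": random.uniform(0, 10),
                     "lie_payoff": random.uniform(0, 6)}
        total += actual_payoff(situation, heuristic(situation))
    return total / trials


candidates = {"honesty_rule": honesty_rule, "case_by_case": case_by_case}
results = {name: average_payoff(h) for name, h in candidates.items()}

print(results)
print("adopt:", max(results, key=results.get))
# In this toy setup the simple rule beats the biased direct calculation, so a
# consequentialist adopts the rule -- because of its consequences -- and would
# drop it the moment a better-performing heuristic showed up.
```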
Just because Aristotle founded formal logic doesn’t mean he was right about ethics too, any more than about physics.
This assumes that we know which track the right thing to do is on. You cannot approximate if you do not even know what it is you are trying to approximate.
You can infer, or state, that maximizing happiness is what you are trying to approximate; however, that may not in fact be the right thing.
I am familiar with Two-Tier Rationalism and all other consequentialist philosophies. All must boil down eventually to a utility calculation or an appeal to virtue—as the second tier does. One problem with the Two-Tier solution as it is presented is that its solutions to the consequentialist problems are based on vague terms:
Must be moral principles that identify a situation or class of situations and call for an action in that/those situation(s).
Ok, WHICH moral principles, and based on what? How are we to know the right action in any particular situation?
Or on virtue:
Must guide you in actions that are consistent with the expressions of virtue and integrity.
I do take issue with Alicorn’s definition of virtue-busting, as it relegates virtue to mere patterns of behavior.
Therefore in order to be a consequentialist you must first answer “What consequence is right/correct/just?” The answer then is the correct philosophy, not simply how you got to it.
Consequentialism then may be the best guide to virtue but it cannot stand on its own without an ideal. That ideal in my mind is best represented as virtue. Virtue ethics then are the values to which there may be many routes—and consequentialism may be the best.
Edit: Seriously, people—if you are going to downvote my reply, then explain why.