I agree that finding the optimal course of action for humans doesn’t mean much if it doesn’t include ethics. But in order to do that, humans often construct and reason within systems that don’t include ethics in their optimization criteria.
There is a sometimes subtle but important difference between thinking about and discussing “this is the optimal course of action when optimizing for X and only X” and saying “you obviously should be optimizing for X and only X.”
I argue that the former is sometimes a useful tool for the latter, because it allows one to survey how the possible action space differs as you take away or add axioms to your goals. Without it, it is impossible to think about what people who might have such axioms would do, or how dangerous or benign to your goals the development of such goal-seeking systems might be. You need, in essence, to know the opportunity costs of many of your axioms and how you will measure up, either in a game-theoretic sense (because you may find yourself in a conflict with such an agent and need to assess its capabilities and strategic options) or in an evolutionary sense (where you wish to understand how much fitness your values have in the set of all possible values, and how much you need to be concerned with evolution messing up your long-term plans).
In short, I think that it is generally not unethical to think about how a specific hypothetical unethical mind would think. It may indeed be risky in some marginal cases, but it is also potentially very rewarding in expected utility.
One can say that while this is theoretically fine, in practice it is actually quite risky for people, with their poor-quality minds. But let me point out that people are generally biased against stabbing 10 people and similar unpleasant courses of action. One can perhaps say that a substantial minority isn’t, and that using self-styled ethical agents’ cognitive capacity to emulate the thinking of (in their judgement) unethical agents, and sharing that knowledge willy-nilly with others, will lower the costs and raise the efficiency of unethical agents enough to cancel out or overwhelm the gains of the “ethical agents”.
This, however, seems to lead towards a generalized argument against all rationality and sharing of knowledge, because all of it involves “morally constrained” agents potentially sharing the fruits of their cognitive work with less constrained agents who then out-compete them in the struggle to order the universe into certain states. I maintain this is a meaningless fear: unless there is good evidence that sharing particular knowledge (say, schematics for a nuclear weapon or death ray) or rationality-enhancing techniques will cause more harm than good, one can rely on more people being biased against doing harmful things than not, and thus using the knowledge for non-harmful purposes. But this is a pretty selected group; I would argue the potential for abuse among the readers of this forum is much lower than average. Also, how in the world are we supposed to be concerned about nuclear weapons or death rays if we don’t have any good idea whether they are even possible? Can you ethically condemn, in strong terms, the construction of a non-functioning death ray? Is it worth invading a country to stop the construction of a non-functioning death ray?
And note that at this point I’m already basically blowing the risks way out of proportion, because quite honestly the disutility from a misused death ray is orders of magnitude larger than anything that can arise from what amounts to some unusually practical tips on improving one’s social life.
But in order to do that, humans often construct and reason within systems that don’t include ethics in their optimization criteria.
How can something not include “ethics” in its “optimization criteria”? Do you just mean that you’re looking at a being with a utility function that does not include the putative human universals?
ETA: Confusion notwithstanding, I generally agree with the parent.
EDIT: (responding to edits)
This, however, seems to lead towards a generalized argument against all rationality and sharing of knowledge, because all of it involves “morally constrained” agents potentially sharing the fruits of their cognitive work with less constrained agents who then out-compete them in the struggle to order the universe into certain states.
I actually wasn’t thinking anything along those lines.
people are generally biased against stabbing 10 people and similar unpleasant courses of action
Sure, but people do unhealthy / bad things all the time, and are biased in favor of many of them. I’m not supposing that someone might “use our power for evil” or something like that. Rather, I think we should include our best information.
A discussion of how best to ingest antifreeze should not go by without someone mentioning that it’s terribly unhealthy to ingest antifreeze, in case a reader didn’t know that. Antifreeze is very tasty and very deadly, and children will drink a whole bottle if they don’t know any better.
Sure, but people do unhealthy / bad things all the time, and are biased in favor of many of them. I’m not supposing that someone might “use our power for evil” or something like that. Rather, I think we should include our best information.
Our disagreement seems to boil down to:
A … net cost of silly biased human brains letting should cloud their assessment of is.
B … net cost of silly biased human brains letting is cloud their assessment of should.
Statement: Among Lesswrong readers: P(A>B) > P(B>A)
I say TRUE.
You say FALSE.
Do you (and the readers) agree with this interpretation of the debate?
Do you (and the readers) agree with this interpretation of the debate?
I don’t.
My point is that a discussion of PUA, by its nature, is a discussion of “should”. The relevant questions are things like “How does one best achieve X?” Excluding ethics from that discussion is wrong, and probably logically inconsistent.
I’m actually still a bit unclear on what you are referring to by this “letting should cloud their assessment of is” and its reverse.
Aha. I agree. Do people really make that mistake a lot around here?
Also note that ‘should’ goes on the map too. Not just in the “here be dragons” sense, but also indicated by the characterization that the way is “best”.
Do people really make that mistake a lot around here?
There is a whole host of empirically demonstrated biases in humans that work in these two directions under different circumstances. LWers may be aware of many of them, but they are far from immune.
Also note that ‘should’ goes on the map too. Not just in the “here be dragons” sense, but also indicated by the characterization that the way is “best”.
Agreed, but should is coloured in with different ink on the map than is. I admit mapping should can prove to be as much of a challenge as mapping is.
I see my proposal of two quarantined threads as a proposal to stop messing up the map by all of us drawing with the same colour at the same time: first draw out “is” in black, and then, once the colour is dry, add in “should” in red so we don’t forget where we want to go. Then use that as our general-purpose map, and update both red and black as we move along our path and new, previously unavailable empirical evidence meets our eyes.
I see my proposal of two quarantined threads as a proposal to stop messing up the map by all of us drawing with the same colour at the same time: first draw out “is” in black, and then, once the colour is dry, add in “should” in red so we don’t forget where we want to go. Then use that as our general-purpose map, and update both red and black as we move along our path and new, previously unavailable empirical evidence meets our eyes.
I didn’t point this out before, but this is actually a good argument in favor of the ‘ethics later’ approach. It makes no sense to start drawing paths on your map before you’ve filled in all of the nodes. (Counterargument: assume stochasticity / a non-fully-observable environment).
Also, if this technique actually works, it should be able to be applied to political contexts as well. PUA is a relatively safer area to test this, since while it does induce mind-killing (a positive feature for purposes of this test) it does not draw in a lot of negative attention from off-site, which is one of the concerns regarding political discussion.
I am majorly in favor of researching ways of reducing/eliminating mind-killing effects.
I see my proposal of two quarantined threads as a proposal to stop messing up the colours: first draw out “is” in black, and then, once the colour is dry, add in “should” in red so we don’t forget where we want to go. Then use that as our general-purpose map.
So an analogous circumstance would be: if we were constructing a weighted directed graph representing routes between cities, we’d first put in all the nodes and connections and weights in black ink, and then plan the best route and mark it in red ink?
If so, that implies the discussion of “PUA” would include equal amounts of “X results in increased probability of the subject laughing at you” and “Y results in increased probability of the subject slapping you” and “Z results in increased probability of the subject handing you an aubergine”.
If the discussion is not goal-directed, I don’t see how it could be useful, especially for such a large space as human social interaction.
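(To make the route-planning metaphor concrete, here is a minimal sketch in Python. The city names and edge weights are hypothetical, and Dijkstra’s algorithm merely stands in for whatever “best route” procedure one prefers: the “black ink” pass records only descriptive facts about the graph, and a separate “red ink” pass then picks a destination and computes a best route toward it.)

```python
import heapq

# "Black ink": the descriptive layer -- just nodes, edges, and weights.
# City names and weights here are hypothetical, purely for illustration.
routes = {
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"B": 1, "D": 8},
    "D": {},
}

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: return (total_cost, path) from start to goal."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph[node].items():
            if neighbour not in visited:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return float("inf"), []

# "Red ink": only now do we pick a destination and mark the best route to it.
print(shortest_path(routes, "A", "D"))  # (8, ['A', 'C', 'B', 'D'])
```

The point of the two-pass structure is simply that the graph-building code never mentions a goal; goals enter only when a route is requested.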
But it would be goal-directed: “To catalogue beliefs and practices of PUAs and how well they map to reality.”
Without breaking the metaphor: we are taking someone else’s map and comparing it to our own map, our goal being to update our map where their map of reality (black ink) is clearly better, or at the very least to learn whether their map sucks. And to make this harder, we aren’t one individual but a committee comparing the two maps. Worse, some of us love their black ink more than their red one, and vice versa, and can’t shut up about them. Let’s set up separate work meetings for the two issues, so we know that black-ink arguments have no place in meeting number 2, and that anyone making them there is indulging his interests at the expense of good map-making.
The reason I favour black first is that by going red first we risk drawing castles in the clouds rather than realizable destinations.
To catalogue beliefs and practices of PUAs and how well they map to reality.
Oops.
Yes, that’s absolutely possible, and is (on reflection) what we have been talking about this whole time.
So the challenge, then, would be to distinguish the black from red ink in PUA, and make sure we’re only talking about the black ink in the ‘no ethics’ thread.
I have no intuitions about how feasible that is, but I withdraw my assertion it is ‘impossible’, as I was clearly talking about something else.
So the challenge, then, would be to distinguish the black from red ink in PUA, and make sure we’re only talking about the black ink in the ‘no ethics’ thread.
Yes! We can’t take PUAs at their word. Even when they believe with unwavering certainty that they are talking black ink, they are in the best of cases as confused as we are, and in the worst quite a bit more confused. Also, some people just like to lie to others to get to the red destination faster (heh).
It is hard, but apparently enough posters think we can make a decent map of the reality of romance that they try to write up posts and start discussions about it. Limiting ourselves to the more popular PUAs, I think we can also get a pretty good idea of what their idea of reality is.
Comparing the two and seeking evidence to prove or disprove our speculations about reality seems like a worthy exercise.
I think this is a horrendously bad idea—“more popular” is not always positively correlated with “more correct”. ;-)
Also, “more popular” isn’t always positively correlated with “more useful”, either. The most popular PUA material is about indirect game, social tricks, and the like… and that’s why it’s popular. That doesn’t mean those things are the most useful ways to get into relationships.
Consider Bob, who believes he is unattractive to women. Will Bob be more interested in course A which tells him there are secrets to make women like him, or course B, which teaches him how to notice which women are already attracted to him? His (quite common) belief makes course B less attractive, even if course B would be far more useful.
Of available courses that fit Bob’s existing belief structure, the ones that will be most popular will be the ones that purport to explain his unattractiveness (and the attractiveness of other men) in a way that Bob can understand. And if they offer him a solution that doesn’t sound too difficult (i.e. act like a jerk), then this will be appealing.
What’s more, because Bob is convinced of his unattractiveness and fundamental low worth where women are concerned, Bob will be most attracted to courses that involve pretending to be someone he is not: after all, if who he is is unattractive, then of course he needs to pretend to be somebody else, right?
I could go on, but my point here is that popularity is a horrible way to select what to discuss, because there’s a systematic bias towards “tricks” as being the most marketable thing. However, even companies that sell tricks on the low end of the market to get people interested, usually sell some form of self-improvement as their “advanced” training. (That is, stuff that involves people actually being a different sort of man, rather than simply pretending to be one.)
(There are probably exceptions to this, of course.)
Anyway, a better selection criterion would be goal relevance. Most PUA sales material has end-user goals explicitly or implicitly stated—why not select materials on the basis of goals that LW has use for, and evaluate them for how well they achieve those stated goals?
What popular PUA material is saying matters quite a bit, because it helps us understand the PUA community as a cultural phenomenon; it can also help us by exposing some biases that probably exist, to some degree and in harder-to-detect form, in higher-quality material. Perhaps well-respected or esteemed authors (within the PUA community), rather than the ones that sell the most material (where would we even get that data?), are even better for this purpose.
But overall I’m not saying we shouldn’t extend our analysis to PUAs that are less well known but seem particularly compelling to LessWrong readers. The thing is, they have to be put in context.
I think this is a horrendously bad idea—“more popular” is not always positively correlated with “more correct”. ;-)
How is that bad? The whole point is to locate actual beliefs and test them. If they’re incorrect, all the better—our job is probably easier. By focusing on the general population, we can cull down the potentially-useful beliefs without needing to ourselves bring ethics into the discussion. Thus, we only test “X results in a 20% chance of being handed an aubergine” if that is a belief some PUA practitioner actually holds.
Also, “more popular” isn’t always positively correlated with “more useful”, either. The most popular PUA material is about indirect game, social tricks, and the like… and that’s why it’s popular. That doesn’t mean those things are the most useful ways to get into relationships.
Anyway, a better selection criterion would be goal relevance. Most PUA sales material has end-user goals explicitly or implicitly stated—why not select materials on the basis of goals that LW has use for, and evaluate them for how well they achieve those stated goals?
These are all ‘red ink’ concerns. The ‘black ink’ thread is supposed to evaluate beliefs of PUAs without reference to how effective they are at achieving goals. You can’t presuppose what the goals are supposed to be or determine whether they’re optimal ways of achieving goals. Thus, we can test for the presence of aubergines, but we don’t need to know whether we actually want aubergines.
These are all ‘red ink’ concerns. The ‘black ink’ thread is supposed to evaluate beliefs of PUAs without reference to how effective they are at achieving goals.
My initial response to your comment, is, “WTF?”
My second, more polite response, is simply that your suggestion isn’t particularly compatible with finding out useful things, since your proposed selection criteria will tend to filter them out before you have anything to evaluate.
I’m talking about following the strategy laid out by Konkvistador above. Have you been following this thread?
My second, more polite response, is simply that your suggestion isn’t particularly compatible with finding out useful things, since your proposed selection criteria will tend to filter them out before you have anything to evaluate.
Possibly. But the goal was to have separate threads for non-normative and normative claims, and this is how it will be accomplished. My initial response was “That cannot be done”, probably from intuitions similar to yours, but that turned out to be false.
But the goal was to have separate threads for non-normative and normative claims,
My understanding was that the goal was to have a useful discussion, minus the mindkilling. AFAICT your proposal of the means by which to accomplish this, is to throw out the usefulness along with the mindkilling.
Different goals. The goal was indeed to have a useful discussion minus the mindkilling. The proposed subgoal was to have one thread for non-normative claims and another for normative claims. It might be throwing the baby out with the bathwater, but if you think so then you should (at a much lower level of nesting) propose a different strategy for having the discussion minus the mindkilling. Or at least say you have a problem with that subgoal at the start of the thread, rather than the end.
I didn’t mean to misrepresent your position or the debate so far. I was just trying to communicate how I’m seeing the debate. Hope you didn’t take my question the wrong way! :)
Ethics is a whole different thing than putative human universals. Very few things that I would assert as ethics would I claim to be human universals. “Normative human essentials” might fit in that context. (By way of illustration, we all likely consider ‘Rape Bad’ as an essential ethical value but I certainly wouldn’t say that’s a universal human thing. Just that the ethics of those who don’t think Rape Is Bad suck!)
Different ethical systems are possible to implement even on “normal” human hardware (which is far from the set of all humans!). We have ample evidence in favour of this hypothesis. I think Westerners in particular seem especially apt to forget to think of this when convenient.
I think I agree with what you are saying but I can’t be sure. Could you clarify it for me a tad (seems like a word is missing or something.)
Humans can and do value different things. Sometimes, even when they start out valuing the same things, different experiences/circumstances lead them to systematize this into different, reasonably similarly consistent ethical systems.
Westerners certainly seem to forget this type of thing. Do others really not so much?
Modern Westerners often identify their values as being the product of reason, which must be universal.
While this isn’t exactly rare, it is, I think, less pronounced in most human cultures throughout history. I think a more common explanation than “they just haven’t sat down and thought about stuff and seen we are right yet” is “they are wicked” (they have different values). Which obviously has its own failure modes, just not this particular one.
It would be interesting to trace the relationship between the idea of universal moral value and the idea of universal religion. Moldbug argues that the latter pretty much spawned the former (that’s probably a rough approximation), though I don’t trust his scholarship on the history of ideas that much. I don’t know to what extent the ancient Greeks and Romans and Chinese and Arabs considered their values to be universal (though apparently Roman legal scholars had the concept of “natural law”, which they got from the Greeks, and which seems to map pretty closely to that idea, independently of Christianity and related universal religions).
Ethics can arguably be reduced to “what is my utility function?” and “how can I best optimize for it?” So I’m confused what it would mean for a being not to include ethics in its optimization criteria. I had guessed Konkvistador was referring to some sort of putative human universals.
I’m still not sure what you mean when you say their ethics suck, or what criteria you use when alleging something as ethics.
Ethics aren’t about putative human universals. I’m honestly not sure how to most effectively explain that since I can’t see a good reason why putative human universals came up at all!
I had guessed Konkvistador was referring to some sort of putative human universals.
Cooperative tribal norms seem more plausible. Somebody thinking their ethics are human universals requires that they, well, are totally confused about what is universal about humans.
Somebody thinking their ethics are human universals requires that they, well, are totally confused about what is universal about humans.
Not at all. Many prominent ethicists and anthropologists and doctors agree that there are human universals. And any particular fact about a being has ethical implications.
For example, humans value continued survival. There are exceptions and caveats, but this is something that occurs in all peoples and amongst nearly all of the population for most of their lives (that last bit you can just about get a priori). Also, drinking antifreeze will kill a person. Thus, “one should not drink antifreeze without a damn good reason” is a human universal.
If these people don’t frequently disagree with others about ethics, they become unemployed.
This group’s opinions on the subject are less correlated with reality than most groups’.
ETA: I have no evidence for this outside of some of their outlandish positions, the reasons for which I have some guesses but have not looked into; this is basically a rhetorical argument.
Actually, if anything I think I’d be more likely to believe that the actual job security ethicists enjoy tends to decrease their opinions’ correlation with reality, compared with the beliefs that people who will be fired for doing a bad job hold about their respective fields.
If it was intended to be merely a formal argument, compare:
If prominent mathematicians don’t frequently disagree with others about math, they become unemployed.
This group’s opinions on the subject are less correlated with reality than most groups’.
I thought you had an overwhelming point there until my second read. Then I realized that the argument would actually be a reasonable argument if the premise wasn’t bogus. In fact, it would be much stronger than the one about ethicists. If mathematicians did have to constantly disagree with other people about maths, it would be far better to ask an intelligent amateur about maths than a mathematician.
You can’t use an analogy to an argument whose (false) premise would support its (false) conclusion as a reason why arguments of that form don’t work!
If mathematicians did have to constantly disagree with other people about maths, it would be far better to ask an intelligent amateur about maths than a mathematician.
The best reason I could come up with why someone would think ethicists need to disagree with each other to keep their jobs, is that they need to “publish or perish”. But that applies equally well to other academic fields, like mathematics. If it’s not true of mathematicians, then I’m left with no reason to think it’s true of ethicists.
You appear to have presented the following argument:
If prominent mathematicians don’t frequently disagree with others about math, they become unemployed.
This group’s opinions on the subject are less correlated with reality than most groups’.
… as an argument which would be a bad inference to make even if the premise was true—bad enough to be worth using as an argument by analogy. You seem to be defending this position even when it is pointed out that the conclusion would obviously follow from the counterfactual premise. This was a time when “Oops” (or perhaps just dropping the point) would have been a better response.
I’d expect “publish or perish” to apply broadly in academic fields, but there are several broad categories of ways in which one can publish. One can indeed come up with a disagreement with a view that someone else is promoting, but one can also come up with a view on an entirely different question than anyone else addresses. In a very abstract sense, this might count as a disagreement, a kind of claim that the “interesting” area of a field is located in a different place than existing publications have pointed to, but this isn’t quite the same thing as claiming that the existing claims about the previously investigated area are wrong.
Mathematicians—along with scientists—discover new things (what is a proof other than the discovery of a new mathematical property?). That’s what their job is. In order for Ethicists to be comparable, wouldn’t they need to discover new ethics?
In order for Ethicists to be comparable, wouldn’t they need to discover new ethics?
Sure, and they do. One of the three major subfields of ethics is “applied ethics”, in which ethicists simply analyze actual or potential circumstances using their expertise in ethics. The space for that is probably as big as the space for mathematical proofs.
So the premise, then, is that the institution of Ethics would come tumbling down if it were not the case that ethicists seemed to have special knowledge that the rest of the populace did not?
If so, I think it again applies equally well to any academic discipline, and is false.
So the premise, then, is that the institution of Ethics would come tumbling down if it were not the case that ethicists seemed to have special knowledge that the rest of the populace did not?
Yes.
applies equally well to any academic discipline
Other academic disciplines are tested against reality because they make “is” statements. Philosophy is in a middle ground, I suppose.
Which reality was it that the ethicists were not correlated with again? Oh, right, making factual claims about universals of human behavior. I don’t disbelieve you.
Somebody thinking their ethics are human universals requires that they, well, are totally confused about what is universal about humans.
Not at all. Many prominent ethicists and anthropologists and doctors agree that there are human universals. And any particular fact about a being has ethical implications.
Was the “Not at all.” some sort of obscure sarcasm?
No. I was disagreeing with your point, and then offering supporting evidence. You took one piece of my evidence and noted that it does not itself constitute a refutation. I don’t know how it is even helpful to point this out; if I thought that piece of evidence constituted a refutation, I would not have needed to say anything else. Also, most arguments do not contain or constitute refutations.
This does not refute. It also does not constitute an argument against the position you are contradicting in the way you seem to be thinking. The sequence of quote → “not at all” → argument is a non sequitur. Upon being prompted to take a second look, this should be apparent to most readers.
Unless an individual had an ethical system which is “people should behave in accordance with any human universals, and all other behaviors are ethically neutral”, the fact that there are human universals doesn’t have any particular impact on the claim. Given that a priori humans can’t be going against human universals, that particular special case is basically pointless.
I think I’m confused about what your actual claim was. It seemed to be that human universals have nothing to do with ethics. But it’s easy to show that some things that are putatively human universals have something to do with ethics, and a lot of the relevant people think putative human universals have a large role in ethics.
Was it just that ethics is not equivalent to human universals? If so, I agree.
Sorry for posting the first few paragraphs and then immediately editing to add the later ones. It was a long post and I wanted to stop at several points but I kept getting hit by “one more idea/comment/argument” moments.
I agree that finding the optimal course of action for humans doesn’t mean much if it doesn’t include ethics. But in order to do that, humans often construct and reason within systems that don’t include ethics in their optimization criteria.
There is a sometimes subtle but important difference between thinking about and discussing “this is the optimal course of action when optimizing for X and only X” and saying “you obviously should be optimizing for X and only X.”
I argue that the former is sometimes a useful tool for the latter, because it allows one to survey how the possible action space differs as you take away or add axioms to your goals. Without it, it is impossible to think about what people who might have such axioms would do, or how dangerous or benign to your goals the development of such goal-seeking systems might be. You need, in essence, to know the opportunity costs of many of your axioms and how you will measure up, either in a game-theoretic sense (because you may find yourself in a conflict with such an agent and need to assess its capabilities and strategic options) or in an evolutionary sense (where you wish to understand how much fitness your values have in the set of all possible values, and how much you need to be concerned with evolution messing up your long-term plans).
In short, I think that it is generally not unethical to think about how a specific hypothetical unethical mind would think. It may indeed be risky in some marginal cases, but it is also potentially very rewarding in expected utility.
One can say that while this is theoretically fine, in practice it is actually quite risky for people, with their poor-quality minds. But let me point out that people are generally biased against stabbing 10 people and similar unpleasant courses of action. One can perhaps say that a substantial minority isn’t, and that using self-styled ethical agents’ cognitive capacity to emulate the thinking of (in their judgement) unethical agents, and sharing that knowledge willy-nilly with others, will lower the costs and raise the efficiency of unethical agents enough to cancel out or overwhelm the gains of the “ethical agents”.
This, however, seems to lead towards a generalized argument against all rationality and sharing of knowledge, because all of it involves “morally constrained” agents potentially sharing the fruits of their cognitive work with less constrained agents who then out-compete them in the struggle to order the universe into certain states. I maintain this is a meaningless fear: unless there is good evidence that sharing particular knowledge (say, schematics for a nuclear weapon or death ray) or rationality-enhancing techniques will cause more harm than good, one can rely on more people being biased against doing harmful things than not, and thus using the knowledge for non-harmful purposes. But this is a pretty selected group; I would argue the potential for abuse among the readers of this forum is much lower than average. Also, how in the world are we supposed to be concerned about nuclear weapons or death rays if we don’t have any good idea whether they are even possible? Can you ethically condemn, in strong terms, the construction of a non-functioning death ray? Is it worth invading a country to stop the construction of a non-functioning death ray?
And note that at this point I’m already basically blowing the risks way out of proportion, because quite honestly the disutility from a misused death ray is orders of magnitude larger than anything that can arise from what amounts to some unusually practical tips on improving one’s social life.
How can something not include “ethics” in its “optimization criteria”? Do you just mean that you’re looking at a being with a utility function that does not include the putative human universals?
ETA: Confusion notwithstanding, I generally agree with the parent.
EDIT: (responding to edits)
I actually wasn’t thinking anything along those lines.
Sure, but people do unhealthy / bad things all the time, and are biased in favor of many of them. I’m not supposing that someone might “use our power for evil” or something like that. Rather, I think we should include our best information.
A discussion of how best to ingest antifreeze should not go by without someone mentioning that it’s terribly unhealthy to ingest antifreeze, in case a reader didn’t know that. Antifreeze is very tasty and very deadly, and children will drink a whole bottle if they don’t know any better.
Our disagreement seems to boil down to:
A … net cost of silly biased human brains letting should cloud their assessment of is.
B … net cost of silly biased human brains letting is cloud their assessment of should.
Statement: Among Lesswrong readers: P(A>B) > P(B>A)
I say TRUE. You say FALSE.
Do you (and the readers) agree with this interpretation of the debate?
I don’t.
My point is that a discussion of PUA, by its nature, is a discussion of “should”. The relevant questions are things like “How does one best achieve X?” Excluding ethics from that discussion is wrong, and probably logically inconsistent.
I’m actually still a bit unclear on what you are referring to by this “letting should cloud their assessment of is” and its reverse.
Should and is assessed as they should be:
I have a good map; it shows the best way to get from A to B and … also C. I shouldn’t go to C; it is a nasty place.
Should and is assessed as they unfortunately often are:
I don’t want to go to C, so I shouldn’t draw out that part around C on my map. I hope I still find a good way to B and don’t get lost.
I have a good map; it shows the best way to get from A to B and … also C. Wow, C’s really nearby, let’s go there!
Aha. I agree. Do people really make that mistake a lot around here?
Also note that ‘should’ goes on the map too. Not just in the “here be dragons” sense, but also indicated by the characterization that the way is “best”.
There is a whole host of empirically demonstrated biases in humans that work in these two directions under different circumstances. LWers may be aware of many of them, but they are far from immune.
Agreed, but should is coloured in with different ink on the map than is. I admit mapping should can prove to be as much of a challenge as mapping is.
I see my proposal of two quarantined threads as a proposal to stop messing up the map by all of us drawing with the same colour at the same time: first draw out “is” in black, and then, once the colour is dry, add in “should” in red so we don’t forget where we want to go. Then use that as our general-purpose map, and update both red and black as we move along our path and new, previously unavailable empirical evidence meets our eyes.
I didn’t point this out before, but this is actually a good argument in favor of the ‘ethics later’ approach. It makes no sense to start drawing paths on your map before you’ve filled in all of the nodes. (Counterargument: assume stochasticity / a non-fully-observable environment).
Also, if this technique actually works, it should be able to be applied to political contexts as well. PUA is a relatively safer area to test this, since while it does induce mind-killing (a positive feature for purposes of this test) it does not draw in a lot of negative attention from off-site, which is one of the concerns regarding political discussion.
I am majorly in favor of researching ways of reducing/eliminating mind-killing effects.
So an analogous circumstance would be: if we were constructing a weighted directed graph representing routes between cities, we’d first put in all the nodes and connections and weights in black ink, and then plan the best route and mark it in red ink?
If so, that implies the discussion of “PUA” would include equal amounts of “X results in increased probability of the subject laughing at you” and “Y results in increased probability of the subject slapping you” and “Z results in increased probability of the subject handing you an aubergine”.
If the discussion is not goal-directed, I don’t see how it could be useful, especially for such a large space as human social interaction.
But it would be goal-directed: “To catalogue beliefs and practices of PUAs and how well they map to reality.”
Without breaking the metaphor: we are taking someone else’s map and comparing it to our own map, our goal being to update our map where their map of reality (black ink) is clearly better, or at the very least to learn whether their map sucks. And to make this harder, we aren’t one individual but a committee comparing the two maps. Worse, some of us love their black ink more than their red one, and vice versa, and can’t shut up about them. Let’s set up separate work meetings for the two issues, so we know that black-ink arguments have no place in meeting number 2, and that anyone making them there is indulging his interests at the expense of good map-making.
The reason I favour black first is that by going red first we risk drawing castles in the clouds rather than realizable destinations.
Oops.
Yes, that’s absolutely possible, and is (on reflection) what we have been talking about this whole time.
So the challenge, then, would be to distinguish the black from red ink in PUA, and make sure we’re only talking about the black ink in the ‘no ethics’ thread.
I have no intuitions about how feasible that is, but I withdraw my assertion it is ‘impossible’, as I was clearly talking about something else.
Yes! We can’t take PUAs at their word. Even when they believe with unwavering certainty that they are talking black ink, they are in the best of cases as confused as we are, and in the worst quite a bit more confused. Also, some people just like to lie to others to get to the red destination faster (heh).
It is hard, but apparently enough posters think we can make a decent map of the reality of romance that they try to write up posts and start discussions about it. Limiting ourselves to the more popular PUAs, I think we can also get a pretty good idea of what their idea of reality is.
Comparing the two and seeking evidence to prove or disprove our speculations about reality seems like a worthy exercise.
I think this is a horrendously bad idea—“more popular” is not always positively correlated with “more correct”. ;-)
Also, “more popular” isn’t always positively correlated with “more useful”, either. The most popular PUA material is about indirect game, social tricks, and the like… and that’s why it’s popular. That doesn’t mean those things are the most useful ways to get into relationships.
Consider Bob, who believes he is unattractive to women. Will Bob be more interested in course A which tells him there are secrets to make women like him, or course B, which teaches him how to notice which women are already attracted to him? His (quite common) belief makes course B less attractive, even if course B would be far more useful.
Of available courses that fit Bob’s existing belief structure, the ones that will be most popular will be the ones that purport to explain his unattractiveness (and the attractiveness of other men) in a way that Bob can understand. And if they offer him a solution that doesn’t sound too difficult (i.e. act like a jerk), then this will be appealing.
What’s more, because Bob is convinced of his unattractiveness and fundamental low worth where women are concerned, Bob will be most attracted to courses that involve pretending to be someone he is not: after all, if who he is is unattractive, then of course he needs to pretend to be somebody else, right?
I could go on, but my point here is that popularity is a horrible way to select what to discuss, because there’s a systematic bias towards “tricks” as being the most marketable thing. However, even companies that sell tricks on the low end of the market to get people interested, usually sell some form of self-improvement as their “advanced” training. (That is, stuff that involves people actually being a different sort of man, rather than simply pretending to be one.)
(There are probably exceptions to this, of course.)
Anyway, a better selection criterion would be goal relevance. Most PUA sales material has end-user goals explicitly or implicitly stated—why not select materials on the basis of goals that LW has use for, and evaluate them for how well they achieve those stated goals?
What popular PUA material is saying matters quite a bit, because it helps us understand the PUA community as a cultural phenomenon; it can also help us by exposing some biases that probably exist, to some degree and in harder-to-detect form, in higher-quality material. Perhaps well-respected or esteemed authors (within the PUA community), rather than the ones that sell the most material (where would we even get that data?), are even better for this purpose.
But overall I’m not saying we shouldn’t extend our analysis to PUAs that are less well known but seem particularly compelling to LessWrong readers. The thing is, they have to be put in context.
How is that bad? The whole point is to locate actual beliefs and test them. If they’re incorrect, all the better—our job is probably easier. By focusing on the general population, we can cull down the potentially-useful beliefs without needing to ourselves bring ethics into the discussion. Thus, we only test “X results in a 20% chance of being handed an aubergine” if that is a belief some PUA practitioner actually holds.
These are all ‘red ink’ concerns. The ‘black ink’ thread is supposed to evaluate beliefs of PUAs without reference to how effective they are at achieving goals. You can’t presuppose what the goals are supposed to be or determine whether they’re optimal ways of achieving goals. Thus, we can test for the presence of aubergines, but we don’t need to know whether we actually want aubergines.
My initial response to your comment, is, “WTF?”
My second, more polite response, is simply that your suggestion isn’t particularly compatible with finding out useful things, since your proposed selection criteria will tend to filter them out before you have anything to evaluate.
I’m talking about following the strategy laid out by Konkvistador above. Have you been following this thread?
Possibly. But the goal was to have separate threads for non-normative and normative claims, and this is how it will be accomplished. My initial response was “That cannot be done”, probably from intuitions similar to yours, but that turned out to be false.
My understanding was that the goal was to have a useful discussion, minus the mindkilling. AFAICT your proposal of the means by which to accomplish this, is to throw out the usefulness along with the mindkilling.
Different goals. The goal was indeed to have a useful discussion minus the mindkilling. The proposed subgoal was to have one thread for non-normative claims and another for normative claims. It might be throwing the baby out with the bathwater, but if you think so then you should (at a much lower level of nesting) propose a different strategy for having the discussion minus the mindkilling. Or at least say you have a problem with that subgoal at the start of the thread, rather than the end.
I didn’t mean to misrepresent your position or the debate so far. I was just trying to communicate how I’m seeing the debate. Hope you didn’t take my question the wrong way! :)
Not at all (I think).
Ethics is a whole different thing than putative human universals. Very few things that I would assert as ethics would I claim to be human universals. “Normative human essentials” might fit in that context. (By way of illustration, we all likely consider ‘Rape Bad’ as an essential ethical value but I certainly wouldn’t say that’s a universal human thing. Just that the ethics of those who don’t think Rape Is Bad suck!)
Different ethical systems are possible to implement even on “normal” human hardware (which is far from the set of all humans!). We have ample evidence in favour of this hypothesis. I think Westerners in particular seem especially apt to forget to think of this when convenient.
I think I agree with what you are saying but I can’t be sure. Could you clarify it for me a tad (seems like a word is missing or something.)
Westerners certainly seem to forget this type of thing. Do others really not so much?
Humans can and do value different things. Sometimes, even when they start out valuing the same things, different experiences/circumstances lead them to systematize this into different, reasonably similarly consistent ethical systems.
Modern Westerners often identify their values as being the product of reason, which must be universal. While this isn’t exactly rare, it is, I think, less pronounced in most human cultures throughout history. I think a more common explanation than “they just haven’t sat down and thought about stuff and seen we are right yet” is “they are wicked” (they have different values). Which obviously has its own failure modes, just not this particular one.
It would be interesting to trace the relationship between the idea of universal moral value and the idea of universal religion. Moldbug argues that the latter pretty much spawned the former (that’s probably a rough approximation), though I don’t trust his scholarship on the history of ideas that much. I don’t know to what extent the ancient Greeks and Romans and Chinese and Arabs considered their values to be universal (though apparently Roman legal scholars had the concept of “natural law”, which they got from the Greeks, and which seems to map pretty closely to that idea, independently of Christianity and related universal religions).
Thank you. And yes, I wholeheartedly agree!
I suspect you meant “I certainly wouldn’t say”… confirm?
Confirm.
What would be an example of a “Normative human essential”?
Killing young children is bad.
My guess is you wanted another example?
Yea, Konkvistador supplied well.
Raping folks is bad!
That’s not very helpful to me.
Ethics can arguably be reduced to “what is my utility function?” and “how can I best optimize for it?” So I’m confused what it would mean for a being not to include ethics in its optimization criteria. I had guessed Konkvistador was referring to some sort of putative human universals.
I’m still not sure what you mean when you say their ethics suck, or what criteria you use when alleging something as ethics.
Ethics aren’t about putative human universals. I’m honestly not sure how to most effectively explain that since I can’t see a good reason why putative human universals came up at all!
Cooperative tribal norms seem more plausible. Somebody thinking their ethics are human universals requires that they, well, are totally confused about what is universal about humans.
Not at all. Many prominent ethicists and anthropologists and doctors agree that there are human universals. And any particular fact about a being has ethical implications.
For example, humans value continued survival. There are exceptions and caveats, but this is something that occurs in all peoples and amongst nearly all of the population for most of their lives (that last bit you can just about get a priori). Also, drinking antifreeze will kill a person. Thus, “one should not drink antifreeze without a damn good reason” is a human universal.
If these people don’t frequently disagree with others about ethics, they become unemployed.
This group’s opinions on the subject are less correlated with reality than most groups’.
ETA: I have no evidence for this outside of some of their outlandish positions, the reasons for which I have some guesses but have not looked into; this is basically a rhetorical argument.
Actually, if anything I think I’d be more likely to believe that the actual job security ethicists enjoy tends to decrease their opinions’ correlation with reality, compared with the beliefs that people who will be fired for doing a bad job hold about their respective fields.
I don’t believe this, and am not aware of any evidence that it’s the case.
If it was intended to be merely a formal argument, compare:
ETA: Note that many prominent ethicists are tenured, and so don’t get fired for anything short of overt crime.
I thought you had an overwhelming point there until my second read. Then I realized that the argument would actually be a reasonable argument if the premise wasn’t bogus. In fact, it would be much stronger than the one about ethicists. If mathematicians did have to constantly disagree with other people about maths, it would be far better to ask an intelligent amateur about maths than a mathematician.
You can’t use an analogy to an argument whose (false) premise would support its (false) conclusion as a reason why arguments of that form don’t work!
The best reason I could come up with why someone would think ethicists need to disagree with each other to keep their jobs, is that they need to “publish or perish”. But that applies equally well to other academic fields, like mathematics. If it’s not true of mathematicians, then I’m left with no reason to think it’s true of ethicists.
You appear to have presented the following argument:
… as an argument which would be a bad inference to make even if the premise was true—bad enough to be worth using as an argument by analogy. You seem to be defending this position even when it is pointed out that the conclusion would obviously follow from the counterfactual premise. This was a time when “Oops” (or perhaps just dropping the point) would have been a better response.
The premise wasn’t present. Yes, it would be formally valid given some counterfactual premise, but then so would absolutely every possible argument.
I’d expect “publish or perish” to apply broadly in academic fields, but there are several broad categories of ways in which one can publish. One can indeed come up with a disagreement with a view that someone else is promoting, but one can also come up with a view on an entirely different question than anyone else addresses. In a very abstract sense, this might count as a disagreement, a kind of claim that the “interesting” area of a field is located in a different place than existing publications have pointed to, but this isn’t quite the same thing as claiming that the existing claims about the previously investigated area are wrong.
Mathematicians—along with scientists—discover new things (what is a proof other than the discovery of a new mathematical property?). That’s what their job is. In order for Ethicists to be comparable, wouldn’t they need to discover new ethics?
Sure, and they do. One of the three major subfields of ethics is “applied ethics”, in which ethicists simply analyze actual or potential circumstances using their expertise in ethics. The space for that is probably as big as the space for mathematical proofs.
I don’t think they need to disagree with each other, only with outsiders.
So the premise, then, is that the institution of Ethics would come tumbling down if it were not the case that ethicists seemed to have special knowledge that the rest of the populace did not?
If so, I think it again applies equally well to any academic discipline, and is false.
Or am I still missing something?
Yes.
Other academic disciplines are tested against reality because they make “is” statements. Philosophy is in a middle ground, I suppose.
Which reality was it that the ethicists were not correlated with again? Oh, right, making factual claims about universals of human behavior. I don’t disbelieve you.
This does not refute!
No duh. Though it does suggest.
Was the “Not at all.” some sort of obscure sarcasm?
EDIT: At the time of this comment the quote was the entirety of the parent—although I would have replied something along those lines anyway, I suppose.
No. I was disagreeing with your point, and then offering supporting evidence. You took one piece of my evidence and noted that it does not itself constitute a refutation. I don’t know how it is even helpful to point this out; if I thought that piece of evidence constituted a refutation, I would not have needed to say anything else. Also, most arguments do not contain or constitute refutations.
It seems necessary to strengthen my reply to
Unless an individual had an ethical system which is “people should behave in accordance with any human universals, and all other behaviors are ethically neutral”, the fact that there are human universals doesn’t have any particular impact on the claim. Given that a priori humans can’t be going against human universals, that particular special case is basically pointless.
I think I’m confused about what your actual claim was. It seemed to be that human universals have nothing to do with ethics. But it’s easy to show that some things that are putatively human universals have something to do with ethics, and a lot of the relevant people think putative human universals have a large role in ethics.
Was it just that ethics is not equivalent to human universals? If so, I agree.
Sorry for posting the first few paragraphs and then immediately editing to add the later ones. It was a long post and I wanted to stop at several points but I kept getting hit by “one more idea/comment/argument” moments.