I got as far as “some things actually are good versus evil, we all know this, right?” at 4:00, and lost all respect for the man. I didn’t watch the rest.
Other than how we treat them, what’s the difference between a story and a theory or hypothesis?
Edit: I’m guessing from the downvote that I may’ve been misunderstood. The above question is not rhetorical; it’s intended to spark conversation.
It is fine to decide, after four minutes, that you don’t think it’s worth watching the rest of the lecture (I might not finish it either, because it is directed at a non-specialist audience), but telling us you “lost all respect for the man” only shows that you were too quick to rush to judgment.
Based only on the five minutes of it I watched, I know that he is making the exact same points (good vs. evil stories are curiosity stoppers) that you accuse him (below) of missing.
My thought wasn’t that he wouldn’t have anything true to say. It was that if he’s still defending good and evil as obviously existing, in that context, he’s far enough behind me on the issue that I can safely assume that he doesn’t have anything major to teach me, and that what he says is untrustworthy enough (because there’s an obvious flaw in his thought process) that I’d have to spend an inordinate amount of time checking his logic before using even the parts that appear good—time that would be better spent elsewhere.
Many people here appear to have a similar epistemic immune response to people who bring up God in discussions of ethics. I’m surprised it’s considered an issue in this case.
It is often worthwhile to listen to intelligent people, even if they are fantastically wrong about basic facts of the very subject that they’re discussing. One often hears someone reasoning within a context of radically wrong assumptions. A priori, one would expect such reasoning to be almost wholly worthless. How could false premises lead to reliable conclusions?
But somehow, in my experience, it often doesn’t work that way. Of course, the propositional content of the claims will often be false. Nonetheless, within the system of inferences, substructures of inferences will often be isomorphic to deep structures of inferences following from premises that I do accept.
The moral reasoning of moral realists can serve as an example. A moral realist will base his moral conclusions on the assumption that moral properties (such as good and evil) exist independently of how people think. His arguments, read literally, are riddled with this assumption through-and-through. Nonetheless, if he is intelligent, the inferences that he makes often map to highly nontrivial, but valid, inferences within my own system of moral thought. It might be necessary to do some relabeling of terms. But once I learn the relabeling “dictionary”, I find that I can learn highly nontrivial implications of my premises by translating the implications that the realist inferred from his premises.
Interesting idea. I’m not sure I completely understand it, though. Could you give an example?
Here’s a made-up example. I chose this example for simplicity, not because it really represents the kind of insight that makes it worthwhile to listen to someone.
Prior to Darwin, many philosophers believed that the most fundamental explanations were teleological. To understand a thing, they held, you had to understand its purpose. Material causes were dependent upon teleological ones. (For example, a thing’s purpose would determine what material causes it was subjected to in the first place). These philosophers would then proceed to use teleology as the basis of their reasoning about living organisms. For example, on seeing a turtle for the first time, they might have reasoned as follows:
Premise 1: This turtle has a hard shell.
Premise 2: The purpose of a hard shell is to deflect sharp objects.
Conclusion: Therefore, this turtle comes from an environment containing predators that attack with sharp objects (e.g., teeth).
But, of course, there is something deeply wrong with such an explanation. Insofar as a thing has a purpose, that purpose is something that the thing will do in the future. Teleology amounts to saying that the future somehow reached back in time and caused the thing to acquire properties in the past. Teleology is backwards causation.
After Darwin, we know that the turtle has a hard shell because hard shells are heritable and helped the turtle’s ancestors to reproduce. The teleological explanation doesn’t just violate causality—it also ignores the real reason that the turtle has a shell: natural selection. So the whole argument above might seem irredeemably wrong.
But now suppose that we introduce the following scheme for translating from the language of teleology to Darwinian language:
“The purpose of this organism’s having property X is to perform action Y.”
becomes
“The use of property X by this organism’s ancestors to perform action Y caused this organism to have property X.”
Applying this scheme to the argument above produces a valid and correct chain of reasoning. Moreover, once I figure out the scheme, I can apply it to many (but not all) chains of inferences made by the teleologist to produce what I regard to be correct and interesting inferences. In the example above, I only applied the translation scheme to a premise, but sometimes I’ll get interesting results when I apply the scheme to a conclusion, too.
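As a toy illustration of how mechanical the “relabeling” can be (the sentence template, the regex, and the `translate` function below are my own inventions for this example, not part of the scheme itself):

```python
import re

# Hypothetical, simplified template for a teleological premise. Real arguments
# would need far more careful parsing, and not every premise survives translation.
TELEOLOGICAL = re.compile(
    r"The purpose of this organism's having property (?P<x>.+?) "
    r"is to perform action (?P<y>.+?)\."
)

def translate(premise: str) -> str:
    """Apply the translation scheme; return the premise unchanged if it doesn't fit."""
    m = TELEOLOGICAL.fullmatch(premise)
    if m is None:
        return premise  # some inferences are inextricably tied to false premises
    return (
        f"The use of property {m.group('x')} by this organism's ancestors "
        f"to perform action {m.group('y')} caused this organism to have "
        f"property {m.group('x')}."
    )

print(translate(
    "The purpose of this organism's having property a hard shell "
    "is to perform action deflecting sharp objects."
))
# -> The use of property a hard shell by this organism's ancestors to perform
#    action deflecting sharp objects caused this organism to have property a hard shell.
```

Only the form of the inference survives the rewrite, not its meaning; the teleologist would not endorse the output.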
Of course, not all inferences by the teleologist will be salvageable. Many will be inextricably intertwined with false premises. It takes work to separate the wheat from the chaff. But, in my experience, it often turns out to be worth the effort.
Good thing I asked; that wasn’t what I originally thought you meant. It’s similar enough to translating conversational shorthand that I probably already do that occasionally without even realizing it, but it’d be good to keep it in mind as a tool to use purposely. Thanks. :)
It’s similar enough to translating conversational shorthand that . . .
I probably shouldn’t have used the term “translation”. Part of my point is that the “translation” does not preserve meaning. Only the form of the inference is preserved. The facts being asserted can change significantly, both in the premises and in the conclusion. (In my example, only the assertions in the premises changed.) In general, the arguer no longer agrees with the inference after the “translation”. Moreover, his disagreement is not just semantic.
I’d somehow gotten the idea that you were talking about taking the proposed pattern of relationships between ideas and considering its applicability to other, unrelated ideas. As an extremely simple example, if the given theory was “All dogs are bigger than cats”, make note of the “all X are bigger than Y” idea, so it can be checked as a theory in other situations, like “all pineapples are bigger than cherries”. That seems like a ridiculously difficult thing to do in practice, though, which is why I thought you might have meant something else.
My thought wasn’t that he wouldn’t have anything true to say. It was that if he’s still defending good and evil as obviously existing, in that context, he’s far enough behind me on the issue that I can safely assume that he doesn’t have anything major to teach me, and that what he says is untrustworthy enough (because there’s an obvious flaw in his thought process) that I’d have to spend an inordinate amount of time checking his logic before using even the parts that appear good—time that would be better spent elsewhere.
That’s not a good heuristic. There are a lot of people—Eliezer would name Robert Aumann, I think—who are incredibly bright, highly knowledgeable, and capable of conveying that knowledge who are wrong about the answers to what some of us would consider easy questions.
Now, I know Berserk Buttons (warning: TV Tropes) as well as anyone, and I’ve dismissed some works of fiction which others have considered quite good (e.g. Alfred Bester’s The Demolished Man, the TV sitcom Modern Family) because they pushed those buttons, but when it comes to factual information, even stupid people can teach you.
(Granted, you may be right about the worthlessness of this particular speech to you—I haven’t watched it. But the heuristic is poor.)
The heuristic isn’t widely applicable, but I disagree about it being poor altogether. As I pointed out above, it’s not just that he defended good vs. evil. It’s that he did it in the context of a presentation on a subtopic of how we conceptualize the world. He may have things to teach me in other areas, obviously.
That’s why I compared it to someone bringing God into a discussion on ethics specifically. (Or, say, evolution.) That person may be brilliant at physics, but on the topic at hand, not so much.
It also occurs to me that this heuristic may be unusually useful to me because of my neurology. It does seem to take much more time and effort for me to deconstruct and find flaws in new ideas presented by others, compared to most people, and because of the extra time, there’s a risk of getting distracted and not completing the process. It’s enough of an issue that even a flawed heuristic to weed out bad memes is (or, feels—I’m not sure how one would actually test that) useful.
Okay, I’ll grant you that. It’s better to have a sufficiently strict filter that loses some useful information than a weaker filter which lets in garbage data. I would presume (or, at least, advise) that you make a particular effort to analyze data which you previously rejected but which remains widely discussed, however—an example from my own experience being Searle’s Chinese Room argument. Such items should be uncommon enough.
I agree, but the fantastic thing is that you lose so little when you reject too hastily. If the ideas you ignored turn out to be useful and true, someone you’re willing to listen to will advocate them eventually.
That works if you assiduously and diligently and without flaw start paying attention after no more than the third time you hear the idea advocated, and without using the idea itself to judge untrustworthy those who otherwise seem competent.
In practice, people usually reject the idea itself and go on rejecting it, when they claim to be acting under cover of rejecting people. Consider those who die of rejecting cryonics; consider what policy they would have to follow in order to not do that. What good is it to quickly reject bad ideas if you quickly reject good ideas as well? Discrimination is the whole trick here.
I suppose we might have no recourse but to judge people and shut our ears to most of them, in the Internet age, but to say that we “lose so little” far understates the danger of a very dangerous policy.
I agree that people often don’t make the necessary distinction between ideas they have evidence against, and unevaluated ideas they’ve been ignoring because they’ve only heard them advocated by kooks. As you point out, only ideas in the former category properly discredit their advocates.
There’s more than one non-failure mode for this kind of thing. My method involves taking the time to consider the information gathered up to the point where I decided to stop listening to the person, as if I hadn’t stopped listening to them at all. Information that I would’ve gotten from them after that point isn’t affected by my opinion of them, since I haven’t heard it (whereas it would be if I were distracted by thinking ‘this person’s an idiot’ as I listened to them), and I give as fair a trial as I’m able to the rest.
It may also be noteworthy that I didn’t judge him for an argument he was making, and I make something of a point of not doing so unless the logic being used is painfully bad. (Tangential realization: That’s why activists who aren’t willing to have any 101-level discussions with newbies get a (mild) negative reaction from me; discarding a whole avenue of discourse like that cuts off a valuable, if noisy, source of information.)
Strength of emotional response and certainty of the underlying heuristic’s accuracy aren’t the same thing. It may not’ve been clear that I was reporting the former, but I was, and one of the possible responses to that comment that I was prepared for was “yes, but he went on to make this good point...”.
Based on the first five minutes, the whole point of his lecture is that stories, explicitly including but not limited to those framed as good vs. evil, are often dangerous oversimplifications.
I’m telling you, as someone who has read quite a lot by Tyler Cowen, that he is not as naive about good and evil as you seem to think. You’ve read too much into the one sentence you’ve quoted.
I’m not sure I can answer this coherently; I came to the conclusion that good and evil are not objectively real, or even useful concepts, long enough ago that I can’t accurately recreate the steps that got me there.
I do occasionally have conversations with people who use those words, and mentally translate ‘good’ (in that sense) to ‘applause light-generating’ and ‘evil’ to ‘revulsion-generating’, ‘unacceptable in modern society’, and/or ‘considered by the speaker to do more harm than good’, in estimated order of frequency of occurrence. (I often agree that things labeled evil do more harm than good, but if the person doing the ‘evil’ thing agreed, they wouldn’t be doing it, so it’s obviously at least somewhat debatable.) I don’t use the word ‘evil’ at all, myself, and don’t use ‘good’ in the good-vs.-evil sense.
Those words are also curiosity-stoppers—it’s not very useful to label an action or viewpoint as ‘evil’; it’s much more useful to explore why the person doing that thing or holding that attitude believes that it’s correct. Likewise, labeling something as ‘good’ reduces the chance of thinking critically about it, and noticing flaws or areas that could be improved.
I haven’t gotten around to deconstructing those terms yet, but off the top of my head:
A ‘harmful to X’ action is one that has a long-term effect on X that reduces its ability to function. Examples:
Taking some RAM out of a computer
Adding grit to a machine, increasing the rate at which it wears out
Injuring a person in such a way that they lose use of a body part, or develop PTSD, or are in ongoing pain (because ongoing pain reduces their ability to function, not because pain is intrinsically harmful)
Extremist activism, where doing so makes the movement less credible and decreases the rate at which more sensible activists can create change. (I assume here and below that, disregarding the extremism, the activism is promoting good in the sense at hand.)
A ‘good for X’ action, in this sense (‘helpful’ would be a better word), is one that has a long-term effect on X that increases its ability to function. Examples:
Adding RAM to a computer
Performing maintenance on a machine
Teaching a person, giving them medical help, establishing a relationship with them such that they can approach you for advice or help in the future
Extremist activism, where doing so moves the Overton window.
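A minimal sketch of the shape of these two definitions, assuming a hypothetical `ability_to_function` scorer and a stand-in five-year horizon (neither exists in practice; the judgment is informal):

```python
def classify_effect(entity, action, ability_to_function, horizon_years=5):
    """Classify an action as harmful, helpful, or neither for an entity.

    ability_to_function is a hypothetical scorer: given an entity's state and
    a time horizon, it returns a number summarizing how well the entity can
    achieve its goals over that horizon. The point is only to make the shape
    of the definition explicit, not to give a workable procedure.
    """
    before = ability_to_function(entity, horizon_years)
    after = ability_to_function(action(entity), horizon_years)  # `action` returns the new state
    if after < before:
        return "harmful"   # e.g. removing RAM, adding grit to a machine
    if after > before:
        return "helpful"   # e.g. adding RAM, maintenance, teaching
    return "neither"       # e.g. transient pain with no long-term effect
```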
The question isn’t usually whether an action does harm or good or both. The question is how much importance to give to the various harms and goods involved.
Plenty of people have been tortured and not ended up with PTSD. Moreover, we classify instances of those things as harmful long before the DSM even lets us diagnose PTSD.
Also, there are approximately fifty arguments in that post and comments, none demonstrating that pain isn’t intrinsically harmful, so I really have no idea what you want me to take away from that link.
PTSD or other long-term psychological (or physical) impairment, then—which may be sub-clinical or considered normal. An example: Punishment causes a psychological change that reduces the person’s ability to do the thing that they were punished for. We don’t (to the best of my knowledge) have a name for that change, but it observably happens, and when it does, the punishment has caused harm. (It may also be helpful, for example if the punished action would have reduced the person’s ability to function in other ways. The two aren’t always mutually exclusive. Compare it to charging someone money for a class—teaching is helpful, taking their money is harmful.)
Also, I do believe that there could be situations where someone is tortured and doesn’t experience a long-term reduction in functionality, in which case, yes, the torture wasn’t harmful. The generalization that torture is harmful is useful because those situations are rare, and because willingness to attempt to harm someone is likely to lead to harm, and should be addressed as such.
The most relevant point in the discussion of pain is right at the beginning—people who don’t experience any pain tend to have very short or very difficult lives. That makes it obvious that being able to experience pain is useful to the experiencer, rather than net-harmful. So, even though some pain is observably harmful, some pain must be helpful enough to make up the difference. That doesn’t jibe with ‘pain is intrinsically harmful’, unless you’re using a very different definition of the word, in which case I request that you clarify how you’re defining it.
Also, I do believe that there could be situations where someone is tortured and doesn’t experience a long-term reduction in functionality, in which case, yes, the torture wasn’t harmful.
Well, anyone else who thinks this is wrong, feel free to modus tollens away the original definition.
…
I was hoping to make my point by way of counterexample. Since you’re not recognizing the counterexample, I have to go back through the whole definition and the context to see where we lost each other. But that’s a mess to do because right now this is a semantic debate. To make it not one, I need the cash value of your belief that something is harmful. Do you always try to avoid harm to yourself? Is something being harmful necessary for you to avoid it/avoid doing it to others? Is it sufficient? Does this just apply to you? All humans? AI? Animals? Plants? Thermostats? Futons? Is something other than help and harm at work in your decision making? You don’t have to answer all of these, obviously, just give me an idea of what I should see if something is harmful so I can actually check to see if your definition works. Otherwise you can’t be wrong.
Then we can see if “causing decreased functionality” leads to the right response in all the circumstances. For example, I think there are times when people want to limit their net functionality and are right to do so, even and especially when they know what they’re doing.
No; if I can help someone else (or my future self) enough by harming myself or risking harm, I’ll do so. Example: Giving a significant sum of money to someone in need, when I don’t have an emergency fund myself.
Is something being harmful necessary for you to avoid it/avoid doing it to others?
No, there are other reasons that I avoid doing things, such as to avoid inconvenience or temporary pain or offending people.
Is it sufficient?
I use a modified version of the logic that I use to determine whether I should harm myself to decide whether it’s worth it to harm others. I generally try to err on the side of avoiding harming others, because it’s harder to estimate the effect that a given harm will have on their life than it is to estimate its effect on mine.
Does this just apply to you? All humans? AI? Animals? Plants? Thermostats? Futons?
My definition is meant to be general enough to cover all of those, but in each case the meaning of ‘function’ has to be considered. Humans get to determine for themselves what it means to function. AIs’ functions are determined by their programmers (not necessarily intentionally). In practice, I consider animals on a case-by-case basis; as an omnivore, it’d be hypocritical of me to ignore that I consider the function of chickens to be ‘become tasty meat’, but I generally consider pets and wild animals to determine their own functions. (A common assigned function for pets, among those who do assign them functions, is ‘provide companionship’. Some wild animals are assigned functions, too, like ‘keep this ecosystem in balance’ or ‘allow me to signal that I care about the environment, by existing for me to protect’.) I lump plants in with inanimate and minimally-animate objects, whose functions are determined by the people owning them, and can be changed at any time—it’s harmful to chop up a futon that was intended to be sat on, but chopping up an interestingly shaped pile of firewood with some fabric on it isn’t harmful.
Is something other than help and harm at work in your decision making?
In a first-order sense, yes, but in each case that I can think of at the moment, the reason behind the other thing eventually reduces to reducing harm or increasing help. Avoiding temporary pain, for example, is a useful heuristic for avoiding harming my body. Habitually avoiding temporary inconveniences leaves more time for more useful things and helps generate a reputation of being someone with standards, which is useful in establishing the kind of social relationships that can help me. Avoiding offending people is also useful in maintaining helpful social relationships.
You don’t have to answer all of these, obviously, just give me an idea of what I should see if something is harmful so I can actually check to see if your definition works. Otherwise you can’t be wrong.
In a first-order sense, yes, but in each case that I can think of at the moment, the reason behind the other thing eventually reduces to reducing harm or increasing help.
So if I am in extraordinary pain, it would never be helpful/not-harmful for me to kill myself or for you to assist me?
Also, where does fulfilling your function fit into this? Unless your function is just increasing functionality.
Finally, I guess you’re comfortable with the fact that the function of different things is determined in totally different ways? Some things get to determine their own function while other things have people determine it for them? As far as I can tell, people determine the function of tools, and this notion that people determine their own function, while true in a sense, is just Aristotelian natural law theory rearing its ugly head. It used to be that we had purposes because God created us and instilled one in us. But if there is no God, it seems that the right response is to conclude that purpose, as it applied to humans, was a category error, not that we decide our own purpose.
I haven’t gotten around to deconstructing those terms yet, but off the top of my head:
This is beta-version-level thought. It isn’t surprising that it still has a few rough spots or places where I haven’t noticed that I need to explain one thing for another to make sense.
Also, where does fulfilling your function fit into this? Unless your function is just increasing functionality.
Function as I’m intending to talk about it isn’t something you fulfill, it’s an ability you have: The ability to achieve the goals you’re interested in achieving. Those goals vary not just from person to person, but also with time, whether they’re achieved or not. Also, people do have more than one goal at any given time.
I have used the word ‘function’ in the other sense, above, mistakenly. I’ll be more careful.
So if I am in extraordinary pain, it would never be helpful/not-harmful for me to kill myself or for you to assist me?
There are two overlapping types of situations that are relevant; if either one of them is true, then it’s helpful to assist the person in avoiding/removing the pain. One is that the person has ‘avoid pain’ as a relevant goal in the given instance, and helping achieve that goal doesn’t interfere with other goals that the person considers more important. The other is that the pain is signaling harmful damage to the person’s body. There are situations that don’t come under either of those umbrellas—certain BDSM practices, where experiencing pain is the goal, for example, or situations where doing certain things evokes pain but not actual (relevant to the individual’s goals) harm, and the only way to avoid the pain is to give up on accomplishing more important goals, which is common in certain disabilities and some activities like training for a sport or running a marathon.
Whether suicide would be considered helpful or harmful in a given situation is a function of the goals of the person considering the suicide. If you’re in a lot of pain, have an ‘avoid pain’ goal that’s very important, and don’t have other strong goals or the pain (or underlying damage causing the pain) makes it impossible for you to achieve your other strong goals, the answer is fairly obvious: It’s helpful. If your ‘avoid pain’ goal is less important, or you have other goals that you consider important and that the pain doesn’t prevent you from achieving, or both, it’s not so obvious. Another relevant factor is that pain can be adapted to, and new goals that the pain doesn’t interfere with can be generated. I leave that kind of judgment call up to the individual, but tend to encourage them to think about adapting, and take the possibility into account before making a decision, mostly because people so often forget to take that into account. (Expected objection: Severe pain can’t be adapted to. My response: I know someone who has. The post where she talks about that in particular is eluding me at the moment, but I’ll continue looking if you’re interested.)
If it weren’t illegal or if there was a very low chance of getting caught, I’d be comfortable with helping someone commit suicide, if they’d thought the issue through well, or in some cases where the person would be unable to think it through. I know not everyone thinks about this in the same way that I do: ‘If they’ve thought the issue through well’ doesn’t mean ‘if they’ve fulfilled the criteria for me to consider the suicide non-harmful’. Inflicting my way of thinking on others has the potential to be harmful to them, so I don’t.
Finally, I guess you’re comfortable with the fact that the function of different things is determined in totally different ways? Some things get to determine their own function while other things have people determine it for them?
There’s an underlying concept there that I failed to make clear. When it comes to accomplishing goals, it works best to consider an individual plus their possessions (including abstract things like knowledge or reputation or, to a degree, relationships) as a single unit. One goal of the unit ‘me and my stuff’ is to maintain a piece of territory that’s safe and comfortable to myself and guests. My couch is functional (able to accomplish the goal) in that capacity, specifically regarding the subgoal of ‘have comfortable places available for sitting’. My hands are similarly functional in that capacity, though obviously for different subgoals: I use them to manipulate other tools for cleaning and other maintenance tasks and to type the programs that I trade for the money I spend on rent, for example.
(This is based on the most coherent definition of ‘ownership’ that I’ve been able to come up with, and I’m aware that the definition is unusual; discussion is welcome.)
As far as I can tell, people determine the function of tools, and this notion that people determine their own function, while true in a sense, is just Aristotelian natural law theory rearing its ugly head. It used to be that we had purposes because God created us and instilled one in us. But if there is no God, it seems that the right response is to conclude that purpose, as it applied to humans, was a category error, not that we decide our own purpose.
I think I’ve already made it clear in this comment that this isn’t the concept I’m working with. The closest I come to this concept is the observation that people (and animals, and possibly AI) have goals, and since those goals are changeable and tend to be temporary (with the possible exception of AIs’ goals), they really are something entirely different. I also don’t believe that there’s any moral or objective correctness or incorrectness in the act of achieving, failing at, or discarding a goal.
(Expected objection: Severe pain can’t be adapted to. My response: I know someone who has. The post where she talks about that in particular is eluding me at the moment, but I’ll continue looking if you’re interested.)
Found it, or a couple related things anyway. This is a post discussing the level of pain she’s in, her level of adaptation, and people’s reactions to those facts, and this is a post discussing her opinion on the matter of curing pain and disability. I remember there being another, clearer post about her pain level, but I still haven’t found that, and I may be remembering a discussion I had with her personally rather than a blog post. She’s also talked more than once about her views on the normalization of suicide, assisted suicide, and killing for ‘compassionate’ reasons for people with disabilities (people in ongoing severe pain are inferred to be included in the group in question), though she usually just calls it murder (her definition of murder includes convincing someone that their life is so worthless or burdensome to others that they should commit suicide) - browsing this category turns up several of the posts.
This is beta-version-level thought. It isn’t surprising that it still has a few rough spots or places where I haven’t noticed that I need to explain one thing for another to make sense.
Sure. I don’t mean to come on too forcefully.
Function as I’m intending to talk about it isn’t something you fulfill, it’s an ability you have: The ability to achieve the goals you’re interested in achieving. Those goals vary not just from person to person, but also with time, whether they’re achieved or not. Also, people do have more than one goal at any given time.
So help and harm aren’t the only things in your decision-making, there are also goals. What is the relation between the two? Why can’t the help-harm framework be subsumed under “goals”? This question is especially salient if “goals” is just going to be the box where you throw in everything relevant to decision making and ethics that doesn’t fit with your concepts of help and harm.
Also, something that might make where I’m coming from clearer: when I was using “pain” before I meant it generally, not just in a physical sense. So I just read these examples about BDSM or needing pain to avoid early death as cases of one kind of pain leading to another kind of pleasure or preventing another kind of pain. I’ll use the word “suffering” in the future to make this clearer. This might make the claim that pain is inherently harmful seem more plausible to you.
When it comes to accomplishing goals, it works best to consider an individual plus their possessions (including abstract things like knowledge or reputation or, to a degree, relationships) as a single unit. [...] (This is based on the most coherent definition of ‘ownership’ that I’ve been able to come up with, and I’m aware that the definition is unusual; discussion is welcome.)
So all my belongings share my goals? It seems pretty bizarre to attribute goals to any inanimate object much less give them the same goals their owner has. It also would be really strange if the fundamentals of decision-making involved a particular and contingent socio-political construction (i.e. ownership and property). It also seems to me that possessing a reputation and possessing a car are two totally different things and that the fact that we use the same verb “have” to refer to someone’s reputation or knowledge is almost certainly a linguistic accident (maybe someone who speaks more languages can confirm this). So yes, I’d like to read a development of this notion of ownership if you want to provide one.
I also don’t believe that there’s any moral or objective correctness or incorrectness in the act of achieving, failing at, or discarding a goal.
A world in which 90% of goals were achieved wouldn’t be better than a world in which only 10% were achieved? Is a world where there is greater functionality better than a world in which there is less functionality? We might have to step back further and see if we even agree what morality is.
So help and harm aren’t the only things in your decision-making, there are also goals. What is the relation between the two?
‘Helpful’ and ‘harmful’ are words that describe the effect of an action or circumstance on a person(+their stuff)’s ability to achieve a goal.
Why can’t the help-harm framework be subsumed under “goals”?
This question doesn’t make sense to me - ‘help and harm’ and ‘goals’ are two very different kinds of things, and I don’t see how they could be lumped together into one concept.
This question is especially salient if “goals” is just going to be the box where you throw in everything relevant to decision making and ethics that doesn’t fit with your concepts of help and harm.
This area of thought is complex, and my way of approaching it does view a lot of that complexity as goal-related, but the point of that isn’t to handwave the complexity, it’s to put it where I can see it and deal with it.
Also, something that might make where I’m coming from clearer: when I was using “pain” before I meant it generally, not just in a physical sense. So I just read these examples about BDSM or needing pain to avoid early death as cases of one kind of pain leading to another kind of pleasure or preventing another kind of pain. I’ll use the word “suffering” in the future to make this clearer. This might make the claim that pain is inherently harmful seem more plausible to you.
Suffering is experiencing pain that the experiencer wanted to avoid—pain that’s addressed by their personal ‘avoid pain’ goal. We don’t disagree about those instances of experienced pain being harmful. Not all experiences of pain are suffering, though.
Also, people with CIPA are unable to experience the qualia of pain just as people who are blind are unable to experience the qualia of color. If you’re considering the injuries they sustain as a result painful, we’re not defining the word in the same way.
So all my belongings share my goals? It seems pretty bizarre to attribute goals to any inanimate object much less give them the same goals their owner has.
It’s only bizarre if you’re considering the owned thing and the owner as separate entities. Considering them the same for some purposes (basically anything involving the concept of ownership as our society uses it) is the only way of looking at it that I’ve found that adds back up to normality.
Does your body share your goals?
It also would be really strange if the fundamentals of decision-making involved a particular and contingent socio-political construction (i.e. ownership and property).
It’s a function of discussing owned things, not of discussing decision-making. Un-owned things that aren’t alive don’t have goals. (Or they may in a few cases retain the relevant goal of the person who last owned them—a machine will generally keep working when its owner dies or loses it or disowns it.)
It also seems to me that possessing a reputation and possessing a car are two totally different things and that the fact that we use the same verb “have” to refer to someone’s reputation or knowledge is almost certainly a linguistic accident (maybe someone who speaks more languages can confirm this).
The difference between those concepts may be relevant elsewhere, but I don’t think it’s relevant in this case.
So yes, I’d like to read a development of this notion of ownership if you want to provide one.
I’ll put it on my to-do list.
A world in which 90% of goals were achieved wouldn’t be better than a world in which only 10% were achieved? Is a world where there is greater functionality better than a world in which there is less functionality? We might have to step back further and see if we even agree what morality is.
It depends what you mean by ‘better’, and for the most part I choose not to define or use the word except in situations with sufficient context to use it in phrases like ‘better for’ or ‘better at’.
So if you’re waiting for the bus and I kick you in the shin, there’s no harm? (there would be some ongoing pain for a little while, but with no impact on what you’re doing—waiting for the bus)
Adelene’s attitude as illustrated in this thread towards pain resembles my own.
I do not assign any intrinsic value to avoiding pain (or experiencing pleasure). (I am unsure whether Adelene goes this far.)
I must stress though that pain (and pleasure) are indispensable approximations or “predictors” for various (instrumental) values. If I had a Richard-friendly superintelligence as my constant companion, I could ignore the informational value of my pain (pleasure) sensations because I could consult the superintelligence to predict the long-term effects of the various actions I contemplate, but the way it is now, it is too expensive or impossible for me to estimate certain instrumental values (mostly around staying healthy) unless I consult my pain (and pleasure) sensations.
Moreover, I must stress that there are quite a few things that correlate with pain. Pain for example is a strong sign that I am in a mental state not conducive to learning or to the careful consideration of many factors (such as is necessary to do a good job at fixing a computer program). I do not have complete control over the mental machinery that allows me to program computers, etc. I cannot for example choose to put myself in the mental state that I know to be most conducive to, e.g., computer programming while enduring certain conditions that tend to cause pain.
So, that is one thing that has not yet been mentioned in this thread that correlates with pain. Here is another. I probably cannot stay motivated to work hard at something unless I regularly take pleasure from that work (or at the least I have a realistic expectation of future pleasure resulting from the work). I do not (usually—see next paragraph) take that to mean that what I really care about is pleasure. Rather, I take that to mean that I have imperfect control over the means (e.g., my mind) by which I get my work done and that in particular, one of the circumstances that might prevent me from achieving what I really care about is that there is no way for me to stay motivated to do the things I would need to do to achieve the things I really care about—because my neurology just does not allow that (even though that would get me what I really care about).
Like many people in this day and age, I wish I had more motivation, that is, I wish my actual behavior was more in line with the policies and goals I have set for myself. In fact, my motivation has become so unreliable and so weak that I have entered upon an experiment in which I assume that life really is all about pleasure—or to be more precise, all about the search for something I care enough about so that protecting it or pursuing it is “naturally plenty motivating”. Nevertheless, this experiment is something I started only this year whereas the attitude towards pain (and pleasure) I describe below has been my attitude since 1992.
Moreover, the attitude toward pain (and pleasure) I describe below still strikes me as the best way to frame most high-stakes situations when it is important to avoid the natural human tendency towards self-deception, to avoid wrongly mistaking personal considerations for global considerations or to see past the cant, ideology and propaganda about morality that bombard every one of us in this day and age. There is in human nature a tension IMHO between perceiving reality correctly (and consequently avoiding the sources of bias I just listed) and having plenty of motivation.
So if you’re waiting for the bus and I kick you in the shin, there’s no harm?
Though the question was addressed to Adelene, I’ll give my answer. If you kick me in the shin hard enough to cause pain, then there is a non-negligible probability that the kick damaged bone, skin or such. Damage of that type is probably “cumulative” in that if enough damage occurs, my mobility will be permanently impaired. So, the kick in the shins will tend to reduce the amount of insult that part of my body can endure in the future, which reduces my behavioral options.
Now if I was waiting to be executed instead of waiting for the bus, and there was no chance of my avoiding execution, I (the current me) would be indifferent to whether you kicked me (the hypothetical, doomed me) in the shins. The reason I would be indifferent is that it is not going to change anything in the long term (since I will be dead by execution in the long term).
What I just said is “big-picture” true, but not true in detail. One detail that prevents its being completely true is that your kicking me in the shin might inspire in you a taste for kicking people in the shin, which I would prefer not to happen. Another detail is that my reputation will live on after my execution, and if onlookers observe your kicking me in the shin, it could conceivably affect my reputation.
If I am faced with a choice between A1 and A2 and both A1 and A2 lead eventually to the same configuration of reality (“state of affairs” as the philosophers sometimes say) then I am indifferent between A1 and A2 even if A1 causes me to experience pleasure and A2 causes me to experience pain. Why? Because subjective experiences (in themselves, not counting the conditions—of which there are quite a few—that correlate with the subjective experiences) are impermanent, and my reason tells me that impermanent things are important only to the extent that they have permanent effects. (And by hypothesis, the kick in the shins in our latest thought experiment has no permanent effects.)
If reality was structured in such a way that the subjective experience of pain decremented some accumulator somewhere in reality, and if that accumulator could not at trivial cost be incremented again to cancel out the decrementing caused by the pain, well, then I would have to reconsider my position—unless I knew for sure that the contents of the accumulator will not have a permanent effect on reality.
This got a little long. I just wanted Adelene to know that not everyone here considers her comments on pain strange. (Also, apologies to those who have heard all this before more than once or twice in previous years on Overcoming Bias.)
If I am faced with a choice between A1 and A2 and both A1 and A2 lead eventually to the same configuration of reality (“state of affairs” as the philosophers sometimes say) then I am indifferent between A1 and A2 even if A1 causes me to experience pleasure and A2 causes me to experience pain. Why? Because subjective experiences (in themselves, not counting the conditions—of which there are quite a few—that correlate with the subjective experiences) are impermanent, and my reason tells me that impermanent things are important only to the extent that they have permanent effects. (And by hypothesis, the kick in the shins in our latest thought experiment has no permanent effects.)
According to several theories of cosmology, the end state of the universe is fixed: entropy will increase to maximum, and the universe will be in a state of uniform chaos. Therefore nothing we can do will have a truly permanent effect, as the final state of the universe will be the same regardless. Assuming that to be the case, are you really indifferent between being kicked in the shins and not being kicked in the shins, since the universe ends up the same either way?
Therefore nothing we can do will have a truly permanent effect, as the final state of the universe will be the same regardless. Assuming that to be the case, are you really indifferent between being kicked in the shins and not being kicked in the shins, since the universe ends up the same either way?
I wrote my reply to this (long) but meh, I’ll let it sit overnight before posting so I can review it again (with semi-fresh eyes).
According to several theories of cosmology, the end state of the universe is fixed: entropy will increase to maximum, and the universe will be in a state of uniform chaos. Therefore nothing we can do will have a truly permanent effect, as the final state of the universe will be the same regardless. Assuming that to be the case, are you really indifferent between being kicked in the shins and not being kicked in the shins, since the universe ends up the same either way?
I cannot make any strong statements about how my preferences would change if I learned for sure that I definitely cannot exert any permanent effect on reality. The question does not interest me. Also, the current me does not sympathize with any hypothetical me who is stuck in a reality on which he cannot exert any permanent effect. He is a non-person to me. (Yeah, I can be pretty callous towards future versions of myself. But it is not like I can rescue him from his (hypothetical) predicament.)
Finally, and this is nothing personal, Doug, but I will probably not take the time to answer future questions from you on this subject because I have resolved to stop trying to convert anyone to any particular moral position or system of valuing things, and this question I just answered pulled me back into that frame of mind for a couple of hours.
Finally, and this is nothing personal, Doug, but I will probably not take the time to answer future questions from you on this subject because I have resolved to stop trying to convert anyone to any particular moral position or system of valuing things, and this question I just answered pulled me back into that frame of mind for a couple of hours.
Expounding your moral position in response to a direct question is not proselytizing.
True. What I should have written is, Doug, please help me stop procrastinating! Please do not ask me any more questions for a while on morality or the fate of the universe.
I’ll note somewhat abstractly that while ‘expounding your moral position in response to a direct question’ is not proselytizing, it is certainly something that can pull one into that frame of mind. This is particularly the case when the direct questioning has persuasive motivation.
The objection is valid, so strike from grandparent what I quote above.
The decision is a decision but the claim appears false.
Finally, and this is nothing personal, Doug, but I will probably not take the time to answer future questions from you on this subject because I have resolved to stop trying to convert anyone to any particular moral position or system of valuing things, and this question I just answered pulled me back into that frame of mind for a couple of hours.
No problem. I was just confused because “avoid pain” seems to be a goal built into most neurologically intact people at a fairly basic level (even lizard brains try to avoid pain), and, in that context, seeing someone say that they don’t care to avoid transient pain strikes me as an extremely bizarre thing to say.
Finally, and this is nothing personal, Doug, but I will probably not take the time to answer future questions from you on this subject
No problem. I was just confused because . . .
seeing someone say . . . strikes me as an extremely bizarre thing to say.
And since you participated in the discussions in past years on Overcoming Bias on this subject, Doug, and since I’ve seen a lot of your comments about your psychology, I know the hugeness of the inferential distance I would have to bridge to remove your confusion.
Indeed, the only way I can make sense of that statement in the context of what I know about animal/human behavior is to hypothesize a disconnect between consciously expressed values and revealed preferences. Am I wrong in assuming that you’d have a hard time sticking your hand in a pot of boiling water in exchange for some “permanent” positive change in the universe? After all, there are a lot of lower-level brain systems that act to make sure that we don’t go around sticking our hands in boiling water. (I wouldn’t be too surprised to find out that you’d be willing to let yourself be tied down and have someone else stick your hand in boiling water in exchange for something you want, but the instincts that protect us from deliberate self-harm tend to be difficult to overcome.)
Am I wrong in assuming that you’d have a hard time sticking your hand in a pot of boiling water in exchange for some “permanent” positive change in the universe?
You are wrong. Whether my force of will (or indifference to pain) is sufficient to keep the hand in there for more than 8 seconds, I do not know.
But Doug, many soldiers have volunteered for combat missions knowing that the decision to volunteer adds much more expected pain to the rest of their lives than sticking a hand in boiling water would. Moreover, many cancer patients have chosen chemo knowing that the decision adds much more expected pain than sticking a hand in boiling water would. (Moreover, every chemo session requires a new dose of resolve since choosing the first chemo session does not enable any doctor or loved one to force the cancer patient to submit to a second session.)
BTW, learning that a person has enough indifference to pain (or force of will) to stick their hand in boiling water and keep it there for 8 seconds does not tell you very much about whether they can choose to stay motivated during long tedious projects, such as graduating from college or sticking to a diet. They are separate skills. Moreover, if your goal in life is as much happiness as possible or living as long as possible, then the ability to stay motivated during long tedious projects is much more useful than the ability to choose the painful option and then stick to the choice for a short length of time (e.g., long enough to volunteer for that combat mission or to sign your name to that Declaration of Independence).
You are wrong. Whether my force of will (or indifference to pain) is sufficient to keep the hand in there for more than 8 seconds, I do not know.
Okay. I would have expected you to flinch and hesitate before eventually succeeding in submerging your hand, but I’ll take your word for it.
But Doug, many soldiers have volunteered for combat missions knowing that the decision to volunteer adds much more expected pain to the rest of their lives than sticking a hand in boiling water would. Moreover, many cancer patients have chosen chemo knowing that the decision adds much more expected pain than sticking a hand in boiling water would.
You’re right about that; “avoid pain” certainly isn’t the only goal people have, and other motivations can certainly lead someone to choose more pain over less pain, but I was pretty sure that it tends to be up there with “eat when hungry” as a basic animal drive.
/me shrugs
Of course, I’m sure you know a lot more about yourself than I do, so I think we’ve exhausted the topic at hand.
because I have resolved to stop trying to convert anyone to any particular moral position or system of valuing things, and this question I just answered pulled me back into that frame of mind for a couple of hours.
Thanks for adding this clarification. I understand the need for restraint.
ETA: I just finished appreciating this addition when the grandchild struck it out.
This isn’t as much about one’s personal attitude to pain, but the morality of inflicting pain to someone else.
Adelene seems to be saying, roughly, that inflicting pain on someone else is morally neutral as long as there is no long-term harm like losing an eye or developing PTSD. That seems very much at odds with most conceptions of human morality I know of.
I agree with almost everything rhollerith said (most obvious exception: the pain from being kicked is probably more a warning that you’re in a situation that carries the danger of major damage than an indication of reduced capacity to take damage in the future) and would like to point out that the examples I gave are not the only possible ones. In your example, the risk of miscalculating and actually doing long-term damage is relevant, as is the psychological implication of being attacked by a stranger in public. Plus, as I discussed here, enough people have a goal of avoiding pain that you can safely assume that any random stranger has that goal, so inflicting pain on them is harmful in light of that.
Edit: It’d also be harmful to me to go around kicking people in the shins—I’d quickly get a reputation as dangerous, and people would become unwilling to associate with me or help me, and there’s a significant chance that I’d wind up jailed, which is definitely harmful.
For any concept, you can find a sufficiently rich context that makes the concept inadequate. The concept would be useful in simpler situations, but breaks down in more sophisticated ones. It’s still recognized in them, by the same procedure that allows one to recognize the concept where it is useful.
Good vs. evil thinking causes us to lower the value we place on a person’s opinion, or to dismiss it altogether, if we find out that person has behaved badly. We no longer wish to affiliate with such people, and furthermore we feel epistemically justified in dismissing them.
Sometimes this tendency will lead us to intellectual mistakes.
I got as far as “some things actually are good versus evil, we all know this, right?” at 4:00, and lost all respect for the man. I didn’t watch the rest.
Other than how we treat them, what’s the difference between a story and a theory or hypothesis?
Edit: I’m guessing from the downvote that I may’ve been misunderstood. The above question is not rhetorical; it’s intended to spark conversation.
It is fine to decide, after four minutes, that you don’t think its worth watching the rest of the lecture (I might not finish it either because it is directed to a non-specialist audience), but to tell us you “lost all respect for the man,” only shows that you were too quick to rush to judgment.
Based only on the five minutes of it I watched I know that he is making the exact same points (good vs. evil stories are curiosity stoppers), you accuse him (below) of missing.
My thought wasn’t that he wouldn’t have anything true to say. It was that if he’s still defending good and evil as obviously existing, in that context, he’s far enough behind me on the issue that I can safely assume that he doesn’t have anything major to teach me, and that what he says is untrustworthy enough (because there’s an obvious flaw in his thought process) that I’d have to spend an inordinate amount of time checking his logic before using even the parts that appear good—time that would be better spent elsewhere.
Many people here appear to have a similar epistemic immune response to people who bring up God in discussions of ethics. I’m surprised it’s considered an issue in this case.
It is often worthwhile to listen to intelligent people, even if they are fantastically wrong about basic facts of the very subject that they’re discussing. One often hears someone reasoning within a context of radically wrong assumptions. A priori, one would expect such reasoning to be almost wholly worthless. How could false premises lead to reliable conclusions?
But somehow, in my experience, it often doesn’t work that way. Of course, the propositional content of the claims will often be false. Nonetheless, within the system of inferences, substructures of inferences will often be isomorphic to deep structures of inferences following from premises that I do accept.
The moral reasoning of moral realists can serve as an example. A moral realist will base his moral conclusions on the assumption that moral properties (such as good and evil) exist independently of how people think. His arguments, read literally, are riddled with this assumption through-and-through. Nonetheless, if he is intelligent, the inferences that he makes often map to highly nontrivial, but valid, inferences within my own system of moral thought. It might be necessary to do some relabeling of terms. But once I learn the relabeling “dictionary”, I find that I can learn highly nontrivial implications of my premises by translating the implications that the realist inferred from his premises.
Interesting idea. I’m not sure I completely understand it, though. Could you give an example?
Here’s a made-up example. I chose this example for simplicity, not because it really represents the kind of insight that makes it worthwhile to listen to someone.
Prior to Darwin, many philosophers believed that the most fundamental explanations were teleological. To understand a thing, they held, you had to understand its purpose. Material causes were dependent upon teleological ones. (For example, a thing’s purpose would determine what material causes it was subjected to in the first place). These philosophers would then proceed to use teleology as the basis of their reasoning about living organisms. For example, on seeing a turtle for the first time, they might have reasoned as follows:
Premise 1: This turtle has a hard shell.
Premise 2: The purpose of a hard shell is to deflect sharp objects.
Conclusion: Therefore, this turtle comes from an environment containing predators that attack with sharp objects (e.g., teeth).
But, of course, there is something deeply wrong with such an explanation. Insofar as a thing has a purpose, that purpose is something that the thing will do in the future. Teleology amounts to saying that the future somehow reached back in time and caused the thing to acquire properties in the past. Teleology is backwards causation.
After Darwin, we know that the turtle has a hard shell because hard shells are heritable and helped the turtle’s ancestors to reproduce. The teleological explanation doesn’t just violate causality—it also ignores the real reason that the turtle has a shell: natural selection. So the whole argument above might seem irredeemably wrong.
But now suppose that we introduce the following scheme for translating from the language of teleology to Darwinian language:
“The purpose of this organism’s having property X is to perform action Y.”
becomes
“The use of property X by this organism’s ancestors to perform action Y caused this organism to have property X.”
Applying this scheme to the argument above produces a valid and correct chain of reasoning. Moreover, once I figure out the scheme, I can apply it to many (but not all) chains of inferences made by the teleologist to produce what I regard to be correct and interesting inferences. In the example above, I only applied the translation scheme to a premise, but sometimes I’ll get interesting results when I apply the scheme to a conclusion, too.
Of course, not all inferences by the teleologist will be salvageable. Many will be inextricably intertwined with false premises. It takes work to separate the wheat from the chaff. But, in my experience, it often turns out to be worth the effort.
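To make the mechanical nature of this kind of “relabeling dictionary” concrete, here is a minimal sketch in Python. The template strings and the translate_teleology function are my own illustrative inventions, not anything from the argument above; the point is only that the form of the statement is preserved while what it asserts changes.

```python
# Toy "relabeling dictionary": rewrite a teleological claim about a trait into
# the corresponding Darwinian claim, keeping the X and Y slots intact.
# The function name and template wording are illustrative only.

def translate_teleology(trait: str, action: str) -> dict:
    teleological = (
        f"The purpose of this organism's having {trait} is to {action}."
    )
    darwinian = (
        f"The use of {trait} by this organism's ancestors to {action} "
        f"caused this organism to have {trait}."
    )
    return {"teleological": teleological, "darwinian": darwinian}

if __name__ == "__main__":
    # Premise 2 of the turtle example, before and after translation.
    for label, sentence in translate_teleology(
        "a hard shell", "deflect sharp objects"
    ).items():
        print(f"{label}: {sentence}")
```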
Good thing I asked; that wasn’t what I originally thought you meant. It’s similar enough to translating conversational shorthand that I probably already do that occasionally without even realizing it, but it’d be good to keep it in mind as a tool to use purposely. Thanks. :)
I’m curious: What did you think I meant?
I probably shouldn’t have used the term “translation”. Part of my point is that the “translation” does not preserve meaning. Only the form of the inference is preserved. The facts being asserted can change significantly, both in the premises and in the conclusion. (In my example, only the assertions in the premises changed.) In general, the arguer no longer agrees with the inference after the “translation”. Moreover, his disagreement is not just semantic.
I’d somehow gotten the idea that you were talking about taking the proposed pattern of relationships between ideas and considering its applicability to other, unrelated ideas. As an extremely simple example, if the given theory was “All dogs are bigger than cats”, make note of the “all X are bigger than Y” idea, so it can be checked as a theory in other situations, like “all pineapples are bigger than cherries”. That seems like a ridiculously difficult thing to do in practice, though, which is why I thought you might have meant something else.
Regarding ‘translation’, yep, I get it.
That’s not a good heuristic. There are a lot of people—Eliezer would name Robert Aumann, I think—who are incredibly bright, highly knowledgeable, and capable of conveying that knowledge who are wrong about the answers to what some of us would consider easy questions.
Now, I know Berserk Buttons (warning: TV Tropes) as well as anyone, and I’ve dismissed some works of fiction which others have considered quite good (e.g. Alfred Bester’s The Demolished Man, the TV sitcom Modern Family) because they pushed those buttons, but when it comes to factual information, even stupid people can teach you.
(Granted, you may be right about the worthlessness of this particular speech to you—I haven’t watched it. But the heuristic is poor.)
The heuristic isn’t widely applicable, but I disagree about it being poor altogether. As I pointed out above, it’s not just that he defended good vs. evil. It’s that he did it in the context of a presentation on a subtopic of how we conceptualize the world. He may have things to teach me in other areas, obviously.
That’s why I compared it to someone bringing God into a discussion on ethics specifically. (Or, say, evolution.) That person may be brilliant at physics, but on the topic at hand, not so much.
It also occurs to me that this heuristic may be unusually useful to me because of my neurology. It does seem to take much more time and effort for me to deconstruct and find flaws in new ideas presented by others, compared to most people, and because of the extra time, there’s a risk of getting distracted and not completing the process. It’s enough of an issue that even a flawed heuristic to weed out bad memes is (or, feels—I’m not sure how one would actually test that) useful.
Okay, I’ll grant you that. It’s better to have a sufficiently strict filter that loses some useful information than a weaker filter which lets in garbage data. I would presume (or, at least, advise) that you make a particular effort to analyze data which you previously rejected but which remains widely discussed, however—an example from my own experience being Searle’s Chinese Room argument. Such items should be uncommon enough.
Agreed.
Quickly judging people as not worth listening to is a fabulous heuristic, especially given the Internet explosion of available alternatives.
But sharing such judgment risks offending people who didn’t make the same cut.
Following such a heuristic doesn’t at all mean making strong high-certainty judgments.
I agree, but the fantastic thing is that you lose so little when you reject too hastily. If the ideas you ignored turn out to be useful and true, someone you’re willing to listen to will advocate them eventually.
That works if you assiduously, diligently, and without flaw start paying attention after no more than the third time you hear the idea advocated, and without using the idea itself to judge as untrustworthy those who otherwise seem competent.
In practice, people usually reject the idea itself and go on rejecting it, when they claim to be acting under cover of rejecting people. Consider those who die of rejecting cryonics; consider what policy they would have to follow in order to not do that. What good is it to quickly reject bad ideas if you quickly reject good ideas as well? Discrimination is the whole trick here.
I suppose we might have no recourse but to judge people and shut our ears to most of them, in the Internet age, but to say that we “lose so little” far understates the danger of a very dangerous policy.
I agree that people often don’t make the necessary distinction between ideas they have evidence against, and unevaluated ideas they’ve been ignoring because they’ve only heard them advocated by kooks. As you point out, only ideas in the former category properly discredit their advocates.
There’s more than just the one non-failure mode to this kind of thing. My method involves taking the time to consider the information gathered up to the point where I decided to stop listening to the person, as if I hadn’t stopped listening to them at all. Information that I would’ve gotten from them after that point isn’t affected by my opinion of them, since I haven’t heard it (whereas it would be, if I were distracted by thinking ‘this person’s an idiot’ as I listened to them), and I give as fair a trial as I can to the rest.
It may also be noteworthy that I didn’t judge him for an argument he was making, and I make something of a point of not doing so unless the logic being used is painfully bad. (Tangential realization: That’s why activists who aren’t willing to have any 101-level discussions with newbies get a (mild) negative reaction from me; discarding a whole avenue of discourse like that cuts off a valuable, if noisy, source of information.)
Strength of emotional response and certainty of the underlying heuristic’s accuracy aren’t the same thing. It may not’ve been clear that I was reporting the former, but I was, and one of the possible responses to that comment that I was prepared for was “yes, but he went on to make this good point...”.
I figured that out, but bringing it up seemed like it would just compound the problem.
Based on the first five minutes, the whole point of his lecture is that stories, explicitly including but not limited to those framed as good vs. evil, are often dangerous oversimplifications.
I’m telling you, as someone who has read quite a lot by Tyler Cowen, that he is not as naive about good and evil as you seem to think. You’ve read too much into the one sentence you’ve quoted.
Are no things actually good vs. evil? Say, Schindler vs. the S.S.?
I’m not sure I can answer this coherently; I came to the conclusion that good and evil are not objectively real, or even useful concepts, long enough ago that I can’t accurately recreate the steps that got me there.
I do occasionally have conversations with people who use those words, and mentally translate ‘good’ (in that sense) to ‘applause light-generating’ and ‘evil’ to ‘revulsion-generating’, ‘unacceptable in modern society’, and/or ‘considered by the speaker to do more harm than good’, in estimated order of frequency of occurrence. (I often agree that things labeled evil do more harm than good, but if the person doing the ‘evil’ thing agreed, they wouldn’t be doing it, so it’s obviously at least somewhat debatable.) I don’t use the word ‘evil’ at all, myself, and don’t use ‘good’ in the good-vs.-evil sense.
Those words are also curiosity-stoppers—it’s not very useful to label an action or viewpoint as ‘evil’; it’s much more useful to explore why the person doing that thing or holding that attitude believes that it’s correct. Likewise, labeling something as ‘good’ reduces the chance of thinking critically about it, and noticing flaws or areas that could be improved.
...
Do more what than what?
I haven’t gotten around to deconstructing those terms yet, but off the top of my head:
A ‘harmful to X’ action is one that has a long-term effect on it that reduces its ability to function. Examples:
Taking some RAM out of a computer
Adding grit to a machine, increasing the rate at which it wears out
Injuring a person in such a way that they lose use of a body part, or develop PTSD, or are in ongoing pain (because ongoing pain reduces their ability to function, not because pain is intrinsically harmful)
Extremist activism, where doing so makes the movement less credible and decreases the rate at which more sensible activists can create change. (I assume here and below that, disregarding the extremism, the activism is promoting good in the sense at hand.)
A ‘good for X’ action, in this sense (‘helpful’ would be a better word), is one that has a long-term effect on it that increases its ability to function. Examples:
Adding RAM to a computer
Performing maintenance on a machine
Teaching a person, giving them medical help, establishing a relationship with them such that they can approach you for advice or help in the future
Extremist activism, where doing so moves the Overton window.
The question isn’t usually whether an action does harm or good or both. The question is how much importance to give to the various harms and goods involved.
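As a rough illustration of that weighing step, here is a toy sketch. The Effect record, the weights, and the numbers are all invented for the example and are not a claim about how anyone actually assigns importance; the charging-for-a-class case is the one mentioned further down the thread.

```python
# Toy model of the framing above: an action has several effects on a target's
# ability to function, each with a sign (help vs. harm) and a weight for how
# much the evaluator cares about it. All names and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class Effect:
    description: str
    delta_functionality: float  # positive helps the target function, negative harms it
    importance: float           # weight the evaluator gives this particular effect

def net_assessment(effects):
    """Weighted sum of effects; the hard part is choosing the weights, not the arithmetic."""
    return sum(e.delta_functionality * e.importance for e in effects)

charging_for_a_class = [
    Effect("student learns a skill", +1.0, 0.8),
    Effect("student pays money they could have used elsewhere", -0.3, 0.5),
]
print(net_assessment(charging_for_a_class))  # about 0.65: more helpful than harmful on balance
```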
Any definition of harmful that doesn’t include, say, electro-shock torture and water-boarding is a really, really bad definition.
Hint: Pain really is intrinsically harmful.
Those come under ‘injuring in such a way as to cause the person to develop PTSD’, and no, it’s not.
Plenty of people have been tortured and not ended up with PTSD. Moreover, we classify instances of those things as harmful long before the DSM even lets us diagnose PTSD.
Also, there are approximately fifty arguments in that post and its comments, none demonstrating that pain isn’t intrinsically harmful, so I really have no idea what you want me to take away from that link.
PTSD or other long-term psychological (or physical) impairment, then—which may be sub-clinical or considered normal. An example: Punishment causes a psychological change that reduces the person’s ability to do the thing that they were punished for. We don’t (to the best of my knowledge) have a name for that change, but it observably happens, and when it does, the punishment has caused harm. (It may also be helpful, for example if the punished action would have reduced the person’s ability to function in other ways. The two aren’t always mutually exclusive. Compare it to charging someone money for a class—teaching is helpful, taking their money is harmful.)
Also, I do believe that there could be situations where someone is tortured and doesn’t experience a long-term reduction in functionality, in which case, yes, the torture wasn’t harmful. The generalization that torture is harmful is useful because those situations are rare, and because willingness to attempt to harm someone is likely to lead to harm, and should be addressed as such.
The most relevant point in the discussion of pain is right at the beginning—people who don’t experience any pain tend to have very short or very difficult lives. That makes it obvious that being able to experience pain is useful to the experiencer, rather than net-harmful. So, even though some pain is observably harmful, some pain must be helpful enough to make up the difference. That doesn’t jibe with ‘pain is intrinsically harmful’, unless you’re using a very different definition of the word, in which case I request that you clarify how you’re defining it.
Well, anyone else who thinks this is wrong feel free to modus tollens away the original definition. …
I was hoping to make my point by way of counterexample. Since you’re not recognizing the counterexample, I have to go back through the whole definition and the context to see where we lost each other. But that’s a mess to do because right now this is a semantic debate. To make it not one, I need the cash value of your belief that something is harmful. Do you always try to avoid harm to yourself? Is something being harmful necessary for you to avoid it/avoid doing it to others? Is it sufficient? Does this just apply to you? All humans? AI? Animals? Plants? Thermostats? Futons? Is something other than help and harm at work in your decision making? You don’t have to answer all of these, obviously, just give me an idea of what I should see if something is harmful so I can actually check to see if your definition works. Otherwise you can’t be wrong.
Then we can see if “causing decreased functionality” leads to the right response in all the circumstances. For example, I think there are times where people want to limit their net functionality and are right to do so even and especially when they know what they’re doing.
No; if I can help someone else (or my future self) enough by harming myself or risking harm, I’ll do so. Example: Giving a significant sum of money to someone in need, when I don’t have an emergency fund myself.
No, there are other reasons that I avoid doing things, such as to avoid inconvenience or temporary pain or offending people.
I use a modified version of the logic that I use to determine whether I should harm myself to decide whether it’s worth it to harm others. I generally try to err on the side of avoiding harming others, because it’s harder to estimate the effect that a given harm will have on their life than it is to estimate its effect on mine.
My definition is meant to be general enough to cover all of those, but in each case the meaning of ‘function’ has to be considered. Humans get to determine for themselves what it means to function. AIs’ functions are determined by their programmers (not necessarily intentionally). In practice, I consider animals on a case-by-case basis; as an omnivore, it’d be hypocritical of me to ignore that I consider the function of chickens to be ‘become tasty meat’, but I generally consider pets and wild animals to determine their own functions. (A common assigned function for pets, among those who do assign them functions, is ‘provide companionship’. Some wild animals are assigned functions, too, like ‘keep this ecosystem in balance’ or ‘allow me to signal that I care about the environment, by existing for me to protect’.) I lump plants in with inanimate and minimally-animate objects, whose functions are determined by the people owning them, and can be changed at any time—it’s harmful to chop up a futon that was intended to be sat on, but chopping up an interestingly shaped pile of firewood with some fabric on it isn’t harmful.
In a first-order sense, yes, but in each case that I can think of at the moment, the reason behind the other thing eventually reduces to reducing harm or increasing help. Avoiding temporary pain, for example, is a useful heuristic for avoiding harming my body. Habitually avoiding temporary inconveniences leaves more time for more useful things and helps generate a reputation of being someone with standards, which is useful in establishing the kind of social relationships that can help me. Avoiding offending people is also useful in maintaining helpful social relationships.
So if I am in extraordinary pain, it would never be helpful/not-harmful for me to kill myself or for you to assist me?
Also, where does fulfilling your function fit into this? Unless your function is just increasing functionality.
Finally, I guess you’re comfortable with the fact that the function of different things is determined in totally different ways? Some things get to determine their own function while other things have people determine it for them? As far as I can tell, people determine the function of tools, and this notion that people determine their own function, while true in a sense, is just Aristotelian natural law theory rearing its ugly head. It used to be that we had purposes because God created us and instilled one in us. But if there is no God, it seems that the right response is to conclude that purpose, as it applied to humans, was a category error, not that we decide our own purpose.
Reminder:
This is beta-version-level thought. It isn’t surprising that it still has a few rough spots or places where I haven’t noticed that I need to explain one thing for another to make sense.
Function as I’m intending to talk about it isn’t something you fulfill, it’s an ability you have: The ability to achieve the goals you’re interested in achieving. Those goals vary not just from person to person, but also with time, whether they’re achieved or not. Also, people do have more than one goal at any given time.
I have used the word ‘function’ in the other sense, above, mistakenly. I’ll be more careful.
There are two overlapping types of situations that are relevant; if either one of them is true, then it’s helpful to assist the person in avoiding/removing the pain. One is that the person has ‘avoid pain’ as a relevant goal in the given instance, and helping achieve that goal doesn’t interfere with other goals that the person considers more important. The other is that the pain is signaling harmful damage to the person’s body. There are situations that don’t come under either of those umbrellas—certain BDSM practices, where experiencing pain is the goal, for example, or situations where doing certain things evokes pain but not actual (relevant to the individual’s goals) harm, and the only way to avoid the pain is to give up on accomplishing more important goals, which is common in certain disabilities and some activities like training for a sport or running a marathon.
Whether suicide would be considered helpful or harmful in a given situation is a function of the goals of the person considering the suicide. If you’re in a lot of pain, have an ‘avoid pain’ goal that’s very important, and don’t have other strong goals or the pain (or underlying damage causing the pain) makes it impossible for you to achieve your other strong goals, the answer is fairly obvious: It’s helpful. If your ‘avoid pain’ goal is less important, or you have other goals that you consider important and that the pain doesn’t prevent you from achieving, or both, it’s not so obvious. Another relevant factor is that pain can be adapted to, and new goals that the pain doesn’t interfere with can be generated. I leave that kind of judgment call up to the individual, but tend to encourage them to think about adapting, and take the possibility into account before making a decision, mostly because people so often forget to take that into account. (Expected objection: Severe pain can’t be adapted to. My response: I know someone who has. The post where she talks about that in particular is eluding me at the moment, but I’ll continue looking if you’re interested.)
If it weren’t illegal or if there was a very low chance of getting caught, I’d be comfortable with helping someone commit suicide, if they’d thought the issue through well, or in some cases where the person would be unable to think it through. I know not everyone thinks about this in the same way that I do: ‘If they’ve thought the issue through well’ doesn’t mean ‘if they’ve fulfilled the criteria for me to consider the suicide non-harmful’. Inflicting my way of thinking on others has the potential to be harmful to them, so I don’t.
There’s an underlying concept there that I failed to make clear. When it comes to accomplishing goals, it works best to consider an individual plus their possessions (including abstract things like knowledge or reputation or, to a degree, relationships) as a single unit. One goal of the unit ‘me and my stuff’ is to maintain a piece of territory that’s safe and comfortable to myself and guests. My couch is functional (able to accomplish the goal) in that capacity, specifically regarding the subgoal of ‘have comfortable places available for sitting’. My hands are similarly functional in that capacity, though obviously for different subgoals: I use them to manipulate other tools for cleaning and other maintenance tasks and to type the programs that I trade for the money I spend on rent, for example.
(This is based on the most coherent definition of ‘ownership’ that I’ve been able to come up with, and I’m aware that the definition is unusual; discussion is welcome.)
I think I’ve already made it clear in this comment that this isn’t the concept I’m working with. The closest I come to this concept is the observation that people (and animals, and possibly AI) have goals, and since those goals are changeable and tend to be temporary (with the possible exception of AIs’ goals), they really are something entirely different. I also don’t believe that there’s any moral or objective correctness or incorrectness in the act of achieving, failing at, or discarding a goal.
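For what it’s worth, one compact way to restate the “individual plus their stuff” framing from a few paragraphs up is as a single record whose possessions only ever count as functional relative to the unit’s goals. The class and field names below are my own paraphrase of that idea, offered purely for illustration.

```python
# Sketch of the "person plus possessions" unit described above: goals belong to
# the unit, and each possession is functional only relative to those goals.
# The field names and example goals are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Unit:
    person: str
    goals: set = field(default_factory=set)
    possessions: dict = field(default_factory=dict)  # thing -> set of goals it serves

    def functional_for(self, thing, goal):
        """A possession counts as functional only with respect to a goal the unit actually has."""
        return goal in self.goals and goal in self.possessions.get(thing, set())

me = Unit(
    person="me",
    goals={"maintain comfortable territory", "earn rent money"},
    possessions={
        "couch": {"maintain comfortable territory"},
        "hands": {"maintain comfortable territory", "earn rent money"},
    },
)
print(me.functional_for("couch", "maintain comfortable territory"))  # True
print(me.functional_for("couch", "earn rent money"))                 # False
```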
Found it, or a couple related things anyway. This is a post discussing the level of pain she’s in, her level of adaptation, and people’s reactions to those facts, and this is a post discussing her opinion on the matter of curing pain and disability. I remember there being another, clearer post about her pain level, but I still haven’t found that, and I may be remembering a discussion I had with her personally rather than a blog post. She’s also talked more than once about her views on the normalization of suicide, assisted suicide, and killing for ‘compassionate’ reasons for people with disabilities (people in ongoing severe pain are inferred to be included in the group in question), though she usually just calls it murder (her definition of murder includes convincing someone that their life is so worthless or burdensome to others that they should commit suicide) - browsing this category turns up several of the posts.
Sure. I don’t mean to come on too forcefully.
So help and harm aren’t the only things in your decision-making, there are also goals. What is the relation between the two? Why can’t the help-harm framework be subsumed under “goals”? This question is especially salient if “goals” is just going to be the box where you throw in everything relevant to decision making and ethics that doesn’t fit with your concepts of help and harm.
Also, something that might make where I’m coming from clearer: when I was using “pain” before I meant it generally, not just in a physical sense. So I just read these examples about BDSM or needing pain to avoid early death as cases of one kind of pain leading to another kind of pleasure or preventing another kind of pain. I’ll use the word “suffering” in the future to make this clearer. This might make the claim that pain is inherently harmful seem more plausible to you.
So all my belongings share my goals? It seems pretty bizarre to attribute goals to any inanimate object, much less give them the same goals their owner has. It also would be really strange if the fundamentals of decision-making involved a particular and contingent socio-political construction (i.e. ownership and property). It also seems to me that possessing a reputation and possessing a car are two totally different things, and that the fact that we use the same verb “have” to refer to someone’s reputation or knowledge is almost certainly a linguistic accident (maybe someone who speaks more languages can confirm this). So yes, I’d like to read a development of this notion of ownership if you want to provide one.
A world in which 90% of goals were achieved wouldn’t be better than a world in which only 10% were achieved? Is a world where there is greater functionality better than a world in which there is less functionality? We might have to step back further and see if we even agree on what morality is.
‘Helpful’ and ‘harmful’ are words that describe the effect of an action or circumstance on a person(+their stuff)’s ability to achieve a goal.
This question doesn’t make sense to me - ‘help and harm’ and ‘goals’ are two very different kinds of things, and I don’t see how they could be lumped together into one concept.
This area of thought is complex, and my way of approaching it does view a lot of that complexity as goal-related, but the point of that isn’t to handwave the complexity, it’s to put it where I can see it and deal with it.
Suffering is experiencing pain that the experiencer wanted to avoid—pain that’s addressed by their personal ‘avoid pain’ goal. We don’t disagree about those instances of experienced pain being harmful. Not all experiences of pain are suffering, though.
Also, people with CIPA are unable to experience the qualia of pain just as people who are blind are unable to experience the qualia of color. If you’re considering the injuries they sustain as a result painful, we’re not defining the word in the same way.
It’s only bizarre if you’re considering the owned thing and the owner as separate entities. Considering them the same for some purposes (basically anything involving the concept of ownership as our society uses it) is the only way of looking at it that I’ve found that adds back up to normality.
Does your body share your goals?
It’s a function of discussing owned things, not of discussing decision-making. Un-owned things that aren’t alive don’t have goals. (Or they may in a few cases retain the relevant goal of the person who last owned them—a machine will generally keep working when its owner dies or loses it or disowns it.)
The difference between those concepts may be relevant elsewhere, but I don’t think it’s relevant in this case.
I’ll put it on my to-do list.
It depends what you mean by ‘better’, and for the most part I choose not to define or use the word except in situations with sufficient context to use it in phrases like ‘better for’ or ‘better at’.
So if you’re waiting for the bus and I kick you in the shin, there’s no harm? (there would be some ongoing pain for a little while, but with no impact on what you’re doing—waiting for the bus)
Adelene’s attitude as illustrated in this thread towards pain resembles my own.
I do not assign any intrinsic value to avoiding pain (or experiencing pleasure). (I am unsure whether Adelene goes this far.)
I must stress though that pain (and pleasure) are indispensable approximations or “predictors” for various (instrumental) values. If I had a Richard-friendly superintelligence as my constant companion, I could ignore the informational value of my pain (pleasure) sensations because I could consult the superintelligence to predict the long-term effects of the various actions I contemplate, but the way it is now, it is too expensive or impossible for me to estimate certain instrumental values (mostly around staying healthy) unless I consult my pain (and pleasure) sensations.
Moreover, I must stress that there are quite a few things that correlate with pain. Pain for example is a strong sign that I am in a mental state not conducive to learning or to the careful consideration of many factors (such as is necessary to do a good job at fixing a computer program). I do not have complete control over the mental machinery that allows me to program computers, etc. I cannot for example choose to put myself in the mental state that I know to be most conducive to, e.g., computer programming while enduring certain conditions that tend to cause pain.
So, that is one thing that has not yet been mentioned in this thread that correlates with pain. Here is another. I probably cannot stay motivated to work hard at something unless I regularly take pleasure from that work (or at the least I have a realistic expectation of future pleasure resulting from the work). I do not (usually—see next paragraph) take that to mean that what I really care about is pleasure. Rather, I take that to mean that I have imperfect control over the means (e.g., my mind) by which I get my work done and that in particular, one of the circumstances that might prevent me from achieving what I really care about is that there is no way for me to stay motivated to do the things I would need to do to achieve the things I really care about—because my neurology just does not allow that (even though that would get me what I really care about).
Like many people in this day and age, I wish I had more motivation, that is, I wish my actual behavior was more in line with the policies and goals I have set for myself. In fact, my motivation has become so unreliable and so weak that I have entered upon an experiment in which I assume that life really is all about pleasure—or to be more precise, all about the search for something I care enough about so that protecting it or pursuing it is “naturally plenty motivating”. Nevertheless, this experiment is something I started only this year whereas the attitude towards pain (and pleasure) I describe below has been my attitude since 1992.
Moreover, the attitude toward pain (and pleasure) I describe below still strikes me as the best way to frame most high-stakes situations when it is important to avoid the natural human tendency towards self-deception, to avoid wrongly mistaking personal considerations for global considerations or to see past the cant, ideology and propaganda about morality that bombard every one of us in this day and age. There is in human nature a tension IMHO between perceiving reality correctly (and consequently avoiding the sources of bias I just listed) and having plenty of motivation.
Though the question was addressed to Adelene, I’ll give my answer. If you kick me in the shin hard enough to cause pain, then there is a non-negligible probability that the kick damaged bone, skin or such. Damage of that type is probably “cumulative” in that if enough damage occurs, my mobility will be permanently impaired. So, the kick in the shins will tend to reduce the amount of insult that part of my body can endure in the future, which reduces my behavioral options.
Now if I was waiting to be executed instead of waiting for the bus, and there was no chance of my avoiding execution, I (the current me) would be indifferent to whether you kicked me (the hypothetical, doomed me) in the shins. The reason I would be indifferent is that it is not going to change anything in the long term (since I will be dead by execution in the long term).
What I just said is “big-picture” true, but not true in detail. One detail that prevents its being completely true is that your kicking me in the shin might inspire in you a taste for kicking people in the shin, which I would prefer not to happen. Another detail is that my reputation will live on after my execution, and if onlookers observe your kicking me in the shin, it could conceivably affect my reputation.
If I am faced with a choice between A1 and A2 and both A1 and A2 lead eventually to the same configuration of reality (“state of affairs” as the philosophers sometimes say) then I am indifferent between A1 and A2 even if A1 causes me to experience pleasure and A2 causes me to experience pain. Why? Because subjective experiences (in themselves, not counting the conditions—of which there are quite a few—that correlate with the subjective experiences) are impermanent, and my reason tells me that impermanent things are important only to the extent that they have permanent effects. (And by hypothesis, the kick in the shins in our latest thought experiment has no permanent effects.)
If reality was structured in such a way that the subjective experience of pain decremented some accumulator somewhere in reality, and if that accumulator could not at trivial cost be incremented again to cancel out the decrementing caused by the pain, well, then I would have to reconsider my position—unless I knew for sure that the contents of the accumulator will not have a permanent effect on reality.
This got a little long. I just wanted Adelene to know that not everyone here considers her comments on pain strange. (Also, apologies to those who have heard all this before more than once or twice in previous years on Overcoming Bias.)
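The decision rule sketched in the comment above can be written down very compactly: rank actions only by the long-run configuration they lead to, and ignore transient experience along the way. The encoding below, including the state values, is my own toy rendering of that rule using the kick-in-the-shin example from this thread, not anything its author has endorsed.

```python
# Toy encoding of the rule above: preferences depend only on the long-run state
# an action leads to, so two actions with identical final states are ranked the
# same regardless of transient pain or pleasure. State values are placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    transient_pain: float  # deliberately ignored by this preference rule
    final_state: str       # the long-run configuration the action leads to

def state_value(state):
    return {"same as before": 0.0}.get(state, 0.0)

def prefer(a1, a2):
    v1, v2 = state_value(a1.final_state), state_value(a2.final_state)
    if v1 == v2:
        return "indifferent"  # even if one option hurts and the other does not
    return a1.name if v1 > v2 else a2.name

kick = Action("get kicked in the shin", transient_pain=5.0, final_state="same as before")
wait = Action("wait for the bus undisturbed", transient_pain=0.0, final_state="same as before")
print(prefer(kick, wait))  # indifferent
```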
According to several theories of cosmology, the end state of the universe is fixed: entropy will increase to maximum, and the universe will be in a state of uniform chaos. Therefore nothing we can do will have a truly permanent effect, as the final state of the universe will be the same regardless. Assuming that to be the case, are you really indifferent between being kicked in the shins and not being kicked in the shins, since the universe ends up the same either way?
I wrote my reply to this (long) but meh, I’ll let it sit overnight before posting so I can review it again (with semi-fresh eyes).
Also, on a more personal level, even if the universe doesn’t die, you will...
I cannot make any strong statements about how my preferences would change if I learned for sure that I definitely cannot exert any permanent effect on reality. The question does not interest me. Also, the current me does not sympathize with any hypothetical me who is stuck in a reality on which he cannot exert any permanent effect. He is a non-person to me. (Yeah, I can be pretty callous towards future versions of myself. But it is not like I can rescue him from his (hypothetical) predicament.)
Finally, and this is nothing personal, Doug, but I will probably not take the time to answer future questions from you on this subject because I have resolved to stop trying to convert anyone to any particular moral position or system of valuing things, and this question I just answered pulled me back into that frame of mind for a couple of hours.
Expounding your moral position in response to a direct question is not proselytizing.
True. What I should have written is, Doug, please help me stop procrastinating! Please do not ask me any more questions for a while on morality or the fate of the universe.
I’ll note somewhat abstractly that while ‘expounding your moral position in response to a direct question’ is not proselytizing, it is certainly something that can pull one into that frame of mind. This is particularly the case when the direct questioning has persuasive motivation.
The decision is a decision but the claim appears false.
No problem. I was just confused because “avoid pain” seems to be a goal built into most neurologically intact people at a fairly basic level (even lizard brains try to avoid pain), and, in that context, seeing someone say that they don’t care to avoid transient pain strikes me as extremely bizarre.
And since you participated in the discussions in past years on Overcoming Bias on this subject, Doug, and since I’ve seen a lot of your comments about your psychology, I know the hugeness of the inferential distance I would have to bridge to remove your confusion.
Indeed, the only way I can make sense of that statement in the context of what I know about animal/human behavior is to hypothesize a disconnect between consciously expressed values and revealed preferences. Am I wrong in assuming that you’d have a hard time sticking your hand in a pot of boiling water in exchange for some “permanent” positive change in the universe? After all, there are a lot of lower-level brain systems that act to make sure that we don’t go around sticking our hands in boiling water. (I wouldn’t be too surprised to find out that you’d be willing to let yourself be tied down and have someone else stick your hand in boiling water in exchange for something you want, but the instincts that protect us from deliberate self-harm tend to be difficult to overcome.)
You are wrong. Whether my force of will (or indifference to pain) is sufficient to keep the hand in there for more than 8 seconds, I do not know.
But Doug, many soldiers have volunteered for combat missions knowing that the decision to volunteer adds much more expected pain to the rest of their lives than sticking a hand in boiling water would. Moreover, many cancer patients have chosen chemo knowing that the decision adds much more expected pain than sticking a hand in boiling water would. (Moreover, every chemo session requires a new dose of resolve since choosing the first chemo session does not enable any doctor or loved one to force the cancer patient to submit to a second session.)
BTW, learning that a person has enough indifference to pain (or force of will) to stick their hand in boiling water and keep it there for 8 seconds does not tell you very much about whether they can choose to stay motivated during long tedious projects, such as graduating from college or sticking to a diet. They are separate skills. Moreover, if your goal in life is as much happiness as possible or living as long as possible, then the ability to stay motivated during long tedious projects is much more useful than the ability to choose the painful option and then stick to the choice for a short length of time (e.g., long enough to volunteer for that combat mission or to sign your name to that Declaration of Independence).
Okay. I would have expected you to flinch and hesitate before eventually succeeding in submerging your hand, but I’ll take your word for it.
You’re right about that; “avoid pain” certainly isn’t the only goal people have, and other motivations can certainly lead someone to choose more pain over less pain, but I was pretty sure that it tends to be up there with “eat when hungry” as a basic animal drive.
/me shrugs
Of course, I’m sure you know a lot more about yourself than I do, so I think we’ve exhausted the topic at hand.
Thanks for adding this clarification. I understand the need for restraint.
ETA: I just finished appreciating this addition when the grandchild struck it out.
This isn’t so much about one’s personal attitude to pain as about the morality of inflicting pain on someone else.
Adelene seems to be saying, roughly, that inflicting pain on someone else is morally neutral as long as there is no long-term harm like losing an eye or developing PTSD. That seems very much at odds with most conceptions of human morality I know of.
I agree with almost everything rhollerith said (Most obvious exception: The pain from being kicked is probably more a warning that you’re in a situation that carries the danger of major damage than an indication of reduced capacity to take damage in the future.) and would like to point out that the examples I gave are not the only possible ones. In your example, the risk of miscalculating and actually doing long-term damage is relevant, as is the psychological implication of being attacked by a stranger in public. Plus, as I discussed here, enough people have a goal of avoiding pain that you can safely assume that any random stranger has that goal, so inflicting pain on them is harmful in light of that.
Edit: It’d also be harmful to me to go around kicking people in the shins—I’d quickly get a reputation as dangerous, and people would become unwilling to associate with me or help me, and there’s a significant chance that I’d wind up jailed, which is definitely harmful.
For any concept, you can find a sufficiently rich context that makes the concept inadequate. The concept would be useful in simpler situations, but breaks down in more sophisticated ones. It’s still recognized in them, by the same procedure that allows us to recognize the concept where it is useful.
A concept is only genuinely useless if there hardly are any contexts where it’s useful, not if there are situations where it isn’t. You are too eager to explain useful tools away by confronting them with existence proofs of insurmountable challenges, and with the older cousins that should be deployed on those challenges instead.
When you are worried about the fallacy of compression, that too many things interfere with each other when put in the same simplistic concept, remember that it’s a tradeoff: you necessarily place some not-identical things together, and necessarily become less accurate at tracking each of them than if you paid a little more attention just in this particular case. But on the overall scale, you can’t keep track of everything all the time, so whenever it’s feasible, any simplification should be welcome.
See also: least convenient possible world, fallacy of compression, scales of justice fallacy.
It’s getting more and more obvious that my neurology is a significant factor, here. I deal poorly with situations with some kinds of limited context; I seem to have never developed the heuristics that most people use to make sense of them, which is a fairly common issue for autistics. I don’t make the tradeoff you suggest as often as most people do, and I do tend to juggle more bits of information at any given time, because it’s the only way I’ve found that leads me, personally, to reasonably accurate conclusions. Instances where I can meaningfully address a situation with limited context are rare enough that tools to handle them seem useless to me.
I may need to work on not generalizing from one example about this kind of thing, though, to avoid offending people if nothing else.
Btw, see also Yvain’s The Trouble With “Good”.
Interesting post, but not terribly useful at first glance—it started with what sounded like a good description of how I work, diverged from how I do things at “But we are happy using the word ‘good’ for all of them, and it doesn’t feel like we’re using the same word in several different ways, the way it does when we use ‘right’ to mean both ‘correct’ and ‘opposite of left’.”, and wound up offering a different (though useful for dealing with others) solution to the problem than the very personally efficient one that I’ve been using for a few years now. I do actually feel the difference in the different meanings of ‘good’ (I haven’t cataloged them (I don’t see any personal usefulness in doing so—note that I don’t think in words in general), but I estimate at least half a dozen common meanings and several rarer ones), but that’s somewhat beside the point.
My fix for the presented problem involves the following heuristic: The farther from neutral my general opinion of a class of things is, the more likely it is to be incorrect in any given case. Generally, a generalized strong positive or negative opinion is a sign that I’m underinformed in some way—I’ve been getting biased information, or I haven’t noticed a type of situation where that kind of thing has a different effect than the one I’m aware of, or I haven’t noticed that it’s related to another class of things in some important way. The heuristic doesn’t disallow strong positive or negative generalized opinions altogether, but it does enforce a higher standard of proof on more extreme ones, and leads me to explore the aspects of things that are counter to my existing opinion in an attempt to reach a more neutral (and complex, which is the real goal) opinion of them. It still allows strong contextual reactions, too, which I haven’t yet seen a problem with, and which do appear to be generally useful.
Regarding the concepts of good (in the ‘opposite of evil’ sense) and evil, my apparent non-neutrality is personal (which is a kind of persistent context) - they’re more harmful than helpful in achieving the kinds of goals that I tend to be most interested in, like gaining a comprehensive understanding of real-world conflicts or coming to appropriately-supported useful conclusions about moral questions, and while they seem to be more helpful than harmful in the pursuit of other goals, like manipulating people (which I am neutral on, to a degree that most people I know find disturbing) and creating coherent communities of irrational people, I personally don’t consider those things relevant enough to sway my opinion. Disregarding the personal aspects, I think I have a near-neutral opinion of the existence of the concepts, but it’s hard to tell; I haven’t spent much time thinking about the issue on that scale.
Edit: And I believed that this group has similar-enough interests to generate the same kind of ‘personal’ context. I may have been wrong, but I thought that they were generally more harmful than helpful in solving the kinds of problems that are considered important here and by the kinds of individuals who participate here. Otherwise, I wouldn’t’ve mentioned the issue at all, like I usually don’t.
My reaction in the original comment was contextual, in both the personal sense and with regards to the type of presentation it was, which follows a very different set of heuristics than the ones I use to regulate general opinions, and allows strong reactions much more easily, but limits the effects of those reactions to the context at hand—perhaps in a much stricter way than you (plural) are assuming. I haven’t taken the time to note the presenter’s name (and I’m moderately faceblind and not good at remembering people by their voices), so even another presentation by the same person on the same topic will be completely unaffected by my reaction to this presentation.
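A toy rendering of the “distance from neutral” heuristic described above: treat a generalized opinion as a number between -1 and 1, and raise the evidence required to keep it as it moves away from neutral. The particular scaling function is invented purely to show the shape.

```python
# Opinions live on [-1, 1], with 0 as neutral; the evidential bar rises with
# distance from neutral. The scaling constants are arbitrary placeholders.

def evidence_required(opinion, base=1.0):
    return base * (1.0 + 3.0 * abs(opinion))

def keep_opinion(opinion, evidence):
    return evidence >= evidence_required(opinion)

print(keep_opinion(0.1, evidence=1.5))  # True: a mild generalization needs only modest support
print(keep_opinion(0.9, evidence=1.5))  # False: a strong generalization needs much more
```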
Rough definitions: good for agent X—net positive utility for agent X, evil for agent X—net negative utility for agent X. Or possibly: evil for agent X—a utility function that conflicts with that of agent X.
Good and evil don’t have to be “written into the structure of the universe” to be coherent concepts. I assume you make choices. What is your criterion for choice? I also assume that you aren’t completely selfish. You care about the welfare of other people at least to some degree right?
Of course, if two people/agents truly have differing utility functions, what is good to one may be evil to the other, but that doesn’t invalidate the concepts of good and evil.
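Read literally, those rough definitions classify an outcome by the sign of each agent’s utility for it. The sketch below does exactly that with made-up agents and numbers, and shows how the same outcome can come out “good” for one agent and “evil” for another without the concepts breaking.

```python
# Literal reading of the rough definitions above: good for X iff X's utility is
# positive, evil for X iff it is negative. Agents and numbers are made up.

def classify(outcome, utility):
    u = utility(outcome)
    if u > 0:
        return "good"
    if u < 0:
        return "evil"
    return "neutral"

u_alice = lambda outcome: {"factory built upstream": -2.0}.get(outcome, 0.0)
u_bob = lambda outcome: {"factory built upstream": 3.0}.get(outcome, 0.0)

outcome = "factory built upstream"
print(classify(outcome, u_alice))  # evil (for Alice)
print(classify(outcome, u_bob))    # good (for Bob)
```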
That’s not ‘good and evil’, just ‘desired and undesired’ - much milder and broader concepts.
I call an action “good” when it is what you should do - i.e. it has normative force behind it. This includes all choices. So, yes, it is a broader concept than traditional ‘goodness,’ but that’s fine.
I usually reserve “desired and undesired” to refer to the psychological impulses that we sometimes fight and sometimes go along with. I may desire that second piece of chocolate cake, but if I really think it through, I don’t really want to eat it—I shouldn’t eat it. The economist’s utility function probably refers to desires since the goal is to model actual behavior, but the ethicist’s utility function is built with a completely different goal in mind.
They cause harm to you, and good to the person doing it. Nothing to disagree about.
The discussions in question have generally been about the actions of third-parties in other parts of the world, which haven’t had any appreciable effect on my life (unless you count ‘taking thought-time away from other issues’ as an effect).
In cases where the discussion is about something that’s been done to me, I still don’t use the word ‘evil’, and I’ve actually been known to object to other people doing so in those cases. ‘Selfish’, ‘misguided’, ‘poorly informed’, ‘emotion driven’, and the like cover those situations much more usefully.
Then, ‘harm to someone’. Not necessarily to you. My point was that disagreement about the good/evil label doesn’t mean there’s disagreement about doing good or harm to someone.
I also felt annoyed by this and another political comment of his. I still watched the whole thing and think he has a great point. Like everything: keep the good and throw away the bad.
Cowen on “The limits of good vs. evil thinking”:
The difference between a weather forecast and a weather forecasting system, I’d guess. The latter are (often) used to generate the former.
Earthquakes are evil. Food that’s tasty, cheap, and healthy is good.
[nitpick] Aren’t earthquakes a necessary component of geologic processes that are/were essential for the formation and maintenance of life of Earth? [/nitpick]
This is ivory tower bullshit. I can type out a longer response, but it still boils down to the rocky core: This comment is just self-congratulatory bullshit.
EDIT: “He said something so dumb that I know every tangentially related thought he has will be worthless” is self-congratulatory bullshit. It is! That’s what it is.
I believe your objection is likely similar to mine—it would behoove you to address the text of her response to it.
Yes, thanks.
It is bullshit. I didn’t vote you down but I could have done without the random shot at academia. People in the ivory tower are usually smarter than this.
If you can’t be bothered to offer arguments to back up your opinion, just use the vote buttons.