ETA: I hate that I have to say this, but can people respond instead of just downvoting? I’m honestly curious as to why this particular post is controversial—or have I missed something?
I haven’t downvoted, for what it is worth. Sure, you may be an evil baby-killing advocate, but it’s not like I care!
I haven’t seen anyone respond to your request for feedback about votes, so let me do so, despite not being one of the downvoters.
By my lights, at least, your posts have been fine. Obviously, I can’t speak for the site as a whole… then again, neither can anyone else.
Basically, it’s complicated, because the site isn’t homogeneous. Expressing conventionally “bad” moral views will usually earn some downvotes from people who don’t want such views expressed; expressing them clearly and coherently and engaging thoughtfully with the responses will usually net you upvotes.
I think you may have taken me to be talking about whether it was acceptable or moral in the sense that society will allow it; that was not my intent. Society allows many unwise, inefficient things and no doubt will do so for some time.
My question was simply whether you thought it wise. If we do make an FAI, and encode it with some idealised version of our own morality, then do we want a rule that says ‘Kill everything that looks unlike yourself’? If we end up on the downside of a vast power gradient with other humans, do we want them thinking that everything that has little or no value to them should be on the chopping block?
In a somewhat more pithy form, I guess what I’m asking you is: Given that you cannot be sure you will always be strong enough to have things entirely your way, how sure are you this isn’t going to come back and bite you in the arse?
If it is unwise, then it would make sense to weaken that strand of thought in society—to destroy less out of hand, rather than more. That the strand is already quite strong in society would not alter that.
You did not answer me on the human question—how we’d like powerful humans to think.
No. But we do want a rule that says something like “the closer things are to being people, the more importance should be given to them”. As a consequence of this rule, I think it should be legal to kill your newborn children.
This sounds fine as long as you and everything you care about are and always will be included in the group of ‘people.’ However, by your own admission (earlier in the discussion, to wedrifid), you’ve defined people in terms of how closely they realise your ideology:
Extremely young children are lacking basically all of the traits I’d want a “person” to have.
You’ve made it something fluid: a matter of mood and convenience. If I make an AI and tell it to save only ‘people,’ it can go horribly wrong for you—maybe you’re not part of what I mean by ‘people.’ Maybe by people I mean those who believe in some religion or other. Maybe I mean those who are close to a certain processing capacity—and then what happens to those who exceed that capacity? And surely the AI itself would do so....
There are a lot of ways it can go wrong.
I’m observably a person.
You observe yourself to be a person. That’s not necessarily the same thing as being observably a person to someone else operating with different definitions.
Any AI which concluded otherwise is probably already so dangerous that worrying about how my opinions stated here would affect it is completely pointless. So… pretty sure.
The opinion you state may influence what sort of AI you end up with. And at the very least it seems liable to influence the sort of people you end up with.
Oh, and I’m not encouraging killing your newborns, just arguing that it should be allowed (if done for something other than sadism).
-shrug- You’re trying to weaken the idea that newborns are people, and are arguing for something that, I suspect, would increase the occurrence of their demise. Call it what you will.
I think I must have been unclear, since both you and wedrifid seemed to interpret the wrong thing. What I meant was that I don’t have a good definition for person, but no reasonable partial definition I can come up with includes babies.
How did I misinterpret? I read that you don’t include babies and I said that I do include babies. That’s (preference) disagreement, not a problem with interpretation.
This line gave me the impression that you thought I was saying I want my definition of “person”, for the moral calculus, to include things like “worthwhile”, which was not what I was saying.
Intended as a tangential observation about my perceptions of people. (Some of them really are easier for me to model as objects running a Machiavellian routine.)
If you don’t understand the distinction between “legal” and “encouraged”, we’re going to have a very difficult time communicating.
“Encouraged” is very clearly not absolute but relative here; “somewhat less discouraged than now” can just be written as “encouraged” for brevity’s sake.
I think I must have been unclear, since both you and wedrifid seemed to interpret the wrong thing. What I meant was that I don’t have a good definition for person, but no reasonable partial definition I can come up with includes babies. I didn’t at all mean that, just because I would like people to be nice to each other and so on, I would consider people who aren’t nice not to be people. I’d intended to convey this distinction by the quotation marks.
How are you deciding whether your definition is reasonable?
Obviously. There are a lot of ways any AI can go wrong. But you have to do something. Is your rule “don’t kill humans”? For what definition of human, and isn’t that going to be awfully unfair to aliens? I think “don’t kill people” is probably about as good as you’re going to do.
‘Don’t kill anything that can learn,’ springs to mind as a safer alternative—were I inclined to program this stuff in directly, which I’m not.
I don’t expect us to be explicitly declaring these rules; I expect the moral themes prevalent in our society—or at least an idealised model of part of it—will form much of the seed for the AI’s eventual goals. I know that the moral themes prevalent in our society form much of the seed for the eventual goals of people.
In either case, I don’t expect us to be in charge. Which makes me kinda concerned when people talk about how we should be fine with going around offing the lesser life-forms.
I don’t want the rule to be “don’t kill people” for whatever values of “kill” and “people” you have in your book. For all I know you’re going to interpret this as something I’d understand more like “don’t eat pineapples”. I want the rule to be “don’t kill people” with your definitions in accordance with mine.
Yet my definitions are not in accordance with yours. And, if I apply the rule that I can kill everything that’s not a person, you’re not going to get the results you desire.
It’d be great if I could just say ‘I want you to do good—with your definition of good in accordance with mine.’ But it’s not that simple. People grow up with different definitions—AIs may well grow up with different definitions—and if you’ve got some rule operating over a fuzzy boundary like that, you may end up as paperclips, or dogmeat or something horrible.
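To make that failure mode concrete, here is a toy sketch in Python. The two is_person predicates are invented stand-ins for the two of us plugging different definitions into the same rule; nothing here is anyone’s actual proposal:

```python
# Toy model of the definitional mismatch above. Both parties follow the same
# rule, "don't kill people", but each plugs in their own (invented) definition
# of "person", so the shared rule licenses different actions.

def is_person_loose(entity):
    # One party's rough criterion: anything that can learn counts.
    return entity["can_learn"]

def is_person_strict(entity):
    # The other party's rough criterion: learning plus introspection.
    return entity["can_learn"] and entity["introspects"]

def may_kill(entity, is_person):
    # The shared rule: only "people" are protected.
    return not is_person(entity)

newborn = {"can_learn": True, "introspects": False}

print(may_kill(newborn, is_person_loose))   # False: protected
print(may_kill(newborn, is_person_strict))  # True: not protected
```

Same rule, same entity, opposite verdicts; that is the whole worry.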
If you do think both of the above things, then my task is either to understand why you don’t feel that infanticide should be legal or to point out that perhaps you really would agree that infanticide should be legal if you stopped and seriously considered the proposition for a bit.
I’m not certain whether or not it’s germane to the broader discussion, but “think X is immoral” and “think X should be illegal” are not identical beliefs.
I was with you, until your summary. Suppose hypothetically that I think “don’t kill people” is a good broad moral rule, and I think babies are people. It seems to follow from what you said that I therefore ought to agree that infanticide should be legal.
If that is what you meant to say, then I am deeply confused. If (hypothetically) I think babies are people, and if (hypothetically) I think “don’t kill people” is a good law, then all else being equal I should think “don’t kill babies” is a good law. That is, I should believe that infanticide ought not be any more legal than murder in general.
It seems like one of us dropped a negative sign somewhere along the line. Perhaps it was me, but if so, I seem incapable of finding it again.
Even if from this you decide not to kill pigs, the Bayesian spam filter that keeps dozens of viagra ads per day from cluttering up my inbox is also undoubtedly learning. Learning, indeed, in much the same way that you or I do, or that pigs do, except that it’s arguably better at it. Have I committed a serious moral wrong if I delete its source code?
If I were programming an AI to be a perfect world-guiding moral paragon, I’d rather have it keep the spam filter in storage (the equivalent of a retirement home, or cryostasis) than delete it for the crime of obsolescence. Digital storage space is cheap, and getting cheaper all the time.
Somewhat late, I must have missed this reply agessss ago when it went up.
there’s a bunch of things in my mind for which the label “person” seems appropriate [...] There’s also a bunch of things for which said label seems inappropriate
That’s not a reasoned way to form definitions that have any more validity as referents than lists of what you approve of. What you’re doing is referencing your feelings and seeing what the objects of those feelings have in common. It so happens that I feel that infants are people. But we’re not doing anything particularly logical or reasonable here—we’re not drawing our boundaries using different tools. One of us just thinks they belong on the list and the other thinks they don’t.
Say we try to agree on a common list. Well, you’re agreeing that aliens and powerful AIs go on the list—so biology isn’t the primary concern. If we try to draw a line through the commonalities, what are we going to get? All of them seem able to gather, store, process and apply information to some ends. Even infants can—they’re just not particularly good at it yet.
Conversely, what do all your other examples have in common that infants don’t?
Pigs can learn, without a doubt. Even if from this you decide not to kill pigs, the Bayesian spam filter that keeps dozens of viagra ads per day from cluttering up my inbox is also undoubtedly learning. Learning, indeed, in much the same way that you or I do, or that pigs do, except that it’s arguably better at it. Have I committed a serious moral wrong if I delete its source code?
Arguably that would be a good heuristic to keep around. I don’t know that I’d call it a moral wrong – there’s not much reason to talk about morals when we can just say discouraged in society and have everyone on the same page. But you would probably do well to have a reluctance to destroy it. One day someone vastly more complex than you may well look on you in the same light you look on your spam filter.
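For concreteness, the kind of “learning” such a filter does can be sketched in a few lines. This is a bare-bones naive Bayes updater with made-up example messages, not any real mail client’s code:

```python
from collections import defaultdict
from math import log

# Bare-bones naive Bayes spam filter. "Learning" here is just updating
# per-word counts from labelled examples, then scoring new messages by
# log-odds with add-one smoothing. A sketch, not production code.
counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
totals = {"spam": 0, "ham": 0}

def learn(message, label):
    totals[label] += 1
    for word in message.lower().split():
        counts[label][word] += 1

def spam_score(message):
    # Positive score: more spam-like; negative: more ham-like.
    score = log((totals["spam"] + 1) / (totals["ham"] + 1))
    for word in message.lower().split():
        p_spam = (counts["spam"][word] + 1) / (totals["spam"] + 2)
        p_ham = (counts["ham"][word] + 1) / (totals["ham"] + 2)
        score += log(p_spam / p_ham)
    return score

learn("cheap viagra buy now", "spam")
learn("lunch at noon tomorrow", "ham")
print(spam_score("buy viagra"))      # positive: looks spammy
print(spam_score("lunch tomorrow"))  # negative: looks legitimate
```

Whether count-updating of this sort is “much the same way” pigs or people learn is, of course, exactly what is in dispute.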
[...] odd corner-cases are almost always indicative of ideas which we would not have arrived at ourselves if we weren’t conditioned with them from an early age. I strongly suspect the prohibition on infanticide is such a corner case.
I strongly suspect that societies where people had no reluctance to go around offing their infants wouldn’t have lasted very long. Infants are significant investments of time and resources. Offing your infants is a sign that there’s something emotionally maladjusted in you – by the standards of the needs of society. If we’d not had the precept, and magically appeared out of nowhere, I think we’d have invented it pretty quick.
You think I’m going to try to program an AI in English?
Not really about you specifically. But, in general – yeah, more or less. Maybe not write the source code, but instruct it. English, or uploads, or some other incredibly high-level language with a lot of horrible dependencies built into its libraries (or concepts or what have you) that the person using it barely understands themselves. Why? Because it will be quicker. The guy who just tells the AI to guess what he means by good skips the step of having to calculate it himself.
Yeah, a lack of reply notification’s a real pain in the rear.
It seems to me that this thread of the debate has come down to “Should we consider babies to be people?” There are, broadly, two ways of settling this question: moving up the ladder of abstraction, or moving down. That is, we can answer this by attempting to define ‘people’ in terms of other, broader terms (this being the former case) or by defining ‘people’ via the listing of examples of things which we all agree are or are not people and then trying to decide by inspection in which category ‘babies’ belong.
Edit: You can skip to the next break line if you’re not interested in reading about the methodological component so much as you are continuing the infants argument.
What we’re doing here, ideally, is pattern matching. I present you with a pattern and part of that pattern is what I’m talking about. I present you with another pattern where some things have changed and the parts of the pattern I want to talk about are the same in that one. And I suppose to be strict we’d have to present you with patterns that are fairly similar and express disapproval for those.
Because we have a large set of existing patterns that we both know about—properties—it’s a lot quicker to make reference to some of those patterns than it is to continue to flesh out our lists to play guess the commonality. We can still do it both ways, as long as we can still head back down the abstraction pile fairly quickly. Compressing the search space by abstract reference to elements of patterns that members of the set share is not the same thing as starting off with a word alone, then trying to decide on the pattern, and then fitting the members to that set.
If you cannot do that exercise, if you cannot explicitly declare at least some of the commonalities you’re talking about, then it leads me to believe that your definition is incoherent. The odds that, with our vast set of shared patterns—with our language that allows us to do this compression—you can’t come up with at least a fairly rough definition fairly quickly seem remote.
If I wanted to define humans, for instance—“Most numerous group of bipedal tool users on Earth.” That was a lot quicker than having to define humans by providing examples of different creatures. We can only think the way we do because we have these little compression tricks that let us leap around the search space; abstraction doesn’t have to lead to more confusion—as long as your terms refer to things that people have experience with.
Whereas if I provided you a selection of human genetic structures—while my terms would refer exactly, while I’d even be able to stick you in front of a machine and point to it directly—would you even recognise it without going to a computer? I wouldn’t. The reference falls beyond the level of my experience.
I don’t see why you think my definition needs to be complete. We have very few exact definitions for anything; I couldn’t exactly define what I mean by human. Even by reference to genetic structure I’ve no idea where it would make sense to set the deviation from any specific example that makes you human or not human.
But let’s go with your approach:
It seems to me that mentally disabled people belong on the people list. And babies seem more similar to mentally disabled people than they do to pigs and stones.
This is entirely orthogonal to the point I was trying to make. Keep in mind, most societies invented misogyny pretty quick too. Rather, I doubt that you personally, raised in a society much like this one except without the taboo on killing infants, would have come to the conclusion that killing infants is a moral wrong.
Well, no, but you could make that argument about anything. I, raised in a society just like this one but without taboo X, would never create taboo X on my own; taboos are created by their effects on society. It’s the fact that society would not have been like this one without taboo X that makes it taboo in the first place.
I can come up with a rough definition, but rough definitions fail in exactly those cases where there is potential disagreement.
Eh, functioning is a very rough definition and we’ve got to that pretty quickly.
So will we rather say that we include mentally disabled humans above a certain level of functioning? The problem then is that babies almost certainly fall well below that threshold, wherever you might set it.
Well, the question is whether food animals fall beneath the level of babies. If they do, then I can keep eating them happily enough; if they don’t, I’ve got the dilemma as to whether to stop eating animals or start eating babies.
And it’s not clear to me, without knowing what you mean by functioning, that pigs or cows are more intelligent than babies. I’ve not seen one do anything like that. Predatory animals—wolves and the like, on the other tentacle, are obviously more intelligent than a baby.
As to how I’d resolve the dilemma if it did occur, I’m leaning more towards stopping eating food animals than starting to eat babies. Despite the fact that food animals are really tasty, I don’t want to put a precedent in place that might get me eaten at some point.
I assume you’ve granted that sufficiently advanced AIs ought to be counted as people.
By fiat—sufficiently advanced for what? But I suppose I’ll grant any AI that can pass the Turing test qualifies, yes.
Am I killing a person if I terminate this script before compilation completes? That is, does “software which will compile and run an AI” belong to the “people” or the “not people” group?
That depends on the nature of the script. If it’s just performing some relatively simple task over and over, then I’m inclined to agree that it belongs in the not people group. If it is itself as smart as, say, a wolf, then I’m inclined to think it belongs in the people group.
Really? It seems to me that someone did invent the taboo[1] on, say, slavery.
I suppose what I really mean to say is that they’re taboos because the taboo has some desirable effect on society.
The point I’m trying to make here is that if you started with your current set of rules minus the rule about “don’t rape people” (not to say your hypothetical morals view it as acceptable, merely undecided), I think you could quite naturally come to conclude that rape was wrong. But it seems to me that this would not be the case if instead you left out the rule about “don’t kill babies”.
It seems to me that babies are quite valuable, and became so as their survival probability went up. In the olden days infanticide was relatively common—as was death in childbirth. People had a far more casual attitude towards the whole thing.
But as the survival probability went up the investment people made, and were expected to make, in individual children went up—and when that happened infanticide became a sign of maladaptive behaviour.
Though I doubt they’d have put it in these terms: People recognised a poor gambling strategy and wondered what was wrong with the person.
And I think it would be the same in any advanced society.
Regardless, I have no doubt that pigs are closer to functioning adult humans than babies are. You’d best give up pork.
I suppose I had, yes. It never really occurred to me that they might be that intelligent—but, yeah, having done a bit of reading they seem smart enough that I probably oughtn’t to eat them.
I’d be interested in what standard of “functional” you might propose that newborns would meet, though. Perhaps give examples of things which seem close to the line, on either side? For example, do wolves seem to you like people? Should killing a wolf be considered a moral wrong on par with murder?
Wolves definitely seem like people to me, yes. Adult humans are definitely on the list and wolves do pack behaviours which are very human-like. Killing a wolf for no good reason should be considered a moral wrong on par with murder. That’s not to say that I think it should result in legal punishment on par with killing a human, mind; it’s easier to work out that humans are people than it is to work out that wolves are—it’s a reasonable mistake.
Insects like wasps and flies don’t seem like people. Red pandas do. Dolphins do. Cows… don’t. But given what I’ve discovered about pigs that bears some checking—and now cows do. Hnn. Damn it, now I won’t be able to look at burgers without feeling sad.
All the videos with loads of blood and the like never bothered me, but learning that food-animals are that intelligent really does.
Have you imagined what life would be like if you were stupider, or were more intelligent but denied a body with which that intelligence was easy to express? If your personhood is fundamental to your identity, then as long as you can imagine being stupider and still being you, that still qualifies as a person. In terms of how old a person would be to have the sort of capabilities the person you’re imagining would have, at what point does your ability to empathise with the imaginary-you break down?
I have to ask, at this point: have you seriously considered the possibility that babies aren’t people?
As far as I know how, yes. If you’ve got some ways of thinking that we haven’t been talking about here, feel free to post them and I’ll do my best to run them.
If babies weren’t people, the world would be less horrifying. Just as if food-animals are people, the world is more horrifying. But it would look the same in terms of behaviours—people kill people all the time; I don’t expect them not to without other criteria being involved.
We are supposing that it’s still on the first step, compilation. However, with no interaction on our part, it’s going to finish compiling and begin running the sufficiently-advanced AI. Unless we interrupt it before compilation finishes, in which case it will not.
Not a person.
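To pin the thought experiment down, a minimal hypothetical version of such a script might look like this; the build target and binary name are invented:

```python
import subprocess

# Hypothetical two-stage launcher: stage one compiles the AI, stage two
# runs it. Killing this script during stage one halts a build process;
# nothing with the relevant capabilities exists yet.

def main():
    # Stage 1: compilation. Interruptible (Ctrl-C) with, on the view
    # argued above, no moral cost.
    subprocess.run(["make", "sufficiently_advanced_ai"], check=True)

    # Stage 2: execution. Only from this point on is there anything
    # whose termination could even arguably matter morally.
    subprocess.run(["./sufficiently_advanced_ai"], check=True)

if __name__ == "__main__":
    main()
```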
It is, for example, almost certainly maladaptive to allow all women to go into higher education and industry, because those correlate strongly with having fewer children and that causes serious problems. (Witness Japan circa now.) This is, as you put it, a poor gambling strategy. Does that imply it’s immoral for society to allow women to be educated? Do reasonable people look at people who support women’s rights and wonder what’s wrong with them? Of course not.
No, because we’ve had that discussion. But people did, and that attitude towards women was especially prevalent in Japan, where holding the contrary would have been among the most maladaptive, until quite recently. Back in the 70s and 80s the idea for women was basically to get a good education and marry the person their family picked for them. Even today, people who say they don’t want children or a relationship are looked on as rather weird, and much of the power there, in practice, works in terms of family relationships.
It just so happens there are lots of adaptive reasons to have precedents that seem to extend to cover women too. I don’t think one can seriously advance an argument that keeps women at home without creating something that can be used against oneself in fairly horrifying ways. Even if you don’t have a fairly inclusive definition of people, it seems unwise to treat other humans in that way—you, after all, are the other human to another human.
What about fish? I’m pretty sure many fish are significantly more functional than one-month-old humans, possibly up to two or three months. (Younger than that I don’t think babies exhibit the ability to anticipate things. Haven’t actually looked this up anywhere reputable, though.)
I don’t know enough about them—given they’re so different to us in terms of gross biology I imagine it’s often going to be quite difficult to distinguish between functioning and instinct—this:
Frequently. It’s scary. But if I were in a body in which intelligence was not easy to express, and I was killed by someone who didn’t think I was sufficiently functional to be a person, that would be a tragic accident, not a moral wrong.
The legal definition of an accident is an unforeseeable event. I don’t agree with that entirely because, well, everything’s foreseeable to an arbitrary degree of probability given the right assumptions. However, do you think that people have a duty to avoid accidents that they foresee a high probability-adjusted harm from? (I.e., the potential harm modified by the probability they foresee of the event.)
The thought here being that, if there’s much room for doubt, there’s so much suffering involved in killing and eating animals that we shouldn’t do it even if we only argue ourselves to some low probability of their being people.
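The “probability-adjusted harm” here is plain expected value. A minimal sketch, with deliberately made-up numbers:

```python
# "Probability-adjusted harm" is just an expected-value calculation.
# All numbers below are invented for illustration only.

def expected_harm(probability, harm_if_true):
    return probability * harm_if_true

p_food_animals_are_people = 0.05  # a deliberately low, made-up credence
harm_if_true = 1_000_000          # made-up units of moral harm

# Even a small credence leaves a large expected harm.
print(expected_harm(p_food_animals_are_people, harm_if_true))  # 50000.0
```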
About age four, possibly a year or two earlier. I’m reasonably confident I had introspection at age four; I don’t think I did much before that. I find myself completely unable to empathize with a ‘me’ lacking introspection.
Do you think that the use of language and play to portray and discuss fantasy worlds is a sign of introspection?
OK. So the point of this analogy is that newborns seem a lot like the script described, on the compilation step. Yes, they’re going to develop advanced, functioning behaviors eventually, but no, they don’t have them yet. They’re just developing the infrastructure which will eventually support those behaviors.
I agree, if it doesn’t have the capabilities that will make it a person there’s no harm in stopping it before it gets there. If you prevent an egg and a sperm combining and implanting, you haven’t killed a human.
I know the question I actually want to ask: do you think behaviors are immoral if and only if they’re maladaptive?
No, fitness is too complex a phenomenon for our relatively inefficient ways of thinking and feeling to update on it very well. If we fix immediate lethal response from the majority as one end of the moral spectrum, and enthusiastic endorsement as the other, then maladaptive behaviour tends to move you further towards the lethal-response end of things. But we’re not rational fitness maximisers, we just tend that way on the more readily apparent issues.
It doesn’t matter if a pig is smarter than a baby. It wouldn’t matter if a pig passed the Turing test. Babies are humans, so they get preferential treatment.
do you get less and less preferential treatment as you become less and less human?
I’d say so, yeah. It’s kind of a tricky function, though, since there are two reasons I’m logically willing to give preferential treatment to an organism: the likelihood of said organism eventually becoming the ancestor of a creature similar to myself, and the likelihood of that creature or its descendants contributing to an environment in which creatures similar to myself would thrive.
Anyway, “species” isn’t a hard-edged category built in to nature—do you get less and less preferential treatment as you become less and less human?
It’s a lot more hard-edged than intelligence. Of all the animals (I’m talking about individual animals, not species) in the world, practically all are really close to 0% or 100% human. On the other hand, there is a broad range of intelligence among animals, and even in humans. So if you want a standard that draws a clean line, humanity is better than intelligence.
Also, what’s the standard against which beings are compared to determine how “human” they are? Phenotypically average among the current population? Nasty prospects for the cryonics advocates among us. And the mind-uploading camp.
I can tell the difference between an uploaded/frozen human, and a pig. Even an uploaded/frozen pig. Transhumans are in the preferential treatment category, but transpigs aren’t.
Also veers dangerously close to negative eugenics, if you’re going to start declaring some people are less human than others.
This is a fully general counter-argument. Any standard of moral worth will have certain objects that meet the standard and certain objects that fail. If you say “All objects that have X property have moral worth”, I can immediately accuse you of eugenics against objects that do not have X property.
And a question for you: if you think that more intelligence equals more moral worth, does that mean that AI superintelligences have super moral worth? If Clippy existed, would you try and maximize the number of paperclips in order to satisfy the wants of a superior intelligence?
I really like your point about the distinction between maladaptive behavior and immoral behavior. But I don’t think your example about women in higher education is as cut and dried as you present it.
For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral. For me, maladaptiveness is the explanation for why certain possible moral memes (insert society-wide incest-marriage example) don’t exist in recorded history, even though I should otherwise expect them to exist given my belief in moral anti-realism.
For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral.
I disagree. What do you mean by this?
Edit:
If I believe that morality, either descriptively or prescriptively, consists of the values imparted to humans by the evolutionary process, I have no need to adhere to the process roughly used to select these values rather than the values themselves when they are maladaptive.
If one is committed to a theory that says morality is objective (aka moral realism), one needs to point at what it is that makes morality objectively true. Obvious candidates include God and the laws of physics. But those two candidates have been disproved by empiricism (aka the scientific method).
At this point, some detritus of evolution starts to look like a good candidate for the source of morality. There isn’t an Evolution Fairy who commanded that humans evolve to be moral, but evolution has created drives and preferences within us all (like hunger or desire for sex). More on this point here—the source of my reference to godshatter.
It might be that there is an optimal way of bringing these various drives into balance, and the correct choices in all moral decisions can be derived from this optimal path. As far as I can tell, those who are trying to derive morality from evo. psych endorse this position.
In short, if morality is the product of human drives created by evolution, then behavior that is maladaptive (i.e. counter to what is selected for by evolution) is essentially correlated with immoral behavior.
That said, my summary of the position may be a bit thin, because I’m a moral anti-realist and don’t believe the evo. psych → morality story.
Ah, I see what you mean. I don’t think one has to believe in objective morality as such to agree that “morality is the godshatter of evolution”. Moreover, I think it’s pretty key to the “godshatter” notion that our values have diverged from evolution’s “value”, and we now value things “for their own sake” rather than for their benefit to fitness. As such, I would say that the “godshatter” notion opposes the idea that “maladaptive is practically the definition of immoral”, even if there is something of a correlation between evolutionarily-selectable adaptive ideas and morality.
A sleeping man. A cryonics patient. A nonverbal 3-year-old. A drunk, passed out.
I think these are all people, they’re pretty close to babies, and we shouldn’t kill any of them.
The reason they all feel like babies to me, from the perspective of “are they people?”, is that they’re in a condition where we can see a reasonable path for turning them into something that is unquestionably a person.
EDIT: That doesn’t mean we have to pay any cost to follow that path—the value we assign to a person’s life can be high but must be finite, and sometimes the correct, moral decision is to not pay that price. But just because we don’t pay that cost doesn’t mean it’s not a person.
I don’t think the time frame matters, either. If I found Fry from Futurama in the cryostasis tube today, and I killed him because I hated him, that would be murder even though he isn’t going to talk, learn, or have self-awareness until the year 3000.
Gametes are not people, even though we know how to make people from them. I don’t know why they don’t count.
EDIT: oh shit, better explain myself about that last one. What I mean is that it is not possible to murder a gamete—they don’t have the moral weight of personhood. You can, potentially, in some situations, murder a baby (and even a fetus): that is possible to do, because they count as people.
I’ve never seen a compiling AI, let alone an interrupted one, even in fiction, so your example isn’t very available to me. I can imagine conditions that would make it OK or not OK to cancel the compilation process.
This is most interesting to me:
From these examples, I think “will become a person” is only significant for objects which were people in the past
I know we’re talking about intuitions, but this is one description that can’t jump from the map into the territory. We know that the past is completely screened off by the present, so our decisions, including moral decisions, can’t ultimately depend on it. Ultimately, there has to be something about the present or future states of these humans that makes it OK to kill the baby but not the guy in the coma. Could you take another shot at the distinction between them?
This question is fraught with politics and other highly sensitive topics, so I’ll try to avoid getting too specific, but it seems to me that thinking of this sort of thing purely in terms of a potentiality relation rather misses the point. A self-extracting binary, a .torrent file, a million lines of uncompiled source code, and a design document are all, in different ways, potential programs, but they differ from each other both in degree and in type of potentiality. Whether you’d call one a program in any given context depends on what you’re planning to do with it.
Gametes are not people, even though we know how to make people from them.
I’m not at all sure a randomly selected human gamete is less likely to become a person than a randomly selected cryonics patient (at least, with currently-existing technology).
Might be better to talk about this in terms of conversion cost rather than probability. To turn a gamete into a person you need another gamete, $X worth of miscellaneous raw materials (including, but certainly not limited to, food), and a healthy female of childbearing age. She’s effectively removed from the workforce for a predictable period of time, reducing her probable lifetime earning potential by $Y, and has some chance of various medical complications, which can be mitigated by modern treatments costing $Z but even then works out to some number of QALYs in reduced life expectancy. Finally, there’s some chance of the process failing and producing an undersized corpse, or a living creature which does not adequately fulfill the definition of “person.”
In short, a gamete isn’t a person for the same reason a work order and a handful of plastic pellets aren’t a street-legal automobile.
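That framing is effectively a small cost function. A sketch that keeps the comment’s $X, $Y, $Z quantities symbolic as parameters, with placeholder values only in the example call:

```python
# Sketch of the conversion-cost framing above. Parameters mirror the
# comment's symbols: $X (raw materials), $Y (lost lifetime earnings),
# $Z (medical mitigation), plus a dollar-equivalent for the residual
# QALY loss. No real figures are implied; the call below is placeholder.

def gamete_to_person_cost(x_materials, y_lost_earnings, z_medical,
                          qaly_loss, dollars_per_qaly):
    return (x_materials + y_lost_earnings + z_medical
            + qaly_loss * dollars_per_qaly)

print(gamete_to_person_cost(x_materials=10_000, y_lost_earnings=50_000,
                            z_medical=15_000, qaly_loss=0.5,
                            dollars_per_qaly=100_000))  # 125000.0
```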
Figuring out how to define human (as in “don’t kill humans”) so as to include babies is relatively easy, since babies are extremely likely to grow up into humans.
The hard question is deciding which transhumans—including types not yet invented, possibly types not yet thought of, and certainly types which are only imagined in a sketchy abstract way—can reasonably be considered as entities which shouldn’t be killed.
And I agree that we should treat animals better. I’m vegetarian.
and will become people one day
I agree that this discussion is slightly complex. Gwern’s abortion dialogue contains a lot of relevant material.
However, I don’t feel that saying that “we should protect babies because one day they will be human” requires aggregate utilitarianism as opposed to average utilitarianism, which I in general prefer. Babies are already alive, and already experience things.
and lots of people care about them
This argument has two functions. One is the literal meaning of “we should respect people’s preferences”. See discussion on the Everybody Draw Mohammed day. The other is that other people’s strong moral preferences are some evidence towards the correct moral path.
ETA: I hate that I have to say this, but can people respond instead of just downvoting? I’m honestly curious as to why this particular post is controversial—or have I missed something?
I often “claim” my downvotes (aka I will post “downvoted” and then give a reason). However, I know that when I do this, I will be downvoted myself. So that is probably one big deterrent to others doing the same.
For one thing, the person you are downvoting will generally retaliate by downvoting you (or so it seems to me, since I tend to get an instant −1 on downvoting comments), and people who disagree with your reason for downvoting will also downvote you.
Also, many people on this site are just a-holes. Sorry.
Common reasons I downvote with no comment: I think the mistake is obvious to most readers (or already mentioned) and there’s little to be gained from teaching the author. I think there’s little insight and much noise—length, unpleasant style, politically disagreeable implications that would be tedious to pick apart (especially in tone rather than content). I judge that jerkishness is impairing comprehension; cutting out the courtesies and using strong words may be defensible, but using insults where explanations would do isn’t.
On the “just a-holes” note (yes, I thought “Is this about me?”): It might be that your threshold for acceptable niceness is unusually high. We have traditions of bluntness and flaw-hunting (mostly from hackers, who correctly consider niceness noise when discussing bugs in X), so we ended up rather mean on average, and very tolerant of meanness. People who want LW to be nicer usually do it by being especially nice, not by especially punishing meanness. I notice you’re on my list of people I should be exceptionally nice to, but not on my list of exceptionally nice people, which is a bad thing if you love Postel’s law. (Which, by Postel’s law, nobody but me has to.) The only LessWronger I think is an asshole is wedrifid, and I think this is one of his good traits.
We have traditions of bluntness and flaw-hunting (mostly from hackers, who correctly consider niceness noise when discussing bugs in X), so we ended up rather mean on average, and very tolerant of meanness.
I think there is a difference between choosing bluntness where niceness would tend to obscure the truth, and choosing between two forms of expression which are equally illuminating but not equally nice. I don’t know about anyone else, but I’m using “a-hole” here to mean “One who routinely chooses the less nice variant in the latter situation.”
(This is not a specific reference to you; your comment just happened to provide a good anchor for it.)
I notice you’re on my list of people I should be exceptionally nice to, but not on my list of exceptionally nice people,
Would you mind discussing this with me? I find it disturbing that I come off as having double standards, and am interested to know more about where that impression comes from. I personally feel that I do not expect better behaviour from others than I practice, but would like to know (and update my behaviour) if I am wrong about this.
I admit to lowering my level of “niceness” on LW, because I can’t seem to function when I am nice and no one else is. However, MY level of being “not nice” means that I don’t spend a lot of time finding ways to word things in the most inoffensive manner. I don’t feel like I am exceptionally rude, and am concerned if I give off that impression.
I also feel like I keep my “punishing meanness” levels to a pretty high standard too: I only “punish” (by downvoting or calling out) what I consider to be extremely rude behavior (i.e. “I wish you were dead” or “X is crap.”) that is nowhere near the level of “meanness” that I feel like my posts ever get near.
You come off as having single-standards. That is, I think the minimal level of niceness you accept from others is also the minimal level of niceness you practice—you don’t allow wiggle room for others having different standards. I sincerely don’t resent that! My model of nice people in general suggests y’all practice Postel’s law (“Be liberal in what you accept and conservative in what you send”), but I don’t think it’s even consistent to demand that someone follow it.
extremely rude behavior (i.e. “I wish you were dead” or “X is crap.”)
...I’m never going to live that one down, am I? Let’s just say that there’s an enormous amount of behaviours that I’d describe as “slightly blunter than politeness would allow, for the sake of clarity” and you’d describe as “extremely rude”.
Also, while I’ve accepted the verdict that “X is crap” is extremely rude and I shouldn’t ever say it, I was taken aback at your assertion that it doesn’t contribute anything. Surely “Don’t use this thing for this purpose” is non-empty. By the same token, I’d actually be pretty okay with being told “I wish you were dead” in many contexts. For example, in a discussion of eugenics, I’d be quite fine with a position that implies I should be dead, and would much rather hear it than have others dance around the implication.
Maybe the lesson for you is that many people suck really bad at phrasing things, so you should apply the principle of charity harder and be tolerant if they can’t be both as nice and as clear as you’d have been and choose to sacrifice niceness? The lesson I’ve learned is that I should be more polite in general, more polite to you in particular, look harder for nice phrasings, and spell out implications rather than try to bake them in connotations.
For example, in a discussion of eugenics, I’d be quite fine with a position that implies I should be dead, and would much rather hear it than have others dance around the implication.
I’m fine with positions that imply I should never have been born (although I have yet to hear one that includes me), but I’d feel very differently about one implying that I should be dead!
Many people don’t endorse anything similar to the principle that “any argument for no more of something should explain why there is a perfect amount of that thing or be counted as an argument for less of that thing.”
E.g. thinking arguments that “life extension is bad” generally have no implications regarding killing people were it to become available. So those who say I shouldn’t live to be 200 are not only basically arguing that I should (eventually, sooner than I want) be dead; the implication I take is often that I should be killed (in the future).
If someone tells me I should die now, I understand that to mean that my life from this point forward is of negative value to them. If they tell me I should never have been born, I understand that to mean not only that my life from this point forward is of negative value, but also that my life up to this point has been of negative value.
Interesting. I don’t read it as necessarily a judgment of value at all to be told that I should never have been born (things that should not have happened may accidentally have good consequences). Additionally, someone who doesn’t think that I should have been born, but also doesn’t think I should die, will not try to kill me, though they may push policies that will prevent future additions to my salient reference class; someone who thinks I should die could try to make that happen!
For my part, I don’t treat saying things like “I think you should be dead” as particularly predictive of actually trying to kill me. Perhaps I ought to, but I don’t.
If it helps, I didn’t even remember that one of the times I’ve called someone out on “X is crap” was you. So consider it “lived down”.
taken aback at your assertion that it doesn’t contribute anything.
You’re right. How about an assertion that it doesn’t contribute anything that couldn’t be easily rephrased in a much better way? Your example of “Don’t use this thing for this purpose”, especially if followed by a brief explanation, is an order of magnitude better than “X is crap”, and I doubt it took you more than 5 seconds to write.
I often “claim” my downvotes (aka I will post “downvoted” and then give a reason). However, I know that when I do this, I will be downvoted myself. So that is probably one big deterrent to others doing the same.
On the other hand if people agree with your reasons they often upvote it (especially back up towards zero if it dropped negative).
For one thing, the person you are downvoting will generally retaliate by downvoting you (or so it seems to me, since I tend to get an instant −1 on downvoting comments)
I certainly hope so. I would expect that they disagree with your reasons for downvoting or else they would not have made their comment. It would take a particularly insightful explanation for your vote for them to believe that you influencing others toward thinking their contribution is negative is itself a valuable contribution.
Do you think that’s a good thing, or just a likely outcome?
Downvoting explanations of downvotes seems like a really bad idea, regardless of how you feel about the downvote. It strongly incentivizes people to not explain themselves, not open themselves up for debates, but just vote and then remove themselves from the discussion.
I don’t see how downvoting explanations and more explicit behavior is helpful for rational discourse in any way.
It strongly incentivizes people to not explain themselves, not open themselves up for debates, but just vote and then remove themselves from the discussion.
This is exactly the reaction I want to trolls, basic questions outside of dedicated posts, and stupid mistakes. Are downvotes of explanations in those cases also read as an incentive not to post explanations in general?
Speaking for myself, yes. I read it as “don’t engage this topic on this site, period”.
I agree with downvoting (and ignoring) the types of comments you mentioned, but not explanations of such downvotes. The explanations don’t add any noise, so they shouldn’t be punished. (Maybe if they got really excessive, but currently I have the impression that too few downvotes are explained, rather than too many.)
Do you think that’s a good thing, or just a likely outcome?
Comments can serve as calls to action encouraging others to downvote, or as priming people with a negative or unintended interpretation of a comment—be it yours or that of someone else—and that influence is something to be discouraged. This is not the case with all explanations of downvotes, but it certainly describes the effect and often intent of the vast majority of “Downvoted because” declarations. Exceptions include explanations that are requested and occasionally reasons that are legitimately surprising or useful. Obviously also an exception is any time when you actually agree they have a point.
I might well consider an explanation of a downvote on a comment of mine to be a valuable contribution, even if I continue to disagree with the thinking behind it. Actually, that’s not uncommon.
If I downvote with comment, it’s usually for a fairly specific problem, and usually one that I expect can be addressed if it’s pointed out; some very clear logical problem that I can throw a link at, for example, or an isolated offensive statement. I may also comment if the post is problematic for a complicated reason that the poster can’t reasonably be expected to figure out, or if its problems are clearly due to ignorance.
Otherwise it’s fairly rare for me to do so; I see downvotes as signaling that I don’t want to read similar posts, and replying to such a post is likely to generate more posts I don’t want to read. This goes double if I think the poster is actually trolling rather than just exhibiting some bias or patch of ignorance. Basically it’s a cost-benefit analysis regarding further conversation; if continuing to reply would generate more heat than light, better to just downvote silently and drive on.
It’s uncommon for me to receive retaliatory downvotes when I do comment, though.
Also, many people on this site are just a-holes. Sorry.
I think it’s more that there are a few a-holes, but they are very prolific (well, that and the same bias that causes us to notice how many red lights we get stopped at but not how many green lights we speed through also focuses our attention on the worst posting behavior).
Explicitly naming names accomplishes nothing except inducing hostility, as it will be taken as a status challenge. Not explicitly naming names, one hopes, leaves everyone re-examining whether their default tone is appropriately calibrated.
I agree with you that naming names can be taken as a status challenge. Of course, this whole topic positions you as an adjudicator of appropriate calibration, which can be taken as a status grab, for the excellent reason that it is one. Not that there’s anything wrong with going for status. All of that notwithstanding, if you prefer to diffuse your assertions of individual inappropriate behavior over an entire community, that’s your privilege.
I care about my status on this site only to the extent that it remains above some minimum required for people not to discount my posts simply because they were written by me.
My interest in this thread is that, like Daenerys, I think the current norm for discourse is suboptimal, but I think I give greater weight to the possibility that some of the suboptimal behavior is people defecting by accident; hence the subtle push for occasional recalibration of tone.
Just to be clear: I’m fine with you pushing for a norm that’s optimal for you. Blatantly, if you want to; subtly if you’d rather.
But I don’t agree that the norm you’re pushing is optimal for me, and I consider either of us pushing for the establishment of norms that we’re most comfortable with to be a status-linked social maneuver.
I agree that pretty much all communication does this, yes. Sometimes explicitly, sometimes implicitly.
As to why… because I see the norm you’re pushing as something pretty close to the cultural baseline of the “friendly” pole of the American mainstream, which I see as willing to trade off precision and accuracy for getting along. You may even be pushing for something even more “get along” optimized than that.
I mostly don’t mind that the rest of my life more or less optimizes for getting along, though I often find it frustrating when it means that certain questions simply can’t ever be asked in the first place, and that certain answers can’t be believed when they’re given because alternative answers are deemed too impolite to say. Still, as I say, I accept it as a fact about my real-life environment. I probably even prefer it, as I acknowledge that optimizing for precision and accuracy at the expense of getting along would be problematic if I could never get away from it, however tired or upset I was.
That said, I value the fact that LW uses a different standard, one that optimizes for accuracy and precision, and therefore efforts to introduce the baseline “get along” standard to LW remove local value for me.
Again, let me stress that I’m not asserting that you ought not make those efforts. If that’s what you want, then by all means push for it. If you are successful, LW will become less valuable to me, but you’re not under any kind of moral obligation to preserve the value of the Internet to me.
But speaking personally, I’d prefer you didn’t insist, as you did, that those efforts are actually in my best interests, with the added implication that I can’t recognize my interests as well as you can.
Not explicitly naming names, one hopes, leaves everyone re-examining whether their default tone is appropriately calibrated.
It left me evaluating whether it was me personally that was being called an asshole or others in the community and whether those others are people that deserve the insult or not. Basically I needed to determine whether it was a defection against me, an ally or my tribe in general. Then I had to decide what, if any, was an appropriate, desirable and socially acceptable tit-for-tat response. I decided to mostly ignore him because engaging didn’t seem like it would do much more than giving him a platform from which to gripe more.
Why do you feel it’s correct to interpret it as defection in the first place?
In case you were wondering the translation of this from social-speak to Vulcan is:
Calling people assholes isn’t a defection, therefore you saying—and in particular feeling—that labeling people as assholes is a defection says something personal about you. I am clever and smooth for communicating this rhetorically.
So this too is a defection. Not that I mind—because it is a rather mild defection that is well within the bounds of normal interaction. I mean… it’s not like you called me an asshole or anything. ;)
That is not a correct translation. Calling someone an asshole may or may not be defection. In this case, I’m not sure whether it was. Examining why you feel that it was may be enlightening to me or to you or hopefully both. Defecting by accident is a common flaw, for sure, but interpreting a cooperation as a defection is no less damaging and no less common.
I’m already working on not being an asshole in general, and on not being an asshole to specific people on LW. If someone answers “yes” to that I’ll work harder at being a non-asshole on LW. Or post less. Or try to do one of those for two days then forget about the whole thing.
You haven’t stood out as someone who has been an asshole to me or anyone I didn’t think deserved it in the context, those being the only cases salient enough that I could expect myself to remember.
If you’re already working on it, you’re probably in the clear. Not being an a-hole is a high-effort activity for many of us; in this case I will depart from primitive consequentialism and say that effort counts for something.
Yeah, I do retaliate quite commonly (less than 60% retaliation ITT, though), but I’ve never been an asshole on LW until this thread. Not particularly planning on repeating this, but I’m not sorry at all. Forced civility just doesn’t fit the mood of this topic at all in my eyes.
I haven’t downvoted, for what it is worth. Sure, you may be an evil baby-killing advocate, but it’s not like I care!
I think you accidentally a word.
I haven’t seen anyone respond to your request for feedback about votes, so let me do so, despite not being one of the downvoters.
By my lights, at least, your posts have been fine. Obviously, I can’t speak for the site as a whole… then again, neither can anyone else.
Basically, it’s complicated, because the site isn’t homogenous. Expressing conventionally “bad” moral views will usually earn some downvotes from people who don’t want such views expressed; expressing them clearly and coherently and engaging thoughtfully with the responses will usually net you upvotes.
I think you may have taken me to be talking about whether it was acceptable or moral in the sense that society will allow it, that was not my intent. Society allows many unwise, inefficient things and no doubt will do so for some time.
My question was simply whether you thought it wise. If we do make an FAI, and encoded it with some idealised version of our own morality then do we want a rule that says ‘Kill everything that looks unlike yourself’? If we end up on the downside of a vast power gradient with other humans do we want them thinking that everything that has little or no value to them should be for the chopping block?
In a somewhat more pithy form, I guess what I’m asking you is: Given that you cannot be sure you will always be strong enough to have things entirely your way, how sure are you this isn’t going to come back and bite you in the arse?
If it is unwise, then it would make sense to weaken that strand of thought in society—to destroy less out of hand, rather than more. That the strand is already quite strong in society would not alter that.
You did not answer me on the human question—how we’d like powerful humans to think .
This sounds fine as long as you and everything you care about are and always will be included in the group of, ‘people.’ However, by your own admission, (earlier in the discussion to wedrifid,) you’ve defined people in terms of how closely they realise your ideology:
You’ve made it something fluid; a matter of mood and convenience. If I make an AI and tell it to save only ‘people,’ it can go horribly wrong for you—maybe you’re not part of what I mean by ‘people.’ Maybe by people I mean those who believe in some religion or other. Maybe I mean those who are close to a certain processing capacity—and then what happens to those who exceed that capacity? And surely the AI itself would do so....
There are a lot of ways it can go wrong.
You observe yourself to be a person. That’s not necessarily the same thing as being observably a person to someone else operating with different definitions.
The opinion you state may influence what sort of AI you end up with. And at the very least it seems liable to influence the sort of people you end up with.
-shrug- You’re trying to weaken the idea that newborns are people, and are arguing for something that, I suspect, would increase the occurrence of their demise. Call it what you will.
How did I misinterpret? I read that you don’t include babies and I said that I do include babies. That’s (preference) disagreement, not a problem with interpretation.
Intended as a tangential observation about my perceptions of people. (Some of them really are easier for me to model as objects running a Machiavellian routine.)
“Encouraged” is very clearly not absolute but relative here, “somewhat less discouraged than now” can just be written as “encouraged” for brevity’s sake.
How are you deciding whether your definition is reasonable?
‘Don’t kill anything that can learn,’ springs to mind as a safer alternative—were I inclined to program this stuff in directly, which I’m not.
I don’t expect us to be explicitly declaring these rules; I expect the moral themes prevalent in our society—or at least an idealised model of part of it—will form much of the seed for the AI’s eventual goals. I know that the moral themes prevalent in our society form much of the seed for the eventual goals of people.
In either case, I don’t expect us to be in charge. Which makes me kinda concerned when people talk about how we should be fine with going around offing the lesser life-forms.
Yet my definitions are not in accordance with yours. And, if I apply the rule that I can kill everything that’s not a person, you’re not going to get the results you desire.
It’d be great if I could just say ‘I want you to do good—with your definition of good in accordance with mine.’ But it’s not that simple. People grow up with different definitions—AIs may well grow up with different definitions—and if you’ve got some rule operating over a fuzzy boundary like that, you may end up as paperclips, or dogmeat or something horrible.
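To make the fuzzy-boundary worry concrete, here is a minimal sketch in Python. Everything in it is a hypothetical illustration: the entities, the traits, and both predicates are invented, with definition B loosely following the “don’t kill anything that can learn” alternative mentioned above. It just shows one shared rule, “spare persons”, diverging under two definitions of “person”.

```python
# Toy illustration: one rule ("spare persons"), two definitions of "person".
# All entities, traits, and predicates here are invented for this example.

entities = [
    {"name": "adult human", "can_learn": True,  "shares_ideology": True},
    {"name": "newborn",     "can_learn": True,  "shares_ideology": False},
    {"name": "spam filter", "can_learn": True,  "shares_ideology": False},
    {"name": "stone",       "can_learn": False, "shares_ideology": False},
]

# Definition A: "people" are whoever realises my ideology.
def is_person_a(entity):
    return entity["shares_ideology"]

# Definition B: anything that can learn (the safer alternative above).
def is_person_b(entity):
    return entity["can_learn"]

for entity in entities:
    print(entity["name"], "| spared by A:", is_person_a(entity),
          "| spared by B:", is_person_b(entity))
```

Under definition A only the ideological ally is spared; under definition B everything that can learn is, newborn and spam filter included. Same rule, very different worlds: that is the sense in which a fuzzy boundary can leave you as dogmeat under somebody else’s compression of “person”.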
I’m not certain whether or not it’s germane to the broader discussion, but “think X is immoral” and “think X should be illegal” are not identical beliefs.
I was with you, until your summary.
Suppose hypothetically that I think “don’t kill people” is a good broad moral rule, and I think babies are people.
It seems to follow from what you said that I therefore ought to agree that infanticide should be legal.
If that is what you meant to say, then I am deeply confused. If (hypothetically) I think babies are people, and if (hypothetically) I think “don’t kill people” is a good law, then all else being equal I should think “don’t kill babies” is a good law. That is, I should believe that infanticide ought not be any more legal than murder in general.
It seems like one of us dropped a negative sign somewhere along the line. Perhaps it was me, but if so, I seem incapable of finding it again.
Oh good! I don’t usually nitpick about such things, but you had me genuinely puzzled.
If I were programming an AI to be a perfect world-guiding moral paragon, I’d rather have it keep the spam filter in storage (the equivalent of a retirement home, or cryostasis) than delete it for the crime of obsolescence. Digital storage space is cheap, and getting cheaper all the time.
Somewhat late, I must have missed this reply agessss ago when it went up.
That’s not a reasoned way to form definitions that have any more validity as referents than lists of what you approve of. What you’re doing is referencing your feelings and seeing what the objects of those feelings have in common. It so happens that I feel that infants are people. But we’re not doing anything particularly logical or reasonable here—we’re not drawing our boundaries using different tools. One of us just thinks they belong on the list and the other thinks they don’t.
If we try to agree on a common list: well, you’re agreeing that aliens and powerful AIs go on the list—so biology isn’t the primary concern. If we try to draw a line through the commonalities, what are we going to get? All of them seem able to gather, store, process and apply information to some ends. Even infants can—they’re just not particularly good at it yet.
Conversely, what do all your other examples have in common that infants don’t?
Arguably that would be a good heuristic to keep around. I don’t know that I’d call it a moral wrong – there’s not much reason to talk about morals when we can just say “discouraged in society” and have everyone on the same page. But you would probably do well to have a reluctance to destroy it. One day someone vastly more complex than you may well look on you in the same light you look on your spam filter.
I strongly suspect that societies where people had no reluctance to go around offing their infants wouldn’t have lasted very long. Infants are significant investments of time and resources. Offing your infants is a sign that there’s something emotionally maladjusted in you – by the standards of the needs of society. If we’d not had the precept, and magically appeared out of nowhere, I think we’d have invented it pretty quick.
Not really about you specifically. But, in general – yeah, more or less. Maybe not write the source code, but instruct it. English, or uploads or some other incredibly high-level language with a lot of horrible dependencies built into its libraries (or concepts or what have you) that the person using it barely understands themselves. Why? Because it will be quicker. The person who just tells the AI to guess what they mean by good skips the step of having to calculate it themselves.
Yeah, a lack of reply notification’s a real pain in the rear.
Edit: You can skip to the next break line if you’re not interested in reading about the methodological component so much as you are in continuing the infants argument.
What we’re doing here, ideally, is pattern matching. I present you with a pattern, and part of that pattern is what I’m talking about. I present you with another pattern where some things have changed and the parts of the pattern I want to talk about are the same in that one. And I suppose, to be strict, we’d have to present you with patterns that are fairly similar but lack what I’m talking about, and express disapproval of those.
Because we have a large set of existing patterns that we both know about—properties—it’s a lot quicker to make reference to some of those patterns than it is to continue to flesh out our lists to play guess-the-commonality. We can still do it both ways, as long as we can still head back down the abstraction pile fairly quickly. Compressing the search space by abstract reference to elements of patterns that members of the set share is not the same thing as starting off with a word alone, then trying to decide on the pattern, and then fitting the members to that set.
If you cannot do that exercise, if you cannot explicitly declare at least some of the commonalities you’re talking about, then it leads me to believe that your definition is incoherent. The odds that, with our vast set of shared patterns—with our language that allows us to do this compression—you can’t come up with at least a fairly rough definition fairly quickly seem remote.
If I wanted to define humans, for instance: “Most numerous group of bipedal tool users on Earth.” That was a lot quicker than having to define humans by providing examples of different creatures. We can only think the way we do because we have these little compression tricks that let us leap around the search space; abstraction doesn’t have to lead to more confusion, as long as your terms refer to things that people have experience with.
Whereas if I provided you with a selection of human genetic structures—while my terms would refer exactly, and I’d even be able to stick you in front of a machine and point to it directly—would you even recognise it without going to a computer? I wouldn’t. The reference falls beyond the level of my experience.
I don’t see why you think my definition needs to be complete. We have very few exact definitions for anything; I couldn’t exactly define what I mean by human. Even by reference to genetic structure I’ve no idea where it would make sense to set the deviation from any specific example that makes you human or not human.
But let’s go with your approach:
It seems to me that mentally disabled people belong on the people list. And babies seem more similar to mentally disabled people than they do to pigs and stones.
Well, no, but you could make that argument about anything: “I, raised in a society just like this one but without taboo X, would never create taboo X on my own.” Taboos are created by their effects on society. It’s the fact that society would not have been like this one without taboo X that makes it taboo in the first place.
Eh, functioning is a very rough definition, and we got to that pretty quickly.
Well, the question is whether food animals fall beneath the level of babies. If they do, then I can keep eating them happily enough; if they don’t, I’ve got the dilemma as to whether to stop eating animals or start eating babies.
And it’s not clear to me, without knowing what you mean by functioning, that pigs or cows are more intelligent than babies. I’ve not seen one do anything that suggests it. Predatory animals—wolves and the like—on the other tentacle, are obviously more intelligent than a baby.
As to how I’d resolve the dilemma if it did occur, I’m leaning more towards stopping eating food animals than starting to eat babies. Despite the fact that food animals are really tasty, I don’t want to put a precedent in place that might get me eaten at some point.
By fiat—sufficiently advanced for what? But I suppose I’ll grant any AI that can pass the Turing test qualifies, yes.
That depends on the nature of the script. If it’s just performing some relatively simple task over and over, then I’m inclined to agree that it belongs in the not people group. If it is itself as smart as, say, a wolf, then I’m inclined to think it belongs in the people group.
I suppose, what I really mean to say is they’re taboos because that taboo has some desirable effect on society.
It seems to me that babies are quite valuable, and became so as their survival probability went up. In the olden days infanticide was relatively common—as was death in childbirth. People had a far more casual attitude towards the whole thing.
But as the survival probability went up the investment people made, and were expected to make, in individual children went up—and when that happened infanticide became a sign of maladaptive behaviour.
Though I doubt they’d have put it in these terms: People recognised a poor gambling strategy and wondered what was wrong with the person.
And I think it would be the same in any advanced society.
I suppose I had, yes. It never really occurred to me that they might be that intelligent—but, yeah, having done a bit of reading they seem smart enough that I probably oughtn’t to eat them.
Wolves definitely seem like people to me, yes. Adult humans are definitely on the list, and wolves do pack behaviours which are very human-like. Killing a wolf for no good reason should be considered a moral wrong on par with murder. That’s not to say that I think it should result in legal punishment on par with killing a human, mind; it’s easier to work out that humans are people than it is to work out that wolves are—it’s a reasonable mistake.
Insects like wasps and flies don’t seem like people. Red pandas do. Dolphins do. Cows… don’t. But given what I’ve discovered about pigs that bears some checking—and now cows do. Hnn. Damn it, now I won’t be able to look at burgers without feeling sad.
All the videos with loads of blood and the like never bothered me, but learning that food-animals are that intelligent really does.
Have you imagined what life would be like if you were stupider, or were more intelligent but denied a body with which that intelligence was easy to express? If your person-hood is fundamental to your identity, then as long as you can imagine being stupider and still being you, that still qualifies as a person. In terms of how old a person would be to have the sort of capabilities the person you’re imagining would have, at what point does your ability to empathise with the imaginary-you break down?
As far as I know how, yes. If you’ve got some ways of thinking that we haven’t been talking about here, feel free to post them and I’ll do my best to run them.
If babies weren’t people, the world would be less horrifying. Just as, if food-animals are people, the world is more horrifying. But it would look the same in terms of behaviours—people kill people all the time; I don’t expect them not to without other criteria being involved.
Not a person.
No, because we’ve had that discussion. But people did, and that attitude towards women was especially prevalent in Japan, where it was among the most maladaptive for the contrary to hold, until quite recently. Back in the 70s and 80s the idea for women was basically to get a good education and marry the person their family picked for them. Even today people who say they don’t want children or a relationship are looked on as rather weird, and much of the power there, in practice, works in terms of family relationships.
It just so happens there are lots of adaptive reasons to have precedents that seem to extend to cover women too. I don’t think one can seriously put forward an argument that keeps women at home and doesn’t create something that can be used against him in fairly horrifying ways. Even if you don’t have a fairly inclusive definition of people, it seems unwise to treat other humans in that way—you, after all, are the other human to another human.
I don’t know enough about them—given they’re so different to us in terms of gross biology I imagine it’s often going to be quite difficult to distinguish between functioning and instinct—this:
http://news.bbc.co.uk/1/hi/england/west_yorkshire/3189941.stm
says that scientists observed some of them using tools, and that definitely seems like people, though.
Yes.
Shared attention, recognition, prediction, bonding -
The legal definition of an accident is an unforeseeable event. I don’t agree with that entirely because, well, everything’s foreseeable to an arbitrary degree of probability given the right assumptions. However, do you think that people have a duty to avoid accidents that they foresee a high probability-adjusted harm from? (i.e. the potential harm modified by the probability they foresee of the event.)
The thought here being that, if there’s much room for doubt, there’s so much suffering involved in killing and eating animals that we shouldn’t do it even if we only argue ourselves to some low probability of their being people.
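As a worked version of that probability-adjusted standard (all numbers invented purely for illustration), the arithmetic is just the foreseen probability times the harm if the event occurs; even a small probability of personhood can dominate if the harm of killing a person is weighted heavily enough:

```python
# Probability-adjusted harm: foreseen probability times the harm if it occurs.
# All numbers below are invented for illustration only.

def probability_adjusted_harm(probability: float, harm: float) -> float:
    return probability * harm

# Suppose killing a person counts for a million harm-units, and we argue
# ourselves to only a 5% chance that a food animal is a person.
eating_animals = probability_adjusted_harm(0.05, 1_000_000)  # 50,000 units

# Compare with a risk most people accept without blinking.
minor_accident = probability_adjusted_harm(0.50, 100)  # 50 units

print(eating_animals > minor_accident)  # True: the low-probability case dominates
```

On that arithmetic, the duty to avoid a foreseen harm kicks in well before we are confident that animals are people.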
Do you think that the use of language and play to portray and discuss fantasy worlds is a sign of introspection?
I agree: if it doesn’t have the capabilities that will make it a person, there’s no harm in stopping it before it gets there. If you prevent an egg and a sperm combining and implanting, you haven’t killed a human.
No, fitness is too complex a phenomenon for our relatively inefficient ways of thinking and feeling to update on it very well. If we fix immediate lethal response from the majority as one end of the moral spectrum, and enthusiastic endorsement as the other, then maladaptive behaviour tends to move you further towards the lethal-response end of things. But we’re not rational fitness maximisers; we just tend that way on the more readily apparent issues.
Am I the only one who bit the speciesist bullet?
It doesn’t matter if a pig is smarter than a baby. It wouldn’t matter if a pig passed the Turing test. Babies are humans, so they get preferential treatment.
I’d say so, yeah. It’s kind of a tricky function, though, since there are two reasons I’m logically willing to give preferential treatment to an organism: the likelihood of said organism eventually becoming the ancestor of a creature similar to myself, and the likelihood of that creature or its descendants contributing to an environment in which creatures similar to myself would thrive.
It’s a lot more hard-edged than intelligence. Of all the animals (I’m talking about individual animals, not species) in the world, practically all are really close to 0% or 100% human. On the other hand, there is a broad range of intelligence among animals, and even in humans. So if you want a standard that draws a clean line, humanity is better than intelligence.
I can tell the difference between an uploaded/frozen human and a pig. Even an uploaded/frozen pig. Transhumans are in the preferential-treatment category, but transpigs aren’t.
This is a fully general counter-argument. Any standard of moral worth will have certain objects that meet the standard and certain objects that fail. If you say “All objects that have X property have moral worth”, I can immediately accuse you of eugenics against objects that do not have X property.
And a question for you: If you think that more intelligence equals more moral worth, does that mean that AI superintelligences have super moral worth? If Clippy existed, would you try to maximize the number of paperclips in order to satisfy the wants of a superior intelligence?
I really like your point about the distinction between maladaptive behavior and immoral behavior. But I don’t think your example about women in higher education is as cut and dried as you present it.
For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral. For me, maladaptive-ness is the explanation for why certain possible moral memes (insert society-wide incest-marriage example) don’t exist in recorded history, even though I should otherwise expect them to exist given my belief in moral anti-realism.
Disagree? What do you mean by this?
Edit: If I believe that morality, either descriptively or prescriptively, consists of the values imparted to humans by the evolutionary process, I have no need to adhere to the process roughly used to select these values rather than the values themselves when they are maladaptive.
If one is committed to a theory that says morality is objective (aka moral realism), one needs to point at what it is that makes morality objectively true. Obvious candidates include God and the laws of physics. But those two candidates have been disproved by empiricism (aka the scientific method).
At this point, some detritus of evolution starts to look like a good candidate for the source of morality. There isn’t an Evolution Fairy who commanded the humans evolve to be moral, but evolution has created drives and preferences within us all (like hunger or desire for sex). More on this point here—the source of my reference to godshatter.
It might be that there is an optimal way of bringing these various drives into balance, and the correct choices to all moral decisions can be derived from this optimal path. As far as I can tell, those who are trying to derive morality from evo. psych endorse this position.
In short, if morality is the product of human drives created by evolution, then behavior that is maladaptive (i.e. counter to what is selected for by evolution) is essentially correlated with immoral behavior.
That said, my summary of the position may be a bit thin, because I’m a moral anti-realist and don’t believe the evo. psych → morality story.
Ah, I see what you mean. I don’t think one has to believe in objective morality as such to agree that “morality is the godshatter of evolution”. Moreover, I think it’s pretty key to the “godshatter” notion that our values have diverged from evolution’s “value”, and we now value things “for their own sake” rather than for their benefit to fitness. As such, I would say that the “godshatter” notion opposes the idea that “maladaptive is practically the definition of immoral”, even if there is something of a correlation between evolutionarily-selectable adaptive ideas and morality.
Consider this set:
A sleeping man. A cryonics patient. A nonverbal 3-year-old. A drunk, passed out.
I think these are all people, they’re pretty close to babies, and we shouldn’t kill any of them.
The reason they all feel like babies to me, from the perspective of “are they people?”, is that they’re in a condition where we can see a reasonable path for turning them into something that is unquestionably a person.
EDIT: That doesn’t mean we have to pay any cost to follow that path—the value we assign to a person’s life can be high but must be finite, and sometimes the correct, moral decision is to not pay that price. But just because we don’t pay that cost doesn’t mean it’s not a person.
I don’t think the time frame matters, either. If I found Fry from Futurama in the cryostasis tube today, and I killed him because I hated him, that would be murder even though he isn’t going to talk, learn, or have self-awareness until the year 3000.
Gametes are not people, even though we know how to make people from them. I don’t know why they don’t count.
EDIT: oh shit, better explain myself about that last one. What I mean is that it is not possible to murder a gamete—they don’t have the moral weight of personhood. You can, potentially, in some situations, murder a baby (and even a fetus): that is possible to do, because they count as people.
I’ve never seen a compiling AI, let alone an interrupted one, even in fiction, so your example isn’t very available to me. I can imagine conditions that would make it OK or not OK to cancel the compilation process.
This is most interesting to me:
I know we’re talking about intuitions, but this is one description that can’t jump from the map into the territory. We know that the past is completely screened off by the present, so our decisions, including moral decisions, can’t ultimately depend on it. Ultimately, there has to be something about the present or future states of these humans that makes it OK to kill the baby but not the guy in the coma. Could you take another shot at the distinction between them?
This question is fraught with politics and other highly sensitive topics, so I’ll try to avoid getting too specific, but it seems to me that thinking of this sort of thing purely in terms of a potentiality relation rather misses the point. A self-extracting binary, a .torrent file, a million lines of uncompiled source code, and a design document are all, in different ways, potential programs, but they differ from each other both in degree and in type of potentiality. Whether you’d call one a program in any given context depends on what you’re planning to do with it.
I’m not at all sure a randomly selected human gamete is less likely to become a person than a randomly selected cryonics patient (at least, with currently-existing technology).
Might be better to talk about this in terms of conversion cost rather than probability. To turn a gamete into a person you need another gamete, $X worth of miscellaneous raw materials (including, but certainly not limited to, food), and a healthy female of childbearing age. She’s effectively removed from the workforce for a predictable period of time, reducing her probable lifetime earning potential by $Y, and has some chance of various medical complications, which can be mitigated by modern treatments costing $Z but even then works out to some number of QALYs in reduced life expectancy. Finally, there’s some chance of the process failing and producing an undersized corpse, or a living creature which does not adequately fulfill the definition of “person.”
In short, a gamete isn’t a person for the same reason a work order and a handful of plastic pellets aren’t a street-legal automobile.
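That conversion-cost framing can be written down directly. A minimal sketch, with every figure a placeholder standing in for the symbolic $X, $Y, $Z above (none of them estimates of anything), and the QALY loss folded into the medical term for simplicity:

```python
# Sketch of the conversion cost of turning a "potential person" into a person.
# x, y, z mirror the symbolic $X, $Y, $Z above; all values are placeholders.

def conversion_cost(x_materials, y_lost_earnings, z_medical, p_failure):
    """Expected cost, treating a failed process as requiring a fresh attempt."""
    base = x_materials + y_lost_earnings + z_medical
    return base / (1.0 - p_failure)  # expected attempts = 1 / (1 - p_failure)

cost = conversion_cost(
    x_materials=10_000,      # $X: food and miscellaneous raw materials
    y_lost_earnings=50_000,  # $Y: reduced lifetime earning potential
    z_medical=20_000,        # $Z: medical mitigation (QALY loss folded in)
    p_failure=0.2,           # chance the process fails outright
)
print(f"expected gamete-to-person conversion cost: ${cost:,.0f}")
```

On this framing the gamete’s deficit isn’t low probability so much as the size and kind of the remaining inputs, which is exactly the work-order-and-plastic-pellets point.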
What’s the cutoff probability?
You are right; retracted.
Figuring out how to define human (as in “don’t kill humans”) so as to include babies is relatively easy, since babies are extremely likely to grow up into humans.
The hard question is deciding which transhumans—including types not yet invented, possibly types not yet thought of, and certainly types which are only imagined in a sketchy abstract way—can reasonably be considered as entities which shouldn’t be killed.
Well, it sure looks like babies have a lot of things in common with people, and will become people one day, and lots of people care about them.
I meant humans, not people. Sorry.
And I agree that we should treat animals better. I’m vegetarian.
I agree that this discussion is slightly complex. Gwern’s abortion dialogue contains a lot of relevant material.
However, I don’t feel that saying that “we should protect babies because one day they will be human” requires aggregate utilitarianism as opposed to average utilitarianism, which I in general prefer. Babies are already alive, and already experience things.
This argument has two functions. One is the literal meaning of “we should respect people’s preferences”. See discussion on the Everybody Draw Mohammed day. The other is that other people’s strong moral preferences are some evidence towards the correct moral path.
I often “claim” my downvotes (aka I will post “downvoted” and then give a reason). However, I know that when I do this, I will be downvoted myself. So that is probably one big deterrent to others doing the same.
For one thing, the person you are downvoting will generally retaliate by downvoting you (or so it seems to me, since I tend to get an instant −1 on downvoting comments), and people who disagree with your reason for downvoting will also downvote you.
Also, many people on this site are just a-holes. Sorry.
Common reasons I downvote with no comment: I think the mistake is obvious to most readers (or already mentioned) and there’s little to be gained from teaching the author. I think there’s little insight and much noise—length, unpleasant style, politically disagreeable implications that would be tedious to pick apart (especially in tone rather than content). I judge that jerkishness is impairing comprehension; cutting out the courtesies and using strong words may be defensible, but using insults where explanations would do isn’t.
On the “just a-holes” note (yes, I thought “Is this about me?”): It might be that your threshold for acceptable niceness is unusually high. We have traditions of bluntness and flaw-hunting (mostly from hackers, who correctly consider niceness noise when discussing bugs in X), so we ended up rather mean on average, and very tolerant of meanness. People who want LW to be nicer usually do it by being especially nice, not by especially punishing meanness. I notice you’re on my list of people I should be exceptionally nice to, but not on my list of exceptionally nice people, which is a bad thing if you love Postel’s law. (Which, by Postel’s law, nobody but me has to.) The only LessWronger I think is an asshole is wedrifid, and I think this is one of his good traits.
.
I think there is a difference between choosing bluntness where niceness would tend to obscure the truth, and choosing between two forms of expression which are equally illuminating but not equally nice. I don’t know about anyone else, but I’m using “a-hole” here to mean “One who routinely chooses the less nice variant in the latter situation.”
(This is not a specific reference to you; your comment just happened to provide a good anchor for it.)
Of course, if that’s the meaning, then before I judge someone to be an “a-hole” I need to know what they intended to illumine.
Would you mind discussing this with me? I find it disturbing that I come off as having double standards, and am interested to know more about where that impression comes from. I personally feel that I do not expect better behaviour from others than I practice, but would like to know (and update my behaviour) if I am wrong about this.
I admit to lowering my level of “niceness” on LW, because I can’t seem to function when I am nice and no one else is. However, MY level of being “not nice” means that I don’t spend a lot of time finding ways to word things in the most inoffensive manner. I don’t feel like I am exceptionally rude, and am concerned if I give off that impression.
I also feel like I keep my “punishing meanness” levels to a pretty high standard too: I only “punish” (by downvoting or calling out) what I consider to be extremely rude behavior (i.e. “I wish you were dead” or “X is crap.”), behavior that is nowhere near the level of “meanness” that I feel like my posts ever get near.
You come off as having single-standards. That is, I think the minimal level of niceness you accept from others is also the minimal level of niceness you practice—you don’t allow wiggle room for others having different standards. I sincerely don’t resent that! My model of nice people in general suggests y’all practice Postel’s law (“Be liberal in what you accept and conservative in what you send”), but I don’t think it’s even consistent to demand that someone follow it.
...I’m never going to live that one down, am I? Let’s just say that there’s an enormous number of behaviours that I’d describe as “slightly blunter than politeness would allow, for the sake of clarity” and you’d describe as “extremely rude”.
Also, while I’ve accepted the verdict that “X is crap” is extremely rude and I shouldn’t ever say it, I was taken aback at your assertion that it doesn’t contribute anything. Surely “Don’t use this thing for this purpose” is non-empty. By the same token, I’d actually be pretty okay with being told “I wish you were dead” in many contexts. For example, in a discussion of eugenics, I’d be quite fine with a position that implies I should be dead, and would much rather hear it than have others dance around the implication.
Maybe the lesson for you is that many people suck really bad at phrasing things, so you should apply the principle of charity harder and be tolerant if they can’t be both as nice and as clear as you’d have been and choose to sacrifice niceness? The lesson I’ve learned is that I should be more polite in general, more polite to you in particular, look harder for nice phrasings, and spell out implications rather than try to bake them in connotations.
I’m fine with positions that imply I should never have been born (although I have yet to hear one that includes me), but I’d feel very differently about one implying that I should be dead!
Many people don’t endorse anything similar to the principle that “any argument for no more of something should explain why there is a perfect amount of that thing or be counted as an argument for less of that thing.”
E.g. thinking arguments that “life extension is bad” generally have no implications regarding killing people were it to become available. So those who say I shouldn’t live to be 200 are not only basically arguing I should (eventually, sooner than I want) be dead; the implication I take is often that I should be killed (in the future).
Personally, I’d be far more insulted by the suggestion that I should never have been born, than by the suggestion that I should die now.
Why?
If someone tells me I should die now, I understand that to mean that my life from this point forward is of negative value to them. If they tell me I should never have been born, I understand that to mean not only that my life from this point forward is of negative value, but also that my life up to this point has been of negative value.
Interesting. I don’t read it as necessarily a judgment of value at all to be told that I should never have been born (things that should not have happened may accidentally have good consequences). Additionally, someone who doesn’t think that I should have been born, but also doesn’t think I should die, will not try to kill me, though they may push policies that will prevent future additions to my salient reference class; someone who thinks I should die could try to make that happen!
Interesting.
For my part, I don’t treat saying things like “I think you should be dead” as particularly predictive of actually trying to kill me. Perhaps I ought to, but I don’t.
Upvoted, and thank you for the explanation.
If it helps, I didn’t even remember that one of the times I’ve called someone out on “X is crap” was you. So consider it “lived down”.
You’re right. How about an assertion that it doesn’t contribute anything that couldn’t be easily rephrased in a much better way? Your example of “Don’t use this thing for this purpose”, especially if followed by a brief explanation, is an order of magnitude better than “X is crap”, and I doubt it took you more than 5 seconds to write.
.
Correcting for my differing speech patterns across languages and need to speak to stuck-up authorities… probably roughly as much.
On the other hand if people agree with your reasons they often upvote it (especially back up towards zero if it dropped negative).
I certainly hope so. I would expect that they disagree with your reasons for downvoting or else they would have not made their comment. It would take a particularly insightful explanation for your vote for them to believe that you influencing others toward thinking their contribution is negative is itself a valuable contribution.
*arch*
Do you think that’s a good thing, or just a likely outcome?
Downvoting explanations of downvotes seems like a really bad idea, regardless of how you feel about the downvote. It strongly incentivizes people not to explain themselves, not to open themselves up for debates, but just to vote and then remove themselves from the discussion.
I don’t see how downvoting explanations and more explicit behavior is helpful for rational discourse in any way.
This is exactly the reaction I want to trolls, basic questions outside of dedicated posts, and stupid mistakes. Are downvotes of explanations in those cases also read as an incentive not to post explanations in general?
Speaking for myself, yes. I read it as “don’t engage this topic on this site, period”.
I agree with downvoting (and ignoring) the types of comments you mentioned, but not explanations of such downvotes. The explanations don’t add any noise, so they shouldn’t be punished. (Maybe if they got really excessive, but currently I have the impression that too few downvotes are explained, rather than too many.)
Comments can serve as calls to action encouraging others to downvote, or as priming people with a negative or unintended interpretation of a comment—be it yours or that of someone else—and that influence is something to be discouraged. This is not the case with all explanations of downvotes, but it certainly describes the effect, and often the intent, of the vast majority of “Downvoted because” declarations. Exceptions include explanations that are requested, and occasionally reasons that are legitimately surprising or useful. Obviously also an exception is any time when you actually agree they have a point.
I might well consider an explanation of a downvote on a comment of mine to be a valuable contribution, even if I continue to disagree with the thinking behind it. Actually, that’s not uncommon.
If I downvote with comment, it’s usually for a fairly specific problem, and usually one that I expect can be addressed if it’s pointed out; some very clear logical problem that I can throw a link at, for example, or an isolated offensive statement. I may also comment if the post is problematic for a complicated reason that the poster can’t reasonably be expected to figure out, or if its problems are clearly due to ignorance.
Otherwise it’s fairly rare for me to do so; I see downvotes as signaling that I don’t want to read similar posts, and replying to such a post is likely to generate more posts I don’t want to read. This goes double if I think the poster is actually trolling rather than just exhibiting some bias or patch of ignorance. Basically it’s a cost-benefit analysis regarding further conversation; if continuing to reply would generate more heat than light, better to just downvote silently and drive on.
It’s uncommon for me to receive retaliatory downvotes when I do comment, though.
I think it’s more that there are a few a-holes, but they are very prolific (well, that and the same bias that causes us to notice how many red lights we get stopped at but not how many green lights we speed through also focuses our attention on the worst posting behavior).
Interesting. Who are the prolific “a-holes”?
Explicitly naming names accomplishes nothing except inducing hostility, as it will be taken as a status challenge. Not explicitly naming names, one hopes, leaves everyone re-examining whether their default tone is appropriately calibrated.
I agree with you that naming names can be taken as a status challenge.
Of course, this whole topic positions you as an adjudicator of appropriate calibration, which can be taken as a status grab, for the excellent reason that it is one. Not that there’s anything wrong with going for status.
All of that notwithstanding, if you prefer to diffuse your assertions of individual inappropriate behavior over an entire community, that’s your privilege.
I care about my status on this site only to the extent that it remains above some minimum required for people not to discount my posts simply because they were written by me.
My interest in this thread is that, like Daenerys, I think the current norm for discourse is suboptimal, but I think I give greater weight to the possibility that some of the suboptimal behavior is people defecting by accident; hence the subtle push for occasional recalibration of tone.
There was a subtle push? I must have missed that while I was distracted by the blatant one!
See, it’s working!
Just to be clear: I’m fine with you pushing for a norm that’s optimal for you. Blatantly, if you want to; subtly if you’d rather.
But I don’t agree that the norm you’re pushing is optimal for me, and I consider either of us pushing for the establishment of norms that we’re most comfortable with to be a status-linked social maneuver.
Why? (A sincere question, not a rhetorical one)
I’m not sure how every post doesn’t do this; many posts push to maintain a status-quo, but all posts implicitly favor some set of norms.
I agree that pretty much all communication does this, yes. Sometimes explicitly, sometimes implicitly.
As to why… because I see the norm you’re pushing as something pretty close to the cultural baseline of the “friendly” pole of the American mainstream, which I see as willing to trade off precision and accuracy for getting along. You may even be pushing for something even more “get along” optimized than that.
I mostly don’t mind that the rest of my life more or less optimizes for getting along, though I often find it frustrating when it means that certain questions simply can’t ever be asked in the first place, and that certain answers can’t be believed when they’re given because alternative answers are deemed too impolite to say. Still, as I say, I accept it as a fact about my real-life environment. I probably even prefer it, as I acknowledge that optimizing for precision and accuracy at the expense of getting along would be problematic if I could never get away from it, however tired or upset I was.
That said, I value the fact that LW uses a different standard, one that optimizes for accuracy and precision, and therefore efforts to introduce the baseline “get along” standard to LW remove local value for me.
Again, let me stress that I’m not asserting that you ought not make those efforts. If that’s what you want, then by all means push for it. If you are successful, LW will become less valuable to me, but you’re not under any kind of moral obligation to preserve the value of the Internet to me.
But speaking personally, I’d prefer you didn’t insist as you did so that those efforts are actually in my best interests, with the added implication that I can’t recognize my interests as well as you can.
It left me evaluating whether it was me personally that was being called an asshole or others in the community and whether those others are people that deserve the insult or not. Basically I needed to determine whether it was a defection against me, an ally or my tribe in general. Then I had to decide what, if any, was an appropriate, desirable and socially acceptable tit-for-tat response. I decided to mostly ignore him because engaging didn’t seem like it would do much more than giving him a platform from which to gripe more.
If it makes you feel better, when I read his post I thought lovingly of you. (I also believe your response was appropriate.)
Why do you feel it’s correct to interpret it as defection in the first place?
In case you were wondering, the translation of this from social-speak to Vulcan is: “Calling people assholes isn’t a defection; therefore, you saying—and in particular feeling—that labeling people as assholes is a defection says something personal about you. I am clever and smooth for communicating this rhetorically.”
So this too is a defection. Not that I mind—because it is a rather mild defection that is well within the bounds of normal interaction. I mean… it’s not like you called me an asshole or anything. ;)
That is not a correct translation. Calling someone an asshole may or may not be defection. In this case, I’m not sure whether it was. Examining why you feel that it was may be enlightening to me or to you or hopefully both. Defecting by accident is a common flaw, for sure, but interpreting a cooperation as a defection is no less damaging and no less common.
Am I an asshole?
I’m already working on not being an asshole in general, and on not being an asshole to specific people on LW. If someone answers “yes” to that I’ll work harder at being a non-asshole on LW. Or post less. Or try to do one of those for two days then forget about the whole thing.
You haven’t stood out as someone who has been an asshole to me or anyone I didn’t think deserved it in the context, those being the only cases salient enough that I could expect myself to remember.
If you’re already working on it, you’re probably in the clear. Not being an a-hole is a high-effort activity for many of us; in this case I will depart from primitive consequentialism and say that effort counts for something.
And, equivalently, signalling effectively that you are expending effort counts for something.
Yeah, I do retaliate quite commonly (less than 60% retaliation ITT, though), but I’ve never been an asshole on LW until this thread. Not particularly planning on repeating this, but I’m not sorry at all. Forced civility just doesn’t fit the mood of this topic at all in my eyes.