[link] Is Alu Life?
I recently read (in Dawkins’ The Ancestor’s Tale) about the Alu sequence, and went on to read about transposons generally. Having as I do a rather broad definition of life, I concluded that Alu (and others like it) are lifeforms in their own right, although parasitic ones. I found the potential ethical implications somewhat staggering, especially given the need to shut up and multiply those implications by the rather large number of transposon instances in a typical multicellular organism.
I have written out my thoughts on the subject, at http://jttlov.no-ip.org/writings/alulife.htm. I don’t claim to have a well-worked out position, just a series of ideas and questions I feel to be worthy of discussion.
ETA: I have started editing the article based on the discussion below. For reference with the existing discussion, I have preserved a copy of the original article as well, linked from the current version.
If the moral status of transposons seems to depend on the definition of an English word, then something has gone horribly wrong.
Your comment is a very good argument against a position—but unfortunately not the position I hold. I may have poorly expressed my meaning; it’s not strictly the definition of the English word ‘life’ that I care about, but rather the exploration of my utility function, and whether my preferences are consistent and coherent, or whether they make an arbitrary distinction between “life with moral status” (people, chimps, and kittens) and “life without moral status” (cockroaches, E. coli, and transposons).
Can you suggest a good way for me to explain this in the article itself?
At the very least, you should reconsider the syllogism at the heart of your article:
All life has ethical value.
Transposons are life.
Therefore, transposons have ethical value.
We can substitute in your tentative definition of life:
All “self-replicating structures with a genotype which determines their phenotype and is susceptible to mutation and selection” have ethical value.
Transposons are self-replicating structures with a genotype which determines their phenotype and is susceptible to mutation and selection.
Therefore, transposons have ethical value.
Premise 2 is an empirical claim. Premise 1 is a moral claim that is strictly stronger than the conclusion, and you do not justify it at all.
If you have moral intuitions or moral arguments for the first premise, then perhaps you should write about those instead. And your arguments ought to make sense without using the word “life”. If your argument is along the lines of “well, humans and chimpanzees have ethical value, and they’re both self-replicating structures with genomes etc., so it only makes sense that transposons have ethical value too”, that’s not good enough. You’d have to say why being a self-replicating structure with a genome etc. is the reason why humans and chimpanzees have ethical value. If humans and chimpanzees have ethical value because of some other feature, then perhaps transposons don’t share that feature and they don’t have ethical value after all.
Hmm. I do understand that, but I still don’t think it’s relevant. I don’t try to argue that Premise 1 is true (except in a throwaway parenthetical which I am considering retracting), rather I’m arguing that Premise 2 is true, and that consequently Premise 1 implies the conclusion (“transposons have ethical value”) which in turn implies various things ranging from the disconcerting to the absurd. In fact I believed Premise 1 (albeit without great examination) until I learned about transposons, and now I doubt it (though I haven’t rejected it so far; I’m cognitively marking it as “I’m confused about this”). That’s why I felt there was something worth writing about: namely, that transposons expose the absurdity of an assumption that had previously been part of my moral theory, and by extension may perhaps be part of others’.
Edit: well, that’s one reason I wrote the article. The other reason was to raise the questions in the hope of creating a discussion through which I might come to better understand the problem.
Further edit: actually, I’m not sure the first reason was my reason for writing the article; I think I was indeed (initially) arguing for Premise 1, and I have been trying to make excuses and pretend I’d never argued for it. Yet I still can’t let go of Premise 1 completely. Thought experiment: imagine a planet with a xenobiology that only supports plant life—nothing sentient lives there or could do so—and there is (let us assume) no direct benefit to us to be derived from its existence. Would we think it acceptable to destroy that planet? I think not, yet the obvious “feature conferring ethical value on humans and chimps” would be sentience. I remain confused.
Interestingly, the gut reaction I had to destroying plant planet was “NO! We could learn so much!”. But then I think I value interesting information, not life.
I think this scenario is a little difficult to visualize: an entire biosphere we can’t derive a benefit from, even for sheer curiosity’s sake? So, applying the LCPW: the planet has been invaded by a single species of xenokudzu, which has choked out all other life but is thriving merrily on its own (maybe it’s an ecocidal bioweapon or something). Would it be acceptable to destroy that planet? I’d say yes. Agree / disagree / think my changes alter the question?
Agree, and think your changes alter the question I was trying to ask, which is not whether destroying Xenokudzu Planet would be absolutely unacceptable (as a rule, most things aren’t), but whether we’d need a sufficiently good reason to do so.
I think the LCPW for you here is to suppose that this planet is only capable of supporting this xenokudzu, and no other kind of life. (Maybe the xenokudzu is plasma helices, and the ‘planet’ is actually a sun, and suppose for the sake of argument that that environment can’t support sentient life.)
So, more generally, let the gain (to non-xenokudzu utility) from destroying Xeno Planet tend to zero. Is there a point at which you choose not to destroy, or will any nonzero positive gain to sentient life justify wiping out Xeno Planet?
So, if I were building a planet-destroying superlaser (for, um, mining I guess) I wouldn’t see any particular difference between testing it on Kudzu World or the barren rock next door.
That’s interesting, because I would see a difference. Given the choice, I’d test it on the barren rock. However, I can’t justify that, nor am I sure how much benefit I’d have to derive to be willing to blow up Eta Kudzunae.
Fortunately, I don’t care about something just because it fits a definition of “life.” I care about something because it’s something I care about. Circular, yes, but at least give me credit for it not being false :P
But it is false. Nothing can cause itself, and that includes your caring.
Positive feedback exists. You can care about something now primarily because you’ve cared about it in the past. We even have a name for this: the sunk cost fallacy.
That isn’t quite what the sunk cost fallacy means, and one can care about something because one cared about it in the past without ever committing the sunk cost fallacy. The sunk cost fallacy requires miscalculating the expected value of a decision to pursue a particular goal due to past expenditure in seeking that goal, i.e. behaving as if expenses already incurred are actually anticipated future losses for abandoning the course of action.
The potential moral implications of Alu being life have nothing to do with multiplying by the number of transposons, they have to do with realizing that what you value isn’t “life”.
What definition of “life” is satisfied by both a transposon and an AI?
Edit: Did you learn ethics from Orson Scott Card or something?
My ethics were influenced a nonzero amount by reading Orson Scott Card. More to the point, OSC provided terminology which I felt was both useful and likely to be understood by my audience.
I now think that my use of the word “must” in the above-quoted passage was a mistake.
Am I the only one who’s bothered by the colour scheme of the article? (BTW, are there people who take the Sapir–Whorf hypothesis so seriously as to believe that speakers of languages with separate words for ‘navy blue’ and ‘sky blue’ would find it easier to read?)
I don’t believe it, but it sounds like it should be testable, and if it hasn’t been tested I’d be somewhat interested in doing so. I believe there are standard methods of comparing legibility or readability of two versions of a text (although, IIRC, they tend to show no statistically significant difference between perfect typesetting and text that would make a typographer scream).
You’re probably not the only one bothered by the colour scheme, though; historically, every colour scheme I’ve used on the various iterations of my website has bothered many people. The previous one was bright green on black :S
(In case it wasn’t clear, I wasn’t serious about the speakers-of-languages-with-several-words-for-blue thing.)