Savulescu: “Genetically enhance humanity or face extinction”
In this video, Julian Savulescu from the Uehiro Centre for Practical Ethics argues that human beings are “Unfit for the Future”—that radical technological advances, liberal democracy and human nature will combine to make the 21st century the century of global catastrophes, perpetrated by terrorists and psychopaths with tools such as engineered viruses. He goes on to argue that enhanced intelligence and a reduced urge to violence and defection in large commons problems could be achieved using science, and may be a way out for humanity.
Skip to 1:30 to avoid the tedious introduction
Genetically enhance humanity or face extinction—PART 1 from Ethics of the New Biosciences on Vimeo.
Genetically enhance humanity or face extinction—PART 2 from Ethics of the New Biosciences on Vimeo.
Well, I have already said something rather like this. Perhaps this really is a good idea, more important, even, than coding a friendly AI? AI timelines where super-smart AI doesn’t get invented until 2060+ would leave enough room for human intelligence enhancement to happen and have an effect. When I collected some SIAI volunteers’ opinions on this, most thought that there was a very significant chance that super-smart AI will arrive sooner than that, though.
A large portion of the video consists of pointing out the very strong scientific case that our behavior is a result of the way our brains are structured, and that this means that changes in our behavior are the result of changes in the way our brains are wired.
Biased sample!
Yes, it is a biased sample. However, reality is not a democracy: some people have better ideas than others.
Personally, I think that the within-SIAI view of AI takeoff timelines will suffer from bias: there is an emotional temptation to put down timelines that are too near-term. But I don’t know how much to correct for this.
A primitive outside view analysis that I did indicates a ~50% probability of superintelligent AI by 2100.
Could you elaborate a bit on this analysis? It’d be interesting to know how you arrived at that number.
Take a log-normal prior for when human-level AI will be developed, with t_0 at 1956. Choose the remaining two parameters to line up with the stated beliefs of the first AI researchers—i.e. they expected human-level AI not to have arrived within a year, but they seem to have assigned significant probability to it happening by 1970. Then update that prior on the fact that, in 2010, we still have no human-level AI.
This “outside view” model takes into account the evidence provided by the failure of the past 64 years of AI, and I think it is a reasonable model.
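For concreteness, here is a minimal sketch of that outside-view calculation in Python. The parameters mu and sigma below are illustrative assumptions chosen to roughly match the stated constraints (low probability of AI within a year of 1956, substantial probability by 1970); they are not values given in the comment.

```python
# Sketch of the outside-view model: log-normal prior over "years after 1956
# until human-level AI", updated on the observation of no human-level AI by 2010.
# mu and sigma are illustrative assumptions, not values from the original comment.
import numpy as np
from scipy.stats import lognorm

T0 = 1956
mu, sigma = np.log(20), 2.0                    # assumed prior: median ~1976, wide spread
prior = lognorm(s=sigma, scale=np.exp(mu))     # T = years after 1956 until human-level AI

# Rough sanity checks against the early researchers' stated beliefs:
print("P(AI by 1957):", prior.cdf(1957 - T0))  # should be small
print("P(AI by 1970):", prior.cdf(1970 - T0))  # should be substantial

def p_ai_by(year, no_ai_until=2010):
    """Posterior P(AI by `year`), conditioned on no human-level AI by `no_ai_until`."""
    t_obs, t = no_ai_until - T0, year - T0
    return (prior.cdf(t) - prior.cdf(t_obs)) / prior.sf(t_obs)

print("P(AI by 2100 | none by 2010):", p_ai_by(2100))
```

With these particular choices the posterior probability of human-level AI by 2100 comes out in the neighbourhood of 50%, in line with the figure quoted above, but the result is fairly sensitive to the assumed prior parameters.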
Thanks, that was indeed interesting.
Now, the only point I do not understand yet is how the expectations of the original AI researchers are a factor in this. Do you have some reason to believe that their expectations were too optimistic by a factor of about 10 (1970 vs 2100) rather than some other number?
They are a factor because their opinions in 1956, before the data had been seen, form a basis for constructing a prior that was not causally affected by the data.
One might be generous and say it was a (relatively) expert sample, since SIAI volunteers presumably know more about the subject than normal people.
A small dose of outside view shows that it’s all nonsense. The idea of the evil terrorist or criminal mastermind is based on nothing—such people don’t exist. Virtually all terrorists and criminals are idiots, and neither group is interested in maximizing destruction.
See everything Schneier has ever written about it if you need data confirming what I just said.
We forecast technology becoming more powerful and available to more people with time. As a corollary, the un-maximized destructive power of idiots also grows, eventually enough to cause x-risk scenarios.
What about the recent reports of Muslim terrorists being (degreed) engineers in disproportionate numbers? While there’s some suggestion of an economic/cultural explanation, it does indicate that at least some terrorists are people who were at least able to get engineering degrees.
Kinda funny: the first terrorist who came to my mind was this guy.
From Wikipedia: Kaczynski was born in Chicago, Illinois, where, as an intellectual child prodigy, he excelled academically from an early age. Kaczynski was accepted into Harvard University at the age of 16, where he earned an undergraduate degree, and later earned a PhD in mathematics from the University of Michigan. He became an assistant professor at the University of California, Berkeley at age 25, but resigned two years later.
It took the FBI 17 years to arrest the Unabomber, and he only got caught because he published a manifesto in the New York Times, which his brother recognized.
Anyway, IMO Savulescu merely says that with further technological progress it could be possible for smart (say, IQ around 130) sociopaths to kill millions of people. Do you really believe that this is impossible?
Wikipedia describes the Unabomber’s feats as a “mail bombing spree that spanned nearly 20 years, killing three people and injuring 23 others”.
3 people in twenty years just proves my point that he either never cared about maximizing destruction or was really bad at it. You can do better in one evening by getting an SUV, filling it with gas canisters for extra effect, and driving it into a school bus at full speed. See Mythbusters for some ideas.
The fact of the matter is that such people don’t exist. They’re possible in the way that Russell’s Teapot is possible.
Yeah, good points, but Kaczynski specifically tried to kill math and science professors, or generally people who contributed to technological progress. He didn’t try to kill as many people as possible, so blowing up a bus full of school kids was not on his agenda.
Anyway, IMO it is odd to believe that there is less than a 5% probability that some psychopath in the next 50 years could kill millions of people, perhaps through advanced biotechnology (let alone nanotechnology or uFAI). That such feats were nearly impossible in the past does not imply that they will be impossible in the future.
Unless you believe the distribution of damage done by psychopaths is extremely fat-tailed, the lack of moderately successful ones puts a very tight bound on the probability of an extremely damaging psychopath.
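As an illustration of why the tail assumption does all the work here, one can compare two toy damage distributions that are both broadly consistent with moderately large attacks being rare and then look at their far tails. The distributions and parameters below are assumptions chosen purely for illustration, not fitted to any real data.

```python
# Toy comparison of a thin-tailed vs. fat-tailed model of "deaths per lone attacker".
# Parameters are illustrative assumptions only, not fitted to any real dataset.
from scipy.stats import lognorm, pareto

thin = lognorm(s=1.5, scale=10)   # thin-ish tail: median ~10 deaths
fat = pareto(b=0.8, scale=1)      # fat tail: Pareto with exponent < 1

for name, dist in [("lognormal (thin tail)", thin), ("pareto (fat tail)", fat)]:
    print(name,
          " P(>100):", dist.sf(1e2),
          " P(>1,000):", dist.sf(1e3),
          " P(>1,000,000):", dist.sf(1e6))
```

Both toy models make moderately large attacks rare, but the thin-tailed one assigns an astronomically small probability to a million-death attacker while the fat-tailed one does not; that is the sense in which the absence of moderately successful cases tightly bounds the extreme case only under a non-fat-tailed assumption.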
All the “advanced biotech / nanotech / AI” stuff is not going to happen like that. If it happens at all, it will give more power to large groups with enough capital to research and develop it, not to lone psychopaths.
I hope you’re right, and I also think that it is more likely than not. But you seem to be overly confident. If we are speculating about the future it is probably wise to widen our confidence intervals...
Savulescu explicitly discusses smart sociopaths.
I think Schneier is one of the most intelligent voices in the debate on terrorism but I’m not convinced you sum up his position entirely accurately. I had a browse around his site to see if I could find some specific data to confirm your claim and had trouble finding anything. The best I could find was Portrait of the Modern Terrorist as an Idiot but it doesn’t contain actual data. I’m rather confused why you linked to the specific blog post you chose which seems largely unrelated to your claim. Do you have any better links you could share?
Note that in the article I link he states:
There was a terrorist attempt only recently:
“Nation on edge after Christmas terrorism attempt”
http://www.latimes.com/news/nation-and-world/la-na-terror-plane28-2009dec28,0,6963038.story
Read some Schneier. A more accurate headline would be: “Nation on edge after an idiot demonstrates his idiocy”. Nearly all terrorism has been perpetrated by people with serious mental deficiencies—even the 9/11 attacks depended on a lot of luck to succeed. Shit happens, but random opportunities usually aid the competent more than the incompetent. And nearly all criminals and terrorists are of lower intelligence; the few that are reasonably intelligent are seriously lacking in impulse control, which screws up their ability to make and carry through plans. Besides Bruce Schneier’s work, see “The Bell Curve” and most of the newer literature on intelligence.
So, we could decompile humans, and do FAI to them. Or we could just do FAI. Isn’t the latter strictly simpler?
Well, the attention of those capable of solving FAI should be undivided. Those who aren’t equipped to work on FAI and who could potentially make progress on intelligence enhancing therapies, should do so.
I don’t think so. The problem with FAI is that there is an abrupt change, whereas IA is a continuous process with look-ahead: you can test out a modification on just one human mind, so the process can correct any mistakes.
If you get the programming on your seed AI wrong, you’re stuffed.
I believe it’s almost backwards: with IA, you get small mistakes accumulating into irreversible changes (with all sorts of temptations to declare the result “good enough”), while with FAI you have a chance of getting it absolutely right at some point. The process of designing FAI doesn’t involve any abrupt change, the same way as you’d expect for IA. On the other hand, if there is no point with IA where you can “let go” and be sure the result holds the required preference, the “abrupt change” of deploying FAI is the point where you actually win.
X-risk-alleviating AGI just has to be days late to the party for a supervirus created by a terrorist cell to have crashed it. I guess I’d judge against putting all our eggs in the AI basket.
“We” aren’t deciding where to put all our eggs. The question that matters is how to allocate marginal units of effort. I agree, though, that the answer isn’t always “FAI research”.
From a thread http://esr.ibiblio.org/?p=1551#comments in Armed and Dangerous:
Indeed, I have made the argument on a Less Wrong thread about existential risk that the best available mitigation is libertarianism. Not just political, but social libertarianism, by which I meant a wide divergence of lifestyles; the social equivalent of genetic, behavioral dispersion.
The LW community, like most technocratic groups (e.g., socialists), seems to have this belief that there is some perfect cure for any problem. But there isn’t always; in fact, for most complex and social problems there isn’t. Besides the Hayek mentioned earlier, see Thomas Sowell’s “A Conflict of Visions”, its sequel “The Vision of the Anointed”, and his expansion on Hayek’s essay, “Knowledge and Decisions”.
There is no way to ensure humanity’s survival, but the centralizing tendency seems a good way to prevent its survival should the SHTF.
Libertarianism decreases some types of existential risk and bad outcomes in general, but increases other types (like UFAI). It also seems to lead to Robin Hanson’s ultra-competitive, Malthusian scenario, which many of us would consider to be a dystopia.
Have you already considered these objections, and still think that more libertarianism is desirable at this point? If so, how do you propose to substantially nudge the future in the direction of more libertarianism?
I think you misunderstand Robin’s scenario; if we survive, the Malthusian scenario is inevitable after some point.
Robin outright dismisses the possibility of a singleton (AI, groupmind or political entity) farsighted enough to steer clear of Malthusian scenarios until the universe runs down. I tend to think this dismissal is mistaken, but I could be convinced that there is a rough trichotomy of human futures: extinction, singleton or burning the cosmic commons.
Of the three possibilities for the far future, the Malthusian scenario is the least bad. A singleton would be worse, and extinction worse yet. That doesn’t mean I favor a Malthusian result, just that the alternatives are worse.
I don’t agree that there are only three non-negligible possibilities, but putting that aside, why do you think the Malthusian scenario would be better than a singleton? (I believe even Robin thinks that a singleton, if benevolent, would be better than the Malthusian scenario.)
He says that a singleton is unlikely but not negligibly so.
Ah, I see that you are right. Thanks.
There may not be a single strategy that is perfect on its own, but there will always be an optimum course of action, which may be a mixture of strategies (e.g., dump $X into nanotech safety, $Y into intelligence enhancement, and $Z into AGI development). You might never have enough information to know the optimal strategy to maximise your utility function, but one still exists, and it is worth trying to estimate it.
I mention this because previously I have heard “there is no perfect solution” as an excuse to give up and abandon systematic/mathematical analysis of a problem, and just settle with some arbitrary suggestion of a “good enough” course of action.
It isn’t just that there is no “perfect” solution; for many problems there is no solution at all, just a continuing difficulty that must be continually worked through. Claims of some optimal (or even good-enough) solution to these sorts of social problems are usually a means to advance the claimants’ agendas, especially when they propose using gov’t coercion to force everybody to follow their prescriptions.
That claims of this type are sometimes made to advance agendas does not mean we shouldn’t make these claims, or that all such claims are false. It means such claims need to be scrutinised more carefully.
I agree that more often than not there is not a simple solution, and people often accept a false simple solution too readily. But the absence of a simple solution does not mean there is no theoretical optimal strategy for continually working through the difficulty.
Who’s doing that? Governments also use surveillance, intelligence, tactical invasions and other strategies to combat terrorism.
Your first link seems to be broken.
I didn’t watch the full video, but does he actually propose how human beings should be made more docile and intelligent? I don’t mean a technical method, but rather a political method of ensuring that most of humanity gets these augmentations. This is borderline impossible in a liberal democracy. I think this explains why programming an AI is a more practical approach. Consider how many people are furious because they believe that fluoridated water turns people into docile consumers, or that vaccines give kids autism. Now imagine actually trying to convince people that the government should be allowed to mess around with their brains. And if the government doesn’t mandate it, then the most aggressive and dangerous people will simply opt out.
In the Q&A at 15:30, he opines that it will take the first technologically enabled act of mass terrorism to persuade people. I agree: I don’t think anything will get done on x-risks until there’s a gigadeath event.
Even in such a scenario, some rotten eggs would probably refuse the smart drug treatment or the gene therapy injection—perhaps exactly those who would be the instigators of extinction events? Or at least the two groups would overlap somewhat, I fear.
I’m starting to think it would be rational to disperse our world-saving drug of choice by means of an engineered virus of our own, or something equally radically effective. But don’t quote me on that. Or whatever, go ahead.
Not just “rotten eggs” either. If there is one thing that I could nearly guarantee would bring on serious opposition from independent and extremely intelligent people, that would convince people with brains to become “criminals”, it is mandating gov’t meddling with their brains. I, for example, don’t use alcohol or any other recreational drug, I don’t use any painkiller stronger than ibuprofen without excruciating (shingles or major abscess level) pain, most of the more intelligent people I know feel to some extent the same, and I am a libertarian; do you really think I would let people I despise mess around with my mind?
On the topic of shingles, shingles is associated with depression. Should I ask my GP for the vaccine for prevention given that I live in Australia, have had chickenpox, but haven’t had shingles?
You don’t have to trust the government, you just have to trust the scientists who developed the drug or gene therapy. They are the ones who would be responsible for the drug working as advertised and having negligible side-effects.
But yes, I sympathize with you; I’m just like that myself, actually. Some people wouldn’t be able to appreciate the usefulness of the drug, no matter how hard you tried to explain to them that it’s safe, helpful and actually globally risk-alleviating. Those who were memetically sealed off from believing that, or just weren’t capable of grasping it, would oppose it strongly—possibly strongly enough to go to war with the rest of the world over it.
It would also take time to reach the whole population with a governmentally mandated treatment. There isn’t even a world government right now. We are weak and slow. And one comparatively insane man on the run is one too many.
Assuming an efficient treatment for human stupidity could be developed (and assuming that would be a rational solution to our predicament), then the right thing to do would be delivering it in the manner causing the least social upheaval and opposition. That would be a covert dispersal, most definitely. A globally coordinated release of a weaponized retrovirus, for example.
We still have some time before even that can be accomplished, though. And once that tech gets here, we have the hugely increasing risk of bioterrorism, or just accidental catastrophes at the hands of some clumsy research assistant, before we have a chance to even properly prototype & test our perfect smart drug.
If I was convinced of the safety and efficacy of an intelligent enhancing treatment I would be inclined to take it and use my enhanced intelligence to combat any government attempts to mandate such treatment.
It might only be a small enhancement. +30 IQ points across the board would save the world; +30 to just you would not make much difference.
I find that claim highly dubious.
I certainly haven’t supported it; that was the kind of scenario I had in mind, though. Whether cognitive enhancement alone is enough is another debate entirely.
30 additional points of intelligence for everyone could mean that AI gets developed sooner, and therefore there is less time for FAI research.
The same goes for biological research that might lead to biological weapons.
My personal suspicion, and what motivates me to think that IA is a good idea, is that the human race is facing a massive commons problem with respect to AGI. Realizing that there is a problem requires a lot of intelligence. If no one, or very few, realize that something is wrong, then it is unlikely that anything will be done about it. If this is the case, it doesn’t matter how much time we have: if there’s little support for the project of managing the future, little money and little manpower, then even a century or a millennium is not long enough.
The notion that higher IQ means that more money will be allocated to solving FAI is idealistic. Reality is complex, and the reasons for which money gets allocated are often political in nature and depend on whether institutions function right. Even if individuals have a high IQ, that doesn’t mean they won’t fall into the groupthink of their institution.
Real-world feedback, however, helps people to see problems regardless of their intelligence. Real-world feedback provides truth, whereas high IQ can just mean that you are better at stacking ideas on top of each other.
Christian, FAI is hard because it doesn’t necessarily provide any feedback. There are lots of scenarios where the first failed FAI just kills us all.
That’s why I am advocating IA as a way to up the odds of the human race producing FAI before uFAI.
But really, the more I think about it, the more I think that we would do better to avoid AGI all together, and build brain emulations. Editing the mental states of ems and watching the results will provide feedback, and will allow us to “look before we jump”.
Some sub-ideas of a FAI theory might be put to test in artificial intelligence that isn’t smart enough to improve itself.
“Editing the mental states of ems” sounds ominous. We would (at some point) be dealing with conscious beings, and performing virtual brain surgery on them has ethical implications.
Moreover, it’s not clear that controlled experiments on ems, assuming we get past the ethical issues, will yield radical insight on the structure of intelligence, compared to current brain science.
It’s a little like being able to observe a program by running it under a debugger, versus examining its binary code (plus manual testing). Yes this is a much better situation, but it’s still way more cumbersome than looking at the source code; and that in turn is vastly inferior to constructing a theory of how to write similar programs.
When you say you advocate intelligence augmentation (this really needs a more searchable acronym), do you mean only through genetic means or also through technological “add-ons”? (By that I mean devices plugging you into Wikipedia or giving you access to advanced math skills in the same way that a calculator boosts your arithmetic.)
Hopefully volunteers could be found; but in any case, the stakes here are the end of the world, the end justifies the means.
To whoever downvoted Roko’s comment—check out the distinction between these ideas:
One Life Against the World
Ends Don’t Justify Means (Among Humans)
I’d volunteer and I’m sure I’m not the only one here.
Heroes of the future sign up in this thread ;-)
You’re not, though I’m not sure I’d be an especially useful data source.
I’ve met at least one person who would like a synesthesia on-off switch for their brain—that would make your data useful right there.
Looks to me like that’d be one of the more complicated things to pull off, unfortunately. Too bad; I know a few people who’d like that, too.
Please expand on what “the end” means in this case. What do you expect we would gain from perfecting whole-brain emulation, I assume of humans? How does that get us out of our current mess, exactly?
I think that WBE stands a greater chance of precipitating a friendly singularity.
It doesn’t have to; working ems would be good enough to lift us out of the problematic situation we’re in at the moment.
I worry these modified ems won’t share our values to a sufficient extent.
It is a valid worry. But under the right conditions, where we take care not to let evolutionary dynamics take hold, we might be able to get a better shot at a friendly singularity than any other way.
Possibly. But I’d rather use selected human geniuses with the right ideas copied and sped up, and wait for them to crack FAI before going further (even if FAI doesn’t give a powerful intelligence explosion—then FAI is simply formalization and preservation of preference, rather than power to enact this preference).
That’s correct. So why do I think it would help? What does the risk landscape look like as a function of population average intelligence?
So individual autonomy is more important? I just don’t get that. It’s what’s behind the wheels of the autonomous individuals that matters. It’s a hedonic equation. The risk that unaltered humans pose to the happiness and progress of all other individuals might just work out to “way too fracking high”.
It’s everyone’s happiness and progress that matters. If you can raise the floor for everyone, so that we’re all just better, what’s not to like about giving everybody that treatment?
The same that’s not to like about forcing anything on someone against their will because despite their protestations you believe it’s in their own best interests. You can justify an awful lot of evil with that line of argument.
Part of the problem is that reality tends not to be as simple as most thought experiments. The premise here is that you have some magic treatment that everyone can be 100% certain is safe and effective. That kind of situation does not arise in the real world. It takes a generally unjustifiable certainty in the correctness of your own beliefs to force something on someone else against their wishes because you think it is in their best interests.
On the other hand, if you look around at the real world it’s also pretty obvious that most people frequently do make choices not in their own best interests, or even in line with their own stated goals.
Forcing people to not do stupid things is indeed an easy road to very questionable practices, but a stance that supports leaving people to make objectively bad choices for confused or irrational reasons doesn’t really seem much better. “Sure, he may not be aware of the cliff he’s about to walk off of, but he chose to walk that way and we shouldn’t force him not to against his will.” Yeah, that’s not evil at all.
Not to mention that, in reality, a lot of stupid decisions negatively impact people other than just the person making them. I’m willing to grant letting people make their own mistakes but I have to draw the line when they start screwing things up for me.
I find it interesting that you make a distinction between people making choices that are not in their own best interests and choices not in line with their own stated goals. The implication is that some people’s stated goals are not in line with their own ‘best interests’. While that may be true, presuming that you (or anyone else) are qualified to make that call and override their stated goals in favour of what you judge to be their best interest is a tendency that I consider extremely pernicious.
There’s a world of difference between informing someone of a perceived danger that you suspect they are unaware of (a cliff they’re about to walk off) and forcibly preventing them from taking some action once they have been made aware of your concerns. There is also a world of difference between offering assistance and forcing something on someone to ‘help’ them against their will.
Incidentally I don’t believe there is a general moral obligation to warn someone away from taking an action that you believe may harm them. It may be morally praiseworthy to go out of your way to warn them but it is not ‘evil’ to refrain from doing so in my opinion.
In general this is in a different category from the kinds of issues we’ve been talking about (forcing ‘help’ on someone who doesn’t want it). I have no problem with not allowing people to drive while intoxicated for example to prevent them causing harm to other road users. In most such cases you are not really imposing your will on them, rather you are withholding their access to some resource (public roads in this case) based on certain criteria designed to reduce negative externalities imposed on others.
Where this issue does get a little complicated is when the negative externalities you are trying to prevent cannot be eliminated without forcing something upon others. The current vaccination debate is an example—there should be no problem allowing people to refuse vaccines if they only harmed themselves but they may pose risks to the very old and the very young (who cannot be vaccinated for medical reasons) through their choices. In theory you could resolve this dilemma by denying access to public spaces for people who refused to be vaccinated but there are obvious practical implementation difficulties with that approach.
Generally what I had in mind there is selecting concrete goals without regard for likely consequences, or with incorrect weighting due to, e.g. extreme hyperbolic discounting, or being cognitively impaired. In other words, when someone’s expectations about a stated goal are wrong and the actual outcome will be something they personally consider undesirable.
If they really do know what they’re getting into and are okay with it, then fine, not my problem.
If it helps, I also have no problem with someone valuing self-determination so highly that they’d rather suffer severe negative consequences than be deprived of choice, since in that case interfering would lead to an outcome they’d like even less, which misses the entire point. I strongly doubt that applies to more than a tiny minority of people, though.
Actually making someone aware of a danger they’re approaching is often easier said than done. People have a habit of disregarding things they don’t want to listen to. What’s that Douglas Adams quote? Something like, “Humans are remarkable among species both for having the ability to learn from others’ mistakes, and for their consistent disinclination to do so.”
I strenuously disagree that inaction is ever morally neutral. Given an opportunity to intervene, choosing to do nothing is still a choice to allow the situation to continue. Passivity is no excuse to dodge moral responsibility for one’s choices.
I begin to suspect that may be the root of our actual disagreement here.
It’s a completely different issue, actually.
...but there’s a huge amount of overlap. Simply by virtue of living in society, almost any choice an individual makes imposes some sort of externality on others, positive or negative. The externalities may be tiny, or diffuse, but still there.
Tying back to the “helping people against their will” issue, for instance: Consider an otherwise successful individual, who one day has an emotional collapse after a romantic relationship fails, goes out and gets extremely drunk. Upon returning home, in a fit of rage, he destroys and throws out a variety of items that were gifts from the ex-lover. Badly hung over, he doesn’t show up to work the next day and is fired from his job. He eventually finds a new, lower-paid and less skilled, job, but is now unable to make mortgage payments and loses his house.
On the surface, his actions have harmed only himself. However, consider what the society as a whole has lost:
1) The economic value of his work for the period where he was unemployed
2) The greater economic value of a skilled, better-paid worker
3) The wealth represented by the destroyed gifts
4) The transaction costs and economic inefficiency resulting from the foreclosure, job search, &c.
5) The value of any other economic activity he would have participated in, had these events not occurred. [0]
A very serious loss? Not really. Certainly, it would be extremely dubious to say the least for some authority to intervene. But the loss remains, and imposes a very real, if small, negative impact on every other individual.
Now, multiply the essence of that scenario by countless individuals; the cumulative foolishness of the masses, reckless and irrational, the costs of their mistakes borne by everyone alike. Justification for micromanaging everyone’s lives? No—if only because that doesn’t generally work out very well. Yet, lacking a solution doesn’t make the problem any less real.
So, to return to the original discussion, with a hypothetical medical procedure to make people smarter and more sensible, or whatever; if it would reduce the losses from minor foolishness, then not forcing people to accept it is equivalent to forcing people to continue paying the costs incurred by those mistakes.
Not to say I wouldn’t also be suspicious of such a proposition, but don’t pretend that opposing the idea is free. It’s not, so long as we’re all sharing this society.
Maybe you’re happy to pay the costs of allowing other people to make mistakes, but I’m not. It may very well be that the alternatives are worse, but that doesn’t make the situation any more pleasant.
Complicated? That’s clear as day. People can either accept the vaccine or find another society to live in. Freeloading off of everyone else and objectively endangering those who are truly unable to participate is irresponsible, intolerable, reckless idiocy of staggering proportion.
[0] One might be tempted to argue that many of these aren’t really a loss, because someone else will derive value from selling the house, the destroyed items will increase demand for items of that type, &c. This is the mistake of treating wealth as zero-sum, isomorphic to the Broken Window Fallacy, wherein the whole economy takes a net loss even though some individuals may profit.
Explaining to them why you believe they’re making a mistake is justified. Interfering if they choose to continue anyway, not.
I don’t recognize a moral responsibility to take action to help others, only a moral responsibility not to take action to harm others. That may indeed be the root of our disagreement.
This is tangential to the original debate though, which is about forcing something on others against their will because you perceive it to be for the good of the collective.
I don’t want to nitpick but if you are free to create a hypothetical example to support your case you should be able to do better than this. What kind of idiot employer would fire someone for missing one day of work? I understand you are trying to make a point that an individual’s choices have impacts beyond himself but the weakness of your argument is reflected in the weakness of your example.
This probably ties back again to the root of our disagreement you identified earlier. Your hypothetical individual is not depriving society as a whole of anything because he doesn’t owe them anything. People make many suboptimal choices but the benefits we accrue from the wise choices of others are not our god-given right. If we receive a boon due to the actions of others that is to be welcomed. It does not mean that we have a right to demand they labour for the good of the collective at all times.
I chose this example because I can recognize a somewhat coherent case for enforcing vaccinations. I still don’t think the case is strong enough to justify compulsion. It’s not something I have a great deal of interest in however so I haven’t looked for a detailed breakdown of the actual risks imposed on those who are not able to be vaccinated. There would be a level at which I could be persuaded but I suspect the actual risk is far below that level. I’m somewhat agnostic on the related issue of whether parents should be allowed to make this decision for their children—I lean that way only because the alternative of allowing the government to make the decision is less palatable. A side benefit is that allowing parents to make the decision probably improves the gene pool to some extent.
I might be wrong in my beliefs about their best interests, but that is a separate issue.
Given the assumption that undergoing the treatment is in everyone’s best interests, wouldn’t it be rational to forgo autonomous choice? Can we agree on that it would be?
It’s not a separate issue, it’s the issue.
You want me to take as given the assumption that undergoing the treatment is in everyone’s best interests but we’re debating whether that makes it legitimate to force the treatment on people who are refusing it. Most of them are presumably refusing the treatment because they don’t believe it is in their best interests. That fact should make you question your original assumption that the treatment is in everyone’s best interests, or you have to bite the bullet and say that you are right, they are wrong and as a result their opinions on the matter can just be ignored.
Just out of curiosity, are you for or against the Friendly AI project? I tend to think that it might go against the expressed beforehand will of a lot of people, who would rather watch Simpsons and have sex than have their lives radically transformed by some oversized toaster.
I think that AI with greater than human intelligence will happen sooner or later and I’d prefer it to be friendly than not so yes, I’m for the Friendly AI project.
In general I don’t support attempting to restrict progress or change simply because some people are not comfortable with it. I don’t put that in the same category as imposing compulsory intelligence enhancement on someone who doesn’t want it.
Well, the AI would “presume to know” what’s in everyone’s best interests. How is that different? It’s smarter than us, that’s it. Self-governance isn’t holy.
An AI that forced anything on humans ‘for their own good’ against their will would not count as friendly by my definition. A ‘friendly AI’ project that would be happy building such an AI would actually be an unfriendly AI project in my judgement and I would oppose it. I don’t think that the SIAI is working towards such an AI but I am a little wary of the tendency to utilitarian thinking amongst SIAI staff and supporters as I have serious concerns that an AI built on utilitarian moral principles would be decidedly unfriendly by my standards.
I definitely seem to have a tendency to utilitarian thinking. Could you give me a reading tip on the ethical philosophy you subscribe to, so that I can evaluate it more in-depth?
The closest named ethical philosophy I’ve found to mine is something like Ethical Egoism. It’s not close enough to what I believe that I’m comfortable self identifying as an ethical egoist however. I’ve posted quite a bit here in the past on the topic—a search for my user name and ‘ethics’ using the custom search will turn up quite a few posts. I’ve been thinking about writing up a more complete summary at some point but haven’t done so yet.
The category “actions forced on humans ‘for their own good’ against their will” is not binary. There’s actually a large gray area. I’d appreciate it if you would detail where you draw the line. A couple examples near the line: things someone would object to if they knew about them, but which are by no reasonable standard things that are worth them knowing about (largely these would be things people only weakly object to); an AI lobbying a government to implement a broadly supported policy that is opposed by special interests. I suppose the first trades on the grayness in “against their will” and the second in “forced”.
It doesn’t have to radically transform their lives, if they wouldn’t want it to upon reflection. FAI ≠ enforced transhumanity.
Gene therapy of the type we do at the moment always works through an engineered virus. But as the technique progresses, you won’t have to be a nation state anymore to do genetic engineering. A small group of super-empowered individuals might be able to do it.
Right… I might have my chance then to save the world. The problem is, everyone will get access to the technology at roughly the same time, I imagine. What if the military get there first? This has probably been discussed elsewhere here on LW though...
I suspect that once most people have had themselves or their children cognitively enhanced, you are in much better shape for dealing with the 10% of sticklers in a firm but fair way.
I’m not sure quite what you’re advocating here but ‘dealing with the 10% of sticklers in a firm but fair way’ has very ominous overtones to me.
Those people won’t get the jobs or university education that they would need to use the dangerous knowledge about how to manufacture artificial viruses, because they aren’t smart enough in competition with the rest.
Well, presumably Roko means we would be restricting the freedom of the irrational sticklers—possibly very efficiently due to our superior intelligence—rather than overriding their will entirely (or rather, making informed guesses as to what is in their ultimate interests, and then acting on that).
Presumably you refer to the violation of individuals’ rights here—forcing people to undergo some kind of cognitive modification in order to participate in society sounds creepy?
But how would you feel if the first people to undergo the treatments were politicians; they might be enhanced so that they were incapable of lying. Think of the good that that could do.
I think I’d feel bad about the resulting fallout in the politicians’ home lives.
lol… ok, maybe you’d have to couple this with marriage counseling or whatever.
My feeling is that if you rendered politicians incapable of lying it would be hard to distinguish from rendering them incapable of speaking.
If to become a politician you had to undergo some kind of process to enhance intelligence or honesty I wouldn’t necessarily object. Becoming a politician is a voluntary choice however and so that’s a very different proposition from forcing some kind of treatment on every member of society.
Simply using a lie detector for politicians might be a much better idea. It’s also much easier. Of course a lie detector doesn’t really detect whether someone would be lying but the same goes for any cognitive enhancement.
Out of curiosity, what do you have in mind here as “participate in society”?
That is, if someone wants to reject this hypothetical, make-you-smarter-and-nicer cognitive modification, what kind of consequences might they face, and what would they miss out on?
The ethical issues of simply forcing people to accept it are obvious, but most of the alternatives that occur to me don’t actually seem that much better. Hence your point about “the people who do get made smarter can figure it out”, I guess.
I am very skeptical about any human gene-engineering proposals (for anything other than targeted medical treatment purposes.)
Even if we disregard superhuman artificial intelligences, there are a lot of more direct and therefore much quicker prospective technologies in sight: electronic/chemical brain-enhancing/control, digital supervision technologies, memetic engineering, etc.
IMO, the prohibitively long turnaround time of large scale genetic engineering and its inherently inexact (indirect) nature makes it inferior to almost any thinkable alternatives.
We have had successful trials of gene therapy in the last year to let apes see additional colors. We will have the possibility of sequencing the genomes of all of humanity sometime in the next decade. We will have the tech to do massive testing, correlate the test scores with genes, and develop gene therapy to switch those genes off in the next decade.
If we don’t have ethical problems with doing so, we could probably start pilot trials of genetic engineering with gene therapy at the end of this decade.
Much the same tech as is used to make intelligent machines augments human intelligence—by preprocessing its sensory inputs and post-processing its motor outputs.
In general, it’s much quicker and easier to change human culture and the human environment than it is to genetically modify human nature.
How?
“Richard Dawkins—The Shifting Moral Zeitgeist”
http://www.youtube.com/watch?v=uwz6B8BFkb4
Human culture is more end-user-modifiable than the human genome is—since we created it in the first place.
The problem is that culture is embedded in the genetic/evolutionary matrix; there are severe limits on what is possible to change culturally.
Culture is what separates us from cavemen. They often killed their enemies and ate their brains. Clearly culture can be responsible for a great deal of change in the domain of moral behaviour.
If Robin Hanson is right, moral progress is simply a luxury we indulge in in this time of plenty.
Did crime increase significantly during the Great Depression? Wouldn’t this potentially be falsifying evidence for Hanson’s hypothesis?
Perhaps the Great Depression just wasn’t bad enough, but it seems to cast doubt on the hypothesis, at the very least.
Crime is down during the current recession. It’s possible that the shock simply hasn’t been strong enough, but it may be evidence nonetheless.
I think Hanson’s hypothesis was more about true catastrophes, though—if some catastrophe devastated civilization and we were thrown back into widespread starvation, people wouldn’t worry about morality.
Probably testable—if we can find some poor civilised folk to study.
Indeed, rarely do we eat brains.
Culture has also produced radical Islam. Just look at http://www.youtube.com/watch?v=xuAAK032kCA to get a bit more pessimistic about the natural moral zeitgeist evolution in culture.
What fraction of the population, though? Some people are still cannibals. It doesn’t mean there hasn’t been moral progress. Update 2011-08-04 - the video link is now busted.
The persistence of the taboo against cannibalism is an example where we haven’t made moral progress. There’s no good moral reason to treat eating human meat as any different than meat of other animals, once the animals in question are dead, though there may be health reasons. It’s just an example of prejudice and unreasonable moral disgust.
Hmmm. The problem is, I don’t think that Dawkins argues that the changes are deliberate, rather that they are part of a random drift. Also, he speaks in terms of changes over 40-100 years. That is hardly “quick”, or even “quicker” than the 40-60 years that I claimed would be a minimum requirement for scientific alteration of human nature to work.
Personally, I think the changes are rather directional—and represent moral progress. However, that is a whole different issue.
Think how much the human genome has changed in the last 40-100 years to see how much more rapid cultural evolution can be. Culture is likely to continue to evolve much faster than DNA does—due to ethical concerns, and the whole “unmaintainable spaghetti code” business.
I like today’s morals better than those of any other time and I’d prefer if the idea of moral progress was defensible, but I have no good answer to the criticism “well, you would, you are of this time”.
I don’t think most people living in other times & places privately agreed with their society’s public morality, to the same extent that we do today.
For most of history (not prehistory), there was no option for public debate or even for openly stating opinions. Morality was normally handed down from above, from the rulers, as part of a religion. If those people had an opportunity to live in our society and be acclimatized to it, many of them may have preferred our morality. I don’t believe the reverse is true, however.
This doesn’t prove that our morality is objectively better—it’s impossible to prove this, by definition—but it does dismiss the implication of the argument that “you like today’s morality because you live today”. Only the people who live today are likely to like their time’s morality.
Thanks, this is a good point—and while of course there’s plenty to dislike about lots of the morality to be found today, there’s reason to hope the people of tomorrow will, overall, like tomorrow’s morality even better. As you say, this doesn’t lead to objective morality, but it’s a happy thought.
In the Middle Ages in Europe, the middle class lived by a much stricter morality than the ruling class when it came to questions such as sex.
Morality was often the way for the powerless to feel that they were better than the ruling class.
I think that this is a compelling consideration. Whilst morality is subjective, whether someone’s preferences are satisfied is more objective.
If drift were a good hypothesis, steps “forwards” (from our POV) would be about as common as steps “backwards”. Are those “backwards” steps really that common?
If we model morality as a one-dimensional scale and change as a random walk, then what you say is true. However, if we model it as a million-dimensional scale on which each step affects only one dimension, after a thousand steps we would expect to find that nearly every step brought us closer to our current position.
EDIT: simulation seems to indicate I’m wrong about this. Will investigate further. EDIT: it was a bug in the simulation. Numpy code available on request.
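A reconstruction of the kind of simulation alluded to here (this is my own sketch, not the original NumPy code): a random walk in a high-dimensional space in which each step changes a single randomly chosen coordinate, counting how many steps reduced the distance to the walk’s final position.

```python
# Random walk in a high-dimensional space where each step changes one coordinate.
# Counts how many steps moved the walker closer to its eventual final position.
# A reconstruction of the simulation mentioned above, not the original code.
import numpy as np

rng = np.random.default_rng(0)
dims, steps = 1_000_000, 1_000

coords = rng.integers(0, dims, size=steps)     # which dimension each step touches
deltas = rng.choice([-1.0, 1.0], size=steps)   # step direction along that dimension

final = np.zeros(dims)                         # final position after all steps
np.add.at(final, coords, deltas)

# Replay the walk, tracking squared distance to the final position incrementally.
pos = np.zeros(dims)
sq_dist = float(np.sum(final ** 2))
closer = 0
for c, d in zip(coords, deltas):
    old_diff = final[c] - pos[c]
    new_diff = old_diff - d                    # difference after taking this step
    pos[c] += d
    change = new_diff ** 2 - old_diff ** 2
    if change < 0:                             # this step reduced the distance
        closer += 1
    sq_dist += change

print(f"{closer}/{steps} steps brought the walker closer to its final position")
print("final squared distance (should be ~0):", sq_dist)
```

With a million dimensions and only a thousand steps, almost no coordinate is touched twice, so nearly every step moves one coordinate straight to its final value and therefore reduces the distance, which is the intuition behind the claim above.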
I would regard any claim that abolition of hanging, burning witches, caning children in schools, torture, stoning, flogging, keel-hauling and stocks are “morally orthogonal” with considerable suspicion.
There has been no abolition of torture in the US. Some clever people ran a campaign in the last decade that eroded the consensus that torture is always wrong. At the same time, the US hasn’t brought back burning witches.
That’s not the case. The United States signed and ratified the United Nations Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment.
Last year the US blackmailed the UK, demanding that the UK either violate the United Nations Convention against Torture or stop receiving US intelligence about possible terrorist plots that might kill UK citizens. The US under the Obama administration not only violates the document itself but also blackmails other countries into violating it as well.
Just because it is done by the government doesn’t make it legal.
Right—but it has been banned elsewhere:
http://en.wikipedia.org/wiki/European_Convention_on_Human_Rights#Article_3_-_torture
I’m happy to see those things abolished too, but since I’m not a moral realist I can’t see how to build a useful model of “moral progress”.
According to:
http://en.wikipedia.org/wiki/Moral_realism
...this involves attributing truth and falsity to moral statements—whereas it seems more realistic to say that moral truth has a subjective component.
However, the idea of moral progress does not mean there is “one true morality”.
It just means that some moralities are better than others. The moral landscape could have many peaks—not just one.
I see no problem with the concept of moral progress. The idea that all moralities are of equal merit seems like totally inexcusable cultural relativism to me. Politically correct, perhaps—but also silly.
Morality is about how best to behave. We have a whole bunch of theory from evolutionary biology that relates to that issue—saying what goals organisms have, which actions are most likely to attain them, how individual goals conflict with goals that are seen as acceptable to society, and so on. Some of it will be a reflection of historical accidents, while other parts of it will be shared with most human cultures, and most alien races.
My position on these things is currently very close to that set out in THE TERRIBLE, HORRIBLE, NO GOOD, VERY BAD TRUTH ABOUT MORALITY AND WHAT TO DO ABOUT IT.
Well, I hope I explained how a denial of “moral realism” was quite compatible with the idea of moral progress.
Since that was your stated reason for denying moral progress, do you disagree with my analysis, or do you have a new reason for objecting to moral progress, or have you changed your mind about it?
I certainly don’t think there is anything wrong with the idea of moral progress in principle.
Finding some alien races, would throw the most light on the issue of convergent moral evolution—but in the mean time, our history, and the behaviour of other animals (e.g. dolphins) do offer some support for the idea, it seems to me.
Conway Morris has good examples of convergent evolution. It is a common phenomenon—and convergent moral evolution would not be particularly surprising.
If moral behaviour arises in a space which is subject to attractors, then some moral systems will be more widespread than others. If there is one big attractor, then moral realism would have a concrete basis.
No, sorry, I don’t see it at all. When you say “some moralities are better than others”, better by what yardstick? If you’re not a moral realist, then everyone has their own yardstick.
I really recommend against using the thought-stopping phrase “political correctness” ever, for any purpose, but I absolutely reject the “cultural relativism” that you attribute to me as a result, by the way. Someone performing a clitorectomy may be doing the right thing by their own lights, but by my lights they’re doing totally the wrong thing, and since my lights are what I care about I’m quite happy to step in and stop them if I have the power to, or to see them locked up for it.
To continue with your analogy, moral realists claim there is one true yardstick. If you deny that it doesn’t mean you can’t measure anything, and that all attempts are useless. For example, people could still use yardsticks if they were approximately the same length.
I’m still not catching it. There isn’t one true yardstick, but there has been moral progress. I’m guessing that this is against a yardstick which sounds a bit more “objective” when you state it, such as “maximizing happiness” or “maximising human potential” or “reducing hypocrisy” or some such. But you agree that thinking that such a yardstick is a good one is still a subjective, personal value judgement that not everyone will share, and it’s still only against such a judgement that there can be moral progress, no?
I don’t expect everyone to agree about morality. However, there are certainly common elements in the world’s moral systems—common in ways that are not explicable by cultural common descent.
Cultural evolution is usually even more blatantly directional than DNA evolution is. One obvious trend in moral evolution is its increase in size. Caveman morality was smaller than most modern moralities.
Cultural evolution also exhibits convergent evolution—like DNA evolution does.
Most likely, like DNA evolution, it will eventually slow down—as it homes in on a deep, isolated optimum.
If there is one such optimum, and many systems eventually find it, moral realism would have a pretty good foundation. If there were many different optima with wildly-different moralities, it would not. Probably an intermediate position is most realistic—with advanced moral systems agreeing on a many things—but not everything.
(Replying again here rather than at the foot of a nugatory meta-discussion.)
I suggested C.S. Lewis’ “The Abolition of Man” as proposing a candidate for an optimum towards which moral systems have gravitated.
C.S. Lewis was, as Tim Tyler points out, a Christian, but I shall trust that we are all rational enough here to not judge the book from secondary data, when the primary source is so short, clearly written, and online. We need not don the leather cloak and posied beak to avoid contamination from the buboes of this devilish theist oozing Christian memes. It is anyway not written from a Christian viewpoint. To provide a summary would be to make soup of the soup. Those who do not wish to read that, are as capable of not reading this, which is neither written from a Christian viewpoint, nor by a Christian.
I am sufficiently persuaded that the eight heads under which he summarises the Tao can be found in all cultures everywhere: these are things that everyone thinks good. One might accuse him of starting from New Testament morality and recognising only that in his other sources, but if so, the defects are primarily of omission. For example, his Tao contains no word in praise of wisdom: such words can be found in the traditions he draws on, but are not prominent in the general doctrines of Christianity (though not absent either). His Tao is silent on temperance, determination, prudence, and excellence.
Those unfamiliar with talk of virtue can consult this handy aide-memoire and judge for themselves which of them are also to be found in all major moral systems and which are parochial. Those who know many languages might also try writing down all the names of virtues they can think of in each language: what do those lists have in common?
Here’s an experiment for everyone to try: think it good to eat babies. Don’t merely imagine thinking that: actually think it. I do not expect anyone to succeed, any more than you can look at your own blood and see it as green, or decide to believe that two and two make three.
What is the source of this universal experience?
Lewis says that the Tao exists, it is constant, and it is known to all. People and cultures differ only in how well they have apprehended it. It cannot be demonstrated to anyone, only recognised. He does not speculate in this work on where it comes from, but elsewhere he says that it is the voice of God within us. The less virtuous among us are those who hear that voice more faintly; the evil are those who do not hear it at all, or hear it and hate it. I think there will be few takers for that here.
Some—well, one, at least—reverse the arrow, saying that God is the good that we do, which presumably makes Satan the evil that we do.
Others say that there are objective moral facts which we discern by our moral sense, just as we discern objective physical facts by our physical senses; in both cases the relationship requires some effort to attain to the objective truth.
Others say, this is how we are made: we are so constituted as to judge some things virtuous, just as we are so constituted as to judge some things red. They may or may not give evpsych explanations of how this came to be, but whatever the explanation, we are stuck with this sense just as much as we are stuck with our experience of colour or of mathematical truth. We may arrive at moral conclusions by thought and experience, but cannot arbitrarily adopt them. Some claim to have discarded them altogether, but then, some people have managed to put their eyes out or shake their brains to pieces.
Come the Singularity, of course, all this goes by the board. Friendliness is an issue beyond just AGI.
We’re still going in circles. Optimal by what measure? By the measure of “maximizes the sort of things I value”? Morals have definitely got better by that measure. Please, when you reply, don’t use words like “best” or “optimal” or “merit” or any such normative phrase without specifying the measure against which you’re maximising.
Re: “Optimal by what measure? By the measure of ‘maximizes the sort of things I value’?”
No!
The basic idea is that some moral systems are better than others—in nature’s eyes. I.e. they are more likely to exist in the universe. Invoking nature as arbitrator will probably not please those who think that nature favours the immoral—but they should at least agree that nature provides a yardstick with which to measure moral systems.
I don’t have access to the details of which moral systems nature favours. If I did—and had a convincing supporting argument—there would probably be fewer debates about morality. However, the moral systems we have seen on the planet so far certainly seem to be pertinent evidence.
Measured by this standard, moral progress cannot fail to occur. In any case, that’s a measure of progress quite orthogonal to what I value, and so of course gives me no reason to celebrate moral progress.
Re: “moral progress cannot fail to occur”
Moral degeneration would typically correspond to devolution—which happens in highly radioactive environments, or under frequent meteorite impacts, or other negative local environmental conditions—provided these are avoidable elsewhere.
However, we don’t see very much devolution happening on this planet—which explains why I think moral progress is happening.
I am inclined to doubt that nature’s values are orthogonal to your own. Nature built you, and you are part of a successful culture produced by a successful species. Nature made you and your values—you can reasonably be expected to agree on a number of things.
From the perspective of the universe at large, humans are at best an interesting anomaly. Humans, plus all domesticated animals, crops, etc, compose less than 2% of the earth’s biomass. The entire biomass is a few parts per billion of the earth (maybe it’s important as a surface feature, but life is still outmassed by about a million times by the oceans and a thousand times by the atmosphere). The earth itself is a few parts per million of the solar system, which is one of several billion like it in the galaxy.
All of the mass in this galaxy, and in all the other galaxies, quasars, and other visible collections of matter, is outmassed five to ten times by hydrogen atoms in intergalactic space.
And all that, all baryonic matter, composes a few percent of the mass-energy of the universe.
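If anyone wants to sanity-check those ratios, here is a rough back-of-the-envelope sketch. The round figures below are my own assumptions; the total-biomass number in particular is uncertain by an order of magnitude, which is enough to move the biomass-to-Earth ratio to either side of a part per billion.

```python
# Rough order-of-magnitude figures in kg; the biomass value is an
# assumed round number -- published estimates vary widely.
biomass    = 2e15    # total living biomass, wet weight (assumption)
oceans     = 1.4e21  # mass of the oceans
atmosphere = 5.1e18  # mass of the atmosphere
earth      = 6.0e24  # mass of the Earth

print(f"oceans / biomass     ~ {oceans / biomass:.0e}")      # ~7e5: roughly "a million times"
print(f"atmosphere / biomass ~ {atmosphere / biomass:.0e}")  # ~3e3: roughly "a thousand times"
print(f"biomass / earth      ~ {biomass / earth:.0e}")       # ~3e-10
```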
Negative?! They’re great for the bacteria that survive.
And I suspect those with “devolved” morality would feel the same way.
Sufficiently hostile environmental conditions destroy living things by causing error catastrophes / mutational meltdowns. You have to go in the opposite direction to see constructive, adaptive evolution—which is basically what I was talking about.
Most living systems can be expected to seek out those conditions. If they are powerful enough to migrate, they will mostly exist where living is practical, and mostly die out under conditions which are unfavourable.
If your environment is insufficiently hostile there will be no natural selection at all. Evolution does not have a direction. The life that survives, survives; the life that does not, does not. That’s it. Conditions are favorable for some life and unfavorable for others. There are indeed conditions where few complex, macroscopic life forms will develop, but that is because in those conditions it is disadvantageous to be complex or macroscopic. If you live next to an underwater steam vent you’re probably the kind of thing that likes to live there and won’t do well in Monaco.
Re: “Evolution does not have a direction.”
My essay about that: http://originoflife.net/direction/
See also, the books “Non-Zero” and “Evolution’s Arrow”.
There is no reason to associate complexity with moral progress.
Sure. The evidence for moral progress is rather different—e.g. see:
“Richard Dawkins—The Shifting Moral Zeitgeist”
http://www.youtube.com/watch?v=uwz6B8BFkb4
Wait a minute. This entire conversation begins with you conflating moral progress and directional evolution.
Is the relationship between biological and ethical evolution just an analogy or something more for you?
Then I say: what you call good biological changes, other organisms would experience as negative changes, and vice versa.
You throw out the thesis about evolution having a direction because life fills more and more niches and is more and more complex. If those are things that are important to you, great. But that doesn’t mean any particular organism should be excited about evolution or that there is a fact of the matter about things getting better. If you have the adaptations to survive in a complex, niche-saturated environment, good for your DNA! If you don’t, you’re dead. If you like complexity, things are getting better. If you don’t, things are getting worse. But the ‘getting better’ or ‘getting worse’ is in your head. All that is really happening is that things are getting more complex.
And this is the point about the ‘shifting moral Zeitgeist’ (which is a perfectly fine turn of phrase btw, because it doesn’t imply the current moral Zeitgeist is any truer than the last one). Maybe you can identify trends in how values change but that doesn’t make the new values better. But since the moral Zeitgeist is defined by the moral beliefs most people hold, most people will always see moral history up to that point in time as progressive. Similarly, most young people will experience moral progress the rest of their lives as the old die out.
I think there is some kind of muddle occurring here.
I cited the material about directional evolution in response to the claim that: “Evolution does not have a direction.”
It was not to do with morality, it was to do with whether evolution is directional. I thought I made that pretty clear by quoting the specific point I was responding to.
Evolution is a gigantic optimization mechanism, a fitness maximizer. It operates in a relatively benign environment that permits cumulative evolution—thus the rather obvious evolutionary arrow.
Re: “Is the relationship between biological and ethical evolution just an analogy or something more for you?”
Ethics is part of biology, so there is at least some link. Beyond that, I am not sure what sort of analogy you are suggesting. Maybe in some evil parallel universe, morality gets progressively nastier over time. However, I am more concerned with the situation in the world we observe.
The section you quoted is out of context. I was actually explaining how the idea that “moral progress cannot fail to occur” was not a logical consequence of moral evolution—because of the possibility of moral devolution. It really is possible to look back and conclude that your ancestors had better moral standards.
We have already discussed the issue of whether organisms can be expected to see history as moral progress on this thread, starting with:
“If drift were a good hypothesis, steps “forwards” (from our POV) would be about as common as steps “backwards”.”
http://lesswrong.com/lw/1m5/savulescu_genetically_enhance_humanity_or_face/1ffn
I haven’t read the books, though I’m familiar with the thesis. Your essay is afaict a restatement of that thesis. Now, maybe the argument is sufficiently complex that it needs to be made in a book and I’ll remain ignorant until I get around to reading one of these books. But it would be convenient if someone could make the argument in few enough words that I don’t have to spend a month investigating it.
Re: “If your environment is insufficiently hostile there will be no natural selection at all.”
See Malthus on resource limitation, though.
So, “might is right” …
Nature is my candidate for providing an objective basis for morality.
Moral systems that don’t exist—or soon won’t exist—might have some interest value—but generally, it is not much use being good if you are dead.
“Might is right” does not seem like a terribly good summary of nature’s fitness criteria. They are more varied than that—e.g. see the birds of paradise—which are often more beautiful than mighty.
Ah, ok. That is enlightening. Of the Great Remaining Moral Realists, we have:
Tim Tyler: “The basic idea is that some moral systems are better than other—in nature’s eyes. I.e. they are more likely to exist in the universe.”
Stefan Pernar: “compassion as a rational moral duty irrespective of an agent’s level of intelligence or available resources.”
David Pearce: “Pleasure and pain are intrinsically motivating and objectively Good and Bad, respectively”
Gary Drescher: “Use the Golden Rule: treat others as you would have them treat you”
Drescher’s use of the Golden Rule comes from his views on acausal game-theoretic cooperation, not from moral realism.
But he furthermore thinks that this can be leveraged to create an objective morality.
Isn’t this a definitional dispute? I don’t think Drescher thinks some goal system is privileged in a queer way. Timeless game theory might talk about things that sound suspiciously like objective morality (all timelessly-trading minds effectively having the same compromise goal system?), but which are still mundane facts about the multiverse and counterfactually dependent on the distribution of existing optimizers.
When I spoke to Drescher at SS09 he seemed to imply a belief in moral realism. I’ll have to go read Good and Real to see what he actually says.
And there are plenty of moral realists who think that there is such a thing as morality, and our ethical theories track it, and we haven’t figured out how to fully specify it yet.
I don’t think Stefan Pernar makes much sense on this topic.
David Pearce’s position is more reasonable—and not very different from mine—since pleasure and pain (loosely speaking) are part of what nature uses to motivate and reward action in living things. However, I disagree with David on a number of things—and prefer my position. For example, I am concerned that David will create wireheads.
I don’t know about Gary’s position—but the Golden Rule is a platitude that most moral thinkers would pay lip service to—though I haven’t heard it used as a foundation of moral behaviour before. Superficially, things like sexual differences make the rule not-as-golden-as-all-that.
Also: “Some examples of robust “moral realists” include David Brink, John McDowell, Peter Railton, Geoffrey Sayre-McCord, Michael Smith, Terence Cuneo, Russ Shafer-Landau, G.E. Moore, Ayn Rand, John Finnis, Richard Boyd, Nicholas Sturgeon, and Thomas Nagel.”
Here is one proposed candidate for that optimum.
That link is to “C.S. Lewis’s THE ABOLITION OF MAN”.
And I would be interested to know what people think of Lewis’ Tao, and the arguments he makes for it.
Since:
http://en.wikipedia.org/wiki/C._S._Lewis#Conversion_to_Christianity
...I figure there would need to be clearly-evident redeeming features for anyone here to bother.
Meh. If someone being a theist were enough reason to not bother reading their arguments, we wouldn’t read much at all.
You have to filter crap out somehow.
Using “christian nutjob” as one of my criteria usually seems to work pretty well for me. Doesn’t everyone do that?
C. S. Lewis is a Christian, but hardly a nutjob. I filter out Christian nutjobs, but not all Christians.
Are there Christian non-nutjobs? It seems to me that Christianity poisons a person’s whole world view—rendering them intellectually untrustworthy. If they believe that, they can believe anything.
Looking at:
http://en.wikipedia.org/wiki/C._S._Lewis#The_Christian_apologist
...there seems to be a fair quantity of nutjobbery to me.
Except insofar as Christianity is a form of nutjobbery, of course.
Well… yes and no. I wouldn’t trust a Christian’s ability to do good science, and I don’t think a Christian could write an AI (unless the Christianity was purely cultural and ceremonial). But Christians can and do write brilliant articles and essays on non-scientific subjects, especially philosophy. Even though I disagree with much of it, I still appreciate C.S. Lewis or G. K. Chesterton’s philosophical writing, and find it thought provoking.
In this case, the topic was moral realism. You think Christians have some worthwhile input on that? Aren’t their views on the topic based on the idea of morality coming from God on tablets of stone?
No, no more than we believe that monkeys turn into humans.
Christians believe human morality comes from god. Rather obviously disqualifies them from most sensible discussions about morality—since their views on the topic are utter nonsense.
This isn’t fully general to all Christians. For instance, my best friend is a Christian, and after prolonged questioning, I found that her morality boils down to an anti-hypocrisy sentiment and a social-contract-style framework to cover the rest of it. The anti-hypocrisy thing covers self-identified Christians obeying their own religion’s rules, but doesn’t extend them to anyone else.
You can’t read everything; you have to collect evidence on what’s going to be worth reading. As for a Christian writing on this sort of moral philosophy: I think that Lewis is often interesting, but I plan to go to bed rather than read it, unless I get some extra evidence to push it the other way.
FWIW, I recommend it.
AFAIR, that, the Narnia stories, and the Ransom trilogy are the only Lewis I’ve read. Are there others you have found interesting?
They could be explicable by common evolutionary descent: for instance, our ethics probably evolved because it was useful to animals living in large groups or packs with social hierarchies.
No, not at all. That optimum may have evolved to be useful under the conditions we live in, but that doesn’t mean it’s objectively right.
You don’t seem to be entering into the spirit of this. The idea of there being one optimum which is found from many different starting conditions is not subject to the criticism that its location is a function of accidents in our history.
Rather obviously—since human morality is currently in a state of progressive development—it hasn’t reached any globally optimum value yet.
Maybe I misunderstood your original comment. You seemed to be arguing that moral progress is possible based on convergence. My point was even if it does reach a globally convergent value, that doesn’t mean that value is objectively optimal, or the true morality.
In order to talk about moral “progress”, or an “optimum” value, you need to first find some objective yardstick. Convergence does not establish that such a yardstick exists.
I agree with your comment, except that there are some meaningful definitions of morality and moral progress that don’t require morality to be anything but a property of the agents who feel compelled by it, and which don’t just assume that whatever happens is progress.
(In essence, it is possible— though very difficult for human beings— to figure out what the correct extrapolation from our confused notions of morality might be, remembering that the “correct” extrapolation is itself going to be defined in terms of our current morality and aesthetics. This actually ends up going somewhere, because our moral intuitions are a crazy jumble, but our more meta-moral intuitions like non-contradiction and universality are less jumbled than our object-level intuitions.)
Well, of course you can define “objectively optimal morality” to mean whatever you want.
My point was that if there is natural evolutionary convergence, then it makes reasonable sense to define “optimal morality” as the morality of the optimal creatures. If there was a better way of behaving (in the eyes of nature), then the supposedly optimal creatures would not be very optimal.
Additionally, the lengths of the yardsticks could be standardized to make them better—for example, as has actually occurred, by tying the units of “yards” to the previously-standardized metric system.
I was criticising the idea that “all moralities are of equal merit”. I was not attributing that idea to you. Looking at:
http://en.wikipedia.org/wiki/Cultural_relativism
...it looks like I used the wrong term.
http://en.wikipedia.org/wiki/Moral_relativism
...looks slightly better—but still is not quite the concept I was looking for—I give up for the moment.
I’m not sure if there’s standard jargon for “all moralities are of equal merit” (I’m pretty sure that’s isomorphic to moral nihilism, anyway). However, people tend to read various sorts of relativism that way, and it’s not uncommon in discourse to see “cultural relativism” associated with such a view.
Believing that all moralities are of equal merit is a particularly insane brand of moral realism.
What I was thinking of was postmodernism—in particular the sometimes-fashionable postmodern conception that all ideas are equally valid. It is a position sometimes cited in defense of the idea that science is just another belief system.
Thanks for that link: I had seen that mentioned before and had wanted to read it.
I’ve been reading that (I’m on page 87), and I haven’t gotten to a part where he explains how that makes moral progress meaningless. Why not just define moral progress sort of as extrapolated volition (without the “coherent” part)? You don’t even have to reference convergent moral evolution.
I don’t think he talks about moral progress. But the point is that no matter how abstractly you define the yardstick by which you observe it, if someone else prefers a different yardstick there’s no outside way to settle it.
I don’t think it mentions moral progress. It just seems obvious that if there is no absolute morality, then the only measures against which there has been progress are those that we choose.
Of course it isn’t “objective” or absolute. I already disclaimed moral realism (by granting arguendo the validity of the linked thesis). Why does it follow that you “can’t see how to build a useful model of ‘moral progress’”? Must any model of moral progress be universal?
It is a truism that as the norms of the majority change, the majority of people will see subjective moral progress. That kind of experience is assumed once you know that moralities change. So when you use the term moral progress it is reasonable to assume you think there is some measure for that progress other than your own morality. The way you’re using the word progress is throwing a couple of us off.
If you’re talking about progress relative to my values, then absolutely there has been huge progress.
I’m not talking specifically about that. Mainly what I’m wondering is what exactly motivated you to say “can’t see how …” in the first place. What makes a measure of progress that you choose (or is chosen based on some coherent subset of human moral values, etc.) somehow … less valid? not worthy of being used? something else?
It’s possible we’re violently agreeing here. By my own moral standards, and by yours, there has definitely been moral progress. Since there are no “higher” moral standards against which ours can be compared, there’s no way for my feelings about it to be found objectively wanting.
The reason we have terrorism is that we don’t have a moral consensus that labels killing people as bad. The US does a lot to convince Arabs that killing people is just when there’s a good motive.
Switching to a value-based foreign policy, where the West doesn’t violate its moral norms in the minds of Arabs, could help us to get a moral consensus against terrorism, but unfortunately that doesn’t seem politically viable at the moment.
I’d find this pleasant to believe, and I’ve been a longstanding critic of US foreign policy, but:
Terrorism isn’t a big problem, it should be a long way down the list of problems the US needs to think about. It’s interesting to speculate on what would make a difference to it, but it would be crazy to make it more than a very small influence on foreign policy.
Terrorists are already a long way from the moral consensus, which is one reason they’re so rare.
It seems incredibly implausible to me that they’re taking their moral lead from the US in any case.
And of course while killing people is bad all other things being equal, almost everyone already believes that; what they believe is that it’s defensible in the pursuit of some other good (such as saving lives elsewhere) which I also believe.
Terrorists usually aren’t a long way from the moral consensus of their community. Polls asking people in the Middle East what they think of the US show answers that have changed radically over the last ten years.
In Iran, Western ideals of democracy work well enough to destabilize the government a bit. Our values actually work. They are something that people can believe in and draw meaning from.
Doomsday predictions have never come true in the past, no matter how much confidence the futurist had. Why should we believe this particular futurist?
And why would that be?...
I salute your sense of humor here, but I suspect that it needs spelling out…
Anthropic issues are relevant here.
It is not possible for humans to observe the end of the human race, so lack of that observation is not evidence.
Global catastrophic risks that weren’t the extinction of the race have happened. At one point, it is theorized, there were just 500 reproducing females left. That counts as a close shave.
Also, Homo floresiensis and Homo neanderthalensis did, in fact, get wiped out.
I don’t think pre-modern catastrophes are relevant to this discussion.
The point about the anthropic issues are well taken, but I still contend that we should be skeptical of over-hyped predictions by supposed experts. Especially when they propose solutions that (apparently, to me) reduce ‘freedoms.’
There is a grand tradition of them failing.
And, if we do have the anthropic explanation to ‘protect us’ from doomsday-like outcomes, why should we worry about them?
Can you explain how it is not hypocritical to consider anthropic explanations relevant to previous experiences but not to future ones?
The observation that you currently exist trivially implies that you haven’t been destroyed, but doesn’t imply that you won’t be destroyed. As simple as that.
I can’t observe myself getting destroyed either, however.
When you close your eyes, the World doesn’t go dark.
The world probably doesn’t go dark. We can’t know for sure without using sense data.
http://lesswrong.com/lw/pb/belief_in_the_implied_invisible/
Anthropics will prevent us from being able, after the event, to observe that the human race has ended. Dead people don’t do observations. However, it will have ended, which many consider to be a bad thing. I suspect that you’re confused about what it is that anthropics says: consider reading LW wiki or wikipedia on it.
Of course, if you bring Many Worlds QM into this mix, then you have the quantum immortality hypothesis, stating that nothing can kill you. However, I am still a little uncertain of what to make of QI.
I think I was equating quantum immortality with anthropic explanations, in general. My mistake.
No problem. QI still does confuse me somewhat. If my reading of the situation is correct, then properly implemented quantum suicide really would win you the lottery, without you especially losing anything. (yes, in the branches where you lose, you no longer exist, but since I am branching at a rate of 10^10^2 or so splits per second, who cares about a factor of 10^6 here or there? Survival for just one extra second would make up for it—the number of “me’s” is increasing so quickly that losing 99.999999% of them is negated by waiting a fraction of a second)
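Spelling out the arithmetic behind that parenthetical, taking the guessed branching rate at face value (whether branch count is the right thing to care about is exactly what the replies below dispute), a minimal sketch:

```python
from math import log10

# If the number of branches multiplies by 10**100 every second (the
# commenter's guessed 10^(10^2) splits-per-second figure), how long does
# further branching take to "make up" a given loss factor?
SPLITS_EXPONENT_PER_SECOND = 100  # branch count grows by 10**100 per second

def seconds_to_recover(loss_factor):
    """Time for the branch count to regrow by `loss_factor`."""
    return log10(loss_factor) / SPLITS_EXPONENT_PER_SECOND

print(seconds_to_recover(1e6))  # losing a factor of 10^6: ~0.06 s
print(seconds_to_recover(1e8))  # losing a factor of 10^8: ~0.08 s
```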
You’re talking about the number of branches, but perhaps the important thing is not that but measure, i.e., squared amplitude. Branching preserves measure, while quantum suicide doesn’t, so you can’t make up for it by branching more times if what you care about is measure.
It seems clear that on a revealed preference level, people do care about measure, and not the number of branches, since nobody actually attempts quantum suicide, nor do they try to do anything to increase the branching rate.
If you go further and ask why do we/should we care about measure instead of the number of branches, I have to answer I don’t know, but I think one clue is that those who do care about the number of branches but not measure will end up in a large number of branches but have small measure, and they will have high algorithmic complexity/low algorithmic probability as a result.
(I may have written more about this in an OB comment, and I’ll try to look it up. ETA: Nope, can’t find it now.)
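To make the measure-versus-branch-count distinction concrete, here is a toy numerical sketch (my own illustration, not anything from the linked discussion): unitary branching keeps the total squared amplitude fixed, while post-selecting on survival throws measure away no matter how much further branching follows.

```python
import numpy as np

def branch(amplitudes, splits_per_branch=10):
    """Split every branch into equal sub-branches; this toy 'unitary'
    step keeps the total squared amplitude fixed."""
    out = []
    for a in amplitudes:
        out.extend([a / np.sqrt(splits_per_branch)] * splits_per_branch)
    return np.array(out)

def measure(amplitudes):
    return float(np.sum(np.abs(amplitudes) ** 2))

psi = np.array([1.0 + 0j])
for _ in range(3):
    psi = branch(psi)
print(len(psi), measure(psi))  # 1000 branches, measure still ~1.0

# Quantum suicide conditioned on a 1-in-a-million win: keep only the
# winning branches.  The branch count can be rebuilt by further
# branching, but the discarded measure never comes back.
p_win = 1e-6
survivors = psi * np.sqrt(p_win)  # amplitude left in the winning outcome
for _ in range(2):
    survivors = branch(survivors)
print(len(survivors), measure(survivors))  # 100000 branches, measure ~1e-6
```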
Do you think that the thing that, as a historical fact, causes people to not try quantum suicide, is the argument that it decreases measure? I doubt this a lot. Do you think that if people were told that it preserved measure, they would be popping off to do it all the time?
I don’t think that people are revealing a preference for measure here. I think that they’re revealing that they trust their instinct to not do weird things that look like suicide to their subconscious.
No, I’m not claiming that. I think people avoid quantum suicide because they fear death. Perhaps we can interpret that as caring about measure, or maybe not. In either case there is still a question of why do we fear death, and whether it makes sense to care about measure. As I said, I don’t know the answers, but I think I do have a clue that others don’t seem to have noticed yet.
ETA: Or perhaps we should take the fear of death as a hint that we should care about measure, much like how Eliezer considers his altruistic feelings to be a good reason for adopting utilitarianism.
If quantum suicide works, then there’s little hurry to use it, since it’s not possible to die before getting the chance. Anyone who does have quantum immortality should expect to have it proven to them, by going far enough over the record age if nothing else. So attempting quantum suicide without such proof would be wrong.
Um, what? Why did we evolve to fear death? I suspect I’m missing something here.
You’re converting an “is” to an “ought” there with no explanation, or else I don’t know in what sense you’re using “should”.
That the way we fear death has the effect of maximizing our measure, but not the number of branches we are in, is perhaps a puzzle. See also http://lesswrong.com/lw/19d/the_anthropic_trilemma/14r8 starting at “But a problem with that”.
I’m pointing out a possible position one might take, not one that I agree with myself. See http://lesswrong.com/lw/196/boredom_vs_scope_insensitivity/14jn
Yes, but you didn’t explain why anyone would want to take that position, and I didn’t manage to infer why. One obvious reason, that the fear of death (the fear of a decrease in measure) is some sort of legitimate signal about what matters to many people, prompts the question of why I should care about what evolution has programmed into me. Or perhaps, more subtly, the question of why my morality function should (logically) similarly weight two quite different things—a huge extrinsic decrease in my measure (involuntary death) vs. a self-imposed selective decrease in measure—that were not at all separate as far as evolution is concerned, where only the former was possible in the EEA, and perhaps where upon reflection only the reasons for the former seem intuitively clear.
ETA: Also, I totally don’t understand why you think that it’s a puzzle that evolution optimized us solely for the branches of reality with the greatest measure.
Have you looked at Jacques Mallah’s papers?
Yes, and I had a discussion with him last year at http://old.nabble.com/language%2C-cloning-and-thought-experiments-tt22185985.html#a22189232 (Thanks for the reminder.)
If you follow the above link, you’ll see that I actually took a position that’s opposite of my position here: I said that people mostly don’t care about measure. I think the lesson here is that A) I have a very bad memory :-) and B) I don’t know how to formalize human preferences.
Well, Wei, I certainly agree that formalizing human preferences is tough!
Preserves measure of what, exactly? The integral of the squared amplitude over all arrangements of particles that we classify into the “Roko ALIVE” category?
I.e. it preserves the measure of the set of all arrangements of particles that we classify into the “Roko ALIVE” category.
Yes, something like that.
But, suppose that what you really care about is what you’re about to experience next, rather than measure, i.e. the sum of the squared absolute values of all the complex numbers premultiplying all of your branches?
I think this is a more reasonable alternative to “caring about measure” (as opposed to “caring about the number of branches” which is mainly what I was arguing against in my first reply to you in this thread). I’m not sure what I can say about this that might be new to you. I guess I can point out that this is not something that “evolution would do” if mind copying technology were available, but that’s another “clue” that I’m not sure what to make of.
Ok, I’ll appease the part of me that cares about what my genes want by donating to every sperm bank in the country (an exploit that very few people use), then I’ll use the money from that to buy 1000 lottery tickets determined by random qubits, and on with the QS moneymaker ;-)
Source? I’m curious how that’s calculated.
Well, if you have anyone that cares deeply about your continued living, then doing so would hurt them deeply in 99.999999% of universes. But if you’re completely alone in the world or a sociopath, then go for it! (Actually, I calculated the percentage for the Mega Millions jackpot, which is 1-1/(C(56,5)*46) = 1-1/1.76e8 = 99.9999994%. Doesn’t affect your argument, of course.)
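For anyone who wants to reproduce that figure, a quick check assuming the 5-of-56 plus 1-of-46 Mega Millions format referred to above:

```python
from math import comb

# Jackpot requires matching 5 of 56 numbers plus 1 of 46; every ticket
# is taken to be equally likely.
jackpot_odds = comb(56, 5) * 46       # 175,711,536 possible outcomes
p_lose = 1 - 1 / jackpot_odds
print(jackpot_odds, f"{p_lose:.7%}")  # ~99.9999994% of branches lose
```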
Don’t trust this; it’s just my guess. This is roughly the number of photons that interact with you per second.
This is a legitimate heuristic, but how familiar are you with the object-level reasoning in this case, which IMO is much stronger?
not very. Thanks for the link.
So I assume you’re not afraid of AI?
“we don’t take seriously the possibility that science can illuminate, and change, basic parts of human behavior” is interesting, at 18:11 in the second video.
The video of the talk has two parts, only first of which was included in the post. Links to both parts:
Genetically enhance humanity or face extinction—PART 1
Genetically enhance humanity or face extinction—PART 2
Thanks, I noticed and corrected that.
The key question isn’t “Should we do genetic engineering when we know its complete effects?” but “Should we try genetic engineering even when we don’t know what result we will get?”
Should we gather centralized databases of the DNA sequences of every human being and mine them for gene data? Are potential side effects worth the risk of starting now with genetic engineering? Do we accept the increased inequality that could result from genetic engineering? How do we measure what constitutes a good gene? Low incarceration rates, IQ, EQ?