Should ethicists be inside or outside a profession?
Originally written in 2007.
Marvin Minsky in an interview with Danielle Egan for New Scientist:
Minsky: The reason we have politicians is to prevent bad things from happening. It doesn’t make sense to ask a scientist to worry about the bad effects of their discoveries, because they’re no better at that than anyone else. Scientists are not particularly good at social policy.
Egan: But shouldn’t they have an ethical responsibility for their inventions?
Minsky: No, they shouldn’t have an ethical responsibility for their inventions. They should be able to do what they want. You shouldn’t have to ask them to have the same values as other people. Because then you won’t get them. They’ll make stupid decisions and not work on important things, because they see possible dangers. What you need is a separation of powers. It doesn’t make any sense to have the same person do both.
The Singularity Institute was recently asked to comment on this interview, which, by the time it made it through the editors at New Scientist, contained just the unvarnished quote “Scientists shouldn’t have an ethical responsibility for their inventions. They should be able to do what they want. You shouldn’t have to ask them to have the same values as other people.” Nice one, New Scientist. Thanks to Egan for providing the original interview text.
This makes an interesting contrast with what I said in my “Cognitive biases” chapter for Bostrom’s Global Catastrophic Risks:
Someone on the physics-disaster committee should know what the term “existential risk” means; should possess whatever skills the field of existential risk management has accumulated or borrowed. For maximum safety, that person should also be a physicist. The domain-specific expertise and the expertise pertaining to existential risks should combine in one person. I am skeptical that a scholar of heuristics and biases, unable to read physics equations, could check the work of physicists who knew nothing of heuristics and biases.
Should ethicists be inside or outside a profession?
It seems to me that trying to separate ethics and engineering is like trying to separate the crafting of paintings into two independent specialties: a profession that’s in charge of pushing a paintbrush over a canvas, and a profession that’s in charge of artistic beauty but knows nothing about paint or optics.
The view of ethics as a separate profession is part of the problem. It arises, I think, from the same deeply flawed worldview that sees technology as something foreign and distant, something opposed to life and beauty. Technology is an expression of human intelligence, which is to say, an expression of human nature. Hunter-gatherers who crafted their own bows and arrows didn’t have cultural nightmares about bows and arrows being a mechanical death force, a blank-faced System. When you craft something with your own hands, it seems like a part of you. It’s the Industrial Revolution that enabled people to buy artifacts which they could not make or did not even understand.
Ethics, like engineering and art and mathematics, is a natural expression of human minds.
Anyone who gives a part of themselves to a profession discovers a sense of beauty in it. Writers discover that sentences can be beautiful. Programmers discover that code can be beautiful. Architects discover that house layouts can be beautiful. We all start out with a native sense of beauty, which already responds to rivers and flowers. But as we begin to create—sentences or code or house layouts or flint knives—our sense of beauty develops with use.
Like a sense of beauty, one’s native ethical sense must be continually used in order to develop further. If you’re just working at a job to make money, so that your real goal is to make the rent on your apartment, then neither your aesthetics nor your morals are likely to get much of a workout.
The way to develop a highly specialized sense of professional ethics is to do something, ethically, a whole bunch, until you get good at both the thing itself and the ethics part.
When you look at the “bioethics” fiasco, you discover bioethicists writing mainly for an audience of other bioethicists. Bioethicists aren’t writing to doctors or bioengineers, they’re writing to tenure committees and journalists and foundation directors. Worse, bioethicists are not using their ethical sense in bio-work, the way a doctor whose patient might have incurable cancer must choose how and what to tell the patient.
A doctor treating a patient should not try to be academically original, to come up with a brilliant new theory of bioethics. As I’ve written before, ethics is not supposed to be counterintuitive, and yet academic ethicists are biased to be just exactly counterintuitive enough that people won’t say, “Hey, I could have thought of that.” The purpose of ethics is to shape a well-lived life, not to be impressively complicated. Professional ethicists, to get paid, must transform ethics into something difficult enough to require professional ethicists.
It’s, like, a good idea to save lives? “Duh,” the foundation directors and the review boards and the tenure committee would say.
But there’s nothing duh about saving lives if you’re a doctor.
A book I once read about writing (I forget which one, alas) observed that there is a level of depth beneath which repetition ceases to be boring. Standardized phrases are called “clichés”, said that author, but murder and love and revenge can be woven into a thousand plots without ever becoming old. “You should save people’s lives, mmkay?” won’t get you tenure; but as a theme of real life, it’s as old as thinking, and no more obsolete.
Boringly obvious ethics are just fine if you’re using them in your work rather than talking about them. The goal is to do it right, not to do it originally. Do your best whether or not it is “original”, and originality comes in its own time; not every change is an improvement, but every improvement is necessarily a change.
At the Singularity Summit 2007, several speakers urged that we should “reach out” to artists and poets to encourage their participation in the Singularity dialogue. And then a woman went to a microphone and said: “I am an artist. I want to participate. What should I do?”
And there was a long, delicious silence.
What I would have said to a question like that, if someone had asked it of me in the conference lobby, was: “You are not an ‘artist’, you are a human being; art is only one facet in which you express your humanity. Your reactions to the Singularity should arise from your entire self, and it’s okay if you have a standard human reaction like ‘I’m afraid’ or ‘Where do I send the check?’, rather than some special ‘artist’ reaction. If your artistry has something to say, it will express itself naturally in your response as a human being, without needing a conscious effort to say something artist-like. I would feel patronized, like a dog commanded to perform a trick, if someone presented me with a painting and said ‘Say something mathematical!’”
Anyone who calls on “artists” to participate in the Singularity clearly thinks of artistry as a special function that is only performed in Art departments, an icing dumped onto cake from outside. But you can always pick up some cheap applause by calling for more icing on the cake.
Ethicists should be inside a profession, rather than outside, because ethics itself should be inside rather than outside. It should be a natural expression of yourself, like math or art or engineering. If you don’t like trudging up and down stairs you’ll build an escalator. If you don’t want people to get hurt, you’ll try to make sure the escalator doesn’t suddenly speed up and throw its riders into the ceiling. Both just natural expressions of desire.
There are opportunities for market distortions here, where people get paid more for installing an escalator than for installing a safe escalator. If you don’t use your ethics, if you don’t wield them as part of your profession, they will grow no stronger. But if you want a safe escalator, by far the best way to get one, if you can manage it, is to find an engineer who naturally doesn’t want to hurt people. Then you’ve just got to keep the managers from demanding that the escalator ship immediately and without all those expensive safety gadgets.
The first iron-hulled steamships were actually much safer than the Titanic; they were built by engineers without much management supervision, who could design in safety features to their hearts’ content. The Titanic was built in an era of cutthroat price competition between ocean liners. The grand fanfare about it being unsinkable was a marketing slogan, like “World’s Greatest Laundry Detergent”, not a failure of engineering prediction.
Yes, safety inspectors; yes, design reviews; but these just verify that the engineer put forth an effort of ethical design intelligence. Safety-inspecting doesn’t build an escalator. Ethics, to be effective, must be part of the intelligence that expresses those ethics; you can’t add it in like icing on a cake.
Which leads into the question of the ethics of AI. “Ethics, to be effective, must be part of the intelligence that expresses those ethics; you can’t add it in like icing on a cake.” My goodness, I wonder how I could have learned such Deep Wisdom?
Because I studied AI, and the art spoke to me. Then I translated it back into English.
The truth is that I can’t inveigh properly on bioethics, because I am not myself a doctor or a bioengineer. If there is a special ethic of medicine, beyond the obvious, I do not know it. I have not worked enough healing for that art to speak to me.
What I do know a thing or two about is AI. There I can testify definitively, and from direct knowledge, that anyone who sets out to study “AI ethics” without a technical grasp of cognitive science is absolutely doomed.
It’s the technical knowledge of AI that forces you to deal with the world in its own strange terms, rather than the surface-level concepts of everyday life. In everyday life, you can take for granted that “people” are easy to identify; if you look at the modern world, the humans are easy to pick out, to categorize. An unusual boundary case, like Terri Schiavo, can throw a whole nation into a panic: Is she “alive” or “dead”? AI explodes the language in which people are described, unbundles the properties that are always together in human beings. Losing the standard view, throwing away the human conceptual language, forces you to think for yourself about ethics, rather than parroting back things that sound Deeply Wise.
All of this comes of studying the math, nor may it be divorced from the math. That’s not as comfortably egalitarian as my earlier statement that ethics isn’t meant to be complicated. But if you mate ethics to a highly technical profession, you’re going to get ethics expressed in a conceptual language that is highly technical.
The technical knowledge provides the conceptual language in which to express ethical problems, ethical options, ethical decisions. If politicians don’t understand the distinction between terminal value and instrumental value, or the difference between a utility function and a probability distribution, then some fundamental problems in Friendly AI are going to be complete gibberish to them—never mind the solutions. I’m sorry to be the one to say this, and I don’t like it either, but Lady Reality does not have the goal of making things easy for political idealists.
If it helps, the technical ethical thoughts I’ve had so far require only comparatively basic math like Bayesian decision theory, not high-falutin’ complicated damn math like real mathematicians do all day. Hopefully this condition does not hold merely because I am stupid.
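To give a concrete flavor of what “comparatively basic math like Bayesian decision theory” means here, below is a minimal sketch in Python. It is an illustration only, not anything from the original argument; the action names, outcomes, and numbers are all hypothetical. The utility function encodes terminal values (outcomes you care about in themselves), the probability distribution encodes beliefs, and an action’s instrumental value is its expected utility under those beliefs.

```python
# Minimal Bayesian decision sketch. All names and numbers are hypothetical,
# chosen only to illustrate the utility-function / probability-distribution
# distinction: beliefs say what you expect, utilities say what you value.

# Beliefs: a probability distribution over outcomes, conditional on each action.
beliefs = {
    "deploy_safeguard": {"no_harm": 0.99, "harm": 0.01},
    "ship_immediately": {"no_harm": 0.90, "harm": 0.10},
}

# Terminal values: how much each outcome matters in itself.
utility = {"no_harm": 100.0, "harm": -10_000.0}

def expected_utility(action: str) -> float:
    """Instrumental value of an action: probability-weighted terminal value."""
    return sum(p * utility[outcome] for outcome, p in beliefs[action].items())

for action in beliefs:
    print(f"{action}: expected utility = {expected_utility(action):+.1f}")

# The chosen action maximizes expected utility, not the probability of any
# single outcome.
print("chosen:", max(beliefs, key=expected_utility))
```

Running this picks “deploy_safeguard” (expected utility -1.0 versus -910.0): a small probability of a terribly valued outcome dominates the calculation, which is exactly the kind of point that stays gibberish without the underlying concepts.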
Several of the responses to Minsky’s statement that politicians should be the ones to “prevent bad things from happening” were along the lines of “Politicians are not particularly good at this, but neither necessarily are most scientists.” I think it’s sad but true that modern industrial civilization, or even modern academia, imposes many shouting external demands within which the quieter internal voice of ethics is lost. It may even be that a majority of people are not particularly ethical to begin with; the thought seems to me uncomfortably elitist, but that doesn’t make it comfortably untrue.
It may even be true that most scientists, say in AI, haven’t really had a lot of opportunity to express their ethics and so the art hasn’t said anything in particular to them.
If you talk to some AI scientists about the Singularity / Intelligence Explosion they may say something cached like, “Well, who’s to say that humanity really ought to survive?” This doesn’t sound to me like someone whose art is speaking to them. But then artificial intelligence is not the same as artificial general intelligence; and, well, to be brutally honest, I think a lot of people who claim to be working in AGI haven’t really gotten all that far in their pursuit of the art.
So, if I listen to the voice of experience, rather than to the voice of comfort, I find that most people are not very good at ethical thinking. Even most doctors, who ought properly to be confronting ethical questions every day in their work, don’t go on to write famous memoirs about their ethical insights. The terrifying truth may be that Sturgeon’s Law applies to ethics as it applies to so many other human endeavors: “Ninety percent of everything is crap.”
So asking an engineer an ethical question is not a sure-fire way to get an especially ethical answer. I wish it were true, but it isn’t.
But what experience tells me is that there is no way to obtain the ethics of a technical profession except by being ethical inside that profession. I’m skeptical enough of non-doctors who propose to tell doctors how to be ethical, but I know it’s not possible in AI. There are all sorts of AI-ethical questions that anyone should be able to answer, like “Is it good for a robot to kill people? No.” But if a dilemma requires more than this, the specialist ethical expertise will only come from someone who has practiced expressing their ethics from inside their profession.
This doesn’t mean that all AI people are on their own. It means that if you want to have specialists telling AI people how to be ethical, the “specialists” have to be AI people who express their ethics within their AI work, and then they can talk to other AI people about what the art said to them.
It may be that most AI people will not be above-average at AI ethics, but without technical knowledge of AI you don’t even get an opportunity to develop ethical expertise because you’re not thinking in the right language. That’s the way it is in my profession. Your mileage may vary.
In other words: To get good AI ethics you need someone technically good at AI, but not all people technically good at AI are automatically good at AI ethics. The technical knowledge is necessary but not sufficient for ethics.
What if you think there are specialized ethical concepts, typically taught in philosophy classes, which AI ethicists will need? Then you need to make sure that at least some AI people take those philosophy classes. If there is such a thing as special ethical knowledge, it has to combine in the same person who has the technical knowledge.
Heuristics and biases are critically important knowledge relevant to ethics, in my humble opinion. But if you want that knowledge expressed in a profession, you’ll have to find a professional expressing their ethics and teach them about heuristics and biases—not pick a random cognitive psychologist off the street to add supervision, like so much icing slathered over a cake.
My nightmare here is people saying, “Aha! A randomly selected AI researcher is not guaranteed to be ethical!” So they turn the task over to professional “ethicists” who are guaranteed to fail: who will simultaneously try to sound counterintuitive enough to be worth paying for as specialists, while also making sure to not think up anything really technical that would scare off the foundation directors who approve their grants.
But even if professional “AI ethicists” fill the popular air with nonsense, all is not lost. AIfolk who express their ethics as a continuous, non-separate, non-special function of the same life-existence that expresses their AI work will yet learn a thing or two about the special ethics pertaining to AI. They will not be able to avoid it. Thinking that ethics is a separate profession which judges engineers from above is like thinking that math is a separate profession which judges engineers from above. If you’re doing ethics right, you can’t separate it from your profession.
Thank you! This is a point I keep trying to make, less eloquently, in both bioethics and in AI safety.
We need fewer talking heads making suggestions for how to regulate, and more input from actual experts, and more informed advice going to decision makers. If “professional ethicists” have any role, it should be elicitation, attempting to reconcile or delineate different opinions, and translation of ethical opinions of experts into norms and policies.
(This is the last of the re-released series of Eliezer posts on bioethics)
Have you re-released “Transhumanists Don’t Need Special Dispositions”? If not, can I give you a nudge to do so? It’s one of my favorites.
It was released on Dec 7.
This seems a bit too strong. It seems to imply that I should ignore Bostrom’s writings about AI ethics, and only look to people such as Demis Hassabis.
Or if I thought that nobody was close to having the expertise to build a superintelligent AI, maybe I’d treat it as implying that it’s premature to have opinions about AI ethics.
Instead, I treat professional expertise as merely one piece of evidence about a person’s qualifications to inform us about ethics.
“Should regulators be inside or outside an industry?”
“Ethicist” is a weird thing. It’s a funny mix of applied philosophy (in a theory-less world), PR, and rulemaking. It clearly can’t align with the powerful (science or industry or whatever) or it’ll be captured. And it clearly can’t exist outside of the powerful, because it’s useless without it.
Fundamentally, I disagree with Minsky. Every human has ethical (and moral, if you distinguish the two for this purpose) responsibility for their impact on others. Outside agents are useful in identifying blind spots and creating some incentives to think about larger impacts. “Insiders” are useful in understanding the detailed costs and possibilities, and in predicting actual likely outcomes.
Mathematicians mainly write to other mathematicians. This is a problem that affects every field in academia, and it probably always will, simply because people are interested in their own fields and not other fields.
Most bioethicists are effectively legal counsel for hospitals, where they very much use their ethical sense in bio-work, dictating what is good practice for doctors and patients in particular scenarios. They sometimes even tell doctors how and what to tell the cancer patients. You seem to have a categorically wrong view of what an ethicist does for their day job.
I mean, this is only true if it’s also true for any other profession. Professional ethicists, to get paid, must learn a lot of legal compliance and write SOPs for doctors, to make ethics something easy enough for the doctors to use. I think your view is just fundamentally backwards on a lot of this.