Causality and Moral Responsibility
Followup to: Thou Art Physics, Timeless Control, Hand vs. Fingers, Explaining vs. Explaining Away
I know (or could readily rediscover) how to build a binary adder from logic gates. If I can figure out how to make individual logic gates from Legos or ant trails or rolling ping-pong balls, then I can add two 32-bit unsigned integers using Legos or ant trails or ping-pong balls.
Someone who had no idea how I’d just done the trick might accuse me of having created “artificial addition” rather than “real addition”.
But once you see the essence, the structure that is addition, then you will automatically see addition whenever you see that structure. Legos, ant trails, or ping-pong balls.
Even if the system is—gasp!—deterministic, you will see a system that, lo and behold, deterministically adds numbers. Even if someone—gasp!—designed the system, you will see that it was designed to add numbers. Even if the system was—gasp!—caused, you will see that it was caused to add numbers.
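To make the “structure that is addition” concrete, here is a minimal sketch (my own illustration, not something from the original post) of a 32-bit ripple-carry adder built only out of AND, OR, and XOR gates. The three gate functions stand in for whatever substrate (Legos, ant trails, ping-pong balls) happens to implement them; the names full_adder and add32 are my own.

```python
# A toy sketch: 32-bit unsigned addition built only from logic-gate primitives.
# Any substrate that implements these three gates implements the same addition.

def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """One-bit full adder: returns (sum_bit, carry_out), using only gates."""
    s1 = XOR(a, b)
    sum_bit = XOR(s1, carry_in)
    carry_out = OR(AND(a, b), AND(s1, carry_in))
    return sum_bit, carry_out

def add32(x, y):
    """Add two 32-bit unsigned integers by chaining 32 full adders."""
    carry = 0
    result = 0
    for i in range(32):
        a = (x >> i) & 1          # i-th bit of x
        b = (y >> i) & 1          # i-th bit of y
        s, carry = full_adder(a, b, carry)
        result |= s << i
    return result                 # final carry discarded: wraps mod 2**32, like hardware

assert add32(1234567, 7654321) == (1234567 + 7654321) % 2**32
```

The sketch is deterministic, it was designed, and every step of it is caused; and it is still, unmistakably, adding.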
Let’s say that John is standing in front of an orphanage which is on fire, but not quite an inferno yet, trying to decide whether to run in and grab a baby or two. Let us suppose two slightly different versions of John—slightly different initial conditions. They both agonize. They both are torn between fear and duty. Both are tempted to run, and know how guilty they would feel, for the rest of their lives, if they ran. Both feel the call to save the children. And finally, in the end, John-1 runs away, and John-2 runs in and grabs a toddler, getting out moments before the flames consume the entranceway.
This, it seems to me, is the very essence of moral responsibility—in the one case, for a cowardly choice; in the other case, for a heroic one. And I don’t see what difference it makes, if John’s decision was physically deterministic given his initial conditions, or if John’s decision was preplanned by some alien creator that built him out of carbon atoms, or even if—worst of all—there exists some set of understandable psychological factors that were the very substance of John and caused his decision.
Imagine yourself caught in an agonizing moral dilemma. If the burning orphanage doesn’t work for you—if you wouldn’t feel conflicted about that, one way or the other—then substitute something else. Maybe something where you weren’t even sure what the “good” option was.
Maybe you’re deciding whether to invest your money in a startup that seems like it might pay off 50-to-1, or donate it to your-favorite-Cause; if you invest, you might be able to donate later… but is that what really moves you, or do you just want to retain the possibility of fabulous wealth? Should you donate regularly now, to ensure that you keep your good-guy status later? And if so, how much?
I’m not proposing a general answer to this problem, just offering it as an example of something else that might count as a real moral dilemma, even if you wouldn’t feel conflicted about a burning orphanage.
For me, the analogous painful dilemma might be how much time to spend on relatively easy and fun things that might help set up more AI researchers in the future—like writing about rationality—versus just forgetting about the outside world and trying to work strictly on AI.
Imagine yourself caught in an agonizing moral dilemma. If my examples don’t work, make something up. Imagine having not yet made your decision. Imagine yourself not yet knowing which decision you will make. Imagine that you care, that you feel a weight of moral responsibility; so that it seems to you that, by this choice, you might condemn or redeem yourself.
Okay, now imagine that someone comes along and says, “You’re a physically deterministic system.”
I don’t see how that makes the dilemma of the burning orphanage, or the ticking clock of AI, any less agonizing. I don’t see how that diminishes the moral responsibility, at all. It just says that if you take a hundred identical copies of me, they will all make the same decision. But which decision will we all make? That will be determined by my agonizing, my weighing of duties, my self-doubts, and my final effort to be good. (This is the idea of timeless control: If the result is deterministic, it is still caused and controlled by that portion of the deterministic physics which is myself. To cry “determinism” is only to step outside Time and see that the control is lawful.) So, not yet knowing the output of the deterministic process that is myself, and being duty-bound to determine it as best I can, the weight of moral responsibility is no less.
Someone comes along and says, “An alien built you, and it built you to make a particular decision in this case, but I won’t tell you what it is.”
Imagine a zillion possible people, perhaps slight variants of me, floating in the Platonic space of computations. Ignore quantum mechanics for the moment, so that each possible variant of me comes to only one decision. (Perhaps we can approximate a true quantum human as a deterministic machine plus a prerecorded tape containing the results of quantum branches.) Then each of these computations must agonize, and must choose, and must determine their deterministic output as best they can. Now an alien reaches into this space, and plucks out one person, and instantiates them. How does this change anything about the moral responsibility that attaches to how this person made their choice, out there in Platonic space… if you see what I’m trying to get at here?
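As a toy sketch of that parenthetical (my own illustration, with made-up names like decide and tape, not anything from the post): model the agent as a deterministic function of its initial conditions plus a prerecorded tape of branch outcomes, so identical copies fed the same tape reach the same decision.

```python
# A toy sketch: a "quantum" decision process approximated as a deterministic
# machine plus a prerecorded tape of branch outcomes (no new randomness at runtime).

from typing import Sequence

def decide(initial_conditions: float, tape: Sequence[int]) -> str:
    """Deterministic given its inputs: same initial conditions plus the same
    tape always yield the same decision."""
    courage = initial_conditions          # hypothetical weighing of fear against duty
    for branch in tape:                   # each entry is a prerecorded 0/1 branch outcome
        courage += 0.1 if branch else -0.1
    return "run in" if courage > 0.5 else "run away"

tape = [1, 1, 0, 1, 1]                    # fixed in advance, before the machine ever runs
# Identical copies, identical inputs, identical decision:
assert decide(0.4, tape) == decide(0.4, tape)
```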
The alien can choose which mind design to make real, but that doesn’t necessarily change the moral responsibility within the mind.
There are plenty of possible mind designs that wouldn’t make agonizing moral decisions, and wouldn’t be their own bearers of moral responsibility. There are mind designs that would just play back one decision like a tape recorder, without weighing alternatives or consequences, without evaluating duties or being tempted or making sacrifices. But if the mind design happens to be you… and you know your duties, but you don’t yet know your decision… then surely, that is the substance of moral responsibility, if responsibility can be instantiated in anything real at all?
We could think of this as an extremely generalized version of the Generalized Anti-Zombie Principle: If you are going to talk about moral responsibility, it ought not to be affected by anything that plays no role in your brain. It shouldn’t matter whether I came into existence as a result of natural selection, or whether an alien built me up from scratch five minutes ago, presuming that the result is physically identical. I, at least, regard myself as having moral responsibility. I am responsible here and now; not knowing my future decisions, I must determine them. What difference does the alien in my past make, if the past is screened off from my present?
Am I suggesting that if an alien had created Lenin, knowing that Lenin would enslave millions, then Lenin would still be a jerk? Yes, that’s exactly what I’m suggesting. The alien would be a bigger jerk. But if we assume that Lenin made his decisions after the fashion of an ordinary human brain, and not by virtue of some alien mechanism seizing and overriding his decisions, then Lenin would still be exactly as much of a jerk as before.
And as for there being psychological factors that determine your decision—well, you’ve got to be something, and you’re too big to be an atom. If you’re going to talk about moral responsibility at all—and I do regard myself as responsible, when I confront my dilemmas—then you’ve got to be allowed to be something, and that-which-is-you must be able to come to a decision, while still being morally responsible.
Just like a calculator is adding, even though it adds deterministically, and even though it was designed to add, and even though it is made of quarks rather than tiny digits.