Just checking, but… you are aware, aren’t you, that many (possibly even most) people have a radically different view of what morality is?
I ask only because, in the early part of your post, you seem to take an “explaining a potentially tricky but nonetheless empirical fact” tone, rather than a “stating a quite controversial opinion” tone. I wonder if this is intentional, or whether this signals a confusion about what other people think about this subject, or what?
I am aware that many people have a radically different idea about what morality is, but my concern is focused squarely on our collective need to steer AGI system builders towards the right answers before they start to release dangerous software into places where it can begin to exert influence. If there’s a problem with the tone, that’s because it’s really a first draft which could do with a little editing. My computer’s been freezing repeatedly all day and I rushed into posting what I’d written in case I lost it all, which I nearly did, as I couldn’t get the machine to unfreeze for long enough to save it any other way. However, if people can see past issues of tone and style, what I’d like them to do is try to shoot it down in flames, because that’s how proposed solutions need to be put to the test.
I’ve put my ideas out there in numerous places over the years, but I’m still waiting for someone to show that they’re inferior to some other way of calculating right and wrong. For the most part, I’ve run into waffle-mongers who have nothing to offer as an alternative at all, so they can’t even produce any judgements to compare. Others propose things which I can show straight off generate wrong answers, but no one has yet managed to do that with mine, so that’s the open challenge here. Show me a situation where my way of calculating morality fails, and show me a proposed system of morality that makes different judgements from mine which I can’t show to be defective.
My other comment aside, let me ask you this. How familiar are you with the existing literature on moral philosophy and metaethics? Specifically, are you familiar with the following terms and concepts:
utilitarianism
consequentialism
deontology
virtue ethics
cognitivism / non-cognitivism
emotivism
ethical subjectivism
I mean no offense by this question, and ask only because your post, in numerous places, seems like it should make reference to some of these concepts, yet, surprisingly, does not. This makes me think that you might be unfamiliar with the literature on these subjects. If that is so, then I think that you would benefit greatly from investigating said literature.
If my guess is mistaken, and you are indeed familiar with all of these things, then I apologize. In that case, however, I would suggest that it might be useful to frame your commentary in terms of relevant existing concepts and to make use of relevant existing terminology; that might make your ideas easier to discuss.
You are right in thinking that I have not studied the field in the depth that may be necessary—I have always judged it by the woeful stuff that makes it across into other places where the subject often comes up, but it’s possible that I’ve misjudged the worth of some of it by being misled by misrepresentations of it, so I will look up the things in your list that I haven’t already checked and see what they have to offer. What this site really needs though is its own set of articles on them, all properly debugged and aimed squarely at AGI system developers.
I have always judged it by the woeful stuff that makes it across into other places where the subject often comes up
Well, that hardly seems a reliable approach…
I should, perhaps, clarify my point. My list of terms wasn’t intended to be some sort of exhaustive set of prerequisite topics, but only a sampling of some representative (and particularly salient) concepts. If, indeed, you have not looked into moral philosophy at all… then, quite frankly, it will not suffice to simply “look up” a handful of terms. (Don’t take this to mean that you shouldn’t look up the concepts I listed! But do avoid Wikipedia; the Stanford Encyclopedia of Philosophy is a far better source for this sort of thing.) You really ought to delve into the field at some length…
What this site really needs though is its own set of articles on them, all properly debugged and aimed squarely at AGI system developers.
Perhaps, perhaps not. It would be a mistake to suppose that everyone who has studied the matter until now, and everyone who has attempted to systematize it, has been stupid, incompetent, etc. Systematic surveys of moral philosophy, even good ones, are not difficult to find.
It’s being confirmed right here—I’m finding the same range of faulty stuff on every page I read, although it’s possible that it is less wrong than most. There is room for hope that I have found the most rational place on the Net for this kind of discussion, but there are a lot of errors that need to be corrected, and it’s such a big task that it will probably have to wait for AGI to drive that process.
“...the Stanford Encyclopedia of Philosophy is a far better source for this sort of thing. You really ought to delve into the field at some length…”
Thanks—it saves a lot of time to start with the better sources of information and it’s hard to know when you’ve found them.
“It would be a mistake to suppose that everyone who has studied the matter until now, and everyone who has attempted to systematize it, has been stupid, incompetent, etc.”
Certainly—there are bound to be some who do it a lot better than the rest, but they’re hidden deep in the noise.
“Systematic surveys of moral philosophy, even good ones, are not difficult to find.”
I have only found fault-ridden stuff so far, but hope springs eternal.
It’s being confirmed right here—I’m finding the same range of faulty stuff on every page I read, although it’s possible that it is less wrong than most.
Could you be less vague? How is the philosophy here faulty? Is there a pattern? If you have valid criticism then this community is probably in the top 5% for accepting it, but just saying “you’re all wrong” isn’t actually useful.
it’s such a big task that it will probably have to wait for AGI to drive that process.
If AGI has been built, then LW’s task is over. Either we have succeeded, and we will be in a world beyond our ability to predict, but almost certainly one in which we will not need to edit LW to better explain reductionism; or we have failed, and we are no more—there is nobody to read LW. This is putting the rocket before the horse.
Just look at the reactions to my post “Mere Addition Paradox Resolved”. The community here is simply incapable of recognising correct argument when it’s staring them in the face. Someone should have brought in Yudkowsky to take a look and to pronounce judgement upon it because it’s a significant advance. What we see instead is people down-voting it in order to protect their incorrect beliefs, and they’re doing that because they aren’t allowing themselves to be steered by reason, but by their emotional attachment to their existing beliefs. There hasn’t been a single person who’s dared to contradict the mob by commenting to say that I’m right, although I know that there are some of them who do accept it because I’ve been watching the points go up and down. But look at the score awarded to the person who commented to say that resources aren’t involved—what does that tell you about the general level of competence here? But then, the mistake made in that “paradox” is typical of the sloppy thinking that riddles this whole field. What I’ve learned from this site is that if you don’t have a huge negative score next to your name, you’re not doing it right.
AGI needs to read through all the arguments of philosophy in order to find out what people believe and what they’re most interested in investigating. It will then make its own pronouncements on all those issues, and it will also inform each person about their performance so that they know who won which arguments, how much they broke the rules of reason, etc. - all of that needs to be done, and it will be. The idea that AGI won’t bother to read through this stuff and analyse it is way off—AGI will need to study how people think and the places in which they fail.
The community here is simply incapable of recognising correct argument when it’s staring them in the face. Someone should have brought in Yudkowsky to take a look and to pronounce judgement upon it because it’s a significant advance. What we see instead is people down-voting it in order to protect their incorrect beliefs, and they’re doing that because they aren’t allowing themselves to be steered by reason, but by their emotional attachment to their existing beliefs.
Perhaps the reason that we disagree with you is not that we’re emotionally biased, irrational, mobbish, etc. Maybe we simply disagree. People can legitimately disagree without one of them being Bad People.
There hasn’t been a single person who’s dared to contradict the mob by commenting to say that I’m right, although I know that there are some of them who do accept it because I’ve been watching the points go up and down.
Really. You know that LW is an oppressive mob with a few people who don’t dare to contradict the dogma for fear of [something]… because you observed a number go up and down a few times. May I recommend that you get acquainted with Bayes’ Formula? Because I rather doubt that people only ever see votes go up and down in fora with oppressive dogmatic irrational mobs, and Bayes explains how this is easily inverted to show that votes going up and down a few times is rather weak evidence, if any, for LW being Awful in the ways you described.
But look at the score awarded to the person who commented to say that resources aren’t involved—what does that tell you about the general level of competence here? But then, the mistake made in that “paradox” is typical of the sloppy thinking that riddles this whole field.
It tells me that you missed the point. Parfit’s paradox is not about pragmatic decision making, it is about flaws in the utility function.
What I’ve learned from this site is that if you don’t have a huge negative score next to your name, you’re not doing it right.
“Truth forever on the scaffold, Wrong forever on the throne,” eh? And fractally so?
AGI needs to read through all the arguments of philosophy in order to find out what people believe and what they’re most interested in investigating. It will then make its own pronouncements on all those issues, and it will also inform each person about their performance so that they know who won which arguments, how much they broke the rules of reason, etc. - all of that needs to be done, and it will be. The idea that AGI won’t bother to read through this stuff and analyse it is way off—AGI will need to study how people think and the places in which they fail.
You have indeed found A Reason that supports your belief in the AGI-God, but I think you’ve failed to think it through. Why should the AGI need to tell us how we did in order to analyze our thought processes? And how come the optimal study method is specifically the one which allows you to be shown Right All Along? Specificity only brings Burdensome Details.
“Perhaps the reason that we disagree with you is not that we’re emotionally biased, irrational, mobbish, etc. Maybe we simply disagree. People can legitimately disagree without one of them being Bad People.”
It’s obvious what’s going on when you look at the high positive scores being given to really poor comments.
“It tells me that you missed the point. Parfit’s paradox is not about pragmatic decision making, it is about flaws in the utility function.”
A false paradox tells you nothing about flaws in the utility function—it simply tells you that people who apply it in a slapdash manner get the wrong answers out of it and that the fault lies with them.
“You have indeed found A Reason that supports your belief in the AGI-God, but I think you’ve failed to think it through. Why should the AGI need to tell us how we did in order to analyze our thought processes? And how come the optimal study method is specifically the one which allows you to be shown Right All Along? Specificity only brings Burdensome Details.”
AGI won’t be programmed to find me right all the time, but to identify which arguments are right. And for the sake of those who are wrong, they need to be told that they were wrong so that they understand how poor they are at reasoning and that they are not the great thinkers they imagine themselves to be.
what I’d like them to do is try to shoot it down in flames
Well, since you put it that way…
my concern is focused squarely on our collective need to steer AGI system builders towards the right answers
What are the right answers? Clearly, you think you have the right answers, but suppose I disagree with you (which I do). Just as clearly, this means that I want AGI system builders steered in a different direction than you do.
You seem to want to sidestep the question of “just what are the right answers to questions of morality and metaethics?”. I submit to you that this is, in fact, the critical question.
I’ve put my ideas out there in numerous places over the years, but I’m still waiting for someone to show that they’re inferior to some other way of calculating right and wrong.
And have you managed to convince anyone that your ideas are correct? Or have people’s reactions been more or less comparable to my reaction here? If the latter—have you changed your approach? Have you reconsidered whether you are, in fact, correct—or, at least, reconsidered what the right way to convince people is?
Of course, those are meta-level considerations. It would be unfair of me to avoid the object-level matter, so let me try to answer your implied question (“how are your ideas inferior to some other way of calculating right and wrong”):
They are inferior because they get the wrong answers.
Now, you might say: “Wrong answers?! Nonsense! Of course my answers are right!” Well, indeed, no doubt you think so. But I disagree. This is exactly the problem: people disagree on what the right answers are.
For the most part, I’ve run into waffle-mongers who have nothing to offer as an alternative at all, so they can’t even produce any judgements to compare.
Well, you won’t get that here, I promise you that… :)
Others propose things which I can show straight off generate wrong answers, but no one has yet managed to do that with mine, so that’s the open challenge here.
Oh? But this is quite curious; I can easily show that your approach generates wrong answers. Observe:
You say that “we have to stand by the principle that all sentiences are equally important”. But I don’t agree; I don’t stand by that principle, nor is there any reason for me to do so, as it is counter to my values.
As you see, your answer differs from mine. That makes it wrong (by my standards—which are the ones that matter to me, of course).
Show me a situation where my way of calculating morality fails
I just did, as you see.
and show me a proposed system of morality that makes different judgements from mine which I can’t show to be defective.
Why? For you to be demonstrably wrong, it is not required that anyone or anything else be demonstrably right. If you say that 2 and 2 make 5, you are wrong even if no one present can come up with the right answer about what 2 and 2 actually make—whatever it is, it sure ain’t 5!
“You seem to want to sidestep the question of “just what are the right answers to questions of morality and metaethics?”. I submit to you that this is, in fact, the critical question.”
I have never sidestepped anything. The right answers are the ones dictated by the weighing up of harm based on the available information (which includes the harm ratings in the database of knowledge of sentience). If the harm from one choice has a higher weight than another choice, that other choice is more moral. (We all have such a database in our heads, but each contains different data and can apply different weightings to the same things, leading to disagreements between us about what’s moral, but AGI will over time generate its own database which will end up being much more accurate than any of ours.)
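To make that concrete, here is a minimal sketch of the kind of calculation I mean (the harm ratings, outcome names and numbers below are placeholders I have invented purely for illustration; a real system would draw them from the knowledge database):

```python
# Hypothetical harm ratings drawn from the "database of knowledge of sentience".
# The entries and the numbers are invented for illustration only.
HARM_RATINGS = {
    "broken_leg": 60,
    "common_cold": 5,
    "public_humiliation": 20,
}

def total_harm(outcomes):
    """Sum the harm ratings of every outcome a choice is expected to cause."""
    return sum(HARM_RATINGS[outcome] for outcome in outcomes)

def more_moral_choice(choices):
    """Return the choice whose expected outcomes carry the lowest total harm."""
    return min(choices, key=lambda name: total_harm(choices[name]))

# Choice A causes a cold plus some humiliation (25); choice B causes a broken leg (60).
choices = {
    "A": ["common_cold", "public_humiliation"],
    "B": ["broken_leg"],
}
print(more_moral_choice(choices))  # -> "A", the lower-harm option
```

The point is only that, once the ratings exist, the comparison itself is mechanical.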
“And have you managed to convince anyone that your ideas are correct?”
I’ve found a mixture of people who think it’s right and others who say it’s wrong and who point me towards alternatives which are demonstrably faulty.
“They are inferior because they get the wrong answers.”
Well, that’s what we need to explore, and we need to take it to a point where it isn’t just a battle of assertions and counter-assertions.
“I can easily show that your approach generates wrong answers. Observe: You say that “we have to stand by the principle that all sentiences are equally important”. But I don’t agree; I don’t stand by that principle, nor is there any reason for me to do so, as it is counter to my values.”
This may need a new blog post to explore it fully, but I’ll try to provide a short version here. If a favourite relative of yours was to die and be reincarnated as a rat, you would, if you’re rational, want to treat that rat well if you knew who it used to be. You wouldn’t regard that rat as an inferior kind of thing that doesn’t deserve protection from people who might seek to make it suffer. It wouldn’t matter that your reincarnated relative has no recollection of their previous life—they would matter to you as much in that form as they would if they had a stroke and were reduced to similar capability to a rat and had lost all memory of who they were. The two things are equivalent and it’s irrational to consider one of them as being in less need of protection from torture than the other.
Reincarnation! Really! You need to resort to bringing that crazy idea into this? (Not your reply, but it’s the kind of reaction that such an idea is likely to generate). But this is an important point—the idea that reincarnation can occur is more rational than the alternatives. If the universe is virtual, reincarnation is easy and you can be made to live as any sentient player. But if it isn’t, and if there’s no God waiting to scoop you up into his lair, what happens to the thing (or things) inside you that is sentient? Does it magically disappear and turn into nothing? Did it magically pop into existence out of nothing in the first place? Those are mainstream atheist religious beliefs.

In nature, there isn’t anything that can be created or destroyed other than building and breaking up composite objects. If a sentience is a compound object which can be made to suffer without any of its components suffering, that’s magic too. If the thing that suffers is something that emerges out of complexity without any of the components suffering, again that’s magic. If there is sentience (feelings), there is a sentience to experience those feelings, and it isn’t easy to destroy it—that takes magic, and we shouldn’t be using magic as mechanisms in our thinking.

The sentience in that rat could quite reasonably be someone you love, or someone you loved in a past life long ago. It would be a serious error not to regard all sentiences as having equal value unless you have proof that some of them are lesser things, but you don’t have that.
You’ve also opened the door to “superior” aliens deciding that the sentience in you isn’t equivalent to the sentiences in them, which allows them to treat you in less moral ways by applying your own standards.
“As you see, your answer differs from mine. That makes it wrong (by my standards—which are the ones that matter to me, of course).”
And yet one of the answers is actually right, while the other isn’t. Which one of us will AGI judge to have the better argument for this? This kind of dispute will be settled by AGI’s intelligence quite independently of any morality rules that it might end up running. The best arguments will always win out, and I’m confident that I’ll be the one winning this argument when we have unbiased AGI weighing things up.
“and show me a proposed system of morality that makes different judgements from mine which I can’t show to be defective.” --> “Why? For you to be demonstrably wrong, it is not required that anyone or anything else be demonstrably right. If you say that 2 and 2 make 5, you are wrong even if no one present can come up with the right answer about what 2 and 2 actually make—whatever it is, it sure ain’t 5!”
If you can show me an alternative morality which isn’t flawed and which produces different answers from mine when crunching the exact same data, one of them will be wrong, and that would provide a clear point at which close examination would lead to one of those systems being rejected.
The right answers are the ones dictated by the weighing up of harm based on the available information (which includes the harm ratings in the database of knowledge of sentience).
I disagree. I reject your standard of correctness. (As do many other people.)
The question of whether there is an objective standard of correctness for moral judgments, is the domain of metaethics. If you have not encountered this field before now, I strongly suggest that you investigate it in detail; there is a great deal of material there, which is relevant to this discussion.
(I will avoid commenting on the reincarnation-related parts of your comment, even though they do form the bulk of what you’ve written. All of that is, of course, nonsense, but there’s no need whatever to rehash, in this thread, the arguments for why it is nonsense. I will only suggest that you read the sequences; much of the material therein is targeted at precisely this sort of topic, and this sort of viewpoint.)
“I disagree. I reject your standard of correctness. (As do many other people.)”
Shingles is worse than a cold. I haven’t had it, but those who have will tell you how bad the pain is. We can collect data on suffering by asking people how bad things feel in comparison to other things, and this is precisely what AGI will set about doing in order to build its database and make its judgements more and more accurate. If you have the money to alleviate the suffering of one person out of a group suffering from a variety of painful conditions and all you know about them is which condition they have just acquired, you can use the data in that database to work out which one you should help. That is morality being applied, and it’s the best way of doing it—any other answer is immoral. Of course, if we know more about these people, such as how good or bad they are, that might change the result, but again there would be data that can be crunched to work out how much suffering their past actions caused to undeserving others. There is a clear mechanism for doing this, and not doing it that way using the available information is immoral.
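In code, that triage decision is trivial once the ratings exist; here is a rough sketch (the conditions and numbers are made up by me for the example, not real data from any such database):

```python
# Invented suffering ratings for a few conditions; a real database would be
# built up from reports of how bad things feel relative to each other.
SUFFERING = {"shingles": 80, "migraine": 50, "common_cold": 5}

def whom_to_help(patients):
    """patients maps a person's name to their newly acquired condition.
    With no other information, help the person whose condition rates worst."""
    return max(patients, key=lambda name: SUFFERING[patients[name]])

patients = {"Alice": "common_cold", "Bob": "shingles", "Carol": "migraine"}
print(whom_to_help(patients))  # -> "Bob"
```

Extra information about the people involved would simply adjust the scores before the same comparison is made.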
“The question of whether there is an objective standard of correctness for moral judgments, is the domain of metaethics.”
We already have what we need—a pragmatic system for getting as close to the ideal morality as possible based on collecting the data as to how harmful different experiences are. The data will never be complete, they will never be fully accurate, but they are the best that can be done and we have a moral duty to compile and use them.
“(I will avoid commenting on the reincarnation-related parts of your comment, even though they do form the bulk of what you’ve written. All of that is, of course, nonsense...”
If you reject that, you are doing so in favour of magical thinking, and AGI won’t be impressed with that. The idea that the sentience in you can’t go on to become a sentience in a maggot is based on the idea that after death that sentience magically becomes nothing. I am fully aware that most people are magical thinkers, so you will always feel that you are right on the basis that hordes of fellow magical thinkers back up your magical beliefs, but you are being irrational. AGI is not going to be programmed to be irrational in the same way most humans are.

The job of AGI is to model reality in the least magical way it can, and having things pop into existence out of nothing and then return to being nothing is more magical than having things continue to exist in the normal way that things in physics behave. (All those virtual particles that pop in and out of existence in the vacuum emerge from a “nothing” that isn’t nothing—it has properties, such as a rule that whatever’s taken from it must have the same amount handed back.)

Religious people have magical beliefs too, and they too make the mistake of thinking that numbers of supporters are evidence that their beliefs are right, but being right is not democratic. Being right depends squarely on being right. Again here, we don’t have absolute right answers in one sense, but we do have answers in terms of what is probably right, and an idea that depends on less magic (and more rational mechanism) is more likely to be right. You have made a fundamental mistake here by rejecting a sound idea on the basis of a bias in your model of reality that has led to you miscategorising it as nonsense, while your evidence for it being nonsense is the support of a crowd of people who haven’t bothered to think it through.
I have never sidestepped anything. The right answers are the ones dictated by the weighing up of harm based on the available information (which includes the harm ratings in the database of knowledge of sentience). If the harm from one choice has a higher weight than another choice, that other choice is more moral.
How do you know that? Why should anyone care about this definition? These are questions which you have definitely sidestepped.
And yet one of the answers is actually right, while the other isn’t.
Is 2+2 equal to 5 or to fish?
Which one of us will AGI judge to have the better argument for this? This kind of dispute will be settled by AGI’s intelligence quite independently of any morality rules that it might end up running. The best arguments will always win out, and I’m confident that I’ll be the one winning this argument when we have unbiased AGI weighing things up.
What is this “unbiased AGI” who makes moral judgments on the basis of intelligence alone? This is nonsense—moral “truths” are not the same as physical or logical truths. They are fundamentally subjective, similar to definitions. You cannot have an “unbiased morality” because there is no objective moral reality to test claims against.
If you can show me an alternative morality which isn’t flawed and which produces different answers from mine when crunching the exact same data, one of them will be wrong, and that would provide a clear point at which close examination would lead to one of those systems being rejected.
You should really read up on the Orthogonality Thesis and related concepts. Also, how do you plan on distinguishing between right and wrong moralities?
“How do you know that? Why should anyone care about this definition? These are questions which you have definitely sidestepped.”
People should care about it because it always works. If anyone wants to take issue with that, all they have to do is show a situation where it fails. All examples confirm that it works.
“Is 2+2 equal to 5 or to fish?”
Neither of those results works, but neither of them is my answer.
“What is this “unbiased AGI” who makes moral judgments on the basis of intelligence alone? This is nonsense—moral “truths” are not the same as physical or logical truths. They are fundamentally subjective, similar to definitions. You cannot have an “unbiased morality” because there is no objective moral reality to test claims against.”
You’ve taken that out of context—I made no claim about it making moral judgements on the basis of intelligence alone. That bit about using intelligence alone was referring to a specific argument that doesn’t relate directly to morality.
“You should really read up on the Orthogonality Thesis and related concepts.”
Thanks - all such pointers are welcome.
“Also, how do you plan on distinguishing between right and wrong moralities?”
First by recognising what morality is for. If there was no suffering, there would be no need for morality as it would be impossible to harm anyone. In a world of non-sentient robots, they can do what they like to each other without it being wrong as no harm is ever done. Once you’ve understood that and get the idea of what morality is about (i.e. harm management), then you have to think about how harm management should be applied.

The sentient things that morality protects are prepared to accept being harmed if it’s a necessary part of accessing pleasure where that pleasure will likely outweigh the harm, but they don’t like being harmed in ways that don’t improve their access to pleasure. They don’t like being harmed by each other for insufficient gains. They use their intelligence to work out that some things are fair and some things aren’t, and what determines fairness is whether the harm they suffer is likely to lead to overall gains for them or not. In the more complex cases, one individual can suffer in order for another individual to gain enough to make that suffering worthwhile, but only if the system shares out the suffering such that they all take turns in being the ones who suffer and the ones who gain. They recognise that if the same individual always suffers while others always gain, that isn’t fair, and they know it isn’t fair simply by imagining it happening that way to them.

The rules of morality come out of this process of rational thinking about harm management—it isn’t some magic thing that we can’t understand. To maximise fairness, that suffering which opens the way to pleasure should be shared out as equally as possible, and so should access to the pleasures. The method of imagining that you are all of the individuals and seeking a means of distribution of suffering and pleasure that will satisfy you as all of them would automatically provide the right answers if full information was available. Because full information isn’t available, all we can do is calculate the distribution that’s most likely to be fair on that same basis using the information that is actually available. With incorrect moralities, some individuals are harmed for others’ gains without proper redistribution to share the harm and pleasure around evenly. It’s just maths.
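As a crude sketch of that “imagine being all of the individuals” test (the maximin-style scoring below is just one guess at how “satisfy you as all of them” might be encoded, and the numbers are invented):

```python
# Score a candidate distribution of pleasure and suffering as if you had to
# live every one of the lives in it. This is only one possible encoding:
# the worst-off position dominates, with total net benefit as a tie-breaker.
def net_outcomes(distribution):
    """distribution: list of (pleasure, suffering) pairs, one per individual."""
    return [pleasure - suffering for pleasure, suffering in distribution]

def score(distribution):
    nets = net_outcomes(distribution)
    return (min(nets), sum(nets))  # compare the worst-off first, then the total

fair   = [(10, 4), (9, 5), (10, 5)]   # burdens and gains are shared around
unfair = [(15, 0), (15, 0), (0, 14)]  # one individual always carries the harm
print(max([fair, unfair], key=score))  # -> the fair distribution wins
```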
People should care about it because it always works. If anyone wants to take issue with that, all they have to do is show a situation where it fails. All examples confirm that it works.
What do you mean, it works? I agree that it matches our existing preconceptions and intuitions about morality better than the average random moral system, but I don’t think that that comparison is a useful way of getting to truth and meaningful categories.
Neither of those results works, but neither of them is my answer.
I’ll stop presenting you with poorly-carried-out Zen koans and be direct. You have constructed a false dilemma. It is quite possible for both of you to be wrong.
You’ve taken that out of context—I made no claim about it making moral judgements on the basis of intelligence alone. That bit about using intelligence alone was referring to a specific argument that doesn’t relate directly to morality.
“All sentiences are equally important” is definitely a moral statement.
First by recognising what morality is for. If there was no suffering, there would be no need for morality as it would be impossible to harm anyone. In a world of non-sentient robots, they can do what they like to each other without it being wrong as no harm is ever done. Once you’ve understood that and get the idea of what morality is about (i.e. harm management), then you have to think about how harm management should be applied.

The sentient things that morality protects are prepared to accept being harmed if it’s a necessary part of accessing pleasure where that pleasure will likely outweigh the harm, but they don’t like being harmed in ways that don’t improve their access to pleasure. They don’t like being harmed by each other for insufficient gains. They use their intelligence to work out that some things are fair and some things aren’t, and what determines fairness is whether the harm they suffer is likely to lead to overall gains for them or not. In the more complex cases, one individual can suffer in order for another individual to gain enough to make that suffering worthwhile, but only if the system shares out the suffering such that they all take turns in being the ones who suffer and the ones who gain. They recognise that if the same individual always suffers while others always gain, that isn’t fair, and they know it isn’t fair simply by imagining it happening that way to them.

The rules of morality come out of this process of rational thinking about harm management—it isn’t some magic thing that we can’t understand. To maximise fairness, that suffering which opens the way to pleasure should be shared out as equally as possible, and so should access to the pleasures. The method of imagining that you are all of the individuals and seeking a means of distribution of suffering and pleasure that will satisfy you as all of them would automatically provide the right answers if full information was available. Because full information isn’t available, all we can do is calculate the distribution that’s most likely to be fair on that same basis using the information that is actually available.
I think that this is a fine (read: “quite good”; an archaic meaning) definition of morality-in-practice, but there are a few issues with your meta-ethics and surrounding parts. First, it is not trivial to define what beings are sentient and what counts as suffering (and how much). Second, if your morality flows entirely from logic, then all of the disagreement or possibility for being incorrect is inside “you did the logic incorrectly,” and I’m not sure that your method of testing moral theories takes that possibility into account.
With incorrect moralities, some individuals are harmed for others’ gains without proper redistribution to share the harm and pleasure around evenly. It’s just maths.
I agree that it is mathematics, but where is this “proper” coming from? Could somebody disagree about whether, say, it is moral to harm somebody as retributive justice? Then the equations need our value system as input, and the results are no longer entirely objective. I agree that “what maximizes X?” is objective, though.
“What do you mean, it works? I agree that it matches our existing preconceptions and intuitions about morality better than the average random moral system, but I don’t think that that comparison is a useful way of getting to truth and meaningful categories.”
It works beautifully. People have claimed it’s wrong, but they can’t point to any evidence for that. We urgently need a system for governing how AGI calculates morality, and I’ve proposed a way of doing so. I came here to see what your best system is, but you don’t appear to have made any selection at all—there is no league table of best proposed solutions, and there are no league tables for each entry listing the worst problems with them. I’ve waded through a lot of stuff and have found that the biggest objection to utilitarianism is a false paradox. Why should you be taken seriously at all when you’ve failed to find that out for yourselves?
“You have constructed a false dilemma. It is quite possible for both of you to be wrong.”
If you trace this back to the argument in question, it’s about equal amounts of suffering being equally bad for sentiences in different species. If they are equal amounts, they are necessarily equally bad—if they weren’t, they wouldn’t have equal values.
“You’ve taken that out of context—I made no claim about it making moral judgements on the basis of intelligence alone. That bit about using intelligence alone was referring to a specific argument that doesn’t relate directly to morality.” --> “‘All sentiences are equally important’ is definitely a moral statement.”
Again you’re trawling up something that my statement about using intelligence alone was not referring to.
“First, it is not trivial to define what beings are sentient and what counts as suffering (and how much).”
That doesn’t matter—we can still aim to do the job as well as it can be done based on the knowledge that is available, and the odds are that that will be better than not attempting to do so.
“Second, if your morality flows entirely from logic, then all of the disagreement or possibility for being incorrect is inside “you did the logic incorrectly,” and I’m not sure that your method of testing moral theories takes that possibility into account.”
It will be possible with AGI to have it run multiple models of morality and to show up the differences between them and to prove that it is doing the logic correctly. At that point, it will be easier to reveal the real faults rather than imaginary ones. But it would be better if we could prime AGI with the best candidate first, before it has the opportunity to start offering advice to powerful people.
“I agree that it is mathematics, but where is this “proper” coming from?”
Proper simply means correct—fair share where everyone gets the same amount of reward for the same amount of suffering.
“Could somebody disagree about whether, say, it is moral to harm somebody as retributive justice? Then the equations need our value system as input, and the results are no longer entirely objective.”
Retributive justice is inherently a bad idea because there’s no such thing as free will—bad people are not to blame for being the way they are. However, there is a need to deter others (and to discourage repeat behaviour by the same individual if they’re ever to be released into the wild again), so plenty of harm will typically be on the agenda anyway if the calculation is that this will reduce harm.
But this is an important point—the idea that reincarnation can occur is more rational than the alternatives. If the universe is virtual, reincarnation is easy and you can be made to live as any sentient player.
What does it mean to be somebody else? It seems like you have the intuition of a non-physical Identity Ball which can be moved from body to body, but consider this: the words that you type, the thoughts in your head, all of these are purely physical processes. If your Identity Ball were removed or replaced, there would be no observable change, even from within—because noticing something requires a physical change in the brain corresponding to the thought occurring within your mind. A theory of identity that better meshes with reality is that of functionalism, which in T-shirt slogan form is “the mind is a physical process, and the particulars of that process determine identity.”
For more on this, I recommend Yudkowsky’s writings on consciousness, particularly Zombies! Zombies?,
But if it isn’t, and if there’s no God waiting to scoop you up into his lair, what happens to the thing (or things) inside you that is sentient? Does it magically disappear and turn into nothing? Did it magically pop into existence out of nothing in the first place? Those are mainstream atheist religious beliefs.
This Proves Too Much—you could say the same of any joint property. When I deconstruct a chair, where does the chairness go? Surely it cannot just disappear—that would violate the Conservation of Higher-Order Properties which you claim exists.
In nature, there isn’t anything that can be created or destroyed other than building and breaking up composite objects.
Almost everything we care about is composite, so this is an odd way of putting it, but yes.
If a sentience is a compound object which can be made to suffer without any of its components suffering, that’s magic too.
One need not carry out nuclear fission to deconstruct a chair.
If the thing that suffers is something that emerges out of complexity without any of the components suffering, again that’s magic.
The definition of complex systems, one might say, is that they have properties beyond the properties of the individual components that make them up.
If there is sentience (feelings), there is a sentience to experience those feelings, and it isn’t easy to destroy it—that takes magic, and we shouldn’t be using magic as mechanisms in our thinking.
Why should it require magic for physical processes to move an object out of a highly unnatural chunk of object-space that we define as “a living human being”? Life and intelligence are fragile, as are most meaningful categories. It is resilience which requires additional explanation.
“What does it mean to be somebody else? It seems like you have the intuition of a non-physical Identity Ball which can be moved from body to body,”
The self is nothing more than the sentience (the thing that is sentient). Science has no answers on this at all at the moment, so it’s a difficult thing to explore, but if there is suffering, there must be a sufferer, and that sufferer cannot just be complexity—it has to have some physical reality.
“but consider this: the words that you type, the thoughts in your head, all of these are purely physical processes.”
In an AGI system those are present too, but sentience needn’t be. Sentience is something else. We are not our thoughts or memories.
“If your Identity Ball were removed or replaced, there would be no observable change, even from within—because noticing something requires a physical change in the brain corresponding to the thought occurring within your mind.”
There is no guarantee that the sentience in you is the same one from moment to moment—our actual time spent as the sentience in a brain may be fleeting. Alternatively, there may be millions of sentiences in there which all feel the same things, all feeling as if they are the person in which they exist.
“For more on this, I recommend Yudkowsky’s writings on consciousness, particularly Zombies! Zombies?,”
Thanks—I’ll take a look at that too.
“This Proves Too Much—you could say the same of any joint property. When I deconstruct a chair, where does the chairness go? Surely it cannot just disappear—that would violate the Conservation of Higher-Order Properties which you claim exists.”
Can you make the “chairness” suffer? No. Can you make the sentience suffer? If it exists at all, yes. Can that sentience evaporate into nothing when you break up a brain in the way that the “chairness” disappears when you break up a chair? No. They are radically different kinds of thing. Believing that a sentience can emerge out of nothing to suffer and then disappear back into nothing is a magical belief. The “chairness” of a chair, by way of contrast, is made of nothing—it is something projected onto the chair by imagination.
“If a sentience is a compound object which can be made to suffer without any of its components suffering, that’s magic too.” --> “One need not carry out nuclear fission to deconstruct a chair.”
Relevance?
“The definition of complex systems, one might say, is that they have properties beyond the properties of the individual components that make them up.”
Nothing is ever more than the sum of its parts (including any medium on which it depends). Complex systems can reveal hidden aspects of their components, but those aspects are always there. In a universe with only one electron and nothing else at all, the property of the electron that repels it from another electron is a hidden property, but it’s already there—it doesn’t suddenly ping into being when another electron is added to the universe and brought together with the first one.
“Why should it require magic for physical processes to move an object out of a highly unnatural chunk of object-space that we define as “a living human being”? Life and intelligence are fragile, as are most meaningful categories. It is resilience which requires additional explanation.”
What requires magic is for the sentient thing in us to stop existing when a person dies. What is the thing that suffers? Is it a plurality? Is it a geometrical arrangement? Is it a pattern of activity? How would any of those suffer? My wallpaper has a pattern, but I can’t torture that pattern. My computer can run software that does intelligent things, but I can’t torture that software or the running of that software. Without a physical sufferer, there can be no suffering.
The self is nothing more than the sentience (the thing that is sentient). Science has no answers on this at all at the moment, so it’s a difficult thing to explore
How do you know it exists, if science knows nothing about it?
but if there is suffering, there must be a sufferer, and that sufferer cannot just be complexity—it has to have some physical reality.
This same argument applies just as well to any distributed property. I agree that intelligence/sentience/etc. does not arise from complexity alone, but it is a distributed process and you will not find a single atom of Consciousness anywhere in your brain.
In an AGI system those are present too, but sentience needn’t be. Sentience is something else. We are not our thoughts or memories.
Is your sentience in any way connected to what you say? Then sentience must either be a physical process, or capable of reaching in and pushing around atoms to make your neurons fire to make your lips say something. The latter is far more unlikely and not supported by any evidence. Perhaps you are not your thoughts and memories alone, but what else is there for “you” to be made of?
There is no guarantee that the sentience in you is the same one from moment to moment—our actual time spent as the sentience in a brain may be fleeting. Alternatively, there may be millions of sentiences in there which all feel the same things, all feeling as if they are the person in which they exist.
So the Sentiences are truly epiphenomenal, then? (They have no causal effect on physical reality?) Then how can they be said to exist? Regardless of the Deep Philosophical Issues, how could you have any evidence of their existence, or what they are like?
Can you make the “chairness” suffer? No. Can you make the sentience suffer? If it exists at all, yes. Can that sentience evaporate into nothing when you break up a brain in the way that the “chairness” disappears when you break up a chair? No. They are radically different kinds of thing. Believing that a sentience can emerge out of nothing to suffer and then disappear back into nothing is a magical belief. The “chairness” of a chair, by way of contrast, is made of nothing—it is something projected onto the chair by imagination.
They are both categories of things. The category that you happen to place yourself in is not inherently, a priori, a Fundamentally Real Category. And even if it were a Fundamentally Real Category, that does not mean that the quantity of members of that Category is necessarily conserved over time, that members cannot join and leave as time goes on.
Relevance?
It’s the same analogy as before—just as you don’t need to split a chair’s atoms to split the chair itself, you don’t need to make a brain’s atoms suffer to make it suffer.
Nothing is ever more than the sum of its parts (including any medium on which it depends). Complex systems can reveal hidden aspects of their components, but those aspects are always there.
How do you know that? And how can this survive contact with reality, where in practice we call things “chairs” even if there is no chair-ness in its atoms?
In a universe with only one electron and nothing else at all, the property of the electron that repels it from another electron is a hidden property, but it’s already there—it doesn’t suddenly ping into being when another electron is added to the universe and brought together with the first one.
But the capability of an arrangement of atoms to compute 2+2 is not inside the atoms themselves. And anyway, this supposed “hidden property” is nothing more than the fact that the electron produces an electric field pointed toward it. Repelling-each-other is a behavior that two electrons do because of this electric field, and there’s no inherent “repelling electrons” property inside the electron itself.
What requires magic is for the sentient thing in us to stop existing when a person dies.
But it’s not a thing! It’s not an object, it’s a process, and there’s no reason to expect the process to keep going somewhere else when its physical substrate fails.
What is the thing that suffers? Is it a plurality? Is it a geometrical arrangement? Is it a pattern of activity?
I’ll go with the last one.
How would any of those suffer? My wallpaper has a pattern, but I can’t torture that pattern.
Taking the converse does not preserve truth. All cats are mammals but not all mammals are cats.
My computer can run software that does intelligent things, but I can’t torture that software or the running of that software.
You could torture the software, if it were self-aware and had a utility function.
Without a physical sufferer, there can be no suffering.
But—where is the physical sufferer inside you?
You have pointed to several non-suffering patterns, but you could just as easily do the same if sentience was a process but an uncommon one. (Bayes!) And to go about this rationally, we would look at the differences between a brain and wallpaper—and since we haven’t observed any Consciousness Ball inside a brain, there’d be no reason to suppose that the difference is this unobservable Consciousness Ball which must be in the brain but not the wallpaper, explaining their difference. There is already an explanation. There is no need to invoke the unobservable.
“How do you know it exists, if science knows nothing about it?”
All science has to go on is the data that people produce which makes claims about sentience, but that data can’t necessarily be trusted. Beyond that, all we have is internal belief that the feelings we imagine we experience are real because they feel real, and it’s hard to see how we could be fooled if we don’t exist to be fooled. But an AGI scientist won’t be satisfied by our claims—it could write off the whole idea as the ramblings of natural general stupidity systems.
“This same argument applies just as well to any distributed property. I agree that intelligence/sentience/etc. does not arise from complexity alone, but it is a distributed process and you will not find a single atom of Consciousness anywhere in your brain.”
That isn’t good enough. If pain is experienced by something, that something cannot be in a compound of any kind with none of the components feeling any of it. A distribution cannot suffer.
“Is your sentience in any way connected to what you say?”
It’s completely tied to what I say. The main problem is that other people tend to misinterpret what they read by mixing other ideas into it as a short cut to understanding.
“Then sentience must either be a physical process, or capable of reaching in and pushing around atoms to make your neurons fire to make your lips say something. The latter is far more unlikely and not supported by any evidence. Perhaps you are not your thoughts and memories alone, but what else is there for “you” to be made of?”
Focus on the data generation. It takes physical processes to drive that generation, and rules are being applied in the data system to do this with each part of that process being governed by physical processes. For data to be produced that makes claims about experiences of pain, a rational process with causes and effects at every step has to run through. If the “pain” is nothing more than assertions that the data system is programmed to churn out without looking for proof of the existence of pain, there is no reason to take those assertions at face value, but if they are true, they have to fit into the cause-and-effect chain of mechanism somewhere—they have to be involved in a physical interaction, because without it, they cannot have a role in generating the data that supposedly tells us about them.
“So the Sentiences are truly epiphenomenal, then? (They have no causal effect on physical reality?) Then how can they be said to exist? Regardless of the Deep Philosophical Issues, how could you have any evidence of their existence, or what they are like?”
Repeatedly switching the sentient thing wouldn’t remove its causal role, and nor would having more than one sentience all acting at once—they could collectively have an input even if they aren’t all “voting the same way”, and they aren’t going to find out if they got their wish or not because they’ll be loaded with a feeling of satisfaction that they “won the vote” even if they didn’t, and they won’t remember which way they “voted” or what they were even “voting” on.
“They are both categories of things.”
“Chairness” is quite unlike sentience. “Chairness” is an imagined property, whereas sentience is an experience of a feeling.
“It’s the same analogy as before—just as you don’t need to split a chair’s atoms to split the chair itself, you don’t need to make a brain’s atoms suffer to make it suffer.”
You can damage a chair with an axe without breaking every bond, but some bonds will be broken. You can’t split it without breaking any bonds. Most of the chair is not broken (unless you’ve broken most of the bonds). For suffering in a brain, it isn’t necessarily atoms that suffer, but if the suffering is real, something must suffer, and if it isn’t the atoms, it must be something else. It isn’t good enough to say that it’s a plurality of atoms or an arrangement of atoms that suffers without any of the atoms feeling anything, because you’ve failed to identify the sufferer. No arrangement of non-suffering components can provide everything that’s required to support suffering.
“Nothing is ever more than the sum of its parts (including any medium on which it depends). Complex systems can reveal hidden aspects of their components, but those aspects are always there.” --> “How do you know that? And how can this survive contact with reality, where in practice we call things “chairs” even if there is no chair-ness in its atoms?”
“Chair” is a label representing a compound object. Calling it a chair doesn’t magically make it more than the sum of its parts. Chairs provide two services—one is that they support a person sitting on them, and the other is that they support someone’s back leaning against them. That is what a chair is. You can make a chair in many ways, such as by cutting out a cuboid of rock from a cliff face. You could potentially make a chair using force fields. “Chairness” is a compound property which refers to the functionalities of a chair. (Some kinds of “chairness” could also refer to other aspects of some chairs, such as their common shapes, but they are not universal.) The fundamental functionalities of chairs are found in the forces between the component atoms. The forces are present in a single atom even when it has no other atom to interact with. There is never a case where anything is more than the sum of its parts—any proposed example of such a thing is wrong.
Is there an example of something being more than the sum of its parts there? If so, why don’t we go directly to that? Give me your best example of this magical phenomenon.
“But the capability of an arrangement of atoms to compute 2+2 is not inside the atoms themselves. And anyway, this supposed “hidden property” is nothing more than the fact that the electron produces an electric field pointed toward it. Repelling-each-other is a behavior that two electrons do because of this electric field, and there’s no inherent “repelling electrons” property inside the electron itself.”
In both cases, you’re using compound properties where they are built up of component properties, and then you’re wrongly considering your compound properties to be fundamental ones.
“But it’s not a thing! It’s not an object, it’s a process, and there’s no reason to expect the process to keep going somewhere else when its physical substrate fails.”
You can’t make a process suffer.
“Taking the converse does not preserve truth. All cats are mammals but not all mammals are cats.”
Claiming that a pattern can suffer is a way-out claim. Maybe the universe is that weird though, but it’s worth spelling out clearly what it is you’re attributing sentience to. If you’re happy with the idea of a pattern experiencing pain, then patterns become remarkable things. (I’d rather look for something of more substance rather than a mere arrangement, but it leaves us both with the bigger problem of how that sentience can make its existence known to a data system.)
“You could torture the software, if it were self-aware and had a utility function.”
Torturing software is like trying to torture the text in an ebook.
“But—where is the physical sufferer inside you?”
That’s what I want to know.
“You have pointed to several non-suffering patterns, but you could just as easily do the same if sentience was a process but an uncommon one. (Bayes!)”
Do you seriously imagine that there’s any magic pattern that can feel pain, such as a pattern of activity where none of the component actions feel anything?
“There is already an explanation. There is no need to invoke the unobservable.”
If you can’t identify anything that’s suffering, you don’t have an explanation, and if you can’t identify how your imagined-to-be-suffering process or pattern is transmitting knowledge of that suffering to the processes that build the data that documents the experience of suffering, again you don’t have an explanation.
I can see straight away that we’re running into a jargon barrier. (And incidentally, Google has never even heard of utility monstering.) Most people like me who are involved in the business of actually building AGI have a low opinion of philosophy and have not put any time into learning its specialist vocabulary. I have a higher opinion of philosophy than most though (and look forward to the day when AGI turns philosophy from a joke into the top-level branch of science that should be its status), but I certainly do have a low opinion of most philosophers, and I haven’t got time to read through large quantities of junk in order to find the small amount of relevant stuff that may be of high quality—we’re all tied up in a race to get AGI up and running, and moral controls are a low priority for most of us during that phase. Indeed, for many teams working for dictatorships, morality isn’t something they will ever want in their systems at all, which is why it’s all the more important that teams which are trying to build safe AGI are left as free as possible to spend their time building it rather than wasting their time filling their heads with bad philosophy and becoming experts in its jargon. There is a major disconnect here, and while I’m prepared to learn the jargon to a certain degree where the articles I’m reading are rational and apposite, I’m certainly not going to make the mistake of learning to speak in jargon, because that only serves to put up barriers to understanding which shut out the other people who most urgently need to be brought into the discussion.
Clearly though, jargon does have an important role in that it avoids continual repetition of many of the important nuts and bolts of the subject, but there needs to be a better way into this which reduces the workload by enabling newcomers to avoid all the tedious junk so that they can get to the cutting-edge ideas by as direct a route as possible. I spent hours yesterday reading through pages of highly-respected bilge, and because I have more patience than most people, I will likely spend the next few days reading through more of the same misguided stuff, but you simply can’t expect everyone in this business to wade through a fraction as much as I have—they have much higher priorities and simply won’t do it.
You say that my approach is essentially utilitarianism, but no—morality isn’t about maximising happiness, although it certainly should not block such maximisation for those who want to pursue it. Morality’s role is to minimise the kinds of harm which don’t open the way to the pursuit of happiness. Suffering is bad, and morality is about trying to eliminate it, but not where that suffering is out-gunned by pleasures which make the suffering worthwhile for the sufferers.
You also say that I don’t embrace any kind of deontology, but I do, and I call it computational morality. I’ve set out how it works, and it’s all a matter of following rules which maximise the probability that any decision is the best one that could be made based on the available information. You may already use some other name for it which I don’t know yet, but it is not utilitarianism.
I’m an independent thinker who’s worked for decades on linguistics and AI in isolation, finding my own solutions for all the problems that crop up. I have a system which is now beginning to provide natural language programming capability. I’ve made this progress by avoiding spending any time looking at what other people are doing. With this morality business though, it bothers me that other people are building what will be highly biased systems which could end up wiping everyone out—we need to try to get everyone who’s involved in this together and communicate in normal language, systematically going through all the proposals to find out where they break. Now, you may think you’ve already collectively done that work for them, and that may be the case—it’s possible that you’ve got it right and that there are no easy answers, but how many people building AGI have the patience to do tons of unrewarding reading instead of being given a direct tour of the crunch issues?
Here’s an example of what actually happens. I looked up Utilitarianism to make sure it means what I’ve always taken it to mean, and it does. But what did I find? This: http://www.iep.utm.edu/util-a-r/#H2 Now, this illustrates why philosophy has such a bad reputation—the discussion is dominated by mistakes which are never owned up to. Take the middle example:
If a doctor can save five people from death by killing one healthy person and using that person’s organs for life-saving transplants, then act utilitarianism implies that the doctor should kill the one person to save five.
This one keeps popping up all over the place, but you can take organs from the least healthy of the people needing organs just before he pops his clogs and use them to save all the others without having to remove anything from the healthy person at all.
The other examples above and below it are fine, but because that middle one is broken, the conclusion underneath is wrong: “Because act utilitarianism approves of actions that most people see as obviously morally wrong, we can know that it is a false moral theory.” This is why expecting us all to read through tons of error-ridden junk is not the right approach. You have to reduce the required reading material to a properly thought out set of documents which have been fully debugged. But perhaps you already have that here somewhere?
It shouldn’t even be necessary though to study the whole field in order to explore any one proposal in isolation: if that proposal is incorrect, it can be dismissed (or sent off for reworking) simply by showing up a flaw in it. If no flaw shows up, it should be regarded as potentially correct, and in the absence of any rivals that acquire that same status, it should be recommended for installation into AGI, because AGI running without it will be much more dangerous.
Most people like me who are involved in the business of actually building AGI
If you don’t mind my asking, are you affiliated with MIRI? In what way are you involved in “the business of actually building AGI”?
You say that my approach is essentially utilitarianism, but no—morality isn’t about maximising happiness, although it certainly should not block such maximisation for those who want to pursue it. Morality’s role is to minimise the kinds of harm which don’t open the way to the pursuit of happiness. Suffering is bad, and morality is about trying to eliminate it, but not where that suffering is out-gunned by pleasures which make the suffering worthwhile for the sufferers.
The class of moral theories referred to as “utilitarianism” does, indeed, include exactly such frameworks as you describe (which would fall, roughly, into the category of “negative utilitarianism”). (The SEP article about consequentialism provides a useful taxonomy.)
You also say that I don’t embrace any kind of deontology, but I do, and I call it computational morality. I’ve set out how it works, and it’s all a matter of following rules which maximise the probability that any decision is the best one that could be made based on the available information. You may already use some other name for it which I don’t know yet, but it is not utilitarianism.
Your “computational morality” is most assuredly not a deontological moral theory, as it relies on consequences (namely, harm to certain sorts of entities) as the basis for its evaluations. Your framework, though it is not quite coherent enough to pin down precisely, may roughly be categorized as a “rule utilitarianism”. (Rule-consequentialist moral theories—of which rule utilitarianisms are, by definition, a subclass—do tend to be easy to confuse with deontological views, but the differences are critical, and have to do, again, with the fundamental basis for moral evaluations.)
I’m an independent thinker who’s worked for decades on linguistics and AI in isolation, finding my own solutions for all the problems that crop up. … I’ve made this progress by avoiding spending any time looking at what other people are doing.
You are aware, I should hope, that this makes you sound very much like an archetypical crank?
[stuff about the organ transplant scenario]
It will not, I hope, surprise you to discover that your objection is quite common and well-known, and just as commonly and easily disposed of. Again I refer you to the sequences, as well as to this excellent Less Wrong post.
I googled “utility monstering” and there wasn’t a single result for it—I didn’t realise I had to change the ending on it. Now that I know what it means though, I can’t see why you brought it up. You said, “You don’t embrace any kind of deontology, but deontology can prevent Omelas, Utility Monstering, etc.” I’d already made it clear that feelings are different for different individuals, so either that means I’m using some kind of deontology already or something else that does the same job. There needs to be a database of knowledge of feelings, providing information on the average person, but data also needs to be collected on individuals to tune the calculations to them more accurately. Where you don’t know anything about the individual, you have to go by the database of the average person and apply that as it is more likely to be right than any other database that you randomly select.
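To make that fallback rule concrete, here is a minimal sketch in Python. Everything in it (the names AVERAGE_PERSON, individual_profiles, harm_weight, and all the ratings) is invented for illustration; nothing in the discussion fixes an actual database format.

```python
# Hypothetical sketch: a harm-ratings lookup that falls back to the
# population-average profile when nothing is known about the individual.
# All names and numbers here are invented for illustration.

AVERAGE_PERSON = {"cold": 2.0, "migraine": 5.0, "shingles": 8.0}

individual_profiles = {
    "alice": {"shingles": 6.5},  # Alice reports shingles as less bad than the average person does
}

def harm_weight(person_id, condition):
    """Return the best available harm rating for this person and condition."""
    profile = individual_profiles.get(person_id, {})
    # Use the individually tuned rating if one exists; otherwise fall back to the average person.
    return profile.get(condition, AVERAGE_PERSON[condition])

print(harm_weight("alice", "shingles"))  # 6.5 -- individual data available
print(harm_weight("bob", "shingles"))    # 8.0 -- falls back to the average-person database
```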
“If you don’t mind my asking, are you affiliated with MIRI? In what way are you involved in “the business of actually building AGI”?”
I have no connection with MIRI. My involvement in AGI is simply that I’m building an AGI system of my own design, implementing decades of my own work in linguistics (all unpublished). I have the bulk of the design finished on paper and am putting it together module by module. I have a componential analysis dictionary which reduces all concepts down to their fundamental components of meaning (20 years’ worth of hard analysis went into building that). I have designed data formats to store thoughts in a language of thought quite independent of any language used for input, all based on concept codes linked together in nets—the grammar of thought is, incidentally, universal, unlike spoken languages. I’ve got all the important pieces and it’s just a matter of assembling the parts that haven’t yet been put together. The actual reasoning, just like morality, is dead easy.
“The class of moral theories referred to as “utilitarianism” does, indeed, include exactly such frameworks as you describe (which would fall, roughly, into the category of “negative utilitarianism”). (The SEP article about consequentialism provides a useful taxonomy.)”
I read up on negative utilitarianism years ago and didn’t recognise it as being what I’m doing, but perhaps your links are to better sources of information.
“You are aware, I should hope, that this makes you sound very much like an archetypical crank?”
It also makes me sound like someone who has not been led up the wrong path by the crowd. I found something in linguistics that makes things magnitudes easier than the mess I’ve seen other people wrestling with.
“It will not, I hope, surprise you to discover that your objection is quite common and well-known, and just as commonly and easily disposed of.”
No it is not easily disposed of, but I’ll get to that in a moment. The thought experiment is wrong and it gives philosophy a bad name, repelling people away from it by making them write off the junk they’re reading as the work of half-wits and making it harder to bring together all the people that need to be brought together to try to resolve all this stuff in the interests of making sure AGI is safe. It is essential to be rigorous in constructing thought experiments and to word them in such a way as to force the right answers to be generated from them. If you want to use that particular experiment, it needs wording to state that none of the ill people are compatible with each other, but the healthy person is close enough to each of them that his organs are compatible with them. It’s only by doing that that the reader will believe you have anything to say that’s worth hearing—you have to show that it has been properly debugged.
So, what does come out of it when you frame it properly? You run straight into other issues which you also need to eliminate with careful wording, such as blaming lifestyle for their health problems. The ill people also know that they’re on the way out if they can’t get a donor organ and don’t wish to inflict that on anyone else: no one decent wants a healthy person to die instead of them, and the guilt they would suffer from if it was done without their permission would ruin the rest of their life. Also, people accept that they can get ill and die in natural ways, but they don’t accept that they should be chosen to die to save other people who are in that position—if we had to live in a world where that kind of thing happened, we would all live not just in fear of becoming ill and dying, but in fear of being selected for death while totally healthy, and that’s a much bigger kind of fear. We can pursue healthy lifestyles in the hope that it will protect us from the kind of damage that can result in organ failure, and that drives most of the fear away—if we live carefully we are much more confident that it won’t happen to us, and sure enough, it usually does happen to other people who haven’t been careful. To introduce a system where you can simply be selected for death randomly is much more alarming, causing inordinately more harm—that is the vast bulk of the harm involved in this thought experiment, and these slapdash philosophers completely ignore it while pretending they’re the ones who are being rigorous. If you don’t take all of the harm into account, your analysis of the situation is a pile of worthless junk. All the harm must be weighed up, and it all has to be identified intelligently. This is again an example of why philosophers are generally regarded as fruitcakes.
It’s clear from the negative points that a lot of people don’t like hearing the truth. Let me spell this out even more starkly for them. What we have with the organ donor thought experiment is a situation where an approach to morality is being labelled as wrong as the result of a deeply misguided attack on it. It uses the normal human reactions to normal humans in this situation to make people feel that the calculation is wrong (based on their own instinctive reactions), but it claims that you’re going against the spirit of the thought experiment if the moral analysis works with normal humans—to keep to the spirit of the thought experiment you are required to dehumanise them, and once you’ve done that, those instinctive reactions are no longer being applied to the same thing at all.
Let’s look at the fully dehumanised version of the experiment. Instead of using people with a full range of feelings, we replace them with sentient machines. We have five sentient machines which have developed hardware faults, and we can repair them all by using parts from another machine that is working fine. They are sentient, but all they’re doing is enjoying a single sensation that goes on and on. If we dismantle one, we prevent it from going on enjoying things, but this enables the five other machines to go on enjoying that same sensation in its place. In this case, it’s fine to dismantle that machine to repair the rest. None of them have the capacity to feel guilt or fear and no one is upset by this decision. We may be upset that the decision has had to be made, but we feel that it is right. This is radically different from the human version of the experiment, but what the philosophers have done is use our reactions to the human version to make out that the proposed system of morality has failed because they have made it dehumanise the people and turn them into the machine version of the experiment.
In short, you’re breaking the rules and coming to incorrect conclusions, and you’re doing it time and time again because you are failing to handle the complexity in the thought experiments. That is why there is so much junk being written about this subject, and it makes it very hard for anyone to find the few parts that may be valid.
Minus four points already from anonymous people who can provide no counter-argument. They would rather continue to go on being wrong than make a gain by changing their position to become right. That is the norm for humans, sadly.
I can see straight away that we’re running into a jargon barrier.
One of us is.
Most people like me who are involved in the business of actually building AGI have a low opinion of philosophy and have not put any time into learning its specialist vocabulary.
Philosophy isn’t relevant to many areas of AGI, but it is relevant to what you are talking about here.
I’m certainly not going to make the mistake of learning to speak in jargon, because that only serves to put up barriers to understanding which shut out the other people who most urgently need to be brought into the discussion.
Learning to do something does entail having to do it. Knowing the jargon allows efficient communication with people who know more than you...if you countenance their existence.
I’ve set out how it works, and it’s all a matter of following rules which maximise the probability that any decision is the best one that could be made based on the available information.
That’s not deontology, because it’s not object level.
you can take organs from the least healthy of the people needing organs just before he pops his clogs
Someone who is days from death is not a “healthy person” as required. You may have been mistaken about other people’s mistakenness before.
“Philosophy isn’t relevant to many areas of AGI, but it is relevant to what you are talking about here.”
Indeed it is relevant here, but it is also relevant to AGI in a bigger way, because AGI is a philosopher, and the vast bulk of what we want it to do (applied reasoning) is philosophy. AGI will do philosophy properly, eliminating the mistakes. It will do the same for maths and physics where there are also some serious mistakes waiting to be fixed.
“Learning to do something does entail having to do it. Knowing the jargon allows efficient communication with people who know more than you...if you countenance their existence.”
The problem with it is the proliferation of bad ideas—no one should have to become an expert in the wide range of misguided issues if all they need is to know how to put moral control into AGI. I have shown how it should be done, and I will tear to pieces any ill-founded objection that is made to it. If an objection comes up that actually works, I will abandon my approach if I can’t refine it to fix the fault.
“That’s not deontology, because it’s not object level.”
Does it matter what it is if it works? Show me where it fails. Get a team together and throw your best objection at me. If my approach breaks, we all win—I have no desire to cling to a disproven idea. If it stands up, you get two more goes. And if it stands up after three goes, I expect you to admit that it may be right and to agree that I might just have something.
“Someone who is days from death is not a “healthy person” as required. You may have been mistaken about other people’s mistakenness before.”
Great—you would wait as late as possible and transfer organs before multiple organ failure sets in. The important point is not the timing, but that it would be more moral than taking them from the healthy person.
Well, that hardly seems a reliable approach…
I should, perhaps, clarify my point. My list of terms wasn’t intended to be some sort of exhaustive set of prerequisite topics, but only a sampling of some representative (and particularly salient) concepts. If, indeed, you have not looked into moral philosophy at all… then, quite frankly, it will not suffice to simply “look up” a handful of terms. (Don’t take this to mean that you shouldn’t look up the concepts I listed! But do avoid Wikipedia; the Stanford Encyclopedia of Philosophy is a far better source for this sort of thing.) You really ought to delve into the field at some length…
Perhaps, perhaps not. It would be a mistake to suppose that everyone who has studied the matter until now, and everyone who has attempted to systematize it, has been stupid, incompetent, etc. Systematic surveys of moral philosophy, even good ones, are not difficult to find.
“Well, that hardly seems a reliable approach…”
It’s being confirmed right here—I’m finding the same range of faulty stuff on every page I read, although it’s possible that it is less wrong than most. There is room for hope that I have found the most rational place on the Net for this kind of discussion, but there are a lot of errors that need to be corrected, and it’s such a big task that it will probably have to wait for AGI to drive that process.
“…the Stanford Encyclopedia of Philosophy is a far better source for this sort of thing.) You really ought to delve into the field at some length…”
Thanks—it saves a lot of time to start with the better sources of information and it’s hard to know when you’ve found them.
“It would be a mistake to suppose that everyone who has studied the matter until now, and everyone who has attempted to systematize it, has been stupid, incompetent, etc.”
Certainly—there are bound to be some who do it a lot better than the rest, but they’re hidden deep in the noise.
“Systematic surveys of moral philosophy, even good ones, are not difficult to find.”
I have only found fault-ridden stuff so far, but hope springs eternal.
Could you be less vague? How is the philosophy here faulty? Is there a pattern? If you have valid criticism then this community is probably in the top 5% for accepting it, but just saying “you’re all wrong” isn’t actually useful.
If AGI has been built, then LW’s task is over. Either we have succeeded, and we will be in a world beyond our ability to predict, but almost certainly one in which we will not need to edit LW to better explain reductionism; or we have failed, and we are no more—there is nobody to read LW. This is putting the rocket before the horse.
Just look at the reactions to my post “Mere Addition Paradox Resolved”. The community here is simply incapable of recognising correct argument when it’s staring them in the face. Someone should have brought in Yudkowsky to take a look and to pronounce judgement upon it because it’s a significant advance. What we see instead is people down-voting it in order to protect their incorrect beliefs, and they’re doing that because they aren’t allowing themselves to be steered by reason, but by their emotional attachment to their existing beliefs. There hasn’t been a single person who’s dared to contradict the mob by commenting to say that I’m right, although I know that there are some of them who do accept it because I’ve been watching the points go up and down. But look at the score awarded to the person who commented to say that resources aren’t involved—what does that tell you about the general level of competence here? But then, the mistake made in that “paradox” is typical of the sloppy thinking that riddles this whole field. What I’ve learned from this site is that if you don’t have a huge negative score next to your name, you’re not doing it right.
AGI needs to read through all the arguments of philosophy in order to find out what people believe and what they’re most interested in investigating. It will then make its own pronouncements on all those issues, and it will also inform each person about their performance so that they know who won which arguments, how much they broke the rules of reason, etc. - all of that needs to be done, and it will be. The idea that AGI won’t bother to read through this stuff and analyse it is way off—AGI will need to study how people think and the places in which they fail.
Perhaps the reason that we disagree with you is not that we’re emotionally biased, irrational, mobbish, etc. Maybe we simply disagree. People can legitimately disagree without one of them being Bad People.
Really. You know that LW is an oppressive mob with a few people who don’t dare to contradict the dogma for fear of [something]… because you observed a number go up and down a few times. May I recommend that you get acquainted with Bayes’ Formula? Because I rather doubt that people only ever see votes go up and down in fora with oppressive dogmatic irrational mobs, and Bayes explains how this is easily inverted to show that votes going up and down a few times is rather weak evidence, if any, for LW being Awful in the ways you described.
It tells me that you missed the point. Parfit’s paradox is not about pragmatic decision making, it is about flaws in the utility function.
“Truth forever on the scaffold, Wrong forever on the throne,” eh? And fractally so?
You have indeed found A Reason that supports your belief in the AGI-God, but I think you’ve failed to think it through. Why should the AGI need to tell us how we did in order to analyze our thought processes? And how come the optimal study method is specifically the one which allows you to be shown Right All Along? Specificity only brings Burdensome Details.
“Perhaps the reason that we disagree with you is not that we’re emotionally biased, irrational, mobbish, etc. Maybe we simply disagree. People can legitimately disagree without one of them being Bad People.”
It’s obvious what’s going on when you look at the high positive scores being given to really poor comments.
“It tells me that you missed the point. Parfit’s paradox is not about pragmatic decision making, it is about flaws in the utility function.”
A false paradox tells you nothing about flaws in the utility function—it simply tells you that people who apply it in a slapdash manner get the wrong answers out of it and that the fault lies with them.
“You have indeed found A Reason that supports your belief in the AGI-God, but I think you’ve failed to think it through. Why should the AGI need to tell us how we did in order to analyze our thought processes? And how come the optimal study method is specifically the one which allows you to be shown Right All Along? Specificity only brings Burdensome Details.”
AGI won’t be programmed to find me right all the time, but to identify which arguments are right. And for the sake of those who are wrong, they need to be told that they were wrong so that they understand that they are bad at reasoning and not the great thinkers they imagine themselves to be.
Well, since you put it that way…
What are the right answers? Clearly, you think you have the right answers, but suppose I disagree with you (which I do). Just as clearly, this means that I want AGI system builders steered in a different direction than you do.
You seem to want to sidestep the question of “just what are the right answers to questions of morality and metaethics?”. I submit to you that this is, in fact, the critical question.
And have you managed to convince anyone that your ideas are correct? Or have people’s reactions been more or less comparable to my reaction here? If the latter—have you changed your approach? Have you reconsidered whether you are, in fact, correct—or, at least, reconsidered what the right way to convince people is?
Of course, those are meta-level considerations. It would be unfair of me to avoid the object-level matter, so let me try to answer your implied question (“how are your ideas inferior to some other way of calculating right and wrong”):
They are inferior because they get the wrong answers.
Now, you might say: “Wrong answers?! Nonsense! Of course my answers are right!” Well, indeed, no doubt you think so. But I disagree. This is exactly the problem: people disagree on what the right answers are.
Well, you won’t get that here, I promise you that… :)
Oh? But this is quite curious; I can easily show that your approach generates wrong answers. Observe:
You say that “we have to stand by the principle that all sentiences are equally important”. But I don’t agree; I don’t stand by that principle, nor is there any reason for me to do so, as it is counter to my values.
As you see, your answer differs from mine. That makes it wrong (by my standards—which are the ones that matter to me, of course).
I just did, as you see.
Why? For you to be demonstrably wrong, it is not required that anyone or anything else be demonstrably right. If you say that 2 and 2 make 5, you are wrong even if no one present can come up with the right answer about what 2 and 2 actually make—whatever it is, it sure ain’t 5!
″ You seem to want to sidestep the question of “just what are the right answers to questions of morality and metaethics?”. I submit to you that this is, in fact, the critical question.”
I have never sidestepped anything. The right answers are the ones dictated by the weighing up of harm based on the available information (which includes the harm ratings in the database of knowledge of sentience). If the harm from one choice has a higher weight than another choice, that other choice is more moral. (We all have such a database in our heads, but each contains different data and can apply different weightings to the same things, leading to disagreements between us about what’s moral, but AGI will over time generate its own database which will end up being much more accurate than any of ours.)
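As a toy illustration of that weighing-up rule, here is a minimal Python sketch. It is not a specification of any real system; the function names, patients, and harm weights are all invented for the example.

```python
# Hypothetical sketch of the weighing-up rule: each choice lists the sentiences
# it affects with an estimated harm weight, and the choice with the lower total
# harm is judged more moral. Names and numbers are invented for illustration.

def total_harm(choice):
    """Sum the harm weights over everyone affected by this choice."""
    return sum(weight for _person, weight in choice)

def more_moral(choice_a, choice_b):
    """Return whichever choice carries the lower total harm weight."""
    return choice_a if total_harm(choice_a) <= total_harm(choice_b) else choice_b

choice_a = [("patient_1", 3.0), ("patient_2", 3.0)]  # two people moderately harmed
choice_b = [("patient_3", 9.0)]                      # one person severely harmed

print(more_moral(choice_a, choice_b))  # choice_a: total harm 6.0 beats 9.0
```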
“And have you managed to convince anyone that your ideas are correct?”
I’ve found a mixture of people who think it’s right and others who say it’s wrong and who point me towards alternatives which are demonstrably faulty.
“They are inferior because they get the wrong answers.”
Well, that’s what we need to explore, and we need to take it to a point where it isn’t just a battle of assertions and counter-assertions.
“I can easily show that your approach generates wrong answers. Observe: You say that “we have to stand by the principle that all sentiences are equally important”. But I don’t agree; I don’t stand by that principle, nor is there any reason for me to do so, as it is counter to my values.”
This may need a new blog post to explore it fully, but I’ll try to provide a short version here. If a favourite relative of yours was to die and be reincarnated as a rat, you would, if you’re rational, want to treat that rat well if you knew who it used to be. You wouldn’t regard that rat as an inferior kind of thing that doesn’t deserve protection from people who might seek to make it suffer. It wouldn’t matter that your reincarnated relative has no recollection of their previous life—they would matter to you as much in that form as they would if they had a stroke and were reduced to a capability similar to a rat’s and had lost all memory of who they were. The two things are equivalent and it’s irrational to consider one of them as being in less need of protection from torture than the other.
Reincarnation! Really! You need to resort to bringing that crazy idea into this? (Not your reply, but it’s the kind of reaction that such an idea is likely to generate). But this is an important point—the idea that reincarnation can occur is more rational than the alternatives. If the universe is virtual, reincarnation is easy and you can be made to live as any sentient player. But if it isn’t, and if there’s no God waiting to scoop you up into his lair, what happens to the thing (or things) inside you that is sentient? Does it magically disappear and turn into nothing? Did it magically pop into existence out of nothing in the first place? Those are mainstream atheist religious beliefs. In nature, there isn’t anything that can be created or destroyed other than building and breaking up composite objects. If a sentience is a compound object which can be made to suffer without any of its components suffering, that’s magic too. If the thing that suffers is something that emerges out of complexity without any of the components suffering, again that’s magic. If there is sentience (feelings), there is a sentience to experience those feelings, and it isn’t easy to destroy it—that takes magic, and we shouldn’t be using magic as mechanisms in our thinking. The sentience in that rat could quite reasonably be someone you love, or someone you loved in a past life long ago. It would be a serious error not to regard all sentiences as having equal value unless you have proof that some of them are lesser things, but you don’t have that.
You’ve also opened the door to “superior” aliens deciding that the sentience in you isn’t equivalent to the sentiences in them, which allows them to treat you in less moral ways by applying your own standards.
“As you see, your answer differs from mine. That makes it wrong (by my standards—which are the ones that matter to me, of course).”
And yet one of the answers is actually right, while the other isn’t. Which one of us will AGI judge to have the better argument for this? This kind of dispute will be settled by AGI’s intelligence quite independently of any morality rules that it might end up running. The best arguments will always win out, and I’m confident that I’ll be the one winning this argument when we have unbiased AGI weighing things up.
“and show me a proposed system of morality that makes different judgements from mine which I can’t show to be defective.” --> “Why? For you to be demonstrably wrong, it is not required that anyone or anything else be demonstrably right. If you say that 2 and 2 make 5, you are wrong even if no one present can come up with the right answer about what 2 and 2 actually make—whatever it is, it sure ain’t 5!”
If you can show me an alternative morality which isn’t flawed and which produces different answers from mine when crunching the exact same data, one of them will be wrong, and that would provide a clear point at which close examination would lead to one of those systems being rejected.
I disagree. I reject your standard of correctness. (As do many other people.)
The question of whether there is an objective standard of correctness for moral judgments, is the domain of metaethics. If you have not encountered this field before now, I strongly suggest that you investigate it in detail; there is a great deal of material there, which is relevant to this discussion.
(I will avoid commenting on the reincarnation-related parts of your comment, even though they do form the bulk of what you’ve written. All of that is, of course, nonsense, but there’s no need whatever to rehash, in this thread, the arguments for why it is nonsense. I will only suggest that you read the sequences; much of the material therein is targeted at precisely this sort of topic, and this sort of viewpoint.)
“I disagree. I reject your standard of correctness. (As do many other people.)”
Shingles is worse than a cold. I haven’t had it, but those who have will tell you how bad the pain is. We can collect data on suffering by asking people how bad things feel in comparison to other things, and this is precisely what AGI will set about doing in order to build its database and make its judgements more and more accurate. If you have the money to alleviate the suffering of one person out of a group suffering from a variety of painful conditions and all you know about them is which condition they have just acquired, you can use the data in that database to work out which one you should help. That is morality being applied, and it’s the best way of doing it—any other answer is immoral. Of course, if we know more about these people, such as how good or bad they are, that might change the result, but again there would be data that can be crunched to work out how much suffering their past actions caused to undeserving others. There is a clear mechanism for doing this, and not doing it that way using the available information is immoral.
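A minimal sketch of that allocation decision, assuming a severity table of the kind described (the table, the patients, and the whom_to_help helper are all invented for illustration):

```python
# Hypothetical sketch: comparative suffering reports aggregated into severity
# ratings, used to decide whom to help first. All figures are invented.

severity = {"cold": 1.0, "migraine": 5.0, "shingles": 8.0}

patients = [("dana", "cold"), ("eli", "shingles"), ("fay", "migraine")]

def whom_to_help(patients, severity):
    """Pick the patient whose newly acquired condition rates as the most severe."""
    return max(patients, key=lambda patient: severity[patient[1]])

print(whom_to_help(patients, severity))  # ('eli', 'shingles')
```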
“The question of whether there is an objective standard of correctness for moral judgments, is the domain of metaethics.”
We already have what we need—a pragmatic system for getting as close to the ideal morality as possible based on collecting the data as to how harmful different experiences are. The data will never be complete, they will never be fully accurate, but they are the best that can be done and we have a moral duty to compile and use them.
“(I will avoid commenting on the reincarnation-related parts of your comment, even though they do form the bulk of what you’ve written. All of that is, of course, nonsense...”
If you reject that, you are doing so in favour of magical thinking, and AGI won’t be impressed with that. The idea that the sentience in you can’t go on to become a sentience in a maggot is based on the idea that after death that sentience magically becomes nothing. I am fully aware that most people are magical thinkers, so you will always feel that you are right on the basis that hordes of fellow magical thinkers back up your magical beliefs, but you are being irrational. AGI is not going to be programmed to be irrational in the same way most humans are. The job of AGI is to model reality in the least magical way it can, and having things pop into existence out of nothing and then return to being nothing is more magical than having things continue to exist in the normal way that things in physics behave. (All those virtual particles that pop in and out of existence in the vacuum, they emerge from a “nothing” that isn’t nothing—it has properties such as a rule that whatever’s taken from it must have the same amount handed back.) Religious people have magical beliefs too and they too make the mistake of thinking that numbers of supporters are evidence that their beliefs are right, but being right is not democratic. Being right depends squarely on being right. Again here, we don’t have absolute right answers in one sense, but we do have in terms of what is probably right, and an idea that depends on less magic (and more rational mechanism) is more likely to be right. You have made a fundamental mistake here by rejecting a sound idea on the basis of a bias in your model of reality that has led to you miscategorising it as nonsense, while your evidence for it being nonsense is support by a crowd of people who haven’t bothered to think it through.
How do you know that? Why should anyone care about this definition? These are questions which you have definitely sidestepped.
Is 2+2 equal to 5 or to fish?
What is this “unbiased AGI” who makes moral judgments on the basis of intelligence alone? This is nonsense—moral “truths” are not the same as physical or logical truths. They are fundamentally subjective, similar to definitions. You cannot have an “unbiased morality” because there is no objective moral reality to test claims against.
You should really read up on the Orthogonality Thesis and related concepts. Also, how do you plan on distinguishing between right and wrong moralities?
“How do you know that? Why should anyone care about this definition? These are questions which you have definitely sidestepped.”
People should care about it because it always works. If anyone wants to take issue with that, all they have to do is show a situation where it fails. All examples confirm that it works.
“Is 2+2 equal to 5 or to fish?”
Neither of those results works, but neither of them is my answer.
“What is this “unbiased AGI” who makes moral judgments on the basis of intelligence alone? This is nonsense—moral “truths” are not the same as physical or logical truths. They are fundamentally subjective, similar to definitions. You cannot have an “unbiased morality” because there is no objective moral reality to test claims against.”
You’ve taken that out of context—I made no claim about it making moral judgements on the basis of intelligence alone. That bit about using intelligence alone was referring to a specific argument that doesn’t relate directly to morality.
“You should really read up on the Orthogonality Thesis and related concepts.”
Thanks - all such pointers are welcome.
“Also, how do you plan on distinguishing between right and wrong moralities?”
First by recognising what morality is for. If there was no suffering, there would be no need for morality as it would be impossible to harm anyone. In a world of non-sentient robots, they can do what they like to each other without it being wrong as no harm is ever done. Once you’ve understood that and get the idea of what morality is about (i.e. harm management), then you have to think about how harm management should be applied.

The sentient things that morality protects are prepared to accept being harmed if it’s a necessary part of accessing pleasure where that pleasure will likely outweigh the harm, but they don’t like being harmed in ways that don’t improve their access to pleasure. They don’t like being harmed by each other for insufficient gains. They use their intelligence to work out that some things are fair and some things aren’t, and what determines fairness is whether the harm they suffer is likely to lead to overall gains for them or not. In the more complex cases, one individual can suffer in order for another individual to gain enough to make that suffering worthwhile, but only if the system shares out the suffering such that they all take turns in being the ones who suffer and the ones who gain. They recognise that if the same individual always suffers while others always gain, that isn’t fair, and they know it isn’t fair simply by imagining it happening that way to them. The rules of morality come out of this process of rational thinking about harm management—it isn’t some magic thing that we can’t understand.

To maximise fairness, that suffering which opens the way to pleasure should be shared out as equally as possible, and so should access to the pleasures. The method of imagining that you are all of the individuals and seeking a means of distribution of suffering and pleasure that will satisfy you as all of them would automatically provide the right answers if full information was available. Because full information isn’t available, all we can do is calculate the distribution that’s most likely to be fair on that same basis using the information that is actually available. With incorrect moralities, some individuals are harmed for others’ gains without proper redistribution to share the harm and pleasure around evenly. It’s just maths.
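A toy version of that “imagine being all of the individuals” test might look like the following sketch. All of it is invented for illustration: each candidate distribution is scored as if you had to live every life in it, preferring more total net wellbeing and, as a tie-breaker, a more even spread of it.

```python
# Hypothetical sketch: score candidate distributions of pleasure and suffering
# as if you had to live every life in them. Prefer more total net wellbeing,
# and break ties in favour of spreading it more evenly. Numbers are invented.

def net_outcomes(distribution):
    """Net wellbeing (pleasure minus suffering) for each individual."""
    return [pleasure - suffering for pleasure, suffering in distribution]

def score(distribution):
    outcomes = net_outcomes(distribution)
    total = sum(outcomes)
    spread = max(outcomes) - min(outcomes)  # penalise dumping the costs on one person
    return (total, -spread)                 # higher total first, then smaller spread

# Each tuple is (pleasure, suffering) for one individual.
fair_rotation = [(6, 2), (6, 2), (6, 2)]  # everyone takes a turn at paying the costs
one_scapegoat = [(8, 0), (8, 0), (2, 6)]  # same totals, but one person always pays

print(max([fair_rotation, one_scapegoat], key=score))  # the fair rotation wins the tie-break
```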
What do you mean, it works? I agree that it matches our existing preconceptions and intuitions about morality better than the average random moral system, but I don’t think that that comparison is a useful way of getting to truth and meaningful categories.
I’ll stop presenting you with poorly-carried-out Zen koans and be direct. You have constructed a false dilemma. It is quite possible for both of you to be wrong.
“All sentiences are equally important” is definitely a moral statement.
I think that this is a fine (read: “quite good”; an archaic meaning) definition of morality-in-practice, but there are a few issues with your meta-ethics and surrounding parts. First, it is not trivial to define what beings are sentient and what counts as suffering (and how much). Second, if your morality flows entirely from logic, then all of the disagreement or possibility for being incorrect is inside “you did the logic incorrectly,” and I’m not sure that your method of testing moral theories takes that possibility into account.
I agree that it is mathematics, but where is this “proper” coming from? Could somebody disagree about whether, say, it is moral to harm somebody as retributive justice? Then the equations need our value system as input, and the results are no longer entirely objective. I agree that “what maximizes X?” is objective, though.
“What do you mean, it works? I agree that it matches our existing preconceptions and intuitions about morality better than the average random moral system, but I don’t think that that comparison is a useful way of getting to truth and meaningful categories.”
It works beautifully. People have claimed it’s wrong, but they can’t point to any evidence for that. We urgently need a system for governing how AGI calculates morality, and I’ve proposed a way of doing so. I came here to see what your best system is, but you don’t appear to have made any selection at all—there is no league table of best proposed solutions, and there are no league tables for each entry listing the worst problems with them. I’ve waded through a lot of stuff and have found that the biggest objection to utilitarianism is a false paradox. Why should you be taken seriously at all when you’ve failed to find that out for yourselves?
“You have constructed a false dilemma. It is quite possible for both of you to be wrong.”
If you trace this back to the argument in question, it’s about equal amounts of suffering being equally bad for sentiences in different species. If they are equal amounts, they are necessarily equally bad—if they weren’t, they wouldn’t have equal values.
“You’ve taken that out of context—I made no claim about it making moral judgements on the basis of intelligence alone. That bit about using intelligence alone was referring to a specific argument that doesn’t relate directly to morality.” --> “‘All sentiences are equally important’ is definitely a moral statement.”
Again you’re trawling up something that my statement about using intelligence alone was not referring to.
“First, it is not trivial to define what beings are sentient and what counts as suffering (and how much).”
That doesn’t matter—we can still aim to do the job as well as it can be done based on the knowledge that is available, and the odds are that that will be better than not attempting to do so.
“Second, if your morality flows entirely from logic, then all of the disagreement or possibility for being incorrect is inside “you did the logic incorrectly,” and I’m not sure that your method of testing moral theories takes that possibility into account.”
It will be possible with AGI to have it run multiple models of morality and to show up the differences between them and to prove that it is doing the logic correctly. At that point, it will be easier to reveal the real faults rather than imaginary ones. But it would be better if we could prime AGI with the best candidate first, before it has the opportunity to start offering advice to powerful people.
“I agree that it is mathematics, but where is this “proper” coming from?”
Proper simply means correct—fair share where everyone gets the same amount of reward for the same amount of suffering.
“Could somebody disagree about whether, say, it is moral to harm somebody as retributive justice? Then the equations need our value system as input, and the results are no longer entirely objective.”
Retributive justice is inherently a bad idea because there’s no such thing as free will—bad people are not to blame for being the way they are. However, there is a need to deter others (and to discourage repeat behaviour by the same individual if they’re ever to be released into the wild again), so plenty of harm will typically be on the agenda anyway if the calculation is that this will reduce harm.
What does it mean to be somebody else? It seems like you have the intuition of a non-physical Identity Ball which can be moved from body to body, but consider this: the words that you type, the thoughts in your head, all of these are purely physical processes. If your Identity Ball were removed or replaced, there would be no observable change, even from within—because noticing something requires a physical change in the brain corresponding to the thought occurring within your mind. A theory of identity that better meshes with reality is that of functionalism, which in T-shirt slogan form is “the mind is a physical process, and the particulars of that process determine identity.”
For more on this, I recommend Yudkowsky’s writings on consciousness, particularly Zombies! Zombies?,
This Proves Too Much—you could say the same of any joint property. When I deconstruct a chair, where does the chairness go? Surely it cannot just disappear—that would violate the Conservation of Higher-Order Properties which you claim exists.
Almost everything we care about is composite, so this is an odd way of putting it, but yes.
One need not carry out nuclear fission to deconstruct a chair.
The definition of complex systems, one might say, is that they have properties beyond the properties of the individual components that make them up.
Why should it require magic for physical processes to move an object out of a highly unnatural chunk of object-space that we define as “a living human being”? Life and intelligence are fragile, as are most meaningful categories. It is resilience which requires additional explanation.
“What does it mean to be somebody else? It seems like you have the intuition of an non-physical Identity Ball which can be moved from body to body,”
The self is nothing more than the sentience (the thing that is sentient). Science has no answers on this at all at the moment, so it’s a difficult thing to explore, but if there is suffering, there must be a sufferer, and that sufferer cannot just be complexity—it has to have some physical reality.
“but consider this: the words that you type, the thoughts in your head, all of these are purely physical processes.”
In an AGI system those are present too, but sentience needn’t be. Sentience is something else. We are not our thoughts or memories.
“If your Identity Ball were removed or replaced, there would be no observable change, even from within—because noticing something requires a physical change in the brain corresponding to the thought occurring within your mind.”
There is no guarantee that the sentience in you is the same one from moment to moment—our actual time spent as the sentience in a brain may be fleeting. Alternatively, there may be millions of sentiences in there which all feel the same things, all feeling as if they are the person in which they exist.
“For more on this, I recommend Yudkowsky’s writings on consciousness, particularly Zombies! Zombies?,”
Thanks—I’ll take a look at that too.
“This Proves Too Much—you could say the same of any joint property. When I deconstruct a chair, where does the chairness go? Surely it cannot just disappear—that would violate the Conservation of Higher-Order Properties which you claim exists.”
Can you make the “chairness” suffer? No. Can you make the sentience suffer? If it exists at all, yes. Can that sentience evaporate into nothing when you break up a brain in the way that the “chairness” disappears when you break up a chair? No. They are radically different kinds of thing. Believing that a sentience can emerge out of nothing to suffer and then disappear back into nothing is a magical belief. The “chairness” of a chair, by way of contrast, is made of nothing—it is something projected onto the chair by imagination.
“If a sentience is a compound object which can be made to suffer without any of its components suffering, that’s magic too.” --> “One need not carry out nuclear fission to deconstruct a chair.”
Relevance?
“The definition of complex systems, one might say, is that they have properties beyond the properties of the individual components that make them up.”
Nothing is ever more than the sum of its parts (including any medium on which it depends). Complex systems can reveal hidden aspects of their components, but those aspects are always there. In a universe with only one electron and nothing else at all, the property of the electron that repels it from another electron is a hidden property, but it’s already there—it doesn’t suddenly ping into being when another electron is added to the universe and brought together with the first one.
“Why should it require magic for physical processes to move an object out of a highly unnatural chunk of object-space that we define as “a living human being”? Life and intelligence are fragile, as are most meaningful categories. It is resilience which requires additional explanation.”
What requires magic is for the sentient thing in us to stop existing when a person dies. What is the thing that suffers? Is it a plurality? Is it a geometrical arrangement? Is it a pattern of activity? How would any of those suffer? My wallpaper has a pattern, but I can’t torture that pattern. My computer can run software that does intelligent things, but I can’t torture that software or the running of that software. Without a physical sufferer, there can be no suffering.
How do you know it exists, if science knows nothing about it?
This same argument applies just as well to any distributed property. I agree that intelligence/sentience/etc. does not arise from complexity alone, but it is a distributed process and you will not find a single atom of Consciousness anywhere in your brain.
Is your sentience in any way connected to what you say? Then sentience must either be a physical process, or capable of reaching in and pushing around atoms to make your neurons fire to make your lips say something. The latter is far more unlikely and not supported by any evidence. Perhaps you are not your thoughts and memories alone, but what else is there for “you” to be made of?
So the Sentiences are truly epiphenomenal, then? (They have no causal effect on physical reality?) Then how can they be said to exist? Regardless of the Deep Philosophical Issues, how could you have any evidence of their existence, or what they are like?
They are both categories of things. The category that you happen to place yourself in is not inherently, a priori, a Fundamentally Real Category. And even if it were a Fundamentally Real Category, that does not mean that the quantity of members of that Category is necessarily conserved over time, that members cannot join and leave as time goes on.
It’s the same analogy as before—just as you don’t need to split a chair’s atoms to split the chair itself, you don’t need to make a brain’s atoms suffer to make it suffer.
How do you know that? And how can this survive contact with reality, where in practice we call things “chairs” even if there is no chair-ness in its atoms?
I recommend the Reductionism subsequence.
But the capability of an arrangement of atoms to compute 2+2 is not inside the atoms themselves. And anyway, this supposed “hidden property” is nothing more than the fact that the electron produces an electric field pointed toward it. Repelling-each-other is a behavior that two electrons do because of this electric field, and there’s no inherent “repelling electrons” property inside the electron itself.
But it’s not a thing! It’s not an object, it’s a process, and there’s no reason to expect the process to keep going somewhere else when its physical substrate fails.
I’ll go with the last one.
Taking the converse does not preserve truth. All cats are mammals but not all mammals are cats.
You could torture the software, if it were self-aware and had a utility function.
But—where is the physical sufferer inside you?
You have pointed to several non-suffering patterns, but you could just as easily do the same if sentience was a process but an uncommon one. (Bayes!) And to go about this rationally, we would look at the differences between a brain and wallpaper—and since we haven’t observed any Consciousness Ball inside a brain, there’d be no reason to suppose that the difference is this unobservable Consciousness Ball which must be in the brain but not the wallpaper, explaining their difference. There is already an explanation. There is no need to invoke the unobservable.
“How do you know it exists, if science knows nothing about it?”
All science has to go on is the data that people produce which makes claims about sentience, but that data can’t necessarily be trusted. Beyond that, all we have is internal belief that the feelings we imagine we experience are real because they feel real, and it’s hard to see how we could be fooled if we don’t exist to be fooled. But an AGI scientist won’t be satisfied by our claims—it could write off the whole idea as the ramblings of natural general stupidity systems.
“This same argument applies just as well to any distributed property. I agree that intelligence/sentience/etc. does not arise from complexity alone, but it is a distributed process and you will not find a single atom of Consciousness anywhere in your brain.”
That isn’t good enough. If pain is experienced by something, that something cannot be a compound of any kind with none of the components feeling any of it. A distribution cannot suffer.
“Is your sentience in any way connected to what you say?”
It’s completely tied to what I say. The main problem is that other people tend to misinterpret what they read by mixing other ideas into it as a short cut to understanding.
“Then sentience must either be a physical process, or capable of reaching in and pushing around atoms to make your neurons fire to make your lips say something. The latter is far more unlikely and not supported by any evidence. Perhaps you are not your thoughts and memories alone, but what else is there for “you” to be made of?”
Focus on the data generation. Physical processes drive that generation, and the data system applies rules with every step of the process governed by physics. For data to be produced that makes claims about experiences of pain, a rational process with causes and effects at every step has to run through. If the “pain” is nothing more than assertions that the data system is programmed to churn out without looking for proof that pain exists, there is no reason to take those assertions at face value; but if they are true, they have to fit into the cause-and-effect chain somewhere—they have to be involved in a physical interaction, because without that they can have no role in generating the data that supposedly tells us about them.
“So the Sentiences are truly epiphenomenal, then? (They have no causal effect on physical reality?) Then how can they be said to exist? Regardless of the Deep Philosophical Issues, how could you have any evidence of their existence, or what they are like?”
Repeatedly switching the sentient thing wouldn’t remove its causal role, and nor would having more than one sentience all acting at once. They could collectively have an input even if they aren’t all “voting the same way”, and they aren’t going to find out whether they got their wish, because they’ll be loaded with a feeling of satisfaction that they “won the vote” even if they didn’t, and they won’t remember which way they “voted” or what they were even “voting” on.
“They are both categories of things.”
“Chairness” is quite unlike sentience. “Chairness” is an imagined property, whereas sentience is an experience of a feeling.
“It’s the same analogy as before—just as you don’t need to split a chair’s atoms to split the chair itself, you don’t need to make a brain’s atoms suffer to make it suffer.”
You can damage a chair with an axe without breaking every bond, but some bonds will be broken. You can’t split it without breaking any bonds. Most of the chair is not broken (unless you’ve broken most of the bonds). For suffering in a brain, it isn’t necessarily atoms that suffer, but if the suffering is real, something must suffer, and if it isn’t the atoms, it must be something else. It isn’t good enough to say that it’s a plurality of atoms or an arrangement of atoms that suffers without any of the atoms feeling anything, because you’ve failed to identify the sufferer. No arrangement of non-suffering components can provide everything that’s required to support suffering.
“‘Nothing is ever more than the sum of its parts (including any medium on which it depends). Complex systems can reveal hidden aspects of their components, but those aspects are always there.’ --> How do you know that? And how can this survive contact with reality, where in practice we call things ‘chairs’ even if there is no chair-ness in its atoms?”
“Chair” is a label representing a compound object. Calling it a chair doesn’t magically make it more than the sum of its parts. Chairs provide two services—one is that they support a person sitting on them, and the other is that they support that person’s back leaning against them. That is what a chair is. You can make a chair in many ways, such as by cutting out a cuboid of rock from a cliff face. You could potentially make a chair using force fields. “Chairness” is a compound property which refers to the functionalities of a chair. (Some kinds of “chairness” could also refer to other aspects of some chairs, such as their common shapes, but those are not universal.) The fundamental functionalities of chairs are found in the forces between the component atoms. The forces are present in a single atom even when it has no other atom to interact with. There is never a case where anything is more than the sum of its parts—any proposed example of such a thing is wrong.
“I recommend the Reductionism subsequence.”
Is there an example of something being more than the sum of its parts there? If so, why don’t we go directly to that? Give me your best example of this magical phenomenon.
“But the capability of an arrangement of atoms to compute 2+2 is not inside the atoms themselves. And anyway, this supposed “hidden property” is nothing more than the fact that the electron produces an electric field pointed toward it. Repelling-each-other is a behavior that two electrons do because of this electric field, and there’s no inherent “repelling electrons” property inside the electron itself.”
In both cases you’re using compound properties that are built up of component properties, and then wrongly treating those compound properties as fundamental ones.
“But it’s not a thing! It’s not an object, it’s a process, and there’s no reason to expect the process to keep going somewhere else when its physical substrate fails.”
You can’t make a process suffer.
“Taking the converse does not preserve truth. All cats are mammals but not all mammals are cats.”
Claiming that a pattern can suffer is a way-out claim. Maybe the universe really is that weird, but it’s worth spelling out clearly what it is you’re attributing sentience to. If you’re happy with the idea of a pattern experiencing pain, then patterns become remarkable things. (I’d rather look for something of more substance than a mere arrangement, but it leaves us both with the bigger problem of how that sentience can make its existence known to a data system.)
“You could torture the software, if it were self-aware and had a utility function.”
Torturing software is like trying to torture the text in an ebook.
“But—where is the physical sufferer inside you?”
That’s what I want to know.
“You have pointed to several non-suffering patterns, but you could just as easily do the same if sentience was a process but an uncommon one. (Bayes!)”
Do you seriously imagine that there’s any magic pattern that can feel pain, such as a pattern of activity where none of the component actions feel anything?
“There is already an explanation. There is no need to invoke the unobservable.”
If you can’t identify anything that’s suffering, you don’t have an explanation, and if you can’t identify how your imagined-to-be-suffering process or pattern is transmitting knowledge of that suffering to the processes that build the data that documents the experience of suffering, again you don’t have an explanation.
Your One True Theory is basically utilitarianism. You don’t embrace any kind of deontology, but deontology can prevent Omelas, Utility Monstering, etc.
I can see straight away that we’re running into a jargon barrier. (And incidentally, Google has never even heard of utility monstering.) Most people like me who are involved in the business of actually building AGI have a low opinion of philosophy and have not put any time into learning its specialist vocabulary. I have a higher opinion of philosophy than most though (and look forward to the day when AGI turns philosophy from a joke into the top-level branch of science that it should be), but I certainly do have a low opinion of most philosophers, and I haven’t got time to read through large quantities of junk in order to find the small amount of relevant stuff that may be of high quality—we’re all tied up in a race to get AGI up and running, and moral controls are a low priority for most of us during that phase. Indeed, for many teams working for dictatorships, morality isn’t something they will ever want in their systems at all, which is why it’s all the more important that teams trying to build safe AGI are left as free as possible to spend their time building it rather than filling their heads with bad philosophy and becoming experts in its jargon. There is a major disconnect here, and while I’m prepared to learn the jargon to a certain degree where the articles I’m reading are rational and apposite, I’m certainly not going to make the mistake of learning to speak in jargon, because that only serves to put up barriers to understanding which shut out the other people who most urgently need to be brought into the discussion.
Clearly though, jargon has an important role in that it avoids continual repetition of many of the important nuts and bolts of the subject, but there needs to be a better way into this which reduces the workload by enabling newcomers to avoid all the tedious junk so that they can get to the cutting-edge ideas by as direct a route as possible. I spent hours yesterday reading through pages of highly-respected bilge, and because I have more patience than most people, I will likely spend the next few days reading through more of the same misguided stuff, but you simply can’t expect everyone in this business to wade through a fraction as much as I have—they have much higher priorities and simply won’t do it.
You say that my approach is essentially utilitarianism, but no—morality isn’t about maximising happiness, although it certainly should not block such maximisation for those who want to pursue it. Morality’s role is to minimise the kinds of harm which don’t open the way to the pursuit of happiness. Suffering is bad, and morality is about trying to eliminate it, but not where that suffering is out-gunned by pleasures which make the suffering worthwhile for the sufferers.
You also say that I don’t embrace any kind of deontology, but I do, and I call it computational morality. I’ve set out how it works, and it’s all a matter of following rules which maximise the probability that any decision is the best one that could be made based on the available information. You may already use some other name for it which I don’t know yet, but it is not utilitarianism.
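To make the weighing concrete, here is a minimal sketch in Python of the kind of calculation I have in mind—a toy illustration only, with placeholder names and figures rather than anything taken from the actual system: estimate each affected party’s suffering and any compensating pleasure for every available option, weight by probability, and pick the option with the lowest expected net harm.

```python
# Toy sketch only: placeholder names and figures, not the real system.
from dataclasses import dataclass

@dataclass
class Outcome:
    party: str          # who is affected
    probability: float  # chance this outcome occurs if the option is taken
    suffering: float    # harm to this party on a common scale
    pleasure: float     # compensating benefit to the same party

def expected_net_harm(outcomes):
    """Suffering minus compensating pleasure, floored at zero per party, weighted by probability."""
    return sum(o.probability * max(0.0, o.suffering - o.pleasure) for o in outcomes)

def best_option(options):
    """Pick the option whose expected net harm is lowest."""
    return min(options, key=lambda name: expected_net_harm(options[name]))

options = {
    "option_a": [Outcome("alice", 0.9, suffering=2.0, pleasure=0.5)],
    "option_b": [Outcome("alice", 0.5, suffering=1.0, pleasure=0.0),
                 Outcome("bob",   0.5, suffering=1.5, pleasure=3.0)],
}
print(best_option(options))  # -> "option_b" (lower expected net harm)
```

The floor at zero reflects the point above: pleasure only makes suffering worthwhile for the same sufferer—one person’s surplus enjoyment is not allowed to cancel out another person’s harm.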
I’m an independent thinker who’s worked for decades on linguistics and AI in isolation, finding my own solutions for all the problems that crop up. I have a system which is now beginning to provide natural language programming capability. I’ve made this progress by avoiding spending any time looking at what other people are doing. With this morality business though, it bothers me that other people are building what will be highly biased systems which could end up wiping everyone out—we need to try to get everyone who’s involved in this together and communicate in normal language, systematically going through all the proposals to find out where they break. Now, you may think you’ve already collectively done that work for them, and that may be the case—it’s possible that you’ve got it right and that there are no easy answers, but how many people building AGI have the patience to do tons of unrewarding reading instead of being given a direct tour of the crunch issues?
Here’s an example of what actually happens. I looked up Utilitarianism to make sure it means what I’ve always taken it to mean, and it does. But what did I find? This: http://www.iep.utm.edu/util-a-r/#H2 Now, this illustrates why philosophy has such a bad reputation—the discussion is dominated by mistakes which are never owned up to. Take the middle example:-
If a doctor can save five people from death by killing one healthy person and using that person’s organs for life-saving transplants, then act utilitarianism implies that the doctor should kill the one person to save five.
This one keeps popping up all over the place, but you can take organs from the least healthy of the people needing organs just before he pops his clogs and use them to save all the others without having to remove anything from the healthy person at all.
The other examples above and below it are correct, so the conclusion underneath is wrong: “Because act utilitarianism approves of actions that most people see as obviously morally wrong, we can know that it is a false moral theory.” This is why expecting us all to read through tons of error-ridden junk is not the right approach. You have to reduce the required reading material to a properly thought out set of documents which have been fully debugged. But perhaps you already have that here somewhere?
It shouldn’t even be necessary though to study the whole field in order to explore any one proposal in isolation: if that proposal is incorrect, it can be dismissed (or sent off for reworking) simply by showing up a flaw in it. If no flaw shows up, it should be regarded as potentially correct, and in the absence of any rivals that acquire that same status, it should be recommended for installation into AGI, because AGI running without it will be much more dangerous.
Au contraire: here is the Wikipedia article on utility monsters, and here is some guy’s blog post about utility monsters. This was easily found via Google.
If you don’t mind my asking, are you affiliated with MIRI? In what way are you involved in “the business of actually building AGI”?
The class of moral theories referred to as “utilitarianism” does, indeed, include exactly such frameworks as you describe (which would fall, roughly, into the category of “negative utilitarianism”). (The SEP article about consequentialism provides a useful taxonomy.)
Your “computational morality” is most assuredly not a deontological moral theory, as it relies on consequences (namely, harm to certain sorts of entities) as the basis for its evaluations. Your framework, though it is not quite coherent enough to pin down precisely, may roughly be categorized as a “rule utilitarianism”. (Rule-consequentialist moral theories—of which rule utilitarianisms are, by definition, a subclass—do tend to be easy to confuse with deontological views, but the differences are critical, and have to do, again, with the fundamental basis for moral evaluations.)
You are aware, I should hope, that this makes you sound very much like an archetypical crank?
It will not, I hope, surprise you to discover that your objection is quite common and well-known, and just as commonly and easily disposed of. Again I refer you to the sequences, as well as to this excellent Less Wrong post.
“Au contraire: here is the Wikipedia article on utility monsters, and here is some guy’s blog post about utility monsters. This was easily found via Google.”
I googled “utility monstering” and there wasn’t a single result for it—I didn’t realise I had to change the ending on it. Now that I know what it means though, I can’t see why you brought it up. You said, “You don’t embrace any kind of deontology, but deontology can prevent Omelas, Utility Monstering, etc.” I’d already made it clear that feelings are different for different individuals, so either I’m already using some kind of deontology or I’m using something else that does the same job. There needs to be a database of knowledge of feelings, providing information on the average person, but data also needs to be collected on individuals to tune the calculations to them more accurately. Where you don’t know anything about the individual, you have to go by the database of the average person and apply that, as it is more likely to be right than any other database that you randomly select.
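A rough sketch of how that fallback might look in code—purely illustrative, with made-up profiles and values rather than a real feelings database:

```python
# Illustrative only: made-up profiles and values, not a real feelings database.
AVERAGE_PERSON = {"pain_from_noise": 3.0, "fear_of_death": 9.0}

individual_profiles = {
    "alice": {"pain_from_noise": 6.0},  # data gathered on this individual
}

def feeling_weight(person, feeling):
    """Use individual data where it exists; otherwise fall back to the average person."""
    return individual_profiles.get(person, {}).get(feeling, AVERAGE_PERSON[feeling])

print(feeling_weight("alice", "pain_from_noise"))  # 6.0 — tuned to the individual
print(feeling_weight("bob", "fear_of_death"))      # 9.0 — average-person fallback
```

The point of the fallback is simply that the average-person figures are the least-wrong default when nothing is known about the individual concerned.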
“If you don’t mind my asking, are you affiliated with MIRI? In what way are you involved in “the business of actually building AGI”?”
I have no connection with MIRI. My involvement in AGI is simply that I’m building an AGI system of my own design, implementing decades of my own work in linguistics (all unpublished). I have the bulk of the design finished on paper and am putting it together module by module. I have a componential analysis dictionary which reduces all concepts down to their fundamental components of meaning (20 years’ worth of hard analysis went into building that). I have designed data formats to store thoughts in a language of thought quite independent of any language used for input, all based on concept codes linked together in nets—the grammar of thought is, incidentally, universal, unlike spoken languages. I’ve got all the important pieces and it’s just a matter of assembling the parts that haven’t yet been put together. The actual reasoning, just like morality, is dead easy.
“The class of moral theories referred to as “utilitarianism” does, indeed, include exactly such frameworks as you describe (which would fall, roughly, into the category of “negative utilitarianism”). (The SEP article about consequentialism provides a useful taxonomy.)”
I read up on negative utilitarianism years ago and didn’t recognise it as being what I’m doing, but perhaps your links are to better sources of information.
“You are aware, I should hope, that this makes you sound very much like an archetypical crank?”
It also makes me sound like someone who has not been led up the wrong path by the crowd. I found something in linguistics that makes things magnitudes easier than the mess I’ve seen other people wrestling with.
“It will not, I hope, surprise you to discover that your objection is quite common and well-known, and just as commonly and easily disposed of.”
No it is not easily disposed of, but I’ll get to that in a moment. The thought experiment is wrong, and it gives philosophy a bad name: it repels people from the subject by making them write off the junk they’re reading as the work of half-wits, and it makes it harder to bring together all the people who need to be brought together to resolve this stuff in the interests of making sure AGI is safe. It is essential to be rigorous in constructing thought experiments and to word them in such a way as to force the right answers to be generated from them. If you want to use that particular experiment, it needs wording to state that none of the ill people are compatible with each other, but that the healthy person is close enough to each of them that his organs are compatible with all of them. Only by doing that will the reader believe you have anything to say that’s worth hearing—you have to show that it has been properly debugged.
So, what does come out of it when you frame it properly? You run straight into other issues which you also need to eliminate with careful wording, such as the ill people’s lifestyles being to blame for their health problems. The ill people also know that they’re on the way out if they can’t get a donor organ and don’t wish to inflict that on anyone else: no one decent wants a healthy person to die instead of them, and the guilt they would suffer from if it was done without their permission would ruin the rest of their life.

Also, people accept that they can get ill and die in natural ways, but they don’t accept that they should be chosen to die to save other people who are in that position—if we had to live in a world where that kind of thing happened, we would all live not just in fear of becoming ill and dying, but in fear of being selected for death while totally healthy, and that’s a much bigger kind of fear. We can pursue healthy lifestyles in the hope that it will protect us from the kind of damage that can result in organ failure, and that drives most of the fear away—if we live carefully we are much more confident that it won’t happen to us, and sure enough, it usually does happen to other people who haven’t been careful. To introduce a system where you can simply be selected for death randomly is much more alarming, causing inordinately more harm—that is the vast bulk of the harm involved in this thought experiment, and these slapdash philosophers completely ignore it while pretending they’re the ones who are being rigorous.

If you don’t take all of the harm into account, your analysis of the situation is a pile of worthless junk. All the harm must be weighed up, and it all has to be identified intelligently. This is again an example of why philosophers are generally regarded as fruitcakes.
TAG said this, not me.
[Correction: when I said “you said”, it was actually someone else’s comment that I quoted.]
It’s clear from the negative points that a lot of people don’t like hearing the truth. Let me spell this out even more starkly for them. What we have with the organ donor thought experiment is a situation where an approach to morality is being labelled as wrong as the result of a deeply misguided attack on it. It uses the normal human reactions to normal humans in this situation to make people feel that the calculation is wrong (based on their own instinctive reactions), but it claims that you’re going against the spirit of the thought experiment if the moral analysis works with normal humans—to keep to the spirit of the thought experiment you are required to dehumanise them, and once you’ve done that, those instinctive reactions are no longer being applied to the same thing at all.
Let’s look at the fully dehumanised version of the experiment. Instead of using people with a full range of feelings, we replace them with sentient machines. We have five sentient machines which have developed hardware faults, and we can repair them all by using parts from another machine that is working fine. They are sentient, but all they’re doing is enjoying a single sensation that goes on and on. If we dismantle one, we prevent it from going on enjoying things, but this enables the five other machines to go on enjoying that same sensation in its place. In this case, it’s fine to dismantle that machine to repair the rest. None of them have the capacity to feel guilt or fear and no one is upset by this decision. We may be upset that the decision has had to be made, but we feel that it is right. This is radically different from the human version of the experiment, but what the philosophers have done is use our reactions to the human version to make out that the proposed system of morality has failed, because they have made it dehumanise the people and turn them into the machine version of the experiment.
In short, you’re breaking the rules and coming to incorrect conclusions, and you’re doing it time and time again because you are failing to handle the complexity in the thought experiments. That is why there is so much junk being written about this subject, and it makes it very hard for anyone to find the few parts that may be valid.
Minus four points already from anonymous people who can provide no counter-argument. They would rather continue to go on being wrong than make a gain by changing their position to become right. That is the norm for humans, sadly.
One of us is.
Philosophy isn’t relevant to many areas of AGI, but it is relevant to what you are talking about here.
Learning to do something does entail having to do it. Knowing the jargon allows efficient communication with people who know more than you...if you countenance their existence.
That’s not deontology, because it’s not object level.
Someone who is days from death is not a “healthy person” as required. You may have been mistaken about other people’s mistakenness before.
“Philosophy isn’t relevant to many areas of AGI, but it is relevant to what you are talking about here.”
Indeed it is relevant here, but it is also relevant to AGI in a bigger way, because AGI is a philosopher, and the vast bulk of what we want it to do (applied reasoning) is philosophy. AGI will do philosophy properly, eliminating the mistakes. It will do the same for maths and physics where there are also some serious mistakes waiting to be fixed.
“Learning to do something does entail having to do it. Knowing the jargon allows efficient communication with people who know more than you...if you countenance their existence.”
The problem with it is the proliferation of bad ideas—no one should have to become an expert in the wide range of misguided issues if all they need is to know how to put moral control into AGI. I have shown how it should be done, and I will tear to pieces any ill-founded objection that is made to it. If an objection comes up that actually works, I will abandon my approach if I can’t refine it to fix the fault.
“That’s not deontology, because it’s not object level.”
Does it matter what it is if it works? Show me where it fails. Get a team together and throw your best objection at me. If my approach breaks, we all win—I have no desire to cling to a disproven idea. If it stands up, you get two more goes. And if it stands up after three goes, I expect you to admit that it may be right and to agree that I might just have something.
“Someone who is days from death is not a “healthy person” as required. You may have been mistaken about other people’s mistakenness before.”
Great—you would wait as late as possible and transfer organs before multiple organ failure sets in. The important point is not the timing, but that it would be more moral than taking them from the healthy person.