Most people like me who are involved in the business of actually building AGI
If you don’t mind my asking, are you affiliated with MIRI? In what way are you involved in “the business of actually building AGI”?
You say that my approach is essentially utilitarianism, but no—morality isn’t about maximising happiness, although it certainly should not block such maximisation for those who want to pursue it. Morality’s role is to minimise the kinds of harm which don’t open the way to the pursuit of happiness. Suffering is bad, and morality is about trying to eliminate it, but not where that suffering is out-gunned by pleasures which make the suffering worthwhile for the sufferers.
The class of moral theories referred to as “utilitarianism” does, indeed, include exactly such frameworks as you describe (which would fall, roughly, into the category of “negative utilitarianism”). (The SEP article about consequentialism provides a useful taxonomy.)
You also say that I don’t embrace any kind of deontology, but I do, and I call it computational morality. I’ve set out how it works, and it’s all a matter of following rules which maximise the probability that any decision is the best one that could be made based on the available information. You may already use some other name for it which I don’t know yet, but it is not utilitarianism.
Your “computational morality” is most assuredly not a deontological moral theory, as it relies on consequences (namely, harm to certain sorts of entities) as the basis for its evaluations. Your framework, though it is not quite coherent enough to pin down precisely, may roughly be categorized as a “rule utilitarianism”. (Rule-consequentialist moral theories—of which rule utilitarianisms are, by definition, a subclass—do tend to be easy to confuse with deontological views, but the differences are critical, and have to do, again, with the fundamental basis for moral evaluations.)
I’m an independent thinker who’s worked for decades on linguistics and AI in isolation, finding my own solutions for all the problems that crop up. … I’ve made this progress by avoiding spending any time looking at what other people are doing.
You are aware, I should hope, that this makes you sound very much like an archetypical crank?
[stuff about the organ transplant scenario]
It will not, I hope, surprise you to discover that your objection is quite common and well-known, and just as commonly and easily disposed of. Again I refer you to the sequences, as well as to this excellent Less Wrong post.
I googled “utility monstering” and there wasn’t a single result for it—I didn’t realise I had to change the ending on it. Now that I know what it means though, I can’t see why you brought it up. You said, “You don’t embrace any kind of deontology, but deontology can prevent Omelas, Utility Monstering, etc.” I’d already made it clear that feelings are different for different individuals, so either that means I’m using some kind of deontology already or something else that does the same job. There needs to be a database of knowledge of feelings, providing information on the average person, but data also needs to be collected on individuals to tune the calculations to them more accurately. Where you don’t know anything about the individual, you have to go by the database of the average person and apply that, as it is more likely to be right than any other database that you might randomly select.
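To make the fallback idea concrete, here is a minimal sketch of how such a lookup might work. It is purely illustrative—the profile fields, names and numbers are invented for the example, not taken from any real system:

```python
from dataclasses import dataclass

@dataclass
class FeelingsProfile:
    """How strongly a given person is assumed to experience various feelings."""
    fear_of_death: float
    guilt_sensitivity: float
    pain_sensitivity: float

# Entry for the statistically average person (all numbers invented).
AVERAGE_PERSON = FeelingsProfile(fear_of_death=1.0,
                                 guilt_sensitivity=1.0,
                                 pain_sensitivity=1.0)

# Whatever individual data has actually been collected, keyed by person ID.
known_individuals: dict[str, FeelingsProfile] = {
    "alice": FeelingsProfile(fear_of_death=1.3,
                             guilt_sensitivity=0.8,
                             pain_sensitivity=1.1),
}

def profile_for(person_id: str) -> FeelingsProfile:
    """Use individual data where it exists; otherwise fall back to the average."""
    return known_individuals.get(person_id, AVERAGE_PERSON)

# With no data on "bob", the calculation has to run on the average-person profile.
assert profile_for("bob") is AVERAGE_PERSON
assert profile_for("alice").fear_of_death == 1.3
```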
“If you don’t mind my asking, are you affiliated with MIRI? In what way are you involved in “the business of actually building AGI”?”
I have no connection with MIRI. My involvement in AGI is simply that I’m building an AGI system of my own design, implementing decades of my own work in linguistics (all unpublished). I have the bulk of the design finished on paper and am putting it together module by module. I have a componential analysis dictionary which reduces all concepts down to their fundamental components of meaning (20 years’ worth of hard analysis went into building that). I have designed data formats to store thoughts in a language of thought quite independent of any language used for input, all based on concept codes linked together in nets—the grammar of thought is, incidentally, universal, unlike spoken languages. I’ve got all the important pieces and it’s just a matter of assembling the parts that haven’t yet been put together. The actual reasoning, just like morality, is dead easy.
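Since the design is unpublished, the following is no more than a guess at what “concept codes linked together in nets” might look like if written down as data structures—every type, field and example value here is hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A language-independent concept code, broken into more basic components."""
    code: int                                             # arbitrary ID, e.g. for "give"
    components: list[int] = field(default_factory=list)   # codes for its primitives

@dataclass
class Link:
    """One edge in a net of concepts: a relation pointing at another code."""
    relation: int                                          # relations are also concept codes
    target: int

@dataclass
class ThoughtNode:
    """One node of stored thought, independent of whatever language was input."""
    concept: int
    links: list[Link] = field(default_factory=list)

# A toy net for "Alice gives Bob a book", using made-up codes.
GIVE, ALICE, BOB, BOOK, AGENT, RECIPIENT, PATIENT = range(7)
thought = ThoughtNode(concept=GIVE, links=[Link(AGENT, ALICE),
                                           Link(RECIPIENT, BOB),
                                           Link(PATIENT, BOOK)])
```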
“The class of moral theories referred to as “utilitarianism” does, indeed, include exactly such frameworks as you describe (which would fall, roughly, into the category of “negative utilitarianism”). (The SEP article about consequentialism provides a useful taxonomy.)”
I read up on negative utilitarianism years ago and didn’t recognise it as being what I’m doing, but perhaps your links are to better sources of information.
“You are aware, I should hope, that this makes you sound very much like an archetypical crank?”
It also makes me sound like someone who has not been led up the wrong path by the crowd. I found something in linguistics that makes things orders of magnitude easier than the mess I’ve seen other people wrestling with.
“It will not, I hope, surprise you to discover that your objection is quite common and well-known, and just as commonly and easily disposed of.”
No, it is not easily disposed of, but I’ll get to that in a moment. The thought experiment is badly constructed, and it gives philosophy a bad name, repelling people from it by making them write off the junk they’re reading as the work of half-wits, and making it harder to bring together all the people who need to be brought together to resolve this stuff in the interests of making sure AGI is safe. It is essential to be rigorous in constructing thought experiments and to word them in such a way as to force the right answers to be generated from them. If you want to use that particular experiment, it needs wording to state that none of the ill people are compatible with each other, but that the healthy person is close enough to each of them for his organs to be compatible with them. It’s only by doing that that the reader will believe you have anything to say that’s worth hearing—you have to show that the experiment has been properly debugged.
So, what does come out of it when you frame it properly? You run straight into other issues which you also need to eliminate with careful wording, such as blaming the ill people’s lifestyles for their health problems. The ill people also know that they’re on the way out if they can’t get a donor organ, and they don’t wish to inflict that on anyone else: no one decent wants a healthy person to die instead of them, and the guilt they would suffer from if it was done without their permission would ruin the rest of their lives.

Also, people accept that they can get ill and die in natural ways, but they don’t accept that they should be chosen to die to save other people who are in that position. If we had to live in a world where that kind of thing happened, we would all live not just in fear of becoming ill and dying, but in fear of being selected for death while totally healthy, and that’s a much bigger kind of fear. We can pursue healthy lifestyles in the hope that this will protect us from the kind of damage that can result in organ failure, and that drives most of the fear away—if we live carefully we are much more confident that it won’t happen to us, and sure enough, it usually does happen to people who haven’t been careful. To introduce a system where you can simply be selected for death at random is far more alarming, causing inordinately more harm—that is the vast bulk of the harm involved in this thought experiment, and these slapdash philosophers completely ignore it while pretending they’re the ones who are being rigorous.

If you don’t take all of the harm into account, your analysis of the situation is a pile of worthless junk. All the harm must be weighed up, and it all has to be identified intelligently. This is again an example of why philosophers are generally regarded as fruitcakes.
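To show the shape of that point numerically, here is a toy harm accounting. Every figure is invented purely for illustration—no claim is being made about the real magnitudes, only that a small ongoing fear multiplied across a whole population can outweigh the lives saved:

```python
# Illustrative-only harm accounting for the transplant scenario (all figures invented).

population = 10_000_000       # hypothetical society
net_lives_saved = 500         # hypothetical net lives saved per year by harvesting
harm_per_death = 1_000        # arbitrary harm units assigned to one premature death
fear_per_person = 0.1         # arbitrary yearly harm from fear of being selected

harm_avoided = net_lives_saved * harm_per_death   # 500,000 units
fear_inflicted = population * fear_per_person     # 1,000,000 units

print(harm_avoided < fear_inflicted)  # True: on these numbers the policy does net harm
```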
It’s clear from the negative points that a lot of people don’t like hearing the truth. Let me spell this out even more starkly for them. What we have with the organ donor thought experiment is a situation where an approach to morality is being labelled as wrong as the result of a deeply misguided attack on it. The attack uses the normal human reactions to normal humans in this situation to make people feel that the calculation is wrong (based on their own instinctive reactions), yet it claims that you’re going against the spirit of the thought experiment if the moral analysis works with normal humans—to keep to the spirit of the thought experiment you are required to dehumanise them, and once you’ve done that, those instinctive reactions are no longer being applied to the same thing at all.
Let’s look at the fully dehumanised version of the experiment. Instead of using people with a full range of feelings, we replace them with sentient machines. We have five sentient machines which have developed hardware faults, and we can repair them all by using parts from another machine that is working fine. They are sentient, but all they’re doing is enjoying a single sensation that goes on and on. If we dismantle one, we prevent it from going on enjoying things, but this enables the five other machines to go on enjoying that same sensation in its place. In this case, it’s fine to dismantle that machine to repair the rest. None of them has the capacity to feel guilt or fear, and no one is upset by this decision. We may be upset that the decision has had to be made, but we feel that it is right. This is radically different from the human version of the experiment, but what the philosophers have done is use our reactions to the human version to make out that the proposed system of morality has failed, because they have made it dehumanise the people and turn them into the machine version of the experiment.
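The same kind of toy accounting as above shows why the machine version comes out differently—again, the numbers are invented for illustration only:

```python
# Toy comparison for the dehumanised (machine) version: no fear, no guilt,
# and no wider population living in dread of being selected. Figures are invented.

enjoyment_per_machine = 1      # arbitrary units of ongoing pleasant sensation
machines_repaired = 5
machines_dismantled = 1

gain = machines_repaired * enjoyment_per_machine     # 5 units of enjoyment restored
loss = machines_dismantled * enjoyment_per_machine   # 1 unit of enjoyment ended
fear_and_guilt = 0                                   # none of them can feel either

print(gain - loss - fear_and_guilt)  # 4: dismantling one to repair five is a net gain
```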
In short, you’re breaking the rules and coming to incorrect conclusions, and you’re doing it time and time again because you are failing to handle the complexity in the thought experiments. That is why there is so much junk being written about this subject, and it makes it very hard for anyone to find the few parts that may be valid.
Minus four points already from anonymous people who can provide no counter-argument. They would rather go on being wrong than make a gain by changing their position to become right. That is the norm for humans, sadly.
Au contraire: here is the Wikipedia article on utility monsters, and here is some guy’s blog post about utility monsters. This was easily found via Google.
“You don’t embrace any kind of deontology, but deontology can prevent Omelas, Utility Monstering, etc.”—TAG said this, not me.
[Correction: when I said “you said”, it was actually someone else’s comment that I quoted.]