Your One True Theory is basically utilitarianism. You don’t embrace any kind of deontology, but deontology can prevent Omelas, Utility Monstering, etc.
I can see straight away that we’re running into a jargon barrier. (And incidentally, Google has never even heard of utility monstering.) Most people like me who are involved in the business of actually building AGI have a low opinion of philosophy and have not put any time into learning its specialist vocabulary. I have a higher opinion of philosophy than most though (and look forward to the day when AGI turns philosophy from a joke into the top-level branch of science that it should be), but I certainly do have a low opinion of most philosophers, and I haven’t got time to read through large quantities of junk in order to find the small amount of relevant stuff that may be of high quality—we’re all tied up in a race to get AGI up and running, and moral controls are a low priority for most of us during that phase. Indeed, for many teams working for dictatorships, morality isn’t something they will ever want in their systems at all, which is why it’s all the more important that teams which are trying to build safe AGI are left as free as possible to spend their time building it rather than wasting their time filling their heads with bad philosophy and becoming experts in its jargon. There is a major disconnect here, and while I’m prepared to learn the jargon to a certain degree where the articles I’m reading are rational and apposite, I’m certainly not going to make the mistake of learning to speak in jargon, because that only serves to put up barriers to understanding which shut out the other people who most urgently need to be brought into the discussion.
Clearly, though, jargon has an important role in that it avoids continual repetition of many of the important nuts and bolts of the subject, but there needs to be a better way into this which reduces the workload by enabling newcomers to avoid all the tedious junk so that they can get to the cutting-edge ideas by as direct a route as possible. I spent hours yesterday reading through pages of highly respected bilge, and because I have more patience than most people, I will likely spend the next few days reading through more of the same misguided stuff, but you simply can’t expect everyone in this business to wade through a fraction as much as I have—they have much higher priorities and simply won’t do it.
You say that my approach is essentially utilitarianism, but no—morality isn’t about maximising happiness, although it certainly should not block such maximisation for those who want to pursue it. Morality’s role is to minimise the kinds of harm which don’t open the way to the pursuit of happiness. Suffering is bad, and morality is about trying to eliminate it, but not where that suffering is out-gunned by pleasures which make the suffering worthwhile for the sufferers.
You also say that I don’t embrace any kind of deontology, but I do, and I call it computational morality. I’ve set out how it works, and it’s all a matter of following rules which maximise the probability that any decision is the best one that could be made based on the available information. You may already use some other name for it which I don’t know yet, but it is not utilitarianism.
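To give a concrete sense of the shape of that rule, here is a minimal sketch of how such a calculation could be expressed; the harm figures, probabilities and function names are invented for illustration and are not part of any actual system:

```python
# Minimal sketch: pick the action whose expected harm, weighted by how
# probable each outcome is on the available information, is lowest.
# The outcome lists and probabilities below are illustrative placeholders.

def expected_harm(outcomes):
    """outcomes: list of (probability, harm) pairs for one candidate action."""
    return sum(p * harm for p, harm in outcomes)

def best_action(candidates):
    """candidates: dict mapping action name -> list of (probability, harm)."""
    return min(candidates, key=lambda action: expected_harm(candidates[action]))

if __name__ == "__main__":
    candidates = {
        "act A": [(0.9, 1.0), (0.1, 10.0)],   # usually mild harm, rarely severe
        "act B": [(0.5, 3.0), (0.5, 3.0)],    # always moderate harm
    }
    print(best_action(candidates))  # -> "act A" (expected harm 1.9 vs 3.0)
```

The point of the sketch is only that the decision is the best available bet given the information to hand, not a guarantee of the best outcome.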
I’m an independent thinker who’s worked for decades on linguistics and AI in isolation, finding my own solutions for all the problems that crop up. I have a system which is now beginning to provide natural language programming capability. I’ve made this progress by avoiding spending any time looking at what other people are doing. With this morality business though, it bothers me that other people are building what will be highly biased systems which could end up wiping everyone out—we need to try to get everyone who’s involved in this together and communicate in normal language, systematically going through all the proposals to find out where they break. Now, you may think you’ve already collectively done that work for them, and that may be the case—it’s possible that you’ve got it right and that there are no easy answers, but how many people building AGI have the patience to do tons of unrewarding reading instead of being given a direct tour of the crunch issues?
Here’s an example of what actually happens. I looked up Utilitarianism to make sure it means what I’ve always taken it to mean, and it does. But what did I find? This: http://www.iep.utm.edu/util-a-r/#H2 Now, this illustrates why philosophy has such a bad reputation—the discussion is dominated by mistakes which are never owned up to. Take the middle example:-
If a doctor can save five people from death by killing one healthy person and using that person’s organs for life-saving transplants, then act utilitarianism implies that the doctor should kill the one person to save five.
This one keeps popping up all over the place, but you can take organs from the least healthy of the people needing organs just before he pops his clogs and use them to save all the others without having to remove anything from the healthy person at all.
The other examples above and below it are correct, so the conclusion underneath is wrong: “Because act utilitarianism approves of actions that most people see as obviously morally wrong, we can know that it is a false moral theory.” This is why expecting us all to read through tons of error-ridden junk is not the right approach. You have to reduce the required reading material to a properly thought-out set of documents which have been fully debugged. But perhaps you already have that here somewhere?
It shouldn’t even be necessary though to study the whole field in order to explore any one proposal in isolation: if that proposal is incorrect, it can be dismissed (or sent off for reworking) simply by showing up a flaw in it. If no flaw shows up, it should be regarded as potentially correct, and in the absence of any rivals that acquire that same status, it should be recommended for installation into AGI, because AGI running without it will be much more dangerous.
Au contraire: here is the Wikipedia article on utility monsters, and here is some guy’s blog post about utility monsters. This was easily found via Google.
“Most people like me who are involved in the business of actually building AGI…”

If you don’t mind my asking, are you affiliated with MIRI? In what way are you involved in “the business of actually building AGI”?

“You say that my approach is essentially utilitarianism, but no—morality isn’t about maximising happiness…”

The class of moral theories referred to as “utilitarianism” does, indeed, include exactly such frameworks as you describe (which would fall, roughly, into the category of “negative utilitarianism”). (The SEP article about consequentialism provides a useful taxonomy.)

“You also say that I don’t embrace any kind of deontology, but I do, and I call it computational morality…”

Your “computational morality” is most assuredly not a deontological moral theory, as it relies on consequences (namely, harm to certain sorts of entities) as the basis for its evaluations. Your framework, though it is not quite coherent enough to pin down precisely, may roughly be categorized as a “rule utilitarianism”. (Rule-consequentialist moral theories—of which rule utilitarianisms are, by definition, a subclass—do tend to be easy to confuse with deontological views, but the differences are critical, and have to do, again, with the fundamental basis for moral evaluations.)

“I’m an independent thinker who’s worked for decades on linguistics and AI in isolation, finding my own solutions for all the problems that crop up. … I’ve made this progress by avoiding spending any time looking at what other people are doing.”

You are aware, I should hope, that this makes you sound very much like an archetypical crank?

[stuff about the organ transplant scenario]

It will not, I hope, surprise you to discover that your objection is quite common and well-known, and just as commonly and easily disposed of. Again I refer you to the sequences, as well as to this excellent Less Wrong post.
“Au contraire: here is the Wikipedia article on utility monsters, and here is some guy’s blog post about utility monsters. This was easily found via Google.”
I googled “utility monstering” and there wasn’t a single result for it—I didn’t realise I had to change the ending on it. Now that I know what it means though, I can’t see why you brought it up. You said, “You don’t embrace any kind of deontology, but deontology can prevent Omelas, Utility Monstering, etc.” I’d already made it clear that feelings are different for different individuals, so either that means I’m using some kind of deontology already or something else that does the same job. There needs to be a database of knowledge of feelings, providing information on the average person, but data also needs to be collected on individuals to tune the calculations to them more accurately. Where you don’t know anything about the individual, you have to go by the database of the average person and apply that, as it is more likely to be right than any other database you might randomly select.
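A minimal sketch of that fallback arrangement, assuming a simple keyed store (the profile fields and numbers are invented for illustration):

```python
# Sketch of the fallback idea: use individually collected data where it
# exists, otherwise fall back to the average-person profile. The profile
# fields and values are invented for illustration.

AVERAGE_PERSON = {"fear_of_death": 0.9, "enjoyment_of_music": 0.6}

individual_profiles = {
    "alice": {"fear_of_death": 0.95, "enjoyment_of_music": 0.2},
    # no entry for "bob": nothing is known about him yet
}

def feeling_weight(person, feeling):
    """Return the best available estimate for how strongly this person feels this."""
    profile = individual_profiles.get(person, AVERAGE_PERSON)
    return profile.get(feeling, AVERAGE_PERSON[feeling])

print(feeling_weight("alice", "enjoyment_of_music"))  # 0.2 (individual data)
print(feeling_weight("bob", "enjoyment_of_music"))    # 0.6 (average-person fallback)
```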
“If you don’t mind my asking, are you affiliated with MIRI? In what way are you involved in “the business of actually building AGI”?”
I have no connection with MIRI. My involvement in AGI is simply that I’m building an AGI system of my own design, implementing decades of my own work in linguistics (all unpublished). I have the bulk of the design finished on paper and am putting it together module by module. I have a componential analysis dictionary which reduces all concepts down to their fundamental components of meaning (20 years’ worth of hard analysis went into building that). I have designed data formats to store thoughts in a language of thought quite independent of any language used for input, all based on concept codes linked together in nets—the grammar of thought is, incidentally, universal, unlike spoken languages. I’ve got all the important pieces and it’s just a matter of assembling the parts that haven’t yet been put together. The actual reasoning, just like morality, is dead easy.
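To illustrate the general idea only—the codes, relations and structures below are invented and are not the actual formats—here is a toy version of concepts reduced to component codes, with a thought stored as a net of linked concept codes rather than as words in any particular language:

```python
# Invented illustration: each concept decomposes into component codes, and a
# "thought" is a small net of (relation, from_concept, to_concept) links that
# is independent of whichever language the input arrived in.

componential_dictionary = {
    "C_WOMAN": {"C_HUMAN", "C_FEMALE", "C_ADULT"},
    "C_GIRL":  {"C_HUMAN", "C_FEMALE", "C_CHILD"},
}

thought = [
    ("AGENT",  "C_GIVE", "C_WOMAN"),  # the woman does the giving
    ("OBJECT", "C_GIVE", "C_BOOK"),   # the thing given is a book
]

def shared_components(a, b):
    """Components two concepts have in common, per the toy dictionary."""
    return componential_dictionary.get(a, set()) & componential_dictionary.get(b, set())

print(shared_components("C_WOMAN", "C_GIRL"))  # {'C_HUMAN', 'C_FEMALE'}
```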
“The class of moral theories referred to as “utilitarianism” does, indeed, include exactly such frameworks as you describe (which would fall, roughly, into the category of “negative utilitarianism”). (The SEP article about consequentialism provides a useful taxonomy.)”
I read up on negative utilitarianism years ago and didn’t recognise it as being what I’m doing, but perhaps your links are to better sources of information.
“You are aware, I should hope, that this makes you sound very much like an archetypical crank?”
It also makes me sound like someone who has not been led up the wrong path by the crowd. I found something in linguistics that makes things orders of magnitude easier than the mess I’ve seen other people wrestling with.
“It will not, I hope, surprise you to discover that your objection is quite common and well-known, and just as commonly and easily disposed of.”
No, it is not easily disposed of, but I’ll get to that in a moment. The thought experiment is wrong and it gives philosophy a bad name, repelling people from it by making them write off the junk they’re reading as the work of half-wits and making it harder to bring together all the people who need to be brought together to try to resolve all this stuff in the interests of making sure AGI is safe. It is essential to be rigorous in constructing thought experiments and to word them in such a way as to force the right answers to be generated from them. If you want to use that particular experiment, it needs wording to state that none of the ill people are compatible with each other, but that the healthy person is close enough to each of them that his organs are compatible with them. It’s only by doing that that the reader will believe you have anything to say that’s worth hearing—you have to show that it has been properly debugged.
So, what does come out of it when you frame it properly? You run straight into other issues which you also need to eliminate with careful wording, such as blaming lifestyle for their health problems. The ill people also know that they’re on the way out if they can’t get a donor organ and don’t wish to inflict that on anyone else: no one decent wants a healthy person to die instead of them, and the guilt they would suffer from if it was done without their permission would ruin the rest of their life. Also, people accept that they can get ill and die in natural ways, but they don’t accept that they should be chosen to die to save other people who are in that position—if we had to live in a world where that kind of thing happened, we would all live not just in fear of becoming ill and dying, but in fear of being selected for death while totally healthy, and that’s a much bigger kind of fear. We can pursue healthy lifestyles in the hope that it will protect us from the kind of damage that can result in organ failure, and that drives most of the fear away—if we live carefully we are much more confident that it won’t happen to us, and sure enough, it usually does happen to other people who haven’t been careful. To introduce a system where you can simply be selected for death randomly is much more alarming, causing inordinately more harm—that is the vast bulk of the harm involved in this thought experiment, and these slapdash philosophers completely ignore it while pretending they’re the ones who are being rigorous. If you don’t take all of the harm into account, your analysis of the situation is a pile of worthless junk. All the harm must be weighed up, and it all has to be identified intelligently. This is again an example of why philosophers are generally regarded as fruitcakes.
TAG said this, not me.
[Correction: when I said “you said”, it was actually someone else’s comment that I quoted.]
It’s clear from the negative points that a lot of people don’t like hearing the truth. Let me spell this out even more starkly for them. What we have with the organ donor thought experiment is a situation where an approach to morality is being labelled as wrong as the result of a deeply misguided attack on it. It uses the normal human reactions to normal humans in this situation to make people feel that the calculation is wrong (based on their own instinctive reactions), but it claims that you’re going against the spirit of the thought experiment if the moral analysis works with normal humans—to keep to the spirit of the thought experiment you are required to dehumanise them, and once you’ve done that, those instinctive reactions are no longer being applied to the same thing at all.
Let’s look at the fully dehumanised version of the experiment. Instead of using people with a full range of feelings, we replace them with sentient machines. We have five sentient machines which have developed hardware faults, and we can repair them all by using parts from another machine that is working fine. They are sentient, but all they’re doing is enjoying a single sensation that goes on and on. If we dismantle one, we prevent it from going on enjoying things, but this enables the five other machines to go on enjoying that same sensation in its place. In this case, it’s fine to dismantle that machine to repair the rest. None of them have the capacity to feel guilt or fear and no one is upset by this decision. We may be upset that the decision has had to be made, but we feel that it is right. This is radically different from the human version of the experiment, but what the philosophers have done is use our reactions to the human version to make out that the proposed system of morality has failed, because they have made it dehumanise the people and turn them into the machine version of the experiment.
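To see how differently the sums come out once all the harm is counted, here is a toy calculation; every number in it is invented purely to illustrate that the human version carries large extra harms (guilt in the survivors, society-wide fear of being selected for death while healthy) which the machine version lacks:

```python
# Toy numbers only: the machine version is a straightforward trade of one
# stream of enjoyment for five, while the human version adds large extra
# harms that must also be weighed.

def net_result(gains, harms):
    return sum(gains) - sum(harms)

# Machine version: five repaired machines resume enjoyment, one stream lost.
machine_version = net_result(gains=[5 * 1.0], harms=[1 * 1.0])

# Human version: five lives saved, but one person killed, plus guilt in the
# five, plus widespread fear across everyone living under such a policy.
human_version = net_result(gains=[5 * 1.0],
                           harms=[1 * 1.0, 5 * 0.5, 1000 * 0.01])

print(machine_version)  # 4.0  -> dismantling one machine comes out ahead
print(human_version)    # -8.5 -> the same act comes out behind once all harm is counted
```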
In short, you’re breaking the rules and coming to incorrect conclusions, and you’re doing it time and time again because you are failing to handle the complexity in the thought experiments. That is why there is so much junk being written about this subject, and it makes it very hard for anyone to find the few parts that may be valid.
Minus four points already from anonymous people who can provide no counter-argument. They would rather go on being wrong than make a gain by changing their position to become right. That is the norm for humans, sadly.
“I can see straight away that we’re running into a jargon barrier.”

One of us is.

“Most people like me who are involved in the business of actually building AGI have a low opinion of philosophy and have not put any time into learning its specialist vocabulary.”

Philosophy isn’t relevant to many areas of AGI, but it is relevant to what you are talking about here.

“I’m certainly not going to make the mistake of learning to speak in jargon, because that only serves to put up barriers to understanding which shut out the other people who most urgently need to be brought into the discussion.”

Learning to do something doesn’t entail having to do it. Knowing the jargon allows efficient communication with people who know more than you...if you countenance their existence.

“I’ve set out how it works, and it’s all a matter of following rules which maximise the probability that any decision is the best one that could be made based on the available information.”

That’s not deontology, because it’s not object level.

“you can take organs from the least healthy of the people needing organs just before he pops his clogs”

Someone who is days from death is not a “healthy person” as required. You may have been mistaken about other people’s mistakenness before.
“Philosophy isn’t relevant to many areas of AGI, but it is relevant to what you are talking about here.”
Indeed it is relevant here, but it is also relevant to AGI in a bigger way, because AGI is a philosopher, and the vast bulk of what we want it to do (applied reasoning) is philosophy. AGI will do philosophy properly, eliminating the mistakes. It will do the same for maths and physics where there are also some serious mistakes waiting to be fixed.
“Learning to do something doesn’t entail having to do it. Knowing the jargon allows efficient communication with people who know more than you...if you countenance their existence.”
The problem with it is the proliferation of bad ideas—no one should have to become an expert in the wide range of misguided issues if all they need is to know how to put moral control into AGI. I have shown how it should be done, and I will tear to pieces any ill-founded objection that is made to it. If an objection comes up that actually works, I will abandon my approach if I can’t refine it to fix the fault.
“That’s not deontology, because it’s not object level.”
Does it matter what it is if it works? Show me where it fails. Get a team together and throw your best objection at me. If my approach breaks, we all win—I have no desire to cling to a disproven idea. If it stands up, you get two more goes. And if it stands up after three goes, I expect you to admit that it may be right and to agree that I might just have something.
“Someone who is days from death is not a “healthy person” as required. You may have been mistaken about other people’s mistakenness before.”
Great—you would wait as late as possible and transfer organs before multiple organ failure sets in. The important point is not the timing, but that it would be more moral than taking them from the healthy person.