The other way around is also of some concern:
An intelligent machine might make one of its first acts the assassination of other machine intelligence researchers—unless it is explicitly told not to do that. I figure we are going to want machines that will obey the law. That should be part of any sensible machine morality proposal.
As you can see, RobinZ, I’m trying to cure a particular kind of confusion here. The way people deploy their mental categories has consequences. The problem here is that “should” is already bound fairly tightly to certain concepts, no matter what sort of verbal definitions people think they’re deploying, and if they expand the verbal label beyond that, it has consequences for e.g. how they think aliens and AIs will work, and consequences for how they emotionally experience their own moralities.
It is odd that you apparently think you are using the conventional definition of “should”—when several people are telling you that your use of “should” and “ought” is counter-intuitive.
Most people are familiar with the idea that there are different human cultures, with somewhat different notions of right and wrong—and that “should” is often used in the context of the local moral climate.
For example:
If the owner of the restaurant serves you himself, you should still tip him;
You should not put your elbows on the table while you are eating;
Women should curtsey—“a little bob is quite sufficient”.
To be fair, there are several quite distinct ways in which ‘should’ is typically used, and Eliezer’s usage is one of them. It is used more or less universally by children, and as people mature it tends to be supplanted or supplemented by the ‘local context’ definition you mention and/or the ‘best action for the agent, given his preferences’ definition. In Eliezer’s case he seems instead to have evolved and philosophically refined the child version. (I hasten to add that I imply only that he matured his moral outlook in ways other than by transitioning his usage of those particular words in the most common manner.)
I can understand such usage. However, we have things like: “I’m trying to cure a particular kind of confusion here”. The confusion he is apparently talking about is the conventional view of “ought” and “should”—and it doesn’t need “curing”.
In fact, it helps us to understand the moral customs of other cultures—rather than labeling them as being full of “bad” heathens—who need to be brought into the light.
My use is not counterintuitive. The fact that it is the intuitive use—that only humans ever think of what they should do in the ordinary sense, while aliens do what is babyeating; that looking at a paperclipper’s actions conveys no more information about what we should do than looking at evolution or a rockslide—is counterintuitive.
If you tell me that “should” has a usage which is unrelated to “right”, “good”, and “ought”, then that usage could be adapted for aliens.
One of the standard usages is “doing this will most enhance your utility”. As in “you should kill that motherf@#$%”. This is distinct from ‘right’ and ‘good’ although ‘ought’ is used in the same way, albeit less frequently. It is advice, rather than exhortation.
Indeed. “The Pebblesorters should avoid making piles of 1,001 stones” makes perfect sense.
“Should” and “ought” actually have strong connotations of societal morality.
Should you rob the bank? Should you have sex with the minor? Should you confess to the crime?
Your personal utility is one thing—but “should” and “ought” often have more to do with what society thinks of your actions.
Probably not.
Probably not here.
Hell no. “The Fifth” is the only significant law-item that I’m explicitly familiar with. And I’m not even American.
More often, what you want society to think of people’s actions (either as a signal or as persuasion; I wonder which category my answers above fit into).
It’s counterintuitive to me—and I’m not the only one—if you look at the other comments here.
Aliens could have the “right”, “good”, “ought” and “should” concept cluster—just as some other social animals can, or other tribes, or humans at other times.
Basically, there are a whole bunch of possible and actual moral frameworks—and these words normally operate relative to the framework under consideration.
There are some people who think that “right” and “wrong” have some kind of universal moral meaning. However most of those people are religious, and think morality comes straight from god—or some such nonsense.
They have a universal meaning. They are fixed concepts. If you are talking about a different concept, you should use a different word.
Different people seem to find different parts of this counterintuitive.
Not how natural language works.
Do you mean it would be right and good for him to use a different word, or that it would be more effective communication if he did so?
To clarify, people agree that the moral “right” and “wrong” categories contain things that are moral and immoral respectively—but they disagree with each other about which actions are moral and which are immoral.
For example, some people think abortion is immoral. Other people think eating meat is immoral. Other people think homosexual union is immoral—and so on.
These opinions are not widely agreed upon—yet many of those who hold them defend them passionately.
And some people simply disagree with you. Some people say, for example, that ‘they don’t have a universal meaning’. They assert that ‘should’ claims are not claims that have truth value, and allow that the value depends on the person speaking. They find this quite intuitive and even go so far as to create words such as ‘normative’ and ‘subjective’ to describe these concepts when talking to each other.
It is not likely that aliens, for example, have the concept ‘should’ at all, and so it is likely that other words will be needed. The Babyeaters, as described, seem to be using a concept sufficiently similar as to be within the variability of use within humans. ‘Should’ and ‘good’ would not be particularly poor translations; about the same as using, say, ‘tribe’ or ‘herd’.
Okay, then these are the people I’m arguing against, as a view of morality. I’m arguing that, say, dragging 6-year-olds off the train tracks, as opposed to eating them for lunch, is every bit as much uniquely the right answer as it looks; and that the Space Cannibals are every bit as awful as they look; and that the aliens do not have a different view of the subject, but simply a view of a different subject.
As an intuition pump, it might help to imagine someone saying that “truth” has different values in different places and that we want to parameterize it by true-here and true-there. If Islam has a sufficiently different criterion for using the word “true”, i.e. “recorded in the Koran”, then we just want to say “recorded in the Koran”, not use the word “true”.
Another way of looking at it is that if we are not allowed to use the word “right” or any of its synonyms, at all, a la Empty Labels and Rationalist Taboo and Replace the Symbol with the Substance, then the new language that we are forced to use will no longer create the illusion that we and the aliens are talking about the same thing. (Like forcing people from two different spiritual traditions to say what they think exists without using the word “God”, thus eliminating the illusion of agreement.) And once you realize that we and the aliens are not talking about the same thing at all, and have no disagreement over the same subject, you are no longer tempted to try to relativize morality.
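(A toy sketch of the taboo move, purely for concreteness: the function names below are invented stand-ins, not anyone’s actual criteria. The point is only that once the shared label is removed, the two evaluators are visibly computing different functions, rather than disputing the value of a single one.)

```python
# Hypothetical illustration: after tabooing the shared word "right",
# the human criterion and the alien criterion are just two different
# predicates over actions. There is no single question they disagree on.

def drags_children_off_train_tracks(action: str) -> bool:
    """Invented stand-in for (part of) the human criterion."""
    return action == "drag the six-year-old off the tracks"

def is_babyeating(action: str) -> bool:
    """Invented stand-in for (part of) the Babyeater criterion."""
    return action == "eat the six-year-old"

for action in ["drag the six-year-old off the tracks", "eat the six-year-old"]:
    # The outputs differ, but that is two functions evaluated on one input,
    # not one function whose value is in dispute.
    print(action, drags_children_off_train_tracks(action), is_babyeating(action))
```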
It’s all very well to tell me that I should stop arguing over definitions, but I seem to be at a loss to make people understand what I am trying to say here. You are, of course, welcome to tell me that this is my fault; but it is somewhat disconcerting to find everyone saying that they agree with me, while continuing to disagree with each other.
I disagree with you about what “should” means, and I’m not even a Space Cannibal. Or do I? Are you committed to saying that I, too, am talking past you if I type “should” to sincerely refer to things?
Are you basically declaring yourself impossible to disagree with?
Do you think we’re asking sufficiently different questions such that they would be expected to have different answers in the first place? How could you know?
Humans, especially humans from an Enlightenment tradition, I presume by default to be talking about the same thing as me—we share a lot of motivations and might share even more in the limit of perfect knowledge and perfect reflection. So when we appear to disagree, I assume by default and as a matter of courtesy that we are disagreeing about the answer to the same question or to questions sufficiently similar that they could normally be expected to have almost the same answer. And so we argue, and try to share thoughts.
With aliens, there might be some overlap—or might not; a starfish is pretty different from a mammal, and that’s just on Earth. With paperclip maximizers, they are simply not asking our question or anything like that question. And so there is no point in arguing, for there is no disagreement to argue about. It would be like arguing with natural selection. Evolution does not work like you do, and it does not choose actions the way you do, and it was not disagreeing with you about anything when it sentenced you to die of old age. It’s not that evolution is a less authoritative source, but that it is not saying anything at all about the morality of aging. Consider how many bioconservatives cannot understand the last sentence; it may help convey why this point is both metaethically important and intuitively difficult.
I really do not know. Our disagreements on ethics are definitely nontrivial—the structure of consequentialism inspires you to look at a completely different set of sub-questions than the ones I’d use to determine the nature of morality. That might mean that (at least) one of us is taking the wrong tack on a shared question, or that we’re asking different basic questions. We will arrive at superficially similar answers much of the time because “appeal to intuition” is considered a legitimate move in ethics and we have some similar intuitions about the kinds of answers we want to arrive at.
I think you are right that paperclip maximizers would not care at all about ethics. Babyeaters, though, seem like they do, and it’s not even completely obvious to me that the gulf between me and a babyeater (in methodology, not in result) is larger than the gulf between me and you. It looks to me a bit like you and I get to different parts of city A via bicycle and dirigible respectively, and then the babyeaters get to city B via kayak—yes, we humans have more similar destinations to each other than to the Space Cannibals, but the kind of journey undertaken seems at least as significant, and trying to compare a bike and a blimp and a boat is not a task obviously approachable.
I find it a suspicious coincidence that we should arrive at similar answers by asking dissimilar questions.
Do you also find it suspicious that we could both arrive in the same city using different vehicles? Or that the answer to “how many socks is Alicorn wearing?” and the answer to “what is 6 − 4?” are the same? Or that one could correctly answer “yes” to the question “is there cheese in the fridge?” and the question “is it 4:30?” without meaning to use a completely different, non-yes word in either case?
Not at all, if we started out by wanting to arrive in the same city.
And not at all, if I selected you as a point of comparison by looking around the city I was in at the time.
Otherwise, yes, very suspicious. Usually, when two randomly selected people in Earth’s population get into a car and drive somewhere, they arrive in different cities.
No, because you selected those two questions to have the same answer.
Yes-or-no questions have a very small answer space so even if you hadn’t selected them to correlate, it would only be 1 bit of coincidence.
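(For what it’s worth, the arithmetic behind “1 bit of coincidence”, under the assumption that the two answers are modeled as independent and equally likely to be yes or no:)

```latex
% Two independent answers, each equally likely to be "yes" or "no",
% match with probability 1/2; the surprisal of observing that match is one bit.
\[
  P(\text{match}) = \tfrac{1}{2},
  \qquad
  -\log_2 P(\text{match}) = -\log_2 \tfrac{1}{2} = 1 \text{ bit}.
\]
```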
The examples in the grandparent do seem to miss the point that Alicorn was originally describing.
It is still surprising, but somewhat less so if our question-answering is about finding descriptions for our hardwired intuitions. In that case, people with similar personalities can be expected to formulate question-answer pairs that differ mainly in their respective areas of awkwardness as descriptions of the territory.
And we did exactly that (metaphorically speaking). I said:
“We will arrive at superficially similar answers much of the time because ‘appeal to intuition’ is considered a legitimate move in ethics and we have some similar intuitions about the kinds of answers we want to arrive at.”
It seems to me that you and I ask dissimilar questions and arrive at superficially similar answers. (I say “superficially similar” because I consider the “because” clause in an ethical statement to be important—if you think you should pull the six-year-old off the train tracks because that maximizes your utility function and I think you should do it because the six-year-old is entitled to your protection on account of being a person, those are different answers, even if the six-year-old lives either way.) The babyeaters get more non-matching results in the “does the six-year-old live” department, but their questions—just about as important in comparing theories—are not (it seems to me) so much more different than yours and mine.
Everybody, in seeking a principled ethical theory, has to bite some bullets (or go on an endless Easter-epicycle hunt).
To me, this doesn’t seem like superficial similarity at all. I should sooner call the differences of verbal “because” superficial, and focus on that which actually produces the answer.
I think you should do it because the six-year-old is valuable and precious and irreplaceable, and if I had a utility function it would describe that. I’m not sure how this differs from what you’re doing, but I think it differs from what you think I’m doing.
Re: I think you are right that paperclip maximizers would not care at all about ethics.
Correct. But neither would they ‘care’ about paperclips, under the way Eliezer’s pushing this idea. They would flarb about paperclips, and caring would be as alien to them as flarbing is to you.
I think some subset of paperclip maximizers might be said to care about paperclips. Not, most likely, all possible instances of them.
I had the same thought.
Re: It’s all very well to tell me that I should stop arguing over definitions
You are arguing over definitions, but it is useful. You make many posts that rely on these concepts, so the definitions are relevant. That ‘you are just arguing semantics’ call is sometimes an irritating cached response.
You are making more than one claim here. The different-concept-alien stuff you have explained quite clearly (e.g. from the first semicolon onwards in the parent). This seems to be obviously true. The part before the semicolon is a different concept (probably two). Your posts have not given me the impression that you consider the truth issue distinct from the normative issue and subjectivity. You also included ‘objective morality’ in with ‘True’, ‘transcendental’ and ‘ultimate’ as things that have no meaning. I believe you are confused and that your choice of definition for ‘should’ contributes to this.
I say I disagree with a significant part of your position, although not the most important part.
I definitely disagree with Tim. I may agree with some of the others.
I agree with the claim you imply with the intuition pump. I disagree with the claim you imply when you are talking about ‘uniquely the right answer’. Your intuition pump does not describe the same concept that your description does.
Re: …and that the aliens do not have a different view of the subject, but simply a view of a different subject.
This part does match the intuition pump, but you are consistently conflating this concept with another (see the uniquely-right, true-value treatment of the girl) in your posts in this thread. You are confused.
It is the claims along the lines of ‘truth value’ that are most counterintuitive. The universality that you attribute to ‘Right’ also requires some translation.
Re: The problem here is that “should” is already bound fairly tightly to certain concepts, no matter what sort of verbal definitions people think they’re deploying…
I see, and that is an excellent point. Daniel Dennett has taken a similar attitude towards qualia, if I interpret you correctly—he argues that the idea of qualia is so inextricably bound with its standard properties (his list goes: ineffable, intrinsic, private, and directly or immediately apprehensible by the consciousness) that to describe a phenomenon lacking those properties by that term is as wrongheaded as using the term élan vital to refer to DNA.
I withdraw my implied criticism.
Re: I figure we are going to want machines that will obey the law.
I absolutely do not want my FAI to be constrained by the law. If the FAI allows machine intelligence researchers to create an uFAI, we will all die. An AI that values the law above the existence of me and my species is evil, not Friendly. I wouldn’t want the FAI to kill such researchers unless it was unable to find a more appealing way to ensure future safety, but I wouldn’t dream of constraining it to either laws or politics. But come to think of it, I don’t want it to be sensible either.
The Three Laws of Robotics may be a naive conception but that Zeroth law was a step in the right direction.
Re: If the FAI allows machine intelligence researchers to create an uFAI we will all die
Yes, that’s probably just the kind of paranoid delusional thinking that a psychopathic superintelligence with no respect for the law would use to justify its murder of academic researchers.
Hopefully, we won’t let it get that far. Constructing an autonomous tool that will kill people is conspiracy to murder—so hopefully the legal system will allow us to lock up researchers who lack respect for the law before they do some real damage. Assassinating your competitors is not an acceptable business practice.
Hopefully, the researchers will learn the error of their ways before then. The first big and successful machine intelligence project may well be a collaboration. “Help build my tool, or be killed by it” is a rather aggressive proposition—and I expect most researchers will reject it, and expend their energies elsewhere—hopefully on more law-abiding projects.
Re: Yes, that’s probably just the kind of paranoid delusional thinking that a psychopathic superintelligence with no respect for the law would use to justify its murder of academic researchers.
You seem confused (or, perhaps, hysterical). A psychopathic superintelligence would have no need to justify anything it does to anyone.
By including ‘delusional’ you appear to be claiming that an unfriendly super-intelligence would not likely cause the extinction of humanity. Was that your intent? If so, why do you suggest that the first actions of a FAI would be to kill AI researchers? Do you believe that a superintelligence will disagree with you about whether uFAI is a threat and that it will be wrong while you are right? That is a bizarre prediction.
You seem to have a lot of faith in the law. I find this odd. Has it escaped your notice that a GAI is not constrained by country borders? I’m afraid most of the universe, even most of the planet, is out of your jurisdiction.
Re: You seem confused (or, perhaps, hysterical).
Uh, thanks :-(
A powerful corporate agent not bound by the law might well choose to assassinate its potential competitors—if it thought it could get away with it. Its competitors are likely to be among those best placed to prevent it from meeting its goals.
Its competitors don’t have to want to destroy all humankind for it to want to eliminate them! The tiniest divergence between its goals and theirs could potentially be enough.
It is a misconception to think of the law as a set of rules, and even more of a misconception to think of those rules as applying to non-humans today. In addition, rules won’t be very effective constraints on superintelligences.