I only need to know that the process used to construct it results in a friendly AI.
You are still facing the same problem. Given that you can’t recognize friendliness, how will you create or choose a process which will build an FAI? Would you be able to answer “Will it be friendly?” by looking at the process?
the negative parts of human values are entirely eliminated.
That doesn’t make much sense. What do you mean by “negative” and from which point of view? If from the point of view of the AI, that’s just a trivial tautology. If from the point of view of (at least some) humans, this seems to be not so.
In general, do you treat morals/values as subjective or objective? If objective, the whole “if they knew more” part is entirely unnecessary: you’re discovering empirical reality, not consulting people about what they like. And subjectivism here, of course, makes the whole idea of CEV meaningless.
Also, I see no evidence to support the view that as people know more, their morals improve, for pretty much any value of “improve”.
how will you create or choose a process which will build an FAI?
You are literally asking me to solve the FAI problem right here and now. I understand that FAI is a very hard problem, and I don’t expect to solve it instantly. Just because a problem is hard doesn’t mean it can’t have a solution.
First of all, let me adopt some terminology from Superintelligence. I think FAI requires solving two somewhat different problems: Value Learning and Value Loading.
You seem to think Value Learning is the hard problem, getting an AI to learn what humans actually want. I think that’s the easy problem, and any intelligent AI will form a model of humans and understand what we want. Getting it to care about what we want seems like the hard problem to me.
But I do see some promising ideas to approach the problem. For instance, have AIs that predict what choices a human would make in each situation, so you basically get an AI which is just a human, but sped up a lot. Or have an AI which presents arguments for and against each choice, so that humans can make more informed choices. Then it could predict what choice a human would make after hearing all the arguments, and do that.
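To make that second idea a bit more concrete, here is a toy sketch in Python. Everything in it (`human_model`, `argue`, `probability_of_endorsement`) is a made-up placeholder for illustration, not a real library or a worked-out design:

```python
# Toy sketch of the "informed human predictor" idea above.
# `human_model` and `argue` are hypothetical components, not real APIs.

def choose_action(situation, candidate_actions, human_model, argue):
    """Pick whatever the modeled human would pick after hearing all the arguments."""
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        # Generate arguments for and against this particular action.
        pros, cons = argue(situation, action)
        # Predict how likely the fully informed human is to endorse it.
        score = human_model.probability_of_endorsement(situation, action, pros, cons)
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```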
More complicated ideas were mentioned in Superintelligence. I like the idea of “motivational scaffolding”: somehow train an AI that can learn how the world works and can generate an “interpretable model”, e.g. being able to understand English sentences and translate their meanings into representations the AI can use. Then you can explicitly program a utility function into the AI using its learned model.
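And a cartoon of the scaffolding idea, again with every name made up for illustration rather than taken from Superintelligence:

```python
# Cartoon of "motivational scaffolding": first learn an interpretable world
# model, then hand-write a utility function over the concepts it has learned.
# `world_model` and its methods are hypothetical placeholders.

class ScaffoldedAgent:
    def __init__(self, world_model):
        # Learned separately; maps observations to interpretable states.
        self.world_model = world_model

    def utility(self, state):
        # Explicitly programmed by humans, but defined over the model's learned
        # concepts, e.g. the grounding of an English sentence.
        return self.world_model.degree_of_truth(
            "humans are alive and free to pursue their goals", state)

    def choose_action(self, observation, candidate_actions):
        # Pick the action whose predicted outcome scores highest under the utility.
        return max(candidate_actions,
                   key=lambda a: self.utility(
                       self.world_model.predict_outcome(observation, a)))
```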
That doesn’t make much sense. What do you mean by “negative” and from which point of view?
From your point of view. You gave me examples of values which you consider bad, as an argument against FAI. I’m showing you that CEV would eliminate these things.
Also, I see no evidence to support the view that as people know more, their morals improve, for pretty much any value of “improve”.
Your stated example was ISIS. ISIS is so bad because they incorrectly believe that God is on their side and wants them to do the things they do, and that the people who die will go to heaven, so the loss of life isn’t so bad. If they were more intelligent, informed, and rational… If they knew all the arguments for and against religion, then their values would be more like ours. They would see how bad killing people is, and that their religion is wrong.
The second thing CEV does is average everyone’s values together. So even if ISIS really does value killing people, their victims value not being killed even more. So a CEV of all of humanity would still value life, even if evil people’s values are included. Even if everyone was a sociopath, their CEV would still be the best compromise possible between everyone’s values.
You are literally asking me to solve the FAI problem right here and now.
No, I’m asking you to specify it. My point is that you can’t build X if you can’t even recognize X.
You seem to think Value Learning is the hard problem, getting an AI to learn what humans actually want.
Learning what humans want is pretty easy. However it’s an inconsistent mess which involves many things contemporary people find unsavory. Making it all coherent and formulating a (single) policy on the basis of this mess is the hard part.
From your point of view. You gave me examples of values which you consider bad, as an argument against FAI. I’m showing you that CEV would eliminate these things.
Why would CEV eliminate things I find negative? This is just a projected typical mind fallacy. Things I consider positive and negative are not (necessarily) things many or most people consider positive and negative. Since I don’t expect to find myself in a privileged position, I should expect CEV to eliminate some things I believe are positive and impose some things I believe are negative.
Later you say that CEV will average values. I don’t have average values.
If they knew all the arguments for and against religion, then their values would be more like ours. They would see how bad killing people is, and that their religion is wrong.
I see no evidence to believe this is true and lots of evidence to believe this is false.
You are essentially saying that religious people are idiots and if only you could sit them down and explain things to them, the scales would fall from their eyes and they would become atheists. This is a popular idea, but it fails real-life testing very, very hard.
No, I’m asking you to specify it. My point is that you can’t build X if you can’t even recognize X.
And I don’t agree with that. I’ve presented some ideas on how an FAI could be built, and how CEV would work. None of them require “recognizing” FAI. What would it even mean to “recognize” FAI, except to see that it values the kinds of things we value and makes the world better for us?
Learning what humans want is pretty easy. However it’s an inconsistent mess which involves many things contemporary people find unsavory. Making it all coherent and formulating a (single) policy on the basis of this mess is the hard part.
I’ve written about one method to accomplish this, though there may be better methods.
Why would CEV eliminate things I find negative? This is just a projected typical mind fallacy. Things I consider positive and negative are not (necessarily) things many or most people consider positive and negative.
Humans are 99.999% identical. We have the same genetics, the same brain structures, and mostly the same environments. The only reason this isn’t obvious is that we spend almost all our time focusing on the differences between people, because that’s what’s useful in everyday life.
I should expect CEV to eliminate some things I believe are positive and impose some things I believe are negative.
That may be the case, but that’s still not a bad outcome. In the example I used, the values dropped from ISIS members were dropped for two reasons: they were based on false beliefs, or they hurt other people. If you have values based on false beliefs, you should want them to be eliminated. If you have values that hurt other people, then it’s only fair that they be eliminated. Otherwise you risk being subjected to the values of people who want to hurt you.
Later you say that CEV will average values. I don’t have average values.
Well, I think it’s accurate, but it’s somewhat nonspecific. Specifically, CEV will find the optimal compromise of values: the values that satisfy the most people to the greatest degree, or at least dissatisfy the fewest people the least. See the post I just linked for one example of how that could be implemented. That’s not necessarily “average values”.
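As a toy illustration of the difference between a compromise and an average (all names and numbers below are invented), one possible aggregation rule is maximin, picking the outcome whose worst-off person is best off:

```python
# Toy "best compromise" rule: maximize the *lowest* satisfaction across people,
# so nobody is simply steamrolled by the majority. Utilities are made-up numbers.

utilities = {
    "alice": {"outcome_A": 0.9, "outcome_B": 0.6, "outcome_C": 0.1},
    "bob":   {"outcome_A": 0.2, "outcome_B": 0.5, "outcome_C": 0.9},
    "carol": {"outcome_A": 0.3, "outcome_B": 0.7, "outcome_C": 0.2},
}

outcomes = {o for person in utilities.values() for o in person}

# Maximin: the compromise is the outcome whose worst-off person does best.
compromise = max(outcomes, key=lambda o: min(u[o] for u in utilities.values()))
print(compromise)  # outcome_B: nobody's favorite, but the least bad for everyone
```

Maximin is just one rule I’m using for illustration; an actual CEV could weight people’s satisfaction in some other way.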
In the worst case, people with totally incompatible values will just be allowed to go their separate ways, or whatever the most satisfying compromise is. Muslims live on one side of the Dyson sphere, Christians on the other, and they never have to interact and can do their own thing.
You are essentially saying that religious people are idiots and if only you could sit them down and explain things to them, the scales would fall from their eyes and they would become atheists. This is a popular idea, but it fails real-life testing very, very hard.
My exact words were “If they were more intelligent, informed, and rational… If they knew all the arguments for and against...” Real-world problems of persuading people don’t apply. Most people don’t research all the arguments against their beliefs, and most people aren’t rational and don’t seriously consider the hypothesis that they are wrong.
For what it’s worth, I was deconverted like this. Not overnight by any means. But over time I found that the arguments against my beliefs were correct, and I updated my beliefs.
Changing world views is really, really hard. There’s no one piece of evidence or one argument to dispute. Religious people believe that there is tons of evidence for God. To them it just seems obviously true. From miracles, to recorded stories, to their own personal experiences, etc. It takes a lot of time to get at every single pillar of the belief and show its flaws. But it is possible. It’s not like Muslims were born believing in Islam. Islam is not encoded in genetics. People deconvert from religions all the time; entire societies have even done it.
In any case, my proposal does not require literally doing this. It’s just a thought experiment, to show that the ideal set of values is what you would choose if you had all the correct beliefs.
It means that when you look at an AI system, you can tell whether it’s FAI or not.
If you can’t tell, you may be able to build an AI system, but you still won’t know whether it’s FAI or not.
I’ve written about one method to accomplish this
I don’t see what voting systems have to do with CEV. The “E” part means you don’t trust what the real, current humans say, so making them vote on anything is pointless.
Humans are 99.999% identical.
That’s a meaningless expression without a context. Notably, we don’t have the same genes or the same brain structures. I don’t know about you, but it is really obvious to me that humans are not identical.
...false beliefs … it’s only fair …
How do you know what’s false? You are a mere human, you might well be mistaken. How do you know what’s fair? Is it an objective thing, something that exists in the territory?
The values that satisfy the most people to the greatest degree.
Right, so the fat man gets thrown under the train… X-)
Muslims live on one side of the Dyson sphere, Christians on the other
Hey, I want to live on the inside. The outside is going to be pretty gloomy and cold :-/
Real-world problems of persuading people don’t apply.
LOL. You’re just handwaving then. “And here, in the difficult part, insert magic and everything works great!”
It means that when you look at an AI system, you can tell whether it’s FAI or not.
Look at it how? Look at its source code? I argued that we can write source code that will result in FAI, and you could recognize that. Look at the weights of its “brain”? Probably not, any more than we can look at human brains and recognize what they do. Look at its actions? Definitely; FAI is an AI that doesn’t destroy the world, etc.
I don’t see what voting systems have to do with CEV. The “E” part means you don’t trust what the real, current humans say, so making them vote on anything is pointless.
The voting doesn’t have to actually happen. The AI can predict what we would vote for, if we had plenty of time to debate it. And you can get even more abstract than that and have the FAI just figure out the details of the “E” part itself.
The point is to solve the “coherent” part: to show that you can derive a coherent set of values from a bunch of different agents or messy human brains, and that mathematicians have actually studied a special case of this problem extensively, in the form of voting systems.
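For example, here is a Borda count, one standard rule from that literature, applied to made-up preference rankings (the option names are obviously just placeholders):

```python
# A standard voting rule (Borda count) applied to made-up preference orderings,
# just to show that aggregating conflicting preferences into one ranking is a
# well-studied mathematical problem.
from collections import defaultdict

ballots = [
    ["peace", "prosperity", "conquest"],   # each ballot ranks options best-to-worst
    ["prosperity", "peace", "conquest"],
    ["conquest", "peace", "prosperity"],
]

scores = defaultdict(int)
for ballot in ballots:
    n = len(ballot)
    for rank, option in enumerate(ballot):
        scores[option] += n - 1 - rank     # top choice gets n-1 points, last gets 0

# Aggregate ranking, best first.
print(sorted(scores, key=scores.get, reverse=True))  # ['peace', 'prosperity', 'conquest']
```

This isn’t CEV itself, just evidence that the “make it coherent” step isn’t a complete unknown.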
That’s a meaningless expression without a context. Notably, we don’t have the same genes or the same brain structures. I don’t know about you, but it is really obvious to me that humans are not identical.
Compared to other animals, or to aliens, yes, we are incredibly similar. We do have 99.99% identical DNA, and our brains all have the same structure with minor variations.
How do you know what’s false?
Did I claim that I did?
How do you know what’s fair? Is it an objective thing, something that exists in the territory?
I actually gave a precise algorithm for doing that.
Right, so the fat man gets thrown under the train… X-)
Which is the best possible outcome, versus five other people being killed. But I don’t think these kinds of scenarios are realistic once we have incredibly powerful AI.
LOL. You’re just handwaving then. “And here, in the difficult part, insert magic and everything works great!”
I’m not handwaving anything… There is no magic involved at all. The whole scenario of persuading people is counterfactual and doesn’t need to actually be done. The point is to define more exactly what CEV is: it’s the values you would want if you had the correct beliefs. You don’t need to actually have the correct beliefs to have a CEV.
We typically imagine CEV asking what people would do if they ‘knew what the AI knew’ - let’s say the AI tries to estimate the expected value of a given action, with utility defined by extrapolated versions of us who know the truth, and probabilities taken from the AI’s own distribution. I am absolutely saying that theism fails under any credible epistemology, and any well-programmed FAI would expect ‘more knowledgeable versions of us’ to become atheists on general principles. Whether or not this means they would change “if they knew all the arguments for and against religion” depends on whether or not they can accept some extremely basic premise.
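One way to write that down (this is my notation, not anything from the CEV write-up): let $U_{\mathrm{ext}}$ be the utility function of our extrapolated selves and $P_{\mathrm{AI}}$ the AI’s own probability distribution over outcomes; then the AI ranks an action $a$ by

$$\mathrm{EV}(a) = \sum_{s} P_{\mathrm{AI}}(s \mid a)\, U_{\mathrm{ext}}(s).$$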
(Note that nobody comes into the world with anything even vaguely resembling a prior that favors a major religion. We might start with a bias in favor of animism, but nearly everyone would verbally agree this anthropomorphism is false.)
It seems much less clear whether CEV would make psychopathy irrelevant. But potential victims must object to their own suffering at least as much as real-world psychopaths want to hurt them. So the most obvious worst-case scenario, under implausibly cynical premises, looks more like Omelas than it does a Mongol invasion. (Here I’m completely ignoring the clause meant to address such scenarios, “had grown up farther together”.)
We typically imagine CEV asking what people would do if they ‘knew what the AI knew’
No, we don’t, because this would be a stupid question. CEV doesn’t ask people, CEV tells people what they want.
any well-programmed FAI would expect ‘more knowledgeable versions of us’ to become atheists on general principles.
I see little evidence to support this point of view. You might think that atheism is obvious, but a great deal of people, many of them smarter than you, disagree.
I think we have, um, irreconcilable differences and are just spinning wheels here. I’m happy to agree to disagree.