And I’ll start things off with a question I couldn’t find a place or an existing post for.
Coherent extrapolated volition. That 2004 paper sets out what it would be and why we want it, in the broadest outlines.
Has there been any progress on making this concept any more concrete since 2004? How to work out a CEV? Or even one person’s EV? I couldn’t find anything.
I’m interested because it’s an idea with obvious application even if the intelligence doing the calculation is human.
On the topic of CEV: the Wikipedia article only has primary sources and needs third-party ones.
Have you looked at the paper by Roko and Nick, published this year?
No, I hadn’t found that one. Thank you!
Though it still doesn’t answer the question: it states why CEV is a good idea, not how one would actually do it. There’s a suggestion that reflective equilibrium is a good start, but ideas competing with CEV claim that too.
Is there even a little material on how one would actually do CEV? Some “and then a miracle occurs” in the middle is fine for these purposes; we have human intelligences on hand.
Are these two papers really all there is to show so far for the concept of CEV?
It isn’t a subject that I would expect anyone from, say, SIAI to actually discuss honestly. Saying sane things about CEV would be a political minefield.
Expand? Are you talking about saying things about the output of CEV, or something else?
Not just the output; the input and the means of computation are also potential minefields of moral politics. After all, this touches on what amounts to the ultimate moral question: “If I had ultimate power, how would I decide how to use it?” When you are answering that question in public you must use extreme caution, at least if you have any real intent to gain power.
There are some things that are safe to say about CEV, particularly things on the technical side. But for the most part it is best to avoid giving too many straight answers. I said something on the subject of what can be considered the subproblem (“Do you confess to being consequentialist, even when it sounds nasty?”). Eliezer’s responses took a similar position:
then they would be better off simply providing an answer calibrated to please whoever they most desired to avoid disapproval from
No they wouldn’t. Ambiguity is their ally. Both answers elicit negative responses, and they can avoid that from most people by not saying anything, so why shouldn’t they shut up?
When describing CEV mechanisms in detail from the position of someone with more than a detached academic interest, you are stuck between a rock and a hard place.
On one hand, you must signal idealistic egalitarian thinking so that you do not trigger in the average reader the aversive instincts we have for avoiding human tyrants.
On the other hand, you must also be aware that other important members of your audience (i.e. many of those likely to fund you) will have a deeper understanding of the practical issues and will see the same description as naive to the point of being outright dangerous and destructive.
My intended application is for an organisation to work out not only what people want from it but what they would want from it. This assumes some general intelligences on hand to do the working out, but we have those.
I’ve been transparent about CEV and intend to continue this policy.
Including the part where you claim you wish to run it on the entirety of humanity? Wow, that’s… scary. I have no good reason to be confident that I or those I care about would survive such a singularity.
Michael Vassar is usually the voice within SIAI of such concerns. It hasn’t been formally written up yet, but besides the Last Judge notion expressed in the original CEV paper, I’ve also been looking favorably on the notion of giving a binary veto over the whole process, though not detailed control, to a coherent extrapolated superposition of SIAI donors weighted by percentage of income donated (not donations) or some other measure of effort exerted.
And before anyone points it out, yes I realize that this would require a further amendment to the main CEV extrapolation process so that it didn’t deliberately try to sneak just over the veto barrier.
Look, people who are carrying the Idiot Ball just don’t successfully build AIs that match up to their intentions in the first place. If you think I’m an idiot, worry about me being the first idiot to cross the Idiot Finish Line and fulfill the human species’ destiny of instant death, don’t worry about my plans going right enough to go wrong in complicated ways.
Won’t this incentivize people to lower their income in many situations, because the fraction of their income going to donations increases even if the total amount donated decreases?
In what situation would this be better or easier than simply donating more, especially if percentage of income is considered over some period of time instead of simply “here it is”?
Only in situations in which the job allows for valuable ‘perks’ while granting a lower salary.
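To make the incentive question above concrete, here is a minimal sketch assuming the weight is literally donations divided by income over the relevant period; the `donor_weight` helper and all figures are invented for illustration, not taken from any actual proposal.

```python
# Hypothetical sketch of weighting donors "by percentage of income donated (not donations)".
# The helper name and all numbers are illustrative assumptions.

def donor_weight(donations: float, income: float) -> float:
    """Weight a donor by the fraction of income donated, not the absolute amount."""
    return donations / income

# Donating $5,000 on a $100,000 income:
print(donor_weight(5_000, 100_000))  # 0.05

# Taking a $50,000 job with valuable perks and donating less in absolute terms
# still yields a higher weight, which is the incentive asked about above:
print(donor_weight(4_000, 50_000))   # 0.08
```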
The thing that spooks me most about CEV (aside from the difficulty of gathering the information about what people really care about, the further difficulties of accurate extrapolation, and some doubts about whether the whole thing can be made coherent) is that it seems to be planned to be a thing that will be perfected and then imposed, rather than a system which will take feedback from the people whose lives are theoretically being improved.
Excuse me if this has been an ongoing topic and an aspect of CEV which is at least being considered, but I don’t think I’ve seen this angle brought up.
Sure, and people feel safer driving than riding in an airplane, because driving makes them feel more in control, even though it’s actually far more dangerous per mile.
Probably a lot of people would feel more comfortable with a genie that took orders than an AI that was trying to do any of that extrapolating stuff. Until they died, I mean. They’d feel more comfortable up until that point.
Feedback just supplies a form of information. If you disentangle the I-want-to-drive bias and say exactly what you want to do with that information, it’ll just come out to the AI observing humans and updating some beliefs based on their behavior, and then it’ll turn out that most of that information is obtainable and predictable in advance. There’s also a moral component where making a decision is different from predictably making that decision, but that’s on an object level rather than a metaethical level and just says “There are some things we wouldn’t want the AI to do until we actually decide them, even if the decision is predictable in advance, because the decision itself is significant and not just the strategy and consequences following from it.”
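One way to cash out “most of that information is obtainable and predictable in advance” is in information-theoretic terms: feedback the AI already predicts carries almost no new information. This is my own framing, not anything from the thread, and all the probabilities below are hypothetical.

```python
import math

# Toy sketch: treat feedback as evidence for a predictive model of the humans.
# The surprisal -log2(p) measures how much new information an observation carries;
# feedback the model already predicts carries almost none.

def surprisal_bits(p_predicted: float) -> float:
    """Bits of information gained from observing an event the model assigned probability p_predicted."""
    return -math.log2(p_predicted)

print(surprisal_bits(0.95))  # ~0.074 bits: the feedback was essentially predicted in advance
print(surprisal_bits(0.50))  # 1.0 bit: the model could not predict it at all
print(surprisal_bits(0.05))  # ~4.32 bits: a genuine surprise, i.e. the model was wrong
```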
Clearly, I should have read new comments before posting mine.
I don’t think it’s the sense of control that makes people feel safer in a car so much as the fact that they’re not miles up in the air.
I’m pretty confident that people would feel more secure with a magical granter of wishes than with a command-taking AI (provided that the granter was not an actual genie, which are known to be jerks), because intelligent beings fall into a class that, in our experience, can comprehend and implement our desires, while AIs fall into the same mental class as automated help lines and Microsoft Office assistants, which are incapable of figuring out what we actually want.
When you build automated systems capable of moving faster and stronger than humans can keep up with, I think you just have to bite the bullet and accept that you have to get it right. The idea of building such a system and then having it wait for human feedback, while emotionally tempting, just doesn’t work.
If you build an automatic steering system for a car that travels 250 mph, you either trust it or you don’t, but you certainly don’t let humans anywhere near the steering wheel at that speed.
Which is to say that while I sympathize with you here, I’m not at all convinced that the distinction you’re highlighting actually makes all that much difference, unless we impose the artificial constraint that the environment doesn’t get changed more quickly than a typical human can assimilate completely enough to provide meaningful feedback on.
I mean, without that constraint, a powerful enough environment-changer simply won’t receive meaningful feedback, no matter how willing it might be to take it if offered, any more than the 250-mph artificial car driver can get meaningful feedback from its human passenger.
And while one could add such a constraint, I’m not sure I want to die of old age while an agent capable of making me immortal waits for humanity to collectively and informedly say “yeah, OK, we’re cool with that.”
(ETA: Hm. On further consideration, my last paragraph is bogus. Pretty much everyone would be OK with letting everybody live until the decision gets made; it’s not a make-immortal vs. let-die choice. That said, there probably are things that have this sort of all-or-nothing aspect to them; I picked a poor example but I think my point still holds.)
If this concern is valid, then to my understanding the optimal (perfect?) system that CEV puts in place will take that kind of feedback and adjust itself, and so on. CEV just establishes the original configuration. For a crude metaphor: CEV is the thing writing the constitution and building the voting machines, but the constitution can still have terms that allow the constitution to be changed according to the results of votes.
is that it seems to be planned to be a thing that will be perfected and then imposed, rather than a system which will take feedback from the people whose lives are theoretically being improved.
I would have described it as a system that is the ideal feedback taker (and anticipator).
Is that what they mean by “getting the inside track on the singularity”? ;-)
It gets to possibly say “No”, once. Nothing else.
Are you under the impression that jumping on statements like this, after the original statement explicitly disclaimed them, is a positive contribution to the conversation?
Yu’El—please don’t you jump on me! I was mostly trying to be funny. Check with my smiley!
This was a reference to Jaron Lanier’s comment on this topic—in discussion with you.
Woah there. I remind you that what prompted your first reply here was me supporting you on this particular subject!
I can sure see that in the fundraising prospectus. “We’ve been working on something but can’t tell you what it is. Trust us, though!”
Let’s assume things are better than that and it is possible to talk about CEV. Is anyone from SIAI in the house and working on what CEV means?
Even if it comes out perfect, Hanson will just say that it’s based on far-mode thinking and is thus incoherent WRT near values :p
what sort of person would I be if I was getting enough food, sex and sleep (the source of which was secure) to allow me to stay in far mode all the time? I have no idea.
A happily married (or equivalent) one? I am cosy in domesticity but also have a small child to divert my immediate energies, and I find myself regarding raising her as my important work and everything else as either part of that or amusement. Thankfully it appears raising a child requires less focused effort than ideal-minded parents seem to think (I can’t find the study quickly, sorry—anyone?), so this allows me to sit on the couch working or reading stuff while she plays, occasionally tweaking her flow of interesting stuff and occasionally dealing with her coming over and jumping up and down on me.
Well, you should be working on CEV and I shouldn’t.
Hence the question ;-)
what sort of person would I be if I was getting enough food, sex and sleep (the source of which was secure) to allow me to stay in far mode all the time?
Bad in bed, for a start. In far mode all the time?
No-one’s added sources to the CEV article other than primary ones, so I’ve merged-and-redirected it to the Friendly AI article. It can of course be un-merged any time.