I wonder if SIAI could publicly discuss the values part of the AI without discussing the optimization part. The values part seems to me (and, from what I can tell, to you too) to be where public discussion would do the most good, while the optimization part seems to be where the danger lies if the information gets out.
Not honestly. When discussing values publicly you more or less have to spin bullshit. I would expect any public discussion the SIAI engaged in to be downright sickening to read, and any interesting parts quickly censored. I’d much prefer no discussion at all, or discussion carried on by people outside the SIAI’s influence and not directly affiliated with it. That way the SIAI would not be obliged to distort or cripple the conversation for the sake of PR, nor would it be able to even if it wanted to.
CEV doesn’t seem to fit this description.
CEV is one of the things which, if actually explored thoroughly, would definitely fit this description. As it stands, it sits at the ‘bullshit border’: the point at which you don’t yet have to trade off epistemic considerations in favor of signalling to the lowest common denominator, because it is still credible that the not-superficially-nice parts just haven’t been covered yet rather than being outright lied about.
Do you have evidence for this proposition?
I agree entirely with both of wedrifid’s comments above. Just read the CEV document, and ask, “If you were tasked with implementing this, how would you do it?” I tried unsuccessfully many times to elicit details from Eliezer on several points back on Overcoming Bias, until I concluded he did not want to go into those details.
One obvious question is, “The expected value calculations that I make from your stated beliefs indicate that your Friendly AI should prefer killing a billion people over taking a 10% chance that one of them is developing an AI; do you agree?” (If the answer is “no”, I suspect that is only due to time discounting of utility.)
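To make that expected-value comparison explicit (the figures here are purely illustrative assumptions, not numbers taken from the CEV document or from Eliezer’s stated beliefs): let N be the number of deaths whose disutility equals that of an unFriendly AI taking over. The FAI then prefers the killing whenever 10^9 < 0.1 × N, i.e. whenever N exceeds ten billion. Any estimate that puts an unFriendly AI’s damage above ten billion deaths’ worth of value (roughly everyone now alive plus future generations) satisfies this easily, unless future utility is discounted heavily enough to pull N back under that threshold.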
Surely, though, if the FAI is in a position to execute that action, it is already so far ahead of any AI someone could be developing that it would have little to fear from that possibility as a threat to CEV?
It won’t be very far ahead of an AI in real time. The idea that the FAI can get far ahead is based on the idea that it can develop very far in a “small” amount of time. Well, so can the new AI, and who’s to say it can’t develop 10 times as quickly as the FAI? So how can a one-year-old FAI be certain that there isn’t an AI project that was started secretly 6 months ago and is about to overtake it in intelligence?
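To put the arithmetic behind that worry in one line, using the hypothetical 10-times figure above: a rival project started 6 months ago that develops ten times as quickly has accumulated the equivalent of 5 years of FAI-speed progress by the time the FAI is one year old, so a head start measured in calendar time guarantees very little.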
It is a somewhat complex issue, best understood by following what is (and isn’t) said in conversations about CEV (and sometimes metaethics) when the subject comes up. I believe the last time was a month or two ago in one of lukeprog’s posts.
Mind you, this is a subject that would take a couple of posts to explore properly.
Isn’t exploring the consequences of something like CEV pretty boring? Naively, the default scenario, conditional on a large number of background assumptions about the relative optimization possible under various simulation scenarios et cetera, is that the FAI fooms along possibly metaphysical spatiotemporal dimensions, turning everything into acausal economic goodness. Once you get past the ‘oh no, that means it kills everything I love’ part it’s basically a dead end. No? Note: the publicly acknowledged default scenario for a lot of smart people is a lot more PC than this. It’s probably not the default for many people at all. I’m not confident in it.
I don’t really understand what this means, so I don’t see why the next bit follows. Could you break this down, preferably using simpler terms?
The problem comes if one organisation with dubious values gets far ahead of everyone else. That situation is likely to be the result of keeping secrets in this area.
Openness seems more likely to create a level playing field where the good guys have an excellent chance of winning. Those promoting secrecy are part of the problem here, IMO. I think we should leave the secret projects to the NSA and IARPA.
The history of IT shows many cases where the use of closed solutions led to monopolies and problems. I think history shows that closed-source solutions are mostly good for those selling them, but bad for the rest of society. IMO, we really don’t want machine intelligence to be like that.
Many governments realise the significance of open source software these days—e.g. see: The government gets really serious about open source.
It’s likely to be the result of organizations with dubious values keeping secrets in this area. The good guys being open doesn’t make it better; it makes it worse, by giving the bad guys an asymmetric advantage.
We discussed this very recently.
The good guys want to form a large cooperative network with each other, to help ensure they reach the goal first. Sharing is one of the primary ways they have of signalling to each other that they are good guys. Signalling must be expensive to be credible, and this is a nice, relevant, expensive signal. Being secretive, and failing to share, marks you as a selfish bad guy in the eyes of the sharers.
It is not an advantage to be recognised by the good guys as a probable bad guy. For one thing, it most likely means you get no technical support.
A large cooperative good-guy network is a major win in terms of risk—compared to the scenario where everyone is secretive. The bad guys get some shared source code—but that in no way makes up for how much worse their position is overall.
To get ahead, the bad guys have to pretend to be good guys. To convince others of this, in the face of innate human lie-detection abilities, they may even need to convince themselves that they are good guys...
You never did address the issue I raised in the linked comment. As far as I can tell, it’s a showstopper for open-access development models of AI.
You gave some disadvantages of openness; I responded with a list of advantages of openness. Why you concluded this was not responsive is unclear to me.
Conventional wisdom about open source and security is that openness helps; e.g. see Bruce Schneier on the topic.
Personally, I think the benefits of openness win out in this case too.
That is especially true for the “inductive inference” side of things, which I estimate to be about 80% of the technical problem of machine intelligence. Keeping that secret is just a fantasy. Versions of it are going to be embedded in every library on every mobile computing device on the planet, doing input prediction, compression, and pattern completion. It is core infrastructure. You can’t hide things like that.
Essentially, you will have to learn to live with the possibility of bad guys using machine intelligence to help themselves. You can’t really stop that, so don’t imagine that you can; instead, focus on what you can change: reducing their opportunities to win, limiting the resulting damage, and so on.
What linked comment?
The first comment here, I believe.
In this case, I’m less afraid of “bad guys” than I am of “good guys” who make mistakes. The bad guys just want to rule the Earth for a little while. The good guys want to define the Universe’s utility function.
Looking at the history of accidents with machines, most seem to be automobile accidents. Medical accidents are number two, I think.
In both cases, technology that proved dangerous was used deliberately, before the relevant safety features could be added, because of the benefits it provided in the meantime. It seems likely that we will see more of that, alongside the overall trend towards increased safety.
My position on this is the opposite of yours. I think there is probably greater individual risk from a machine intelligence working properly for someone else than from an accident. Both possibilities are in play, though.
Now I’m confused again. Who do you worry about if not the NSA?