Thanks. You didn’t answer my questions directly, but it sounds like things are proceeding more or less according to expectations. I have a couple of followup questions.
At what level of talent do you think an attempt to build an FAI would start to do more (expected) good than harm? For simplicity, feel free to ignore the opportunity cost of spending financial and human resources on this project, and just consider the potential direct harmful effects, like accidentally creating an UFAI while experimenting to better understand AGI, or building a would-be FAI that turns out to be an UFAI due to a philosophical, theoretical or programming error, or leaking AGI advances that will allow others to build an UFAI, or starting an AGI arms race.
I have a serious concern that if SIAI ever manages to obtain abundant funding and a team of “pretty competent researchers” (or even “world-class talent”, since I’m not convinced that even a team of world-class talent trying to build an FAI will do more good than harm), it will proceed with an FAI project without adequate analysis of the costs and benefits of doing so, or without continuously reevaluating the decision in light of new information. Do you think this concern is reasonable?
If so, I think it would help a lot if SIAI got into the habit of making its strategic thinking more transparent. It could post answers to questions like the ones I asked in the grandparent comment without having to be prompted. It could publish the reasons behind every major strategic decision, and the metrics it keeps to evaluate its initiatives. (One way to do this, if such strategic thinking often occurs or is presented at board meetings, would be to publish the meeting minutes, as I suggested in another comment.)
At what level of talent do you think an attempt to build an FAI would start to do more (expected) good than harm?
I’m not sure that scientific talent is the relevant variable here. More talented folk are more likely to achieve both positive and negative outcomes. I would place more weight on epistemic rationality, motivations (personality, background checks), institutional setup and culture, the strategy of first trying to test the tractability of robust FAI theory and then advancing FAI theory before code (with emphasis on the more-FAI-less-AGI problems first), and similar variables.
Do you think this concern is reasonable?
Certainly it’s a reasonable concern from a distance. Folk do try to estimate and reduce the risks you mentioned, and to investigate alternative non-FAI interventions. My personal sense is that these efforts have been reasonable but need to be bolstered along with the FAI research team. If it looks like a credible (to me) team may be assembled, my plan would be (and has been) to monitor and influence team composition, culture, and exposure to information. In other words, I’d like to select folk ready to reevaluate as well as to make progress, and to work hard to build that culture as researchers join up.
If so, I think it would help a lot if SIAI got into the habit of making its strategic thinking more transparent.
I can’t speak for everyone, but I am happy to see SIAI become more transparent in various ways. The publication of the strategic plan is part of that, and I believe Luke is keen (with encouragement from others) to increase communication and transparency in other ways.
publish the meeting minutes
This one would be a decision for the board, but I’ll give my personal take again. Personally, I like the recorded GiveWell meetings and see the virtues of transparency in being more credible to observers, and in providing external incentives. However, I would also worry that signalling issues with a diverse external audience can hinder accurate discussion of important topics, e.g. frank discussions of the strengths and weaknesses of potential Summit speakers, partners, and potential hires that could cause hurt feelings and damage valuable relationships. Because of this problem I would be more wholehearted in supporting other forms of transparency, e.g. more frequent and detailed reporting on activities, financial transparency, the strategic plan, things like Luke’s Q&A, etc. But I wouldn’t be surprised if this happens too.
I’m not sure that scientific talent is the relevant variable here. More talented folk are more likely to achieve both positive and negative outcomes.
Let’s assume that all the other variables are already optimized to minimize the risk of creating an UFAI. It seems to me that the relationship between the ability level of the FAI team and the probabilities of the possible outcomes must then look something like this:
This chart isn’t meant to communicate my actual estimates of the probabilities and crossover points, but just the overall shapes of the curves. Do you disagree with them? (If you want to draw your own version, click here and then click on “Modify This Chart”.)
Folk do try to estimate and reduce the risks you mentioned, and to investigate alternative non-FAI interventions.
Has anyone posted SIAI’s estimates of those risks?
I would also worry that signalling issues with a diverse external audience can hinder accurate discussion of important topics
That seems reasonable, and given that I’m more interested in the “strategic” as opposed to “tactical” reasoning within SIAI, I’d be happy for it to be communicated through some other means.
If we condition on having all other variables optimized, I’d expect a team to adopt very high standards of proof, and recognize limits to its own capabilities, biases, etc. One of the primary purposes of organizing a small FAI team is to create a team that can actually stop and abandon a line of research/design (Eliezer calls this “halt, melt, and catch fire”) that cannot be shown to be safe (given limited human ability, incentives and bias). If that works (and it’s a separate target in team construction rather than a guarantee, but you specified optimized non-talent variables) then I would expect a big shift of probability from “UFAI” to “null.”
What I’m afraid of is that a design will be shown to be safe, and then it turns out that the proof is wrong, or that the formalization of the notion of “safety” used by the proof is wrong. This kind of thing happens a lot in cryptography, if you replace “safety” with “security”. These mistakes are still occurring today, even after decades of research into how to do such proofs and what the relevant formalizations are. From where I’m sitting, proving an AGI design Friendly seems even more difficult and error-prone than proving a crypto scheme secure, probably by a large margin, and there aren’t decades of time to refine the proof techniques and formalizations. There’s a good recent review of the history of provable security, titled “Provable Security in the Real World”, which might help you understand where I’m coming from.
Your comment has finally convinced me to study some practical crypto because it seems to have fruitful analogies to FAI. It’s especially awesome that one of the references in the linked article is “An Attack Against SSH2 Protocol” by W. Dai.
From where I’m sitting, proving an AGI design Friendly seems even more difficult and error-prone than proving a crypto scheme secure, probably by a large margin, and there aren’t decades of time to refine the proof techniques and formalizations.
Correct me if I’m wrong, but it doesn’t seem as though “proofs” of algorithm correctness fail as frequently as “proofs” of cryptosystem unbreakableness.
Where does your intuition that Friendliness proofs would be about as reliable as cryptosystem proofs come from?
Interesting question. I guess proofs of algorithm correctness fail less often because:
It’s easier to empirically test algorithms to weed out the incorrect ones, so there are fewer efforts to prove conjectures of correctness that are actually false.
It’s easier to formalize what it means for an algorithm to be correct than for a cryptosystem to be secure.
In both respects, proving Friendliness seems even worse than proving security.
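A toy illustration of this formalization gap from the crypto side (my own sketch, not part of the original discussion, with deliberately unrealistic parameters): textbook RSA is hard to invert under the standard RSA assumption, yet its ciphertexts are malleable, so an attacker can mint a valid encryption of a related message without ever violating that narrow definition of security. The proof isn’t the weak link; the definition of “secure” is.

    # Toy textbook-RSA demo (not real crypto; parameters are tiny on purpose).
    # The security argument for textbook RSA is about one-wayness: an
    # eavesdropper can't recover m from c. But Enc(m1) * Enc(m2) mod n is a
    # valid encryption of m1 * m2 mod n, so an attacker can tamper with
    # messages without ever "breaking" that definition.

    p, q = 61, 53          # toy primes, far too small for real use
    n = p * q              # 3233
    e = 17                 # public exponent
    d = 2753               # private exponent: (e * d) % 780 == 1, where 780 = lcm(p-1, q-1)

    def enc(m):
        return pow(m, e, n)

    def dec(c):
        return pow(c, d, n)

    m1, m2 = 42, 100
    c1, c2 = enc(m1), enc(m2)

    forged = (c1 * c2) % n  # attacker combines ciphertexts; no private key needed
    assert dec(forged) == (m1 * m2) % n
    print("forged ciphertext decrypts to", dec(forged))

The analogous worry for Friendliness is that the theorem could be flawless while the formal definition it talks about fails to capture what we actually mean by “safe”.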
What I’m afraid of is that a design will be shown to be safe, and then it turns out that the proof is wrong, or that the formalization of the notion of “safety” used by the proof is wrong.
I can’t count myself “world class” on the raw ability axis, but I’m pretty sure that the probability of a team of people like me producing UFAI is very low (in absolute value), as I know when I understand something and when I don’t yet, and I think this property would be even more reliable if I had better raw ability. That is a much more relevant safety factor than ability (though it seems harder to test), and it changes the shape of the UFAI curve. A couple of levels worse than myself, I wouldn’t trust someone’s ability to disbelieve wrong things, so the maximum should probably be in this range, not centered on “world class” in particular.
Could you elaborate on the ability axis? Could you name some people that you perceive to be of world-class ability in their field? Could you further explain whether you believe that there are people who are sufficiently above that class?
For example, what about Terence Tao? What about the current SIAI team?
However, I would also worry that signalling issues with a diverse external audience can hinder accurate discussion of important topics
Basically it ensures that all serious discussion and decision making happens prior to any meeting, in informal conversations, so that the meeting itself sounds good. Such a record should be considered a work of fiction regardless of whether it is a video transcript or a typed document. (Only to the extent that the subject of the meeting matters—harmless or irrelevant things wouldn’t change.)
Because of this problem I would be more wholehearted in supporting other forms of transparency, e.g. more frequent and detailed reporting on activities, financial transparency, the strategic plan, things like Luke’s Q&A, etc. But I wouldn’t be surprised if this happens too.
Personally, I like the recorded GiveWell meetings and see the virtues of transparency in being more credible to observers, and in providing external incentives. However, I would also worry that signalling issues with a diverse external audience can hinder accurate discussion of important topics, e.g. frank discussions of the strengths and weaknesses of potential Summit speakers, partners, and potential hires that could cause hurt feelings and damage valuable relationships. Because of this problem I would be more wholehearted in supporting other forms of transparency, e.g. more frequent and detailed reporting on activities, financial transparency, the strategic plan, things like Luke’s Q&A, etc. But I wouldn’t be surprised if this happens too.
I’ll take this opportunity to mention that I’m against publishing SIAI’s board meeting minutes. First, for the reasons Carl gave above. Second, because then we’d have to invest a lot of time explaining the logic behind each decision, or else face waves of criticism for decisions that appear arbitrary when one merely publishes the decision and not the argument.
However, I’m definitely making a big effort to improve SIAI transparency. Our new website (under development) has a page devoted to transparency, where you’ll be able to find our strategic plan, our 990s, and probably other links. I’m also publishing the monthly progress reports, and recently co-wrote ‘Intelligence Explosion: Evidence and Import’, which for the first time (excepting Chalmers) summarizes many of our key pieces of reasoning with the clarity of mainstream academic form. We’re also developing an annual report, and I’m working toward developing some other documents that will make SIAI strategy more transparent. But all this takes time, especially when starting from pretty close to zero on transparency and having lots of other problems to fix, too.
Second, because then we’d have to invest a lot of time explaining the logic behind each decision, or else face waves of criticism for decisions that appear arbitrary when one merely publishes the decision and not the argument.
Are the arguments not made during the board meetings? Or do you guys talk ahead of time and just formalize the decisions during the board meetings?
In any case, I think you should invest more time explaining the logic behind your decisions, and not just make the decisions themselves more transparent. If publishing board meeting minutes is not the best way to do that, then please think about some other way of doing it. I’ll list some of the benefits of doing this, in case you haven’t thought of some of them:
encourage others to emulate you and think strategically about their own choices
allow outsiders to review your strategic thinking and point out possible errors
assure donors and potential donors that there is good reasoning behind your strategic decisions
improve exchange of strategic ideas between everyone working on existential risk reduction
The arguments are strewn across dozens of conversations in and out of board meetings (mostly out).
As for finding other ways to explain the logic behind our decisions, I agree, and I’m working on it. One qualification I would add, however, is that I predict more benefit to my strategic thinking from one hour with Paul Christiano and one hour with Nick Bostrom than from spending four hours to write up my strategic thinking on subject X and publishing it so that passersby can comment on it. It takes a lot of effort to be so well-informed about these issues that one can offer valuable strategic advice. But for some X we have already spent those many productive hours with Christiano and Bostrom and so on, and it’s a good marginal investment to write up our strategic thinking on X.
This reminds me a bit of Eliezer’s excuse when he was resisting calls for him to publish his TDT ideas on LW:
Unfortunately this “timeless decision theory” would require a long sequence to write up
I suggest you may be similarly overestimating the difficulty of explaining your strategic ideas/problems to a sufficiently large audience to get useful feedback. Why not just explain them the same way that you would explain to Christiano and Bostrom? If some among the LW community don’t understand, they can ask questions and others could fill them in.
The decision theory discussions on LW generated significant progress, but perhaps more importantly created a pool of people with strong interest in the topic (some of whom ended up becoming your research associates). Don’t you think the same thing could happen with Singularity strategies?
I suggest you may be similarly overestimating the difficulty of explaining your strategic ideas/problems to a sufficiently large audience to get useful feedback...
Yes, I would get some useful feedback, but I also predict a negative effect: When people don’t have enough background knowledge to make what I say sound reasonable to them, I’ll get penalized for sounding crazy in the same way that I’m penalized when I try to explain AGI to an intuitive Cartesian dualist.
By penalized, I mean something like the effect that Scott Adams (author of Dilbert) encountered while blogging:
I hoped that people who loved the blog would spill over to people who read Dilbert, and make my flagship product stronger. Instead, I found that if I wrote nine highly popular posts, and one that a reader disagreed with, the reaction was inevitably “I can never read Dilbert again because of what you wrote in that one post.” Every blog post reduced my income, even if 90% of the readers loved it. And a startling number of readers couldn’t tell when I was serious or kidding, so most of the negative reactions were based on misperceptions.
Anyway, you also wrote:
The decision theory discussions on LW generated significant progress, but perhaps more importantly created a pool of people with strong interest in the topic (some of whom ended up becoming your research associates). Don’t you think the same thing could happen with Singularity strategies?
If so, then not for the same reasons. I think people got interested in decision theory because they could see results. But it’s hard to feel you’ve gotten a result in something like strategy, where we may never know whether or not one strategy was counterfactually better, or at least won’t be confident about that for another 5 years. Decision theory offers the opportunity for results that most people in the field can agree on.
The “results” in decision theory we’ve got so far are so tenuous that I believe their role is primarily to somewhat clarify the problem statement for what remains to be done (a big step compared to complete confusion in the past, but not quite clear (-ly motivated) math). The ratchet of science hasn’t clicked yet, even if rational evidence is significant, which is the same problem you voice for strategy discussion.
If so, then not for the same reasons. I think people got interested in decision theory because they could see results. But it’s hard to feel you’ve gotten a result in something like strategy, where we may never know whether or not one strategy was counterfactually better, or at least won’t be confident about that for another 5 years. Decision theory offers the opportunity for results that most people in the field can agree on.
At FHI they sometimes sit around a whiteboard and discuss weird AI-boxing ideas or weird acquire-relevant-influence ideas, and feel as though they are making progress when something sounds more-promising than usual, leads to other interesting ideas, etc. We could too. I suspect it would create a similar set of interested people capable of having strategy ideas, though probably less math-inclined than the decision theory folk, and with more surrounding political chaos.
Okay; that changes my attitude a bit. But FHI’s core people are unlikely to produce the Scott Adams effect in response to strategic discussion. Do you or Wei think it’s reasonable for me to worry about that when discussing strategy in detail amongst, say, LWers — most of whom have far less understanding of the relevant issues (by virtue of not working on them every week for months or years)?
I agree that detailed exploration of Singularity strategies would alienate some LW-ers, and some in the SingInst fan base. It is possible that this is reason enough to avoid such discussion; my guess is that it is not, but I could easily be wrong here, and many think it is.
I was mostly responding to the [paraphrased] “we can’t discuss it publicly because it would take too long”, and “it wouldn’t work to create an informed set of strategists because there wouldn’t be a sense of progress”; I’ve said sentences like that before, and, when I said them, they were excuses/rationalizations. My actual reason was something like: “I’d like to avoid alienating people, and I’d like to avoid starting conflicts whose outcomes I cannot predict.”
I agree that detailed exploration of Singularity strategies would alienate some LW-ers, and some SingInst-ers.
It’ll alienate some SingInst-ers? That’s a troubling sign. Aren’t most SingInst-ers at least vaguely competent rationalists who are actually interested in Singularity options? Yet they will be alienated by mere theoretical exploration of the domain? What has your HR department been doing?
I agree that detailed exploration of Singularity strategies would alienate some LW-ers, and some SingInst-ers.
From a public relations viewpoint this sentence alone is worse than any particular detail could possibly be, because it not only allows but forces people to imagine what horrible strategies you could possibly explore and pursue: strategies that are bad enough that you not only believe that even the community most closely related to SI would be alienated by them, but that you are also unable to support those explorations with rational arguments.
Personally, I don’t want to contribute anything to an organisation which admits to exploring strategies that most people would find unacceptable, and I wouldn’t suggest that anyone else do so. Nor would I be willing to contribute if you were secretive about your strategic explorations. I just don’t trust you people; I never did. And I am still horrified by how people who actually believe that what you are saying is true and possible are willing to blindly trust your small group to shape the universe.
A paperclip maximizer is just a transformation of the universe into a state of almost no suffering. But a friendly AI that isn’t quite friendly, or one that is biased by the ideas of a small group of abnormal and psychopathic people, could increase negative utility dramatically.
I agree that detailed exploration of Singularity strategies would alienate some LW-ers, and some SingInst-ers.
From a public relations viewpoint this sentence alone is worse than any particular detail could possibly be.
No, I don’t agree with this. I predict that whatever strategies AnnaSalamon has in mind would alienate someone unless those strategies were very anodyne or vague. If the sample of listeners is big enough there will usually be someone to take issue with just about any idea one voices.
Because it not only allows but forces people to imagine what horrible strategies you could possibly explore and pursue.
How true is that? In my case it just makes me try to imagine whether there are any strategies AnnaSalamon could propose that wouldn’t perturb anyone. When it comes to the singularity I draw a blank, as it’s a big enough issue that just about anything she or I or you could say about it will bother somebody.
I disagree that AS’s weak statement that “detailed exploration of Singularity strategies would alienate some LW-ers” tells you very much at all about the nature of those strategies. I expect most conceivable strategies would piss someone off, so I’d say her claim communicates less than 1 bit of information about those strategies.
Based on the rest of your comment I think you’ve read AnnaSalamon’s statement as one implying that SI’s strategies are unusually objectionable or alienating; maybe that’s what she meant but it doesn’t seem to be what she wrote.
Based on the rest of your comment I think you’ve read AnnaSalamon’s statement as one implying that SI’s strategies are unusually objectionable or alienating;
Which is the right strategy. Humans are unfriendly. The group around AnnaSalamon is trying to take over and shape the universe according to their idea of what is right and good.
If you are making decisions based on the worst case scenario—as you are clearly doing when it comes to artificial intelligence, if you support friendly AI research—then you should do the same when it comes to human beings.
It isn’t enough to talk to them, to review their output and conclude that they are most likely friendly. Doing so and contributing money is akin to letting an AI that is not provably friendly out of the box. They either have to prove that they are friendly or make all their work transparent. Otherwise the right thing to do is to label them as terrorists and tell them to fuck off.
You could just as reasonably have written that comment if AnnaSalamon had never posted in this thread, though. My argument here isn’t with your broader attitude to FAI/SI, it’s that I think it’s unfair to pounce on a very low-information statement like “detailed exploration of Singularity strategies would alienate some LW-ers, and some SingInst-ers” and write it off as terrible PR that implies SI’s considering horrible strategies.
...it’s unfair to pounce on a very low-information statement like “detailed exploration of Singularity strategies would alienate some LW-ers, and some SingInst-ers”...
I think that it does convey quite a lot of information. I already know that people associated with SI and LW accept a lot of strategic thinking that would be considered everything from absurd to outright psychopathic within different circles. If she says that the strategies they explore would even alienate some people associated with LW, let alone SI, then that’s really bad.
I think you underestimate the amount of information that a natural language sentence can carry and signal.
...and write it off as terrible PR that implies SI’s considering horrible strategies.
It is abundantly clear that SI is really bad at PR. I assign a high probability to the possibility that she and other members of SI are revealing a lot of what is going on behind the scenes by being careless about their communication.
If she says that the strategies they explore would even alienate some people associated with LW, let alone SI, then that’s really bad.
I disagree. LWers have a range of opinions on AI & the singularity (yes, those opinions are less diverse than the general population’s, but I don’t see them being sufficiently less diverse for your argument to go through). There are already quite a few LWers who’re SI sceptics to a degree. I’m also sure there are LWers who, at the moment, basically agree with SI but would spurn it if it announced a more specific strategy for handling AI/the singularity. I think this would be true for most possible strategies SI could announce. I’d expect the same basic argument to hold for SI (though I’m less sure because I know less about SI).
I think you underestimate the amount of information that a natural language sentence can carry and signal.
Quite possible! But in any case, a sentence can carry lots of information about one thing, but not another. One has to look at the probability of a sentence or claim conditional on a specific thing. As I see it, P(AS says some people would be alienated | SI has a terrible secret strategy) is about equal to P(AS says some people would be alienated | SI has an un-terrible secret strategy), so the likelihood ratio is about one, and AnnaSalamon’s belief discriminates poorly between those two particular hypotheses.
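As a worked version of this point, here is the standard odds form of Bayes’ theorem (my own illustrative sketch, not something from the thread), with E standing for “AS says some people would be alienated”:

    \frac{P(\text{terrible}\mid E)}{P(\text{un-terrible}\mid E)}
    \;=\;
    \frac{P(E\mid \text{terrible})}{P(E\mid \text{un-terrible})}
    \times
    \frac{P(\text{terrible})}{P(\text{un-terrible})}

If the likelihood ratio on the right is about 1, the posterior odds equal the prior odds, so the statement shifts one’s beliefs essentially not at all; equivalently, it supplies about log2(1) = 0 bits of evidence between the two hypotheses.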
It is abundantly clear that SI is really bad at PR. I assign a high probability to the possibility that she and other members of SI are revealing a lot of what is going on behind the scenes by being careless about their communication.
Plausible, but I doubt it’s true for this specific example.
As I see it, P(AS says some people would be alienated | SI has a terrible secret strategy) is about equal to P(AS says some people would be alienated | SI has an un-terrible secret strategy), so the likelihood ratio is about one...
If I were to accept your estimate, then the utilities associated with P(people alienated | terrible strategy) and P(people alienated | un-terrible strategy) would force you to act according to the first possibility.
I don’t follow. Do you mean that the potential disutility of SI having a terrible strategy is so much bigger than the potential utility of SI having an un-terrible strategy that, given equal likelihoods, I should act against SI? If so, I disagree.
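One hedged way to make this disagreement precise (my own framing, not from the thread): if the likelihood ratio is about 1, the statement leaves the prior odds unchanged, so the choice of action turns on the prior probability of the terrible-strategy hypothesis and on the utilities attached to each action:

    EU(\text{act against}) = P(\text{terrible})\,U(\text{against},\text{terrible})
                             + \big(1 - P(\text{terrible})\big)\,U(\text{against},\text{un-terrible})

    EU(\text{support})     = P(\text{terrible})\,U(\text{support},\text{terrible})
                             + \big(1 - P(\text{terrible})\big)\,U(\text{support},\text{un-terrible})

Read this way, the parent comment’s claim is that U(support, terrible) is so strongly negative that the first expectation wins even for a small P(terrible); the disagreement here is about those priors and utilities, not about the likelihood ratio.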
Quite possible! But in any case, a sentence can carry lots of information about one thing, but not another. One has to look at the probability of a sentence or claim conditional on a specific thing. As I see it, P(AS says some people would be alienated | SI has a terrible secret strategy) is about equal to …
Blah blah blah...full stop. We’re talking about the communication of primates with other primates. Evolution honed your skills to detect the intention and possible bullshit in the output of other primates. Use your intuition!
I disagree. LWers have a range of opinions on AI & the singularity …
I am not sure what you are getting at. If she thinks that there are strategies that should be kept secret for political reasons or whatever and admits it, that’s bad from any possible viewpoint.
I have. My gut didn’t raise a red flag when I read AnnaSalamon’s post, but it did when I read yours.
I am not sure what you are getting at.
I was giving a reason for my claim that there’d be someone on LW/in SI who’d be alienated by all but the blandest of strategies.
If she thinks that there are strategies that should be kept secret for political reasons or whatever and admits it, that’s bad from any possible viewpoint.
Maybe she thinks that and maybe she doesn’t, but either way she didn’t admit it. (At least not in the post I’m talking about. I haven’t read AS’s whole comment history.)
To my intuitions you sound exactly like a bitter excluded nobody attacking someone successful and popular. You DON’T talk like someone who sees through the lies of an evil greedy deceiver and honestly wants people to examine what he says and come to the correct opinion.
It isn’t enough to talk to them, to review their output and conclude that they are most likely friendly. Doing so and contributing money is akin to letting an AI that is not provably friendly out of the box. They either have to prove that they are friendly or make all their work transparent. Otherwise the right thing to do is to label them as terrorists and tell them to fuck off.
I think the “mostly harmless” phrase still applies. These look like kids with firecrackers. The folk we should watch out for are more likely to be the Chinese, the military, hedge funds—and so on.
Maybe you can give an example of the kind of thing that you’re worried about? What might you say that could get you penalized for sounding crazy?
Could get them penalized for sounding crazy? Those people believe in the possibility of heaven and hell and believe that merely thinking about decision and game theoretic conjectures might be dangerous.
This chart isn’t meant to communicate my actual estimates of the probabilities and crossover points, but just the overall shapes of the curves.
I like this chart.
Your comment has finally convinced me to study some practical crypto because it seems to have fruitful analogies to FAI.
More than fruitful analogies, I’d say: http://lesswrong.com/lw/3cz/cryptographic_boxes_for_unfriendly_ai/
In both respects, proving Friendliness seems even worse than proving security.
Thanks for clarifying.
I agree.
Such a record should be considered a work of fiction regardless of whether it is a video transcript or a typed document.
That’s more like it!
Don’t you think the same thing could happen with Singularity strategies?
Yes.
Maybe you can give an example of the kind of thing that you’re worried about? What might you say that could get you penalized for sounding crazy?
(Maybe we could take this discussion private; I’m also curious what kinds of questions these considerations apply to.)
Could get them penalized for sounding crazy? Those people believe in the possibility of heaven and hell and believe that merely thinking about decision and game theoretic conjectures might be dangerous.
Right, better to hide in your ivory tower only talking to people who agree with you. A perfect recipe to reinforce crazy ideas and amplify any biases.
publish the meeting minutes
Minutes can be much more general than (video) transcripts.
I would be surprised if the optimal solution turned out to be either total secrecy or manipulable complete transcription, rather than some third alternative.