a world in which nobody appears to be doing similar work or to care sufficiently to do so.
This is astonishingly good evidence that MIRI’s efforts will not be wasted via redundancy, that is, rendered a de facto “failure” only because someone else would independently succeed first.
But it’s actually (very weak) evidence against the proposition that MIRI’s efforts will not be wasted because you’ve overestimated the problem, and it isn’t evidence either way concerning the proposition that you haven’t overestimated the problem but nobody will succeed at solving it.
You’re asking about the probability of having some technical people get together and solve basic research problems. I don’t see why anyone else should expect to know more about that than MIRI workshop participants. Besides backward reasoning from the importance of a good result (which ordinarily operates by implying already-well-tugged ropes), is there any reason why you should be more skeptical of this than of any other piece of basic research on an important problem?
I’m concerned about the probability of having some technical people get together and solve some incredibly deep research problems before some perhaps-slightly-less-technical people plough ahead and get practical results without the benefit of that research. I’m skeptical that we’ll see FAI before UFAI for the same reason I’m skeptical that we’ll see a Navier-Stokes existence proof before a macroscale DNS (direct numerical simulation) solution; I’m skeptical that we’ll prove P!=NP or even find a provably secure encryption scheme before making the world’s economy dependent on unproven schemes, etc.
Even some of the important subgoals of FAI, being worked on with far more resources than MIRI has yet, are barely showing on the radar. IIRC someone only recently produced a provably correct C compiler (and in the process exposed a bunch of bugs in the industry-standard compilers); wouldn’t we feel foolish if provably Friendly, human-readable code turned UnFriendly simply because a bug was automatically introduced in the compilation? Or if a cosmic ray or a slightly-out-of-tolerance manufacturing defect affected one of the processors? Fault-tolerant MPI is still leading-edge research, because although we’ve never needed it before, at exascale and above the predicted mean time between hardware failures on some node drops to hours.
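For readers unfamiliar with the fault-tolerance idea alluded to here: the classic mitigation for transient hardware faults is redundant execution with majority voting. A minimal illustrative sketch (the function names are invented for this example, not taken from any fault-tolerance library):

```python
from collections import Counter

def vote(results):
    """Majority vote over redundant runs; masks a single corrupted result."""
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: too many faulty replicas")
    return value

def run_redundantly(f, x, replicas=3):
    """In a real system each replica would run on a different node;
    sequential execution here is purely illustrative."""
    return vote([f(x) for _ in range(replicas)])
```

Note that this masks a transient fault like a cosmic-ray bit flip in one replica, but not a deterministic compiler bug that corrupts every replica identically, which is exactly why verified compilation is a separate problem.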
One of the reasons UFAI could be such an instant danger is the current ubiquitous nature of exploitable bugs on networked computers… yet “how do we write even simple high-performance software without exploitable bugs” seems to be both a much more popular research problem than “how do we write an FAI” and a prerequisite to it, and it’s not yet solved.
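To illustrate how easy the class of bug in question is to create, here is a minimal sketch of one of the most common exploitable patterns (the vulnerability pattern is real; the helper names are hypothetical):

```python
import subprocess

# Vulnerable: attacker-controlled input is spliced into a shell command,
# so a filename like "x; rm -rf ~" runs arbitrary code.
def count_lines_unsafe(filename):
    return subprocess.run("wc -l " + filename, shell=True,
                          capture_output=True, text=True).stdout

# Safer: pass arguments as a list so no shell ever parses the input.
def count_lines(filename):
    return subprocess.run(["wc", "-l", "--", filename],
                          capture_output=True, text=True).stdout
```

Eliminating such bugs wholesale, rather than patching them one at a time, is exactly the still-unsolved research problem the comment points to.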
I’m skeptical that we’ll prove P!=NP or even find a provably secure encryption scheme before making the world’s economy dependent on unproven schemes, etc.
Nitpick, but finding a provably secure encryption scheme is harder than proving P!=NP, since if P=NP then no secure encryption scheme can exist.
if P=NP then no [provably] secure encryption scheme can exist.
What? Why? Just because RSA would be broken? Shor’s algorithm would also do so, even in a proven P!=NP world. There may be other substitutes for RSA, using different complexity classes. There are other approaches altogether. Not to mention one-time pads.
As I understand it, if P=NP in a practical sense, then almost all cryptography is destroyed, as P=NP destroys one-way functions & secure hashes in general (inverting any candidate one-way function is an NP search problem, so a polynomial-time algorithm for NP would invert them all). So RSA goes down, many quantum-proof systems go down, and so on and so forth, and you’re left with basically just http://en.wikipedia.org/wiki/Information-theoretic_security
http://www.karlin.mff.cuni.cz/~krajicek/ri5svetu.pdf discusses some of this.
And life was so happy with just one-time pads?
Really, if P=NP, then encoding your messages would be quite low on the priority list… however, we’re not debating the practical impact here, but the claim that “finding a provably secure encryption scheme is harder than proving P!=NP”, which was raised as a nitpick, and which is clearly not the case.
Happiness or unhappiness of life with one-time pads notwithstanding.
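To make the complexity-theoretic point in this subthread concrete: inverting a candidate one-way function is an NP search problem, because a proposed preimage can be checked in polynomial time. A toy sketch, with brute force standing in for the polynomial-time search that P=NP would hypothetically provide:

```python
import hashlib
from itertools import product

def f(x: bytes) -> bytes:
    """A candidate one-way function."""
    return hashlib.sha256(x).digest()

def verify(x: bytes, y: bytes) -> bool:
    """Polynomial-time witness check: this is what puts inversion in NP."""
    return f(x) == y

def invert(y: bytes, max_len: int = 2) -> bytes:
    """Exhaustive search. If P=NP, some polynomial-time procedure would do
    the same job, and f (and anything built on it) would be broken."""
    for n in range(max_len + 1):
        for candidate in product(range(256), repeat=n):
            x = bytes(candidate)
            if verify(x, y):
                return x
    raise ValueError("no preimage found within max_len")

# Recover a 2-byte "secret" from its hash (tiny key space, so brute force works).
assert invert(f(b"ok")) == b"ok"
```

(The point is only about the problem’s structure; brute force over realistic key sizes is of course infeasible, which is exactly the gap a constructive P=NP result would close.)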
Suppose some new rich sponsor wanted to donate a lot to MIRI, subject to an independent outside group of experts evaluating the merits of some of its core claims, like that AGI is a near-term (under 100 years) x-risk and that MIRI has non-negligible odds (say, a few percent or more) of mitigating it. Who would you suggest s/he engage for review?
like that AGI is a near-term (under 100 years) x-risk
FHI sent a survey to the top 100 most-cited authors in AI and got a response rate of ~1/3, and the median estimates backed this (although this needs to be checked for response bias). Results will be published in September at PT-AI.
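One simple way to “check for response bias” is a bounds analysis: assume extreme answers for the ~2/3 who didn’t respond and see how far the median can move. A sketch with invented numbers (not the actual survey data, which isn’t reproduced here):

```python
import statistics

# Invented data for illustration: 33 of 100 surveyed experts responded,
# giving hypothetical median-year estimates for human-level AI.
responses = [2040] * 10 + [2050] * 13 + [2070] * 10  # 33 respondents

def median_if_nonrespondents_answered(value, n_missing=67):
    """Full-population median if every non-respondent had answered `value`."""
    return statistics.median(responses + [value] * n_missing)

print(statistics.median(responses))                 # respondents only: 2050
print(median_if_nonrespondents_answered(2200))      # all skeptics  -> 2200
print(median_if_nonrespondents_answered(2030))      # all optimists -> 2030
```

With two-thirds of the sample missing, the full-population median can land almost anywhere, which is why the response-bias check matters before leaning on the survey result.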
x-risk and that MIRI has non-negligible odds (say, a few percent or more) of mitigating it.
I.e. a probability of a few percent that there is AI risk, MIRI solves it, and otherwise it wouldn’t have been solved and existential catastrophe would have resulted? That would not happen with non-gerrymandered criteria for the expert group.
But if such a credible group did believably deliver that result, then one could go to Gates or Buffett (who has spent hundreds of millions on nuclear-risk efforts with a much lower probability of averting nuclear war) or to national governments and get billions in funding. All the work in that scenario is coming from the independent panel concluding the thing is many orders of magnitude better than almost any alternative use of spending, way past the threshold for funding.
The rich guy who says he would donate based on it is an irrelevancy in the hypo.
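The implicit expected-value arithmetic behind “way past the threshold for funding” can be sketched with loudly hypothetical numbers; only the “few percent” and the “billions in funding” figures echo the comments above:

```python
# Every figure below is an assumption for illustration, not an estimate
# from the thread.
p_counterfactual_save = 0.02   # risk is real, MIRI solves it, nobody else would
value_of_future = 1e15         # stand-in for the value of averting extinction
cost = 5e9                     # "billions in funding"

print(p_counterfactual_save * value_of_future / cost)  # 4000.0 units per dollar
```

On numbers anything like these, even large shifts in the assumed probability leave the result far above typical philanthropic benchmarks, which is why all the work sits in establishing the probability itself.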
Damned if I know. Oddly enough, anyone who chooses to spend a bunch of their life becoming an expert on these issues tends to be sympathetic to the claims, and most random others tend to make up crap on the spot and stick with it. If they could manage to pay Peter Norvig enough money to spend a lot of time working through these issues I’d be pretty optimistic, but Peter Norvig works for Google and would be hard to pay sufficiently.
Do you guys deliberately go out of your way to evangelize to Jaan Tallinn and Thiel, or is that source of funds a lucky break?
I agree with Eliezer that the main difficulty is in getting top-quality, relatively rational people to spend hundreds of hours being educated, working through the arguments, etc.
Jaan has done a surprising amount of that and also read most or all of the Sequences. Thiel has not yet decided to put in that kind of time.
Here’s a list of people I’d want on that committee if they were willing to put in hundreds of hours catching up and working through the arguments with us: Scott Aaronson, Peter Norvig, Stuart Russell, Michael Nielsen.
I’d probably be able to add lots more names to that list if I could afford to spend more time becoming familiar with the epistemic standards and philosophical sophistication of more high-status CS people. I would trust Carl Shulman, Paul Christiano, Jacob Steinhardt, and a short list of others to add to my list with relatively little personal double-checking from me.
But yeah; the main problem seems to me that I don’t know how to get 400 hours of Andrew Ng’s time.
Although with Ng in particular it might not take 400 hours. When Louie and I met with him in Nov. ’12 he seemed to think AI was almost certainly a century or more away, but by May ’13 (after getting to do his deep learning work on Google’s massive server clusters for a few months) he changed his tune, saying “It gives me hope -- no, more than hope -- that we might be able to [build AGI]… We clearly don’t have the right algorithms yet. It’s going to take decades. This is not going to be an easy one, but I think there’s hope.” (On the other hand, maybe he just made himself sound more optimistic than he anticipates inside because he was giving a public interview on behalf of pro-AI Google.)
This is a great answer, but actually a little tangential to my question; sorry for being vague. Mine was actually about the part of shminux’s proposal that involved finding potential mega-donors. Relatedly, how much convincing do you think it would take to get Tallinn or Thiel to increase their donations by an order of magnitude, something they could easily afford? This seems like a relatively high-leverage plan if you can swing it. With X million dollars you can afford to actually pay to hire people like Google can, if on a much smaller scale.