Yes, I had read that, and perhaps even more apropos (from Shut up and do the impossible!):

You might even be justified in refusing to use probabilities at this point. In all honesty, I really don’t know how to estimate the probability of solving an impossible problem that I have gone forth with intent to solve; in a case where I’ve previously solved some impossible problems, but the particular impossible problem is more difficult than anything I’ve yet solved, but I plan to work on it longer, etcetera.
People ask me how likely it is that humankind will survive, or how likely it is that anyone can build a Friendly AI, or how likely it is that I can build one. I really don’t know how to answer. I’m not being evasive; I don’t know how to put a probability estimate on my, or someone else, successfully shutting up and doing the impossible. Is it probability zero because it’s impossible? Obviously not. But how likely is it that this problem, like previous ones, will give up its unyielding blankness when I understand it better? It’s not truly impossible, I can see that much. But humanly impossible? Impossible to me in particular? I don’t know how to guess. I can’t even translate my intuitive feeling into a number, because the only intuitive feeling I have is that the “chance” depends heavily on my choices and unknown unknowns: a wildly unstable probability estimate.
But it’s not clear whether Eliezer means that he can’t even translate his intuitive feeling into a word like “small” or “medium”. I thought the comment I was replying to was saying that SIAI had a “medium” chance of success, given:
If you can’t argue for a medium probability of a large impact, you shouldn’t bother.
and
I don’t consider myself to be multiplying small probabilities by large utility intervals at any point in my strategy
But perhaps I misinterpreted? In any case, there’s still the question of what is rational for those of us who do think SIAI’s chance of success is “small”.
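To make the disputed arithmetic concrete, here is a minimal sketch of what "multiplying small probabilities by large utility intervals" amounts to, and why the "don't bother" rule quoted above refuses to rely on it. The function and all numbers are invented for illustration; nothing here is taken from the thread.

```python
# Toy expected-value comparison. All figures are made up for
# illustration only; they are not anyone's actual estimates.

def expected_value(p_success: float, utility_if_success: float) -> float:
    """Expected utility of a project that pays off only if it succeeds."""
    return p_success * utility_if_success

# A "medium probability of a large impact" project:
medium = expected_value(p_success=0.1, utility_if_success=1e6)   # 1e5

# A "small probability of an astronomically large impact" project:
small = expected_value(p_success=1e-9, utility_if_success=1e18)  # 1e9

# Naive expected-value maximization favors the second project
# (1e9 > 1e5), even though a probability estimate like 1e-9 is so
# unstable that the comparison may carry no real information --
# which is the point of the "don't bother" rule quoted above.
print(f"medium-probability project: {medium:.0e}")
print(f"small-probability project:  {small:.0e}")
```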
Sufficiently-Friendly AI can be hard for SIAI-now but easy or medium for non-SIAI-now (someone else now, someone else in the future, or SIAI in the future). I personally believe this, since SIAI-now is fucked up (and SIAI-future may very well be too). (I won’t substantiate that claim here.) Eliezer didn’t talk about SIAI specifically. (He probably thinks SIAI will be at least as likely to succeed as anyone else because he thinks he’s super awesome, but I don’t think it can be assumed he’d assert that with confidence.)
SingInst seems a lot better since I wrote that comment; you and Luke are doing some cool stuff. Around August everything was in a state of disarray and it was unclear if you’d manage to pull through.
I thought he was taking the “don’t bother” approach by not giving a probability estimate or arguing about probabilities.
I propose that the rational act is to investigate approaches to greater-than-human intelligence that would succeed.
This. I’m flabbergasted this isn’t pursued further.
Will you substantiate that claim about SIAI elsewhere?
Second that interest in hearing it substantiated elsewhere.
Your comments are a cruel reminder that I’m in a world where some of the very best people I know are taken from me.