I don’t consider myself to be multiplying small probabilities by large utility intervals at any point in my strategy
What about people who do think SIAI’s probability of success is small? Perhaps they have different intuitions about how hard FAI is, or don’t have enough knowledge to make an object-level judgement so they just apply the absurdity heuristic. Being one of those people, I think it’s still an important question whether it’s rational to support SIAI given a small estimate of probability of success, even if SIAI itself doesn’t want to push this line of inquiry too hard for fear of signaling that their own estimate of probability of success is low.
Leaving aside Aumann questions: If people like that think that the Future of Humanity Institute, work on human rationality, or Giving What We Can has a large probability of catalyzing the creation of an effective institution, they should quite plausibly be looking there instead. “I should be doing something I think is at least medium-probably remedying the sheerly stupid situation humanity has gotten itself into with respect to the intelligence explosion” seems like a valuable summary heuristic.
If you can’t think of anything medium-probable, using that as an excuse to do nothing is unacceptable. Figure out which of the people trying to address the problem seem most competent and gamble on something interesting happening if you give them more money. Money is the unit of caring and I can’t begin to tell you how much things change when you add more money to them. Imagine what the global financial sector would look like if it was funded to the tune of $600,000/year. You would probably think it wasn’t worth scaling up Earth’s financial sector.
If you can’t think of anything medium-probable, using that as an excuse to do nothing is unacceptable.
That’s my gut feeling as well, but can we give a theoretical basis for that conclusion, which might also potentially be used to convince people who can’t think of anything medium-probable to “do something”?
My current thoughts are:
I assign some non-zero credence to having an unbounded utility function.
Bostrom and Toby Ord’s moral parliament idea seems to be the best approach we have for handling moral uncertainty.
If the Pascal’s wager argument works, then to the extent that I have a faction representing unbounded utility in my moral parliament, I ought to spend a fraction of my resources on Pascal’s-wager-type “opportunities”.
If the Pascal’s wager argument works, I should pick the best wager to bet on, which intuitively could well be “push for a positive Singularity”.
But it’s not clear that the Pascal’s wager argument works, or what the justification would be for thinking that “push for a positive Singularity” is the best wager. We also don’t have any theory for handling this kind of philosophical uncertainty.
Given all this, I still have to choose between “do nothing”, “push for a positive Singularity”, and “investigate Pascal’s wager”. Is there any way, in this decision problem, to improve upon going with my gut?
Anyway, I understand that you probably have reasons not to engage too deeply with this line of thought, so I’m mostly explaining where I’m currently at, as well as hoping that someone else can offer some ideas.
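To make the trade-off in the thoughts above concrete, here is a minimal toy sketch in Python. Everything in it is assumed for illustration: the credences, probabilities, utilities, the utility cap, and the proportional budget split are invented numbers, and the proportional split is a deliberately crude reading of the moral-parliament idea (the actual proposal involves bargaining among factions, not a fixed pro-rata division).

```python
# Toy sketch of the reasoning in the thoughts above. Every number here is
# invented for illustration; none of them is anyone's actual estimate.

credence_unbounded = 0.2   # credence that my utility function is unbounded
credence_bounded = 0.8     # credence that it is bounded

# A long-shot "push for a positive Singularity" bet vs. a safe baseline donation.
p_longshot = 1e-6          # small probability the long shot pays off
u_longshot = 1e15          # astronomical payoff if it does (unbounded view)
u_cap = 1e6                # cap on utility under the bounded view
p_baseline = 0.5           # medium probability of a modest impact
u_baseline = 1e3

ev_longshot_unbounded = p_longshot * u_longshot             # 1e9  -> dominates
ev_longshot_bounded = p_longshot * min(u_longshot, u_cap)   # 1.0  -> negligible
ev_baseline = p_baseline * u_baseline                       # 500.0

print(ev_longshot_unbounded, ev_longshot_bounded, ev_baseline)

# One deliberately crude reading of the moral-parliament idea: each faction
# directs a share of resources proportional to its credence, so the unbounded
# faction's huge expected values don't simply swamp the decision.
budget = 1000.0
print(credence_unbounded * budget)  # 200.0 toward Pascal's-wager-type bets
print(credence_bounded * budget)    # 800.0 toward medium-probability bets
```

The only point of the sketch is that the unbounded-utility faction ranks the long shot first while the bounded-utility faction ranks it last, which is what makes the “what fraction of resources?” question non-trivial.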
Imagine what the global financial sector would look like if it was funded to the tune of $600,000/year. You would probably think it wasn’t worth scaling up Earth’s financial sector.
And one might even be right about that.
A better analogy might be if regulation of the global financial sector were funded at $600k/year.
Money is the unit of caring and it really is impossible to overstate how much things change when you add money to them.
Can you give an example relevant to the context at hand to illustrate what you have in mind? I don’t necessarily disagree, but I presently think that there’s a tenable argument that money is seldom the key limiting factor for philanthropic efforts in the developed world.
BTW, note that I deleted the “impossible to overstate” line on grounds of its being false. It’s actually quite possible to overstate the impact of adding money. E.g., “Adding one dollar to this charity will CHANGE THE LAWS OF PHYSICS.”
What sort of key limiting factors do you have in mind that are untouched by money? Every limiting factor I can think of, whether it’s lack of infrastructure or corruption or lack of political will in the West, is something that you could spend money on doing something about.
If nothing else, historical examples show that huge amounts of money lobbed at a cause can go to waste or do more harm than good (e.g. the Iraq war as a means to improve relations with the Middle East).
Eliezer and I were both speaking in vague terms; presumably somebody intelligent, knowledgeable, sophisticated, motivated, energetic & socially/politically astute can leverage money toward some positive expected change in a given direction. There remains the question of the conversion factor between money and other goods, and how quickly it changes at the margin; it could be negligible in a given instance.
The main limiting factor that I had in mind was human capital: an absence of people who are sufficiently intelligent, knowledgeable, sophisticated, motivated, energetic & socially/politically astute.
I would add that a group of such people would have substantially better than average odds of attracting sufficient funding from some philanthropist, further diminishing the value of donations (on account of fungibility).
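Purely as an illustration of the earlier point about the conversion factor between money and other goods changing at the margin, here is a small hedged sketch in Python. It assumes a hypothetical logarithmic returns-to-funding curve and two made-up budget levels (the $600k/year figure from the earlier comparison and an arbitrary $60B/year); nothing here is a claim about any actual organization’s returns.

```python
import math

# Hypothetical diminishing-returns model: output grows with the log of funding,
# so the value of the next dollar falls off roughly as 1/funding.
# The functional form and all numbers below are assumptions for illustration.

def marginal_value(funding_per_year: float) -> float:
    """Derivative of log(funding): value of one extra dollar at this budget."""
    return 1.0 / funding_per_year

small_budget = 6e5    # $600k/year, the figure used in the comparison above
large_budget = 6e10   # $60B/year, an arbitrary stand-in for a lavishly funded sector

ratio = marginal_value(small_budget) / marginal_value(large_budget)
print(ratio)  # ~1e5: a marginal dollar goes much further at the small budget

# Total output under the same assumed curve, to show the flip side: the big
# budget still produces more in absolute terms, just at a far worse exchange rate.
print(math.log(large_budget) / math.log(small_budget))  # ~1.87
```

Under a differently shaped curve the ratio could be much smaller or larger; the sketch only shows why the shape of the curve, not just the total, is what matters for the value of a marginal donation.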
I think the creation of smarter-than-human intelligence has a (very) large probability of an (extremely) large impact, and that most of the probability mass there is concentrated into AI
That’s the probability statement in his post. He didn’t mention the probability of SIAI’s success, and he hasn’t given one when I’ve emailed him or asked in public forums, nor at any other time that I’ve heard of. Shortly after I asked, he posted When (Not) To Use Probabilities.
You might even be justified in refusing to use probabilities at this point. In all honesty, I really don’t know how to estimate the probability of solving an impossible problem that I have gone forth with intent to solve; in a case where I’ve previously solved some impossible problems, but the particular impossible problem is more difficult than anything I’ve yet solved, but I plan to work on it longer, etcetera.
Yes, I had read that, and perhaps even more apropos (from Shut up and do the impossible!):
People ask me how likely it is that humankind will survive, or how likely it is that anyone can build a Friendly AI, or how likely it is that I can build one. I really don’t know how to answer. I’m not being evasive; I don’t know how to put a probability estimate on my, or someone else, successfully shutting up and doing the impossible. Is it probability zero because it’s impossible? Obviously not. But how likely is it that this problem, like previous ones, will give up its unyielding blankness when I understand it better? It’s not truly impossible, I can see that much. But humanly impossible? Impossible to me in particular? I don’t know how to guess. I can’t even translate my intuitive feeling into a number, because the only intuitive feeling I have is that the “chance” depends heavily on my choices and unknown unknowns: a wildly unstable probability estimate.
But it’s not clear whether Eliezer means that he can’t even translate his intuitive feeling into a word like “small” or “medium”. I thought the comment I was replying to was saying that SIAI had a “medium” chance of success, given:
If you can’t argue for a medium probability of a large impact, you shouldn’t bother.
and
I don’t consider myself to be multiplying small probabilities by large utility intervals at any point in my strategy
But perhaps I misinterpreted? In any case, there’s still the question of what is rational for those of us who do think SIAI’s chance of success is “small”.
I thought he was taking the “don’t bother” approach by not giving a probability estimate or arguing about probabilities.
I propose that the rational act is to investigate approaches to greater-than-human intelligence which would succeed.
This. I’m flabbergasted this isn’t pursued further.
Sufficiently-Friendly AI can be hard for SIAI-now but easy or medium for non-SIAI-now (someone else now, someone else future, SIAI future). I personally believe this, since SIAI-now is fucked up (and SIAI-future very well will be too). (I won’t substantiate that claim here.) Eliezer didn’t talk about SIAI specifically. (He probably thinks SIAI will be at least as likely to succeed as anyone else because he thinks he’s super awesome, but it can’t be assumed he’d assert that with confidence, I think.)
Will you substantiate it elsewhere?
Second that interest in hearing it substantiated elsewhere.
Your comments are a cruel reminder that I’m in a world where some of the very best people I know are taken from me.
SingInst seems a lot better since I wrote that comment; you and Luke are doing some cool stuff. Around August everything was in a state of disarray and it was unclear if you’d manage to pull through.