Allow me to generalize: Don’t take anything too seriously. (By definition of “too”.)
I don’t (at all) assume that MIRI would in fact be effective in preventing disastrous-AI scenarios. I think that’s an open question, and in the very article we’re commenting on we can see that Holden Karnofsky of GiveWell gave the matter some thought and decided that MIRI’s work is probably counterproductive overall in that respect. (Some time ago; MIRI and/or HK’s opinions may have changed relevantly since then.) As I already mentioned, I do not myself donate to MIRI; I was trying to answer the question “why would anyone who isn’t crazy or stupid donate to MIRI?” and I think it’s reasonably clear that someone neither crazy nor stupid could decide that MIRI’s work does help to reduce the risk of AI-induced disaster.
(“Evil AIs running around and killing everybody”, though, is a curious choice of phrasing. It seems to fit much better with any number of rather silly science fiction movies than with anything MIRI and its supporters are actually arguing might happen. Which suggests that either you haven’t grasped what it is they are worried about, or you have grasped it but prefer inaccurate mockery to engagement—which is, of course, your inalienable right, but may not encourage people here to take your comments as seriously as you might prefer.)
I wasn’t intending to make a Pascal’s wager. Again, I am not myself a MIRI donor, but my understanding is that those who are generally think that the probability of AI-induced disaster is not very small. So the point isn’t that there’s this tiny probability of a huge disaster so we multiply (say) a 10^-6 chance of disaster by billions of lives lost and decide that we have to act urgently. It’s that (for the MIRI donor) there’s maybe a 10% -- or a 99% -- chance of AI-induced disaster if we aren’t super-careful, and they hope MIRI can substantially reduce that.
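(To make that contrast concrete, here is a minimal sketch of the expected-value arithmetic. The numbers are purely illustrative assumptions of mine, not anyone's actual estimates; the point is only that the donor's reasoning does not depend on multiplying a vanishingly small probability by an astronomical loss.)

```python
# Illustrative expected-value arithmetic. All figures are assumptions for
# the sake of the example, not real estimates.

def expected_lives_lost(p_disaster: float, lives_at_stake: float) -> float:
    """Naive expected loss: probability of disaster times lives lost if it happens."""
    return p_disaster * lives_at_stake

LIVES_AT_STAKE = 8e9  # roughly the current world population

# Pascal's-wager-style case: a vanishingly small probability of disaster.
pascal_case = expected_lives_lost(1e-6, LIVES_AT_STAKE)

# The case the MIRI donor believes they are in: a substantial probability.
donor_case = expected_lives_lost(0.10, LIVES_AT_STAKE)

print(f"Tiny-probability case: {pascal_case:,.0f} expected lives lost")
print(f"10%-probability case:  {donor_case:,.0f} expected lives lost")
```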
“other far more pressing and realistic problems in the world today”
The underlying argument here is—if I’m understanding right—something like this: “We know that there are people starving in Africa right now. We fear that there might some time in the future be danger from superintelligent artificial intelligences whose goals don’t match ours. *We should always prioritize known, present problems over future, uncertain ones.* So it’s silly to expend any effort worrying about AI.” I disagree with the premise I’ve emphasized there. Consider global warming: it probably isn’t doing us much harm yet; although the skeptics/deniers are probably wrong, it’s not altogether impossible that they’re right; so trying to deal with global warming also falls into the category of future, uncertain threats—and yet this was your first example of something that should obviously be given priority over AI safety.
I guess (but please correct me if I guess wrong) your response would be that the danger of AI is much much lower-probability than the danger of global warming. (Because the probability of producing AI at all is small, or because the probability of getting a substantially superhuman AI is small, or because a substantially superhuman AI would be very unlikely to do any harm, or whatever.) You might be right. How sure are you that you’re right, and why?
Extremely tiny probabilities with enormous utilities attached do run into Pascal’s-Mugging-type problems. That being said, AI-risk probabilities are, in my estimate, much larger than the sort of probabilities required for Pascal-type problems to start coming into play. Unless Perrr333 intends to suggest that probabilities involving UFAI really are that small, I think it’s unlikely he/she is actually making any sort of logical argument. It’s far more likely, I think, that he/she is making an argument from incredulity (disguised by seemingly logical arguments, but still at its core motivated by incredulity).
The problem with that, of course, is that arguments from incredulity rely almost exclusively on intuition, and the usefulness of intuition decreases spectacularly as scenarios become more esoteric and further removed from the realm of everyday experience.