I guess those are pretty vague words. It’s a (set of) research projects followed by thousands, if not tens of thousands, of people. Among these people are philanthropists and entrepreneurs who have donated millions of dollars to the cause and seem on track to donate even more. It’s received attention and support from major scientists and some world-famous people, including Stephen Hawking, Elon Musk, and very recently Bill Gates. Eliezer has been published alongside the academics of the Future of Humanity Institute, and his work has merited the respect of prominent thinkers in fields related to artificial intelligence. When his work has attracted derision, that too has been because his ideas attract enough attention for other prominent academics and thinkers to see fit to criticize him. If we evaluate a movement’s success on memetics alone, even that last observation might count in its favor.
The idea of dangers from superintelligence was debated in Aeon Magazine last year. Much of the effort to raise the profile of the issue and focus attention on it has been done by Nick Bostrom and the Future of Humanity Institute, the Future of Life Institute, and even the rest of the Machine Intelligence Research Institute aside from Eliezer himself. Still, he originated several theses on how to solve the problem and communicated them to the public.
This is gonna be maybe uncomfortably blunt, but: Eliezer seems to be playing a role in getting AI risk research off the ground similar to the role of Aubrey de Grey in getting life extension research off the ground. Namely, he’s the embarrassing crank with the facial hair who will not shut up, but who’s smart enough and informed enough to be making arguments that aren’t trivially dismissed. No one with real power wants that guy in the room, so such people don’t usually end up giving the TV interviews and going to the White House dinners once the idea does turn into a viable intellectual current. But if you need to get a really weird concept off the ground, you need such a person pushing it until it stops being really weird and starts being merely weird, because that’s when it becomes possible for Traditional Public Intellectuals to score some points by becoming early adopters without totally screwing up their credibility.
I wouldn’t use the word “crank” myself to describe either Yudkowsky or de Grey, but I perceive a grain of truth in this interpretation. Eliezer does say or write embarrassing things from time to time, though I wouldn’t be surprised if most of the embarrassing speech attributed to him is unrelated to machine intelligence. I don’t know enough about de Grey to have an opinion on how embarrassing he may or may not be. Nick Bostrom seems the sort of person who gets TV interviews; if not him, Stephen Hawking. Even if Stephen Hawking doesn’t get invited to White House dinners, I imagine Elon Musk or Bill Gates could easily get invited.
These men haven’t totally screwed up their credibility, but neither does it seem they’ve scored many points for speaking up about potential dangers from machine superintelligence. Elon Musk might have gained points with his $10 million donation to the Future of Life Institute, but he gains points for almost everything he does these days. Anyway, if Eliezer as Embarrassing Crank was necessary, it could be argued his role was just as important because he had the will and courage to become the Embarrassing Crank. Eliezer believes he’s playing a major part in saving the world, a responsibility he actually takes seriously and probably considers more important than public relations management. The mindset he has cultivated over a dozen years, of putting saving the world above status games, may explain why he doesn’t worry about expressing himself poorly, seeming ridiculous, or getting into tiffs with the media.
Well, at this point I think Eliezer’s basically succeeded in that role, and my evidence for that is that people like Hawking and Musk and Gates (the “Traditional Public Intellectuals” of my post, though only Hawking really fits that label well) have started picking up the AI safety theme; they won’t be getting credit for it until it goes truly mainstream, but that’s how the early adopter thing works in this context. I don’t know much about Nick Bostrom on a strategic level, but from what I’ve read of his publications he seems to be taking a complementary approach.
But if we ignore petty stuff like exactly what labels to use, I think we largely agree. The main thing I’m trying to get across is that you need a highly specific personality to bootstrap something like FAI research into the edges of the intellectual Overton window, and that while I (strongly!) sympathize with the people frustrated by e.g. the malaria drone thing or the infamous utopian Facebook post, I think it’s important to recognize that it comes from the same place the Sequences did.
That has implications in both directions, of course.
This is the message I failed to infer from your original reply. Yes, I concur we’re in agreement.
I would.
Thousands, if not tens of thousands? Try a few dozen, maybe.
By “research projects followed by”, I was again being vague. I didn’t mean there are thousands of people reading each and every publication that comes out of the MIRI, or is even linked to by its website as related to its research. I meant there are people interested in the problem, whether through exposure from LessWrong, the MIRI, the Singularity Summit, or similar events, who will return to thinking about the problem in future years. “Tens of thousands” means “at least twenty thousand”, which I doubt is true. The 2014 LessWrong survey had 1506 participants, most of whom I’d guess “are aware of the MIRI’s ongoing work”. Since that sample represents a larger group of LessWrong users, and counting the other sources I mentioned, I wouldn’t be surprised if a couple or a few thousand people pay at least cursory attention to the MIRI or related research. If it were actually ten thousand, that would surprise me.