This may be a stupid question, but is that mosquito laser drone thing really the best way to solve the problem of… what problem is it even solving? “Too many mosquitoes”? “Malaria”?
Your confusion is a clever ruse, but your username gives away your true motives!
Curses! I am undone!
There’s a much cheaper and much older flying platform for mosquito elimination. It’s called a bat.
EDIT: or perhaps the bred/genetically modified sterile mosquitoes that can wipe out populations in large areas?
Self-perpetuating area-wide techniques like mass release of modified mosquitoes with gene-drive systems are very probably a superior answer if the problem is “there are too many (i.e. any) human-feeding mosquitoes”.
If the problem is rather “what is the coolest-sounding possible way to wipe out mosquitoes”, then drone-mounted lasers are in the running.
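To make “self-perpetuating” concrete: a gene drive biases its own inheritance, so heterozygous offspring pass it on well over half the time, and even a small release ratchets up in frequency each generation. Below is a minimal toy model of that dynamic; the homing efficiency and release fraction are illustrative assumptions of mine, not figures from this thread, and fitness costs are ignored.

```python
# Toy deterministic model of a homing gene drive spreading under random mating.
# All parameters (1% release, 90% homing) are illustrative assumptions.

def next_gen_freq(p, homing=0.9):
    """Drive-allele frequency after one generation.

    Heterozygotes transmit the drive allele with probability (1 + homing) / 2
    rather than the Mendelian 1/2; fitness costs are ignored for simplicity.
    """
    return p * p + p * (1.0 - p) * (1.0 + homing)

p = 0.01  # release modified mosquitoes until 1% of alleles carry the drive
for generation in range(1, 16):
    p = next_gen_freq(p)
    print(f"generation {generation:2d}: drive-allele frequency = {p:.3f}")
```

Under these assumptions the drive allele passes 90% frequency within about a dozen generations, which is what makes a one-time release area-wide and self-sustaining rather than something that has to be re-applied.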
Wiki says the idea has been suggested in earnest as one of the forms a mosquito laser could take, and was rejected in favor of a better one.
I don’t think there is any quadcopter that can fly for more than 30 minutes on one battery charge—and that’s without mosquito recognition and zapping systems drawing on that same battery.
Also, having quadcopters flying around zapping insects is at least going to be visually distracting.
Not to mention, the leading cause of propeller-induced face laceration syndrome...
Right, this sort of thing is only practical given fully automated battery replacement.
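For a rough sense of the numbers behind that 30-minute ceiling, here is a back-of-the-envelope endurance estimate. Every figure (pack size, hover draw, payload draw) is an assumed, typical-ish value picked for illustration, not data from the thread:

```python
# Back-of-the-envelope endurance check for a small quadcopter.
# Every figure below is an assumed, illustrative value.

battery_voltage_v = 14.8      # 4S lithium-polymer pack
battery_capacity_ah = 4.0     # 4000 mAh
usable_fraction = 0.8         # don't run the pack flat

hover_power_w = 200.0         # rough hover draw assumed for a ~1.5 kg quadcopter
payload_power_w = 30.0        # extra draw assumed for cameras plus targeting laser

energy_wh = battery_voltage_v * battery_capacity_ah * usable_fraction
endurance_min = 60.0 * energy_wh / (hover_power_w + payload_power_w)
print(f"usable energy: {energy_wh:.0f} Wh, endurance: {endurance_min:.0f} min")
```

Bigger packs help, but added pack mass raises the hover power too, so endurance grows slowly—hence the point above about fully automated battery replacement.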
Some perspective:

Instead of closed offices, most people worked or learned under the sunlight, shielded by a glass screen overhead that kept out the rain and ultraviolet, with your own space concealed by curtains that could be opened or shut to indicate botherability. If you needed silence for concentration, you used earplugs. People who needed to have loud conversations without disturbing others would have enclosed rooms with doors and glass ceilings and air conditioning. If you showed the serious people a world where most people never saw the sun while they worked, they’d flip out and then correct the problem. Skyscrapers weren’t much built in dath ilan until we had extremely bright artificial light that could mostly substitute for sunlight, and they were all put in locations where skyscrapers were explicitly allowed. Blocking out someone else’s sun would be a serious transgression, and symbolic.

We had laser zappers and other measures that destroyed bugs and mosquitos and wasps and bees—these were considered far more annoying in dath ilan than Earth, and our civilization put a lot of effort and technology into rooting them out, or preventing them from getting a foothold within the great city. On the “beware of trivial inconveniences” scale, I suspect that an absence of little flying bugs, to say nothing of bugs that bit and stung and made noises, might be part of why people did their daily work beneath sunlight, in open air. I think there was a variety of butterfly that was bred to pollinate flowers and such within cities, in place of bees—at least I know that we weren’t supposed to crush butterflies.
From here. Basically, Eliezer thinks people should work outside but don’t because of insect problems (among other things).
So… people should work inside greenhouses? I can see more than one problem with this.
Air-conditioned greenhouses.
Not quite greenhouses. It seems like Eliezer is saying it would be a glass canopy without enclosing walls (so you would still get natural fresh air flow).
This might be a good idea if there were some way to stop screen glare.
E-ink.
Laser-armed mosquito-exterminating drones seem an uncommon and difficult-to-engineer enough approach that anyone serious enough to launch such an endeavor, let alone give it a public-facing interface to raise awareness or gauge interest, would be impressive and rare enough that Eliezer would want to at least get in touch with them. Either that, or he is trolling us with all his projects near the end, or is partially trolling us by interjecting fake interest in ridiculously ambitious projects between his real interest in other ridiculously ambitious projects.
Another source of perspective is the fact that Eliezer has turned research into engineering safety mechanisms for advanced machine agents into a seriously substantial movement, and he did this by blogging about rationality for two years and then writing a Harry Potter fanfiction over the course of the succeeding five years. From Eliezer’s perspective, his own rationality and that of his network, combined with the gusto of the surrounding community, may be enough to achieve very ambitious projects that start from seemingly ridiculous premises.
Yet another source of perspective is reading the documents he wrote on the subject around the turn of the millennium, circa the founding of SIAI. They can only be described as ‘hilarious’. There were specs for a programming language that would, by its design, ‘do what I mean’ (specs that make my programmer friends laugh), complicated AI architectures, and ideas for the social engineering they would do with the gigadollars that would be rolling in, all to bring about the singularity by 2010 so as to avoid the apocalyptic Nanowar that was coming.
Well, Eliezer’s about… what, 35? Not far off that, anyway. I’m sure I wrote some stuff that was at least that embarrassing when I was 20, though it wouldn’t have been under my own name or wouldn’t have had any public exposure to speak of or both.
I just want to note we’re not discussing laser-armed mosquito-terminating drones anymore. That’s fine. Anyway, I’m a bit older than Eliezer was when he founded the Singularity Institute for Artificial Intelligence. While starting a non-profit organization at that age seems impressive to me, once it’s been legally incorporated I’d guess one can slap just about any name they like on it. The SIAI doesn’t seem to have achieved much in the first few years of its operation.
Based on their history from the Wikipedia page on the Machine Intelligence Research Institute, it seems to me the notability of the organization’s achievements is commensurate with how long it’s been around. For several years, as the Singularity Institute, they also ran the Singularity Summit, which they eventually sold as a property to Singularity University for one million dollars. Eliezer Yudkowsky contributed two chapters to Global Catastrophic Risks in 2008, at the age of 28, without having completed either secondary school or a university education.
On the other hand, the MIRI has made serious mistakes in operations, research, and outreach over its history. Eliezer Yudkowsky is obviously an impressive person for various reasons. I think the conclusion is that Eliezer sometimes assumes he’s enough of a ‘rationalist’ that he can get away with being lazy about how he plans or portrays his ideas. He doesn’t seem to be much of a communications consequentialist, and seems reluctant to declare mea culpa when he makes those sorts of mistakes. All things being equal, especially if we haven’t tallied Eliezer’s track record, we should remain skeptical of his plans when they rest on shoddy grounds. I also don’t believe we should take the bonus requests and ideas at the end of the post seriously.
Flare (the language) didn’t sound that dumb to me—my impression wasn’t that it would inherently ‘do what I mean’ but that it would somehow be both machine- and human-readable, so that it would be easy to run advanced optimising compilers over it, and later it would provide a natural basis for an AI that could rewrite its own source code.
Looking back on it, this is way too much of a free lunch, and since an AI capable of understanding AI theory would probably also be able to parse the meaning of code written in conventional languages, it’s rather redundant. I still expect that ‘do what I mean’ languages will appear; for instance, the language could detect ‘obvious’ mistakes, correct them, and inform the user.
e.g. “x * y=z does not work because the dimensions do not match. Nor does x’ * y=z, but x * y’=z does, so I have taken the liberty of changing your code to x * y’=z”
or “‘inutaliseation’ is not a function or variable. I assume you meant ‘initialization’, which is a function, and I corrected this mistake”
Eventually, it might evolve into a natural language to code translator.
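As a sketch of how the ‘obvious mistakes’ case could work today, without anything as ambitious as Flare: the toolchain keeps a table of known identifiers and uses fuzzy matching to propose (or apply) a correction, much like the ‘inutaliseation’ example above. This is a hypothetical illustration, not a description of any real compiler:

```python
# Toy illustration of the "do what I mean" idea from the examples above:
# flag an unknown identifier, guess the intended one, and report the fix.

import difflib

known_names = ["initialization", "iterate", "input_table"]

def resolve_name(name):
    """Return the name if it is known; otherwise guess the closest match."""
    if name in known_names:
        return name
    matches = difflib.get_close_matches(name, known_names, n=1, cutoff=0.6)
    if matches:
        print(f"'{name}' is not a function or variable. "
              f"Assuming you meant '{matches[0]}'; correcting this mistake.")
        return matches[0]
    raise NameError(f"'{name}' is not defined and no close match was found.")

print(resolve_name("inutaliseation"))   # suggests and returns 'initialization'
```

Real compilers such as rustc and clang already do the “did you mean” half of this; the step imagined here is letting the tool apply the fix and merely inform you.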
But yes, a nanowar by 2010 wasn’t the smartest idea.
SIAI started before the rationality blogging. Vernor Vinge warned about AI causing the end of the human race back in 1993.
I have difficulty accepting that a substantial portion of FAI researchers were drawn to the subject by HPMOR.
(Of course, FAI researchers, LWers and HPMOR fans are distinct groups of people)
Information on the history of the MIRI from 2002 through 2006 is sparse, as gleaned from the Wikipedia page on the organization. As the SIAI in 2006, they successfully raised $200,000 in a donation campaign, with $100,000 of that matched by Peter Thiel. In the years since, the MIRI seems to have held fundraisers at least once annually that turn out just as successful. “The Sequences” were scarcely started in 2006, so I don’t know if Peter Thiel got wind of Eliezer’s ideas and organization on SL4, or Overcoming Bias, or what. Anyway, while Vinge, and earlier I. J. Good, warned against the dangers of machine superintelligence, Eliezer founded a research organization aimed at solving this problem, formulated its mission, and popularized it through his meetings. I’m using metrics such as the raised profile of risks from machine intelligence, and the amount of vocal support and donations the MIRI receives, as a proxy for how much the organization, and Eliezer specifically, have raised the profile of this field of inquiry and concern. I assume others would not have done so much for the MIRI if they didn’t believe in its mission. Most of the recent coverage should probably be attributed to Nick Bostrom and his recent book, though.
At the 2014 Effective Altruism Summit, Eliezer reported there are only four full-time FAI researchers in the world: himself, Nate Soares, and Benja Fallenstein of the MIRI, and Stuart Armstrong of the FHI. I was incredulous, and guessed Eliezer’s definition of ‘FAI researcher’ was more stringent than most sensible people would use. I asked Luke Muehlhauser for clarification. He remarked that beyond those four, Paul Christiano might count as ‘half a FAI researcher’, because he spends a portion of his time as a mathematician at UC Berkeley working on mathematics in line with the MIRI’s research agenda. The MIRI has since hired Patrick LaVictoire, and perhaps others.
The point is, the MIRI itself thinks there are fewer than a dozen FAI researchers. For all we know, all FAI researchers might be users of LessWrong and HPMoR fans. I could ask all of the known “FAI researchers” whether they were first introduced to these research ideas through LessWrong or through HPMoR. That indeed might be a “substantial portion”. You or I might qualify “FAI researcher” differently, but Eliezer by his own admission believes writing more HPMoR is one of the surprisingly best ways to draw the attention of Math Olympiad contestants to their research, as does the MIRI.
I.J. Good also warned about it in the 1960s.
Indeed, although since this was before the internet, it didn’t start any sort of movement.
That may not be the only reason it didn’t get off the ground as a movement. Movements have existed before the internet. However, in a different way the internet may matter: a world with the internet and modern computers may make something like a superintelligent AI more viscerally plausible as a possibility.
Movements certainly have existed before the net, but generally where there is a high enough density of potential members to organise via word of mouth and print media. With the possible exception of a few places such as Silicon Valley, I don’t think that exists in this case.
I do agree with you that in many ways superintelligence seems more plausible given modern technology, but OTOH people are cautious after the AI winters.
In what way is it a seriously substantial movement?
I guess those are pretty vague words. It’s a (set of) research projects followed by thousands, if not tens of thousands, of people. Among these people are philanthropists and entrepreneurs who have donated millions of dollars to the cause, and who seem to be on track to donate even more money. It’s received attention and support from major scientists, and from some world-famous people, including Stephen Hawking, Elon Musk, and very recently Bill Gates. Eliezer has been published alongside the academics from the Future of Humanity Institute, and his work has merited the respect of prominent thinkers in fields related to artificial intelligence. When his work has attracted derision, it has also been because his ideas attract enough attention for other prominent academics and thinkers to see fit to criticize him. If we evaluate the success of a movement on the basis of memetics alone, this last observation might also count.
The idea of dangers from superintelligence was debated in Aeon Magazine last year. Much of the effort and work to raise the profile of the issue and increase focus upon it has been done by Nick Bostrom and the Future of Humanity Institute, the Future of Life Institute, and even the rest of the Machine Intelligence Research Institute aside from Eliezer himself. Still, though, he initiated several theses on solving the problem, and communicated them to the public.
This is gonna be maybe uncomfortably blunt, but: Eliezer seems to be playing a role in getting AI risk research off the ground similar to the role of Aubrey de Gray in getting life extension research off the ground. Namely, he’s the embarrassing crank with the facial hair that will not shut up, but who’s smart enough and informed enough to be making arguments that aren’t trivially dismissed. No one with real power wants to have that guy in the room, and so they don’t usually end up as the person giving TV interviews and going to White House dinners and such when it does turn into a viable intellectual current. But if you need to get a really weird concept off the ground, you need to have such a person pushing it until it stops being really weird and starts being merely weird, because that’s when it becomes possible for Traditional Public Intellectuals to score some points by becoming early adopters without totally screwing up their credibility.
I wouldn’t use the word “crank” myself to describe either Yudkowsky or de Grey, but I perceive there may be a grain of truth in this interpretation. Eliezer does say or write embarrassing things from time to time. I wouldn’t be surprised if the majority of the embarrassing speech attributed to him is not related to machine intelligence. I don’t know enough about de Grey to have an opinion about how embarrassing he may or may not be. Nick Bostrom seems the sort of person who gets TV interviews. If not him, Stephen Hawking. Even if Stephen Hawking doesn’t get invited to White House dinners, I imagine Elon Musk or Bill Gates could easily get invited.
These men haven’t totally screwed up their credibility, but neither does it seem they’ve scored lots of points for speaking up about potential dangers from machine superintelligence. Musk might have gained points with his $10 million donation to the Future of Life Institute; then again, he gains points for almost everything he does these days. Anyway, if Eliezer as Embarrassing Crank was necessary, it could be argued his role was just as important because he had the will and courage to become the Embarrassing Crank. Eliezer believes he’s playing a major part in saving the world, a role he takes seriously and probably considers more important than public relations management. The mindset Eliezer has cultivated over a dozen years, of being above caring about status games compared to saving the world, might explain why he doesn’t mind expressing himself poorly, seeming ridiculous, or getting into tiffs with the media.
Well, at this point I think Eliezer’s basically succeeded in that role, and my evidence for that is that people like Hawking and Musk and Gates (the “Traditional Public Intellectuals” of my post, though only Hawking really fits that label well) have started picking up the AI safety theme; they won’t be getting credit for it until it goes truly mainstream, but that’s how the early adopter thing works in this context. I don’t know much about Nick Bostrom on a strategic level, but from what I’ve read of his publications he seems to be taking a complementary approach.
But if we ignore petty stuff like exactly what labels to use, I think we largely agree. The main thing I’m trying to get across is that you need a highly specific personality to bootstrap something like FAI research into the edges of the intellectual Overton window, and that while I (strongly!) sympathize with the people frustrated by e.g. the malaria drone thing or the infamous utopian Facebook post, I think it’s important to recognize that it comes from the same place that the Sequences did.
That has implications in both directions, of course.
This is the message I missed inferring from your original reply. Yes, I concur we’re in agreement.
I would.
Thousands, if not tens of thousands? Try a few dozen, maybe.
By “research projects followed by”, I was again being vague. I didn’t mean there are thousands of people reading each and every publication that comes out of the MIRI, or that is even linked from its website as related to its research. I meant there are people interested in the problem, whether through exposure to LessWrong, the MIRI, or the Singularity Summit and similar events, who will return to think about the problem in future years. “Tens of thousands” means “at least twenty thousand”, which I doubt is true. The 2014 LessWrong survey had 1506 participants, most of whom, I’d guess, “are aware of the MIRI’s ongoing work”. As this sample is representative of a larger group of LessWrong users, and counting the other sources I mentioned, I wouldn’t be surprised if there are a couple or a few thousand people paying attention to the MIRI or related research in at least a cursory way. If it were actually ten thousand, that might surprise me.
Wouldn’t a stationary laser system be simpler? At least as an initial minimum viable product?