I have a suspicion that one of the factors holding back donations from big names (think Peter Thiel level) is the absence of visibility. Both in the sense that it isn’t as “cool” as the Bill and Melinda Gates Foundation (that is, there isn’t an existing public perception that issues such as x-risk are charity-worthy, the way there is for, say, sponsoring underprivileged children to take part in a sporting event), and in the sense that it isn’t as “visible” (to continue with the children example, a lot of publicity can be gained by putting up photos of visibly malnourished children sitting together in a line, full of smiles for the camera).
The distinction I have made between the two is artificial, but I thought it was the best way to illustrate that the disadvantages suffered by FHI, MIRI, and that cluster of institutes occur on two different levels.
However, the second point, about visibility, is genuinely a little concerning. MIRI has been criticized for not doing much except publishing papers. That doesn’t look good, and it is hard for a layman to feel that giving away a portion of his salary just to see a new set of math formulas (looking much like the formulas he saw last month) is a good use of his money, especially if he doesn’t see it directly helping anyone.
I understand that, given the nature of the research being undertaken, this may be all we can hope for, but if there is a better way for MIRI to signal its accountability, then I think it should be done. Pronto.
Also, could someone who is so inclined take the math/code being produced and simplify it enough that an average LW-er such as yours truly could make more sense of it?
Really? In the past, MIRI was constantly criticized for not publishing any papers.
I see.
I take it that this is a damned-if-you-do, damned-if-you-don’t kind of situation.
I’m not able to find the source right now (the one that criticized MIRI on those grounds), but I’m pretty certain it wasn’t a very credible or respectable source to begin with. As far as I can recall, it was Stephen Bond, the same guy who wrote the article on “the cult of Bayes’ theorem”; there was a link to his page from Yudkowsky’s Wikipedia page, but it is not there anymore.
I simply brought up this example to show how easy it is to tarnish an image, something I’m sure you’re well aware of. Nonetheless, my point still stands. IMAGE MATTERS.
It doesn’t make a difference that the good (and ingenious) folk at MIRI are doing some of the most important work around, work that may at any given moment solve a large number of headaches for the human race. There are others out there making that same claim. And because some of those others are politicians wearing fancy suits, people will listen to them. (Don’t even get me started on the saints and priests who successfully manage to make decent, hard-working folk part with large portions of their lifetime’s savings, but those cases are a little beyond the scope of this particular argument.)
A real estate agent can point to a rising skyscraper as evidence of money being put to good use. A NASA-type organisation (slightly tongue in cheek, just indicating a cluster) can point to a satellite orbiting Mars. A biotech company may one day point to a fully lab-grown human with perfect glowing skin. A nanotech company may one day point to the world’s smallest robot doing “the robot”.
The above examples have two things in common. First, they are visible in the most literal sense of the word. Second, I believe most people have a ready intuition by which they can see how achieving any of the above would require a large amount of cash/funding.
Software is harder to impress people with. Even harder if the software is genuinely complicated. To make matters worse, the media has flooded the imagination of newspaper readers all over the world with rags-to-riches stories of entrepreneurs who made it big after being content with mere ramen profitability for long years.
And yet institutions that are ostensibly purely academic and research-oriented also require funding. I don’t dispute that. I’ve read HPMoR, and I’ve read portions of the LW site as well. I know that this is likely for real, and that the proponents of research into these areas have built up more than enough credibility.
Unfortunately, I’m in the minority. And as of now, I’m a far cry from being financially sound. If MIRI/FHI have to accelerate their research and need funding for it, then it is not a bad idea to make their progress seem more tangible, even if they can’t deliver every single detail every single time.
One possible major downside of this approach, of course, is that it might eat into valuable time which could otherwise be spent making the real progress that these institutions were created for in the first place.
I don’t think you can call Nick Bostrom not visible. He made Foreign Policy’s Top 100 Global Thinkers list. He also wrote the book last year.
By whom? By the traditional metric of published papers, MIRI is an exceptionally unproductive research organization: only a few low-impact peer-reviewed papers, mostly in the last few years, despite a decade of funding. It’s probably fair to say that donations to the old SIAI were more likely to go toward blog posts and fanfic than toward research papers.
Yeah, I was actually trying to say that they need to do other stuff too, not cut down on publishing papers.
You might wanna weigh in on this: http://lesswrong.com/lw/l13/the_future_of_humanity_institute_could_make_use/be4o
Before dismissing blog posts, keep in mind that the Sequences were blog posts. And they are probably much more useful and important than all but the best academic papers. If current donations happened to lead to blog posts of that caliber, the donations would be money well spent.
How are we measuring useful or important? The Sequences are entertaining, but it’s not clear to me that they do much to actually help with the core goals of MIRI (besides the goal of entertaining people enough to fund MIRI, perhaps).
The advantage of a high-impact academic paper is that it shapes the culture of academic research. A good idea in a well-received research paper will almost instantly lead to lots of other researchers working on the same problems. A great idea in a well-received research paper can get an entire sub-field working on the same problem.
The Sequences are more advertisements than formalized research. It’s papers like the one on Löb’s obstacle that get researchers interested in working on these problems.
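For anyone who hasn’t run into that paper, here is a minimal sketch of the result it builds on; this is my own informal gloss, not the paper’s wording. Löb’s theorem, with $\Box P$ read as “$P$ is provable in Peano Arithmetic”:

$$\text{if } \mathrm{PA} \vdash \Box P \to P \text{, then } \mathrm{PA} \vdash P, \qquad \text{internalized as} \quad \mathrm{PA} \vdash \Box(\Box P \to P) \to \Box P.$$

The “obstacle”, roughly: an agent that acts only on actions it can prove safe cannot endorse a successor that uses the same proof system, because trusting the successor’s proofs amounts to assuming $\Box(\mathit{safe}) \to \mathit{safe}$, which by Löb’s theorem would require proving safety outright. So naive proof-based self-modification forces each successor onto a strictly weaker proof system.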
I think that’s up for debate.
And the sequences aren’t “just advertisements”.
I don’t know any LW-ers in person, but I’m sure that at least some people have benefited from reading the sequences.
Can’t really speak on behalf of researchers, but their motivations could be anything, from simply finding the work interesting, to altruistic reasons, to financial incentives.
You miss my meaning. The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regards to that goal, the sequences are advertising.
With regards to their core goal, the sequences matter if (1) they lead to people donating to MIRI, or (2) they lead to people working on friendly AI.
I view point 1 as advertising, and I think research papers are obviously better than the sequences for point 2.
Kinda… more specifically, a big part of what they are is an attempt at insurance against the possibility that there exists someone out there (probably young) with more innate potential for FAI research than EY himself possesses, but who never finds out about FAI research at all.
A big part of the purpose of the Sequences is to kill likely mistakes and missteps from smart people trying to think about AI. ‘Friendly AI’ is a sufficiently difficult problem that it may be more urgent to raise the sanity waterline, filter for technical and philosophical insight, and amplify that insight (e.g., through CFAR), than to merely inform academia that AI is risky. Given people’s tendencies to leap on the first solution that pops into their head, indulge in anthropomorphism and optimism, and become inoculated to arguments that don’t fully persuade them on the first go, there’s a case to be made for improving people’s epistemic rationality, and honing the MIRI arguments more carefully, before diving into outreach.
By whom? I mean, what should MIRI do other than publish research papers?
Of course, if I did get such a version of the code, I might end up tinkering with it and inadvertently creating the paperclip maximiser.
Though if I ended up creating Quirinus Quirrell, I’m not sure whether that would be a good thing or not.
P.S. This was meant as a joke.