Before dismissing blog posts, keep in mind that the Sequences were blog posts. And they are probably much more useful and important than all but the best academic papers. If current donations happened to lead to blog posts of that caliber, the donations would be money well spent.
How are we measuring "useful" or "important"? The sequences are entertaining, but it's not clear to me that they do much to actually help with the core goals of MIRI (besides the goal of entertaining people enough to fund MIRI, perhaps).
The advantage of a high-impact academic paper is that it shapes the culture of academic research. A good idea in a well-received research paper will almost instantly lead to lots of other researchers working on the same problems. A great idea in a well-received research paper can get an entire sub-field working on the same problem.
The sequences are more advertisements than formalized research. It's papers like the one on Löb's obstacle that get researchers interested in working on these problems.
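(For readers who don't know the reference: the Löbian obstacle turns on Löb's theorem. A standard statement of the theorem, not quoted from the paper itself, is

$$\text{if } \mathrm{PA} \vdash \Box P \rightarrow P, \text{ then } \mathrm{PA} \vdash P,$$

where $\Box P$ abbreviates the arithmetized provability predicate $\mathrm{Prov}_{\mathrm{PA}}(\ulcorner P \urcorner)$. Roughly: a system that can prove "if I prove $P$, then $P$ is true" must already prove $P$ outright, which is what makes it hard for an agent to formally trust a successor that reasons in the agent's own proof system.)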
I think that’s up for debate.
And the sequences aren’t “just advertisements”.
I don’t know any LW-ers in person, but I’m sure that at least some people have benefited from reading the sequences.
I can't really speak on behalf of researchers, but their motivations could be almost anything, from simply finding the work interesting to altruistic reasons or financial incentives.
You miss my meaning. The stated core goal of MIRI/the old SIAI is to develop friendly AI. With regard to that goal, the sequences are advertising. The sequences matter for that goal if (1) they lead to people donating to MIRI, or (2) they lead to people working on friendly AI. I view point 1 as advertising, and I think research papers are obviously better than the sequences for point 2.
Kinda… more specifically, a big part of what they are is an attempt at insurance against the possibility that there exists someone out there (probably young) with more innate potential for FAI research than EY himself possesses, but who never finds out about FAI research at all.
A big part of the purpose of the Sequences is to head off likely mistakes and missteps by smart people trying to think about AI. 'Friendly AI' is a sufficiently difficult problem that it may be more urgent to raise the sanity waterline, filter for technical and philosophical insight, and amplify that insight (e.g., through CFAR), than to merely inform academia that AI is risky. Given people's tendencies to leap on the first solution that pops into their head, indulge in anthropomorphism and optimism, and become inoculated to arguments that don't fully persuade them on the first go, there's a case to be made for improving people's epistemic rationality, and honing the MIRI arguments more carefully, before diving into outreach.