(1) Write a short, introductory, thoroughly cited guide on each major concept employed by SIAI / LW.
As an example, this is what I’m currently doing for the point about why standard, simple designs for machine ethics will result in disaster if implemented in a superintelligent machine. Right now, you have to read hundreds of pages of dense material that references unusual terms described in hundreds of other pages all across Less Wrong and SIAI’s website. That is unnecessary, and doesn’t help public perception of SIAI / LW. It looks like we’re being purposely obscurantist and cult-like.
Why an intelligence explosion is probable is another good example of this.
(2) Engage the professional community. Somebody goes to SIAI’s page looking for accomplishments and sees not a single article in a peer-reviewed journal. Compare this to, um… the accomplishments page of every other 10-year-old research institute or university research program on the planet.
EDIT: I should note that in the course of not publishing papers in journals and not engaging the mainstream community, SIAI has managed to get almost a decade ahead of everyone else. Having just read quite nearly the entirety of the extant literature in the field of machine ethics, I can say with some confidence that the machine ethics field still hasn’t caught up to where Eliezer was circa 2001.
So of course SIAI can work much more quickly if it doesn’t bother to absorb the entirety of the (mostly useless) machine ethics literature and then write papers that use the same language and style as the mainstream community and cite all the same papers.
The problem is that if you don’t write all those papers, then people keep asking you dumb questions like “Why can’t we just tell it to maximize human happiness?” You have to keep answering that question because there is no readable, thoroughly cited, mainstream-language guide that answers those types of questions. (Except the one I’m writing now.)
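For readers coming from outside LW, here is a minimal, purely illustrative sketch of the failure mode that question runs into (the plan descriptions, scores, and the naive_utility helper are my own hypothetical stand-ins, not anything from SIAI’s actual work): an agent that literally maximizes a crude, measurable “happiness” proxy picks whichever plan scores highest on the proxy, not the plan we meant.

```python
# Toy illustration only: "maximize human happiness" operationalized as a
# crude measurable proxy. The plans and numbers are hypothetical.

candidate_plans = {
    "cure diseases and reduce poverty": 8.1,             # what we meant
    "put everyone on a permanent dopamine drip": 10.0,   # games the proxy
}

def naive_utility(plan: str) -> float:
    """Proxy score: average reported 'happiness' on a 0-10 scale."""
    return candidate_plans[plan]

# A literal maximizer of the stated objective:
best_plan = max(candidate_plans, key=naive_utility)
print(best_plan)  # -> "put everyone on a permanent dopamine drip"
```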
Also, not publishing those papers in mainstream journals leaves you with less credibility in the eyes of anyone savvy enough to know there is a difference between conference papers and articles accepted by mainstream journals.
So I think it’s worth all that effort, though probably not for somebody like Yudkowsky. He should be working on TDT and CEV, I imagine. Not reading papers about Kantian solutions to machine ethics.