I hope SI will agree that the FAQ answer you linked is inadequate.
As Randaly notes, an FAQ of short answers to common questions is the wrong place to look for in-depth analysis and detailed self-skepticism! Also, the FAQ links directly to papers that do respond in some detail to the objections mentioned.
Another point to make is that SI has enough of a culture of self-skepticism that its current mission (something like “put off the singularity until we can make it go well”) is nearly the opposite of its original mission (“make the singularity happen as quickly as possible”). The story of that transition is here.
Nonetheless, I object to the FAQ answer on the grounds that its ontology of objections to the likelihood of a singularity lacks a category that you’d expect to contain my objections.
I admit that the problem is only that the FAQ entry hasn’t been perfected, not that the objections weren’t considered in detail elsewhere. Thus, it’s no evidence of Uri-Gellerism, and more evidence of a lack of resources spent on the FAQ.
The SIAI seems very open to volunteer work and to any proposed change to its current methodologies and strategies, provided the change can be shown to be an improvement.
Perhaps you’d like to curate a large library of precedent objections, along with the historical responses given to them, so as to facilitate their work of incrementally responding to more and more objections?
Please keep in mind that anyone not trained in The Way who comes across the SIAI and finds that its claims conflict with their beliefs will often do their utmost to find the first unanswered criticism, so as to declare the matter “Closed for not taking into account my objection!” If the answers to what currently qualify as “the most common objections” are displayed prominently, future newcomers will read them and formulate new objections, which will become that time’s “most common objections”, and so on.
I’m sure this argument has been made in better form elsewhere, but I’m not sure the inherent difficulty of formulating a comprehensive, objection-proof FAQ has been clearly communicated.
To (very poorly) paraphrase Eliezer*: “The obvious solution to you just isn’t. It wasn’t obvious to X, it wasn’t obvious to Y, and it certainly wasn’t obvious to [Insert list of prominent specialists in the field] either, who all thought they had the obvious solution to building “safe” AIs.”
This also holds true of objections to SIAI, AFAICT. What seems like an “obvious” rebuttal or a “common” complaint to one person might not be to the next person who comes along. Perhaps a more comprehensive list of “common objections” and official SIAI responses might help, but is it cost-efficient in the overall strategy? Factor in the likelihood that any particular objector who still objects after reading the FAQ would really be more convinced by an even longer list of responses… I believe simple movement-building, or even mere propaganda, might be more cost-effective in raw counts of “people made aware of the issue”, “donors gained”, and maybe even “researchers sensitized to the issue”.
Whether or not a non-self-modifying planning Oracle is the best solution in the end, it’s not such an obvious privileged-point-in-solution-space that someone should be alarmed at SIAI not discussing it. This is empirically verifiable in the sense that ‘tool AI’ wasn’t the obvious solution to e.g. John McCarthy, Marvin Minsky, I. J. Good, Peter Norvig, Vernor Vinge, or for that matter Isaac Asimov.
-Reply to Holden on Tool AI

* Edit: Correct quote in reply by Grognor, thanks!