I hope SI will agree that the FAQ answer you linked is inadequate (either overlooking some common objections, or lumping them together dismissively as unspecified obstacles that will be revealed in the future). For example, “building an AI seems hard; no human (even given much longer lifespans) or team of humans will ever be smart enough to build something that leads to an intelligence explosion” and “computing devices that can realistically model an entire human brain (even taking shortcuts on parts that turn out to be irrelevant to intelligence) will be prohibitively expensive and slow” are both plausible.
And yes, even if the answer is later improved, its current inadequacy does suggest a possible pattern. It could just be a lack of resources available to create high-quality, comprehensive answers to objections. Or it could be that SI is slightly more like Uri Geller in not doubting itself than GiveWell is.
Is GiveWell really doubting itself or its premise—that it’s worth spending extra money evaluating where to give money? (actually, I think it is worth it, but that’s not my point).
I hope SI will agree that the FAQ answer you linked is inadequate
As Randaly notes, an FAQ of short answers to common questions is the wrong place to look for in-depth analysis and detailed self-skepticism! Also, the FAQ links directly to papers that do respond in some detail to the objections mentioned.
Another point to make is that SI has enough of a culture of self-skepticism that its current mission (something like “put off the singularity until we can make it go well”) is nearly the opposite of its original mission (“make the singularity happen as quickly as possible”). The story of that transition is here.
Nonetheless, I object to the FAQ answer on the grounds that its ontology of objections to the likelihood of a singularity lacks a category you’d expect to contain my objections.
I admit that the problem is only that the FAQ entry hasn’t been perfected, not that the objections weren’t considered in detail elsewhere. Thus, it’s not evidence of Uri-Gellerism, but rather of a lack of resources spent on the FAQ.
The SIAI seems very open to volunteer work and to any offered improvement to its current methodologies and strategies, provided the change can be shown to be an improvement.
Perhaps you’d like to curate a large library of past objections, along with the historical responses given to them, so as to facilitate SIAI’s work of incrementally responding to more and more objections?
Please keep in mind that anyone not trained in The Way who comes across the SIAI and finds that its claims conflict with their beliefs will often do their utmost to find the first unanswered criticism and declare the matter “Closed for not taking my objection into account!”. If what currently qualifies as “the most common objections” is answered and the answers displayed prominently, future newcomers will read those and then formulate new objections, which will become that time’s “most common objections”, and so on.
I’m sure this argument has been made in better form somewhere else before, but I’m not sure the inherent difficulty of formulating a comprehensive, objection-proof FAQ has been clearly communicated.
To (very poorly) paraphrase Eliezer*: “The obvious solution to you just isn’t. It wasn’t obvious to X, it wasn’t obvious to Y, and it certainly wasn’t obvious to [Insert list of prominent specialists in the field] either, who all thought they had the obvious solution to building “safe” AIs.”
This also holds true of objections to SIAI, AFAICT. What seems like an “obvious” rebuttal, objection, etc., or a “common” complaint to one person might not be to the next person who comes along. Perhaps a more comprehensive list of “common objections” and official SIAI responses might help, but is it cost-efficient in the overall strategy? Factor in the likelihood that any particular objector, still objecting after having read the FAQ, would really be more convinced after reading a longer list of responses… I believe simple movement-building, or even mere propaganda, might be more cost-effective in raw counts of “people made aware of the issue”, “donors gained”, and maybe even “researchers sensitized to the issue”.

* Edit: Correct quote in reply by Grognor, thanks!
Whether or not a non-self-modifying planning Oracle is the best solution in the end, it’s not such an obvious privileged-point-in-solution-space that someone should be alarmed at SIAI not discussing it. This is empirically verifiable in the sense that ‘tool AI’ wasn’t the obvious solution to e.g. John McCarthy, Marvin Minsky, I. J. Good, Peter Norvig, Vernor Vinge, or for that matter Isaac Asimov.

-Reply to Holden on Tool AI
Is GiveWell really doubting itself or its premise—that it’s worth spending extra money evaluating where to give money?
That would be a cost-of-information question, which is really quite tricky. For instance, one might ask “Which of these charities would we prefer based on the amount of data we already have?”, then go and gather more data and see whether the earlier data was actually sufficiently predictive, or some such.
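To illustrate the kind of check meant here, a toy sketch in Python with made-up charity names and impact numbers (an illustration of the “rank on current data, gather more, see whether the earlier ranking would have held” idea, not GiveWell’s actual methodology):

    import random

    # Toy sketch: rank charities on the evidence gathered so far, then "gather
    # more data" and see whether the earlier ranking would have predicted the
    # later one. All names and numbers are made up for illustration.

    def rank(scores):
        """Return charity names ordered from best to worst estimated impact."""
        return sorted(scores, key=scores.get, reverse=True)

    # Impact estimates (say, good done per dollar) based on data already in hand.
    early_scores = {"charity_a": 0.9, "charity_b": 1.4, "charity_c": 1.1}

    # Simulate gathering more data: each new estimate is the old one plus noise.
    # In reality this would be new studies, site visits, updated cost figures, etc.
    later_scores = {name: s + random.gauss(0, 0.3)
                    for name, s in early_scores.items()}

    early_ranking = rank(early_scores)
    later_ranking = rank(later_scores)

    print("Ranking on early data:", early_ranking)
    print("Ranking on later data:", later_ranking)

    # If the top pick rarely changes after further evaluation, the extra
    # evaluation bought little; if it often changes, it was worth paying for.
    print("Top pick unchanged:", early_ranking[0] == later_ranking[0])

Run over many simulated “further evaluations”, the frequency with which the top pick changes is a crude measure of how much that extra evaluation is worth.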
For example, “building an AI seems hard; no human (even given much longer lifespans) or team of humans will ever be smart enough to build something that leads to an intelligence explosion” and “computing devices that can realistically model an entire human brain (even taking shortcuts on parts that turn out to be irrelevant to intelligence) will be prohibitively expensive and slow” are both plausible.
A short answer to these might note the advancement of applied AI techniques in fields where it was previously common knowledge that “only humans can do that” — e.g. high-quality speech recognition or self-driving cars. I would propose limiting such an answer to “serious” endeavors — ones where humans highly value the outcome, such as understanding a translated message correctly or driving safely — as opposed to games such as chess or TV game-shows.
I agree that people raising the AI objection I mentioned are often making that mistake. But even after that bit of hopeful perspective, there’s still a real concern about the difficulty, which is unknown (maybe it will turn out to be a cinch in hindsight once the right discoveries are made). It’s not the case that we already have an AI that merely runs 10^10 times too slowly to match human intelligence and expertise.
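To make the contrast concrete, a rough back-of-envelope calculation of what a purely speed-limited AI would imply; the 18-month doubling time is an illustrative assumption of mine, not a claim from this thread:

    import math

    # If the only gap were raw speed, hardware progress alone would be expected
    # to close it on a historical timescale (assuming, purely for illustration,
    # that price-performance keeps doubling roughly every 18 months).
    speed_gap = 1e10
    doublings_needed = math.log2(speed_gap)   # about 33.2 doublings
    years_per_doubling = 1.5                  # illustrative Moore's-law-ish rate
    print("Doublings needed:", round(doublings_needed, 1))
    print("Years to close the gap:", round(doublings_needed * years_per_doubling))
    # Roughly 50 years: long, but a schedule rather than a mystery. The point
    # above is that we are not in that situation; no speed-up turns current
    # software into human-level intelligence, so the remaining difficulty is
    # conceptual and of unknown size.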