I am very skeptical about causes that engage exclusively in spreading awareness.
As am I. However, here are some things I believe about the SIAI and FAI:
To the average well-educated person, the efforts of the SIAI are indistinguishable from a particularly emphatic declaration of “Yay FAI!” To the average person who cares strongly about FAI, the performance of the SIAI still does not validate that “we are in fact producing people capable of working on the problem,” because there are essentially no standards to judge against, no concrete theoretical results in evidence, and no suggestion that impressive theoretical advances are forthcoming. Saying “the problem is difficult” is a perfectly fine defense, but it does not give the work being done any more value as validation.
The average intelligent (and even abnormally rational) non-singularitarian has little respect for the work of the SIAI, to the extent that the affiliation of the SIAI with outreach significantly reduces that outreach's credibility with the most important audience, and the (even quite vague) affiliation of an individual with SIAI makes it significantly more difficult for that individual to argue credibly about the future of humanity.
It is not at all obvious that FAI is the most urgent technical problem currently in view. For example, pushing better physical understanding of the brain, better algorithmic understanding of cognition, and technology for interfacing with human brains all seem like they could have a much larger effect on the probability of a positive singularity. The real argument for normal humans working on FAI is extremely complicated and uncertain.
I place fairly little value on an exponentially growing group of people interested in FAI, except insofar as they can be converted into an exponentially large group of people who care about the future of humanity and act rationally on that preference. I think there are easier ways to accomplish this goal; and on the flip side I think “merely” having an exponentially large group of rational people who care about humanity is incredibly valuable.
My main concern in the direction you are pointing is the difficulty of effective outreach when the rationality on offer appears to be disconnected from reality (in particular the risk that what you are spreading will almost certainly cease to be “rationality” without some good grounding). I believe working on FAI is a uniquely bad way to overcome this difficulty, because most of the target audience (really smart people whose help is incredibly valuable) considers work on FAI even more disconnected from reality than rationality outreach itself, and because the quality or relevance of work on FAI is essentially impossible for almost anyone not directly involved with that work to assess.