When discussing SIAI’s instrumental rationality, it’s important to remember what its actual goals are. Speaking of story-bias, it’s all too easy to pattern-match to “organization promoting some cause they think is important”, in which case one easily concludes that SIAI has been a miserable failure because FAI hasn’t become a trendy academic research discipline, and Vice Presidents aren’t making films about paperclip maximizers.
However, the picture changes somewhat if instead you think in terms of the following (more accurate) caricature of SIAI’s actual objectives:
(1) To persuade a dozen or so Putnam Fellows to collaborate with Eliezer on FAI instead of pursuing brilliant careers in academic mathematics;
(2) To dissuade people like Ben Goertzel from trying to build AGI without solving the FAI problem first.
If you look at it like this (still admittedly oversimplified), then yes, SIAI still has a way to go in achieving its goals, but they don’t seem to be quite as hopelessly underequipped for the task as one might have thought.
(Disclaimer: I certainly don’t speak for SIAI; my association with the organization is that of a former visitor, i.e. about as loose as it’s possible to get while still having to answer “yes” to the question “Are you, or have you ever been, affiliated with the Singularity Institute for Artificial Intelligence?” if ever called to testify before Congress....)