Actually, I will comment (for the sake of authenticity, and from the belief that being more transparent about my motivations will increase mutual truth-finding) that while I’m not arguing “against” SIAI, this post emerges to some degree from my exploring the question of SIAI’s organizational instrumental rationality. I have the impression, from a variety of angles and sources, that it’s pretty bad. Since I care about SIAI’s success, it’s one of the things I think about in the background: why that is, and how you could be more effective.
When discussing SIAI’s instrumental rationality, it’s important to remember what its actual goals are. Speaking of story-bias, it’s all too easy to pattern-match to “organization promoting some cause they think is important”, in which case one easily concludes that SIAI has been a miserable failure because FAI hasn’t become a trendy academic research discipline, and Vice Presidents aren’t making films about paperclip maximizers.
However, the picture changes somewhat if instead you think in terms of the following (more accurate) caricature of SIAI’s actual objectives:
(1) To persuade a dozen or so Putnam Fellows to collaborate with Eliezer on FAI instead of pursuing brilliant careers in academic mathematics;
(2) To dissuade people like Ben Goertzel from trying to build AGI without solving the FAI problem first.
If you look at it like this (still admittedly oversimplified), then yes, SIAI still has a way to go in achieving its goals, but it doesn’t seem quite as hopelessly underequipped for the task as one might have thought.
(Disclaimer: I certainly don’t speak for SIAI; my association with the organization is that of a former visitor, i.e. about as loose as it’s possible to get while still having to answer “yes” to the question “Are you, or have you ever been, affiliated with the Singularity Institute for Artificial Intelligence?” if ever called to testify before Congress....)
this post emerges to some degree from my exploring the question of SIAI’s organizational instrumental rationality. I have the impression, from a variety of angles and sources, that it’s pretty bad. Since I care about SIAI’s success, it’s one of the things I think about in the background: why that is, and how you could be more effective.
I’ve had similar thoughts. I would be interested in hearing what room for improvement you see in SIAI’s organizational instrumental rationality. I have my own thoughts on this (which have evolved somewhat as I’ve learned more since making my posts about SIAI back in August). Feel free to PM me if you’d prefer to communicate privately.
You’ll want to see my reply to patrissimo above.