To clarify, for everyone:
There are now three “major” responses from SI to Holden’s Thoughts on the Singularity Institute (SI): (1) a comments thread on recent improvements to SI as an organization, (2) a post series on how SI is turning donor dollars into AI risk reduction and how it could do more of this if it had more funding, and (3) Eliezer’s post on Tool AI above.
At least two more major responses from SI are forthcoming: a detailed reply to Holden’s earlier posts and comments on expected value estimates (e.g. this one), and a long reply from me that summarizes my responses to all (or almost all) of the many issues raised in Thoughts on the Singularity Institute (SI).
How much of this counts toward the 50,000 words of authorized responses?
I told Holden privately that this would be explained in my final “summary” reply. I suspect the 5200 words of Eliezer’s post above will be part of the 50,000.
Luke, do you know if there has been any official (or unofficial) response to my argument that Holden quoted in his post?
Not that I know of. I fully agree with that comment, and I suspect Eliezer does as well.