Thanks. I actually linked to that paper in the OP. As I wrote, I'm not challenging the claim that an organisation like the SIAI is necessary and should be supported. But all that paper accomplishes is giving a very basic introduction to someone who might never have thought about risks posed by AI. What I actually had in mind writing the OP is that the SIAI should address people like Ben Goertzel, who are, irrespective of the currently available material, skeptical about the risks of working on AGI, and who are unsure what the SIAI actually wants them to do or not to do, and why. Further, I would like the SIAI to provide educated outsiders with a summary of how people who believe in the importance of risks associated with AGI arrived at that conclusion, especially in comparison to other existential risks and challenges.
What I seek is a centralized code of practice that incorporates the basic assumptions, together with a way to roughly assess their likelihood, in comparison to other existential risks and challenges, by the use of probability. See for example this SIAI page. Bayes sits in there alone and doomed. Why is there no way for people to formally derive their own probability estimates with their own values? To put it bluntly, it looks like you have to pull any estimate out of your ass. The SIAI has to set itself apart from works of science fiction and actually provide some formal analysis of what we know, what conclusions can be drawn, and how they relate to other problems. The first question most people will ask is why worry about AGI when there are challenges like climate change. There needs to be a risk-benefit analysis that shows why AGI is more important, and a way to reassess the results yourself by following a provided decision procedure.
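The kind of "derive your own estimate" procedure asked for here could be as simple as publishing Bayes' rule with slots for the reader's own numbers. A toy sketch of what I mean; every number below is a made-up placeholder, not an actual risk estimate from the SIAI or anyone else:

```python
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes' rule: update a prior P(H) on a piece of evidence E.

    Returns P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)].
    """
    numerator = prior * p_evidence_given_h
    return numerator / (numerator + (1 - prior) * p_evidence_given_not_h)

# The reader plugs in their own values. Hypothetical illustration:
# H = "AGI poses an existential risk this century",
# E = "some piece of evidence the reader finds relevant".
p = posterior(prior=0.01, p_evidence_given_h=0.5, p_evidence_given_not_h=0.05)
print(round(p, 3))  # this reader's posterior, from their own inputs
```

With a chain of such steps, each assumption made explicit, two people could run the same procedure with different inputs and see exactly where their conclusions diverge.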
Yes, I strongly endorse what you were saying in your top-level post and agree that the new overview is by no means sufficient; I was just remarking that it is at least a step in the right direction. I didn't notice that you had linked it in the top-level post.
Great post.
If you haven’t seen SIAI’s new overview you might find it relevant. I’m quite favorably impressed by it.