This is the kind of summary of a decision procedure I have been complaining is missing, or hidden within enormous amounts of content. I wish someone with enough skill would write a top-level post demanding that the SIAI create an introductory paper exemplifying how to reach the conclusions that (1) the risks are to be taken seriously and (2) you should donate to the SIAI to reduce those risks. There could either be a few papers for different people with different backgrounds, or one paper with different levels of detail. It should feature detailed references to the knowledge necessary to understand the paper itself. Further, it should feature the formulas, variables, and decision procedures you have to follow to estimate the risks posed by, and the incentive to alleviate, unfriendly AI. It should also include references to further information from people not associated with the SIAI.
This would provide the transparency required by claims of this magnitude and by calls for action, including donations.
I wonder why it took so long until you came along posting this comment.
You didn’t succeed in communicating your problem; otherwise someone else would have explained this earlier. I had been reading your posts on the issue and didn’t have even the tiniest hint of an idea that the piece you were missing was an explanation of Bayesian reasoning until just before writing that comment, and even then I was less optimistic about the comment doing anything for you than I had been about earlier comments. I’m still puzzled and unsure whether it actually was Bayesian reasoning or something else in the comment that apparently helped you. If it was, you should read http://yudkowsky.net/rational/bayes and some of the posts here tagged “bayesian”.
I wonder why it took so long until you came along posting this comment.
Because thinking is work, and it’s not always obvious what question needs to be answered.
More generally (and this is something I’m still working on grasping fully): what’s obvious to you is not necessarily obvious to other people, even if you think you have enough in common with them that it’s hard to believe they could have missed it.
I wouldn’t have said so even a week ago, but I’m now inclined to think that your short attention span is an asset to LW.
Just as Eliezer has said (can someone remember the link?) that science as conventionally set up is too leisurely (not enough thought put into coming up with good hypotheses), LW is set up on the assumption that people have a lot of time to put into the sequences and the ability to remember what’s in them.
This isn’t quite what you’re talking about, but a relatively accessible intro doc:
http://singinst.org/riskintro/index.html
This seems like a summary of the case that the risk is significant:
Anna Salamon at Singularity Summit 2009 - “Shaping the Intelligence Explosion”
http://www.vimeo.com/7318055