It would be interesting to see a list of all the material that has been deleted in cover-up operations over the years. We really need a SIAIWatch organisation.
[Added] some deletions that spring to mind:
Physics Workarounds (archived here)
Coding a Transhuman AI (archived here)
Eliezer, the person (archived here)
The deleted posts from around the time of Roko’s departure.
Algernon’s Law (archived here)
Love and Life Just Before the Singularity
Flare, though remnants survive.
SysopMind (archived here)
Gaussian Humans (archived here)
The Seed AI page.
Becoming a Seed AI Programmer (archived here)
The “Commitments” vanished from: http://singinst.org/aboutus/ourmission
They used to look like this:

SIAI will not enter any partnership that compromises our values.

Technology developed by SIAI will not be used to harm human life.

The challenge, opportunity and risk of artificial intelligence is the common concern of all humanity. SIAI will not show ethnic, national, political, or religious favoritism in the discharge of our mission.
So, did anyone actually save Roko’s comments before the mass deletion?
They did. There’s a very brief synopsis here.
That was a surprisingly good summary of Roko’s basilisk. Thanks for the link.
In case anyone’s wondering, here’s the standard answer I give to people who are unsure whether to worry about the basilisk: the AI won’t adopt the awful strategy if adopting it hurts the AI overall instead of helping, which is something you can affect by (conditionally) refusing to donate. Of course this answer doesn’t come with a guarantee of correctness, but feel free to accept it if it works for you.
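To make the "hurts the AI overall instead of helping" step concrete, here is a toy calculation. The numbers are invented purely for illustration and carry no more authority than the paragraph above:

```python
# Toy model (invented numbers): does a blackmail strategy pay off for the AI
# once enough people conditionally refuse to be blackmailed?

def expected_gain(extra_donations_if_threat_works,
                  donations_lost_to_conditional_refusers,
                  cost_of_carrying_out_threats):
    """Net benefit to the AI of adopting the threat strategy (toy model)."""
    return (extra_donations_if_threat_works
            - donations_lost_to_conditional_refusers
            - cost_of_carrying_out_threats)

# If few people conditionally refuse, the strategy looks profitable...
print(expected_gain(100, 10, 5))   # 85 > 0: the strategy would pay

# ...but if enough people conditionally refuse, adopting it is a net loss,
# so a sensible AI never adopts it in the first place.
print(expected_gain(100, 120, 5))  # -25 < 0: the strategy is not adopted
```

The point of the sketch is only that your (conditional) refusal is one of the terms the AI's decision depends on, which is why the answer above can work.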
Google Reader fetches every post and comment made on Less Wrong; editing or deleting won’t remove them. All comments and posts that have ever been made are still there, saved by Google. You just have to add the right RSS feeds to Google Reader.
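If you would rather not rely on Google keeping copies, here is a minimal sketch of the same idea: poll the site's feeds yourself and store what comes back. The feed URLs below are my assumption of what they were at the time and may have changed, and feedparser is a third-party library you would need to install.

```python
# Minimal sketch: pull recent Less Wrong posts and comments via RSS so you
# keep your own archive. Feed URLs are assumptions and may have changed.
import feedparser  # third-party: pip install feedparser

FEEDS = [
    "http://lesswrong.com/new/.rss",       # new posts (assumed URL)
    "http://lesswrong.com/comments/.rss",  # recent comments (assumed URL)
]

for url in FEEDS:
    feed = feedparser.parse(url)
    for entry in feed.entries:
        # Save the title, link, and body somewhere durable; here we just print.
        print(entry.get("title", ""), entry.get("link", ""))
```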
Ok now what.
See, while I’m not sure about the Commitments and obviously I’m reasoning from partial data across the board, most of these look like aspects of Eliezer’s pre-2003 mistake(s). I thought of that but decided calling it a cover-up didn’t make much sense; he spent a lot of time explaining his mistake and how his later views on it motivated his posts on LW.
[edited slightly for clarity]
Deleting content that is no longer relevant is not the same thing as a cover up. It might be best to keep copies of such content around, but there’s nothing inherently sinister about not doing so.
Citations needed.
I added some.
Updated link.
Reading it for the first time today, I’m amused by how much section 1.8 resembles my own Singularitarian conversion moment.
And boy, is this quote ever true of me: “I do my best thinking into a keyboard.”
It is quite funny how my story differs. See the banner that was on my homepage in 2005:
“Towards the Singularity and a Posthuman Future”
I was a believer. It seemed completely obvious that we’d soon see superhuman AI. When reading ‘Permutation City’ and ‘Diaspora’ I was bothered by how there was no AI, just emulations. That didn’t seem right.
I changed my mind. I now think that a lot of what I believed I knew was based on extrapolations of current trends mixed with pure speculation. That incredible amount of technological optimism just seems naive now. It all sounds really cool and convincing when formulated in English, but that’s not enough.
A common plot device—humans need human-like protagonists to identify with—or the story doesn’t sell.
Such scenarios then get “reified” in people’s minds, and a whole lot of nonsense results.
I only know of one cover-up operation, and that didn’t include material directly about Eliezer or the SIAI.
Hey, maybe this was the point of that exercise—a deliberately flawed cover-up, to make me underestimate how easily the SI can hide any facts they really want to keep secret!
I wouldn’t exactly call it a cover-up. It looks to me like the actual goal was to ensure that a particular subject wouldn’t develop further, by derailing any discussions about it into meta-discussions about censorship. Lots of noise was made, but no one ever published a sufficiently detailed description of the spell, so this did in fact succeed in averting a minor disaster.
Was this the “PUA Controversy”?
It truly is astonishing, the number of quotes that XiXiDu has about Eliezer. It’s like he has a thick dossier, slowly accumulating negative content...
I don’t save them anywhere and I never actively searched for negative content. It was either given to me by other people or I came across it by chance. That particular link comes from a comment David Pearce made on Facebook, under a link posted there to the latest interview between Eliezer Yudkowsky and John Baez.
Do you think it is a bad idea to take a closer look at what is said about and by someone who is working on fooming AI?