Poor kid. He’s a smart 12-year-old who has some silly ideas, as smart 12-year-olds often do, and now he’ll never be able to live them down because some reporter wrote a fluff piece about him. Hopefully he’ll grow up to be embarrassed by this, instead of turning into a crank.
His theories as quoted in the article don’t seem to be very coherent—I can’t even tell if he’s using the term “big bang” to mean the origin of the universe or a nova—so I don’t think there’s much of a claim to be evaluated here.
Of course, it’s very possible that the reporter butchered the quote. It’s a human interest article and it’s painfully obvious that the reporter parsed every word out of the kid’s mouth as science-as-attire, with no attempt to understand the content.
Hopefully he’ll grow up to be embarrassed by this, instead of turning into a crank.

I agree with this, but I’d bet this kid would be willing to drop his pet theory if he found it was wrong (if grudgingly). I really don’t think this one article, or just being in the news mostly for his youth/intelligence combo, will ruin him.
It’s terribly common for highly intelligent boys to become slackers as adults (more precisely, to strive to be “ordinary” and not overachieve). This book is a classic longitudinal study on the topic. I don’t know how well it applies way out on the tail end of the bell curve where Jacob resides, as opposed to kids who are “just” in the top couple of percent.
He’s a smart 12-year-old who has some silly ideas, as smart 12-year-olds often do, and now he’ll never be able to live them down because some reporter wrote a fluff piece about him.

Reminds me of this old article (04.19.01) about Yudkowsky:

Since then, Yudkowsky has become not just someone who predicts the Singularity, but a committed activist trying to speed its arrival. “My first allegiance is to the Singularity, not humanity,” he writes in one essay. “I don’t know what the Singularity will do with us. I don’t know whether Singularities upgrade mortal races, or disassemble us for spare atoms.… If it comes down to Us or Them, I’m with Them.”

[...]

Yudkowsky takes it a step further, writing that he believes AI “will be developed on symmetric-multiprocessing hardware, at least initially.” He said he expects Singularity could happen in the very near future: “I wouldn’t be surprised if tomorrow was the Final Dawn, the last sunrise before the Earth and Sun are reshaped into computing elements.”

When one researcher booted up a program he hoped would be AI-like, Yudkowsky said he believed there was a 5 percent chance the Singularity was about to happen and human existence would be forever changed.

[...]

In an autobiographical essay, he writes: “I think my efforts could spell the difference between life and death for most of humanity, or even the difference between a Singularity and a lifeless, sterilized planet… I think that I can save the world, not just because I’m the one who happens to be making the effort, but because I’m the only one who can make the effort.”
Yudkowsky said he believed there was a 5 percent chance the Singularity was about to happen and human existence would be forever changed.

Note: This is a LIE.
The correct quote is that I said on SL4 that, when Douglas Lenat switched on Eurisko (essentially the first time anyone had ever built a Turing-complete, freeform, genuinely recursive self-modifier, with heuristics modifying heuristics), he ought to have evaluated a 5% chance of it going FOOM.
I was 4 years old when Eurisko was switched on, and could not possibly have said anything at the time.
Declan McCullagh. Write it down. Never trust him.
No matter how many terrible things you’ve heard about the mainstream press, you truly cannot appreciate how bad it really, really is until you have been reported on yourself. It is at least two orders of magnitude worse than you think it is from reading Reddit.
The Wired article has a comments section, with 0 comments. You should probably put a response there.
Very true.
As someone who has worked in the industry, I can tell you that the process of creating news stories is remarkably similar to that of producing chicken nuggets—although probably not as sanitary.
In particular, the tech press should be greeted with gunfire.
(There are exceptions—there are even two people at the Register I’d speak to under any circumstances—but even if you know and trust the journalist in question personally, be prepared for their editor to screw you both over.)
The mainstream press can’t work technology more complicated than scissors, but they have occasionally heard the word “journalism.”
Really—unless you’re actually selling computer technology, there is no reason to deal with the tech press under any circumstances. It’s the canonical example of how taking people seriously just because they pay you attention is often not a good idea. If only Wikipedia had worked that one out early on…
(I wouldn’t count Wired as “mainstream press”, but the scary thing about your tale is that Declan McCullagh has a generally good reputation for a tech journalist.)
It truly is astonishing, the number of quotes that XiXiDu has about Eliezer. It’s like he has a thick dossier, slowly accumulating negative content...
It would be interesting to see a list of all the material that has been deleted in cover-up operations over the years. We really need a SIAIWatch organisation.
[Added] some deletions that spring to mind:
Physics Workarounds (archived here)
Coding a Transhuman AI (archived here)
Eliezer, the person (archived here)
The deleted posts from around the time of Roko’s departure
Algernon’s Law (archived here)
Love and Life Just Before the Singularity
Flare—though remnants survive
SysopMind (archived here)
Gaussian Humans (archived here)
The Seed AI page
Becoming a Seed AI Programmer (archived here)
The “Commitments” vanished from: http://singinst.org/aboutus/ourmission
They used to look like this:

SIAI will not enter any partnership that compromises our values.

Technology developed by SIAI will not be used to harm human life.

The challenge, opportunity and risk of artificial intelligence is the common concern of all humanity. SIAI will not show ethnic, national, political, or religious favoritism in the discharge of our mission.
So, did anyone actually save Roko’s comments before the mass deletion?
They did. There’s a very brief synopsis here.
That was a surprisingly good summary of Roko’s basilisk. Thanks for the link.
In case anyone’s wondering, here’s the standard answer I give to people who are unsure whether to worry about the basilisk: the AI won’t adopt the awful strategy if adopting it hurts the AI overall instead of helping, which is something you can affect by (conditionally) refusing to donate. Of course this answer doesn’t come with a guarantee of correctness, but feel free to accept it if it works for you.
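To make that answer concrete, here is a minimal back-of-the-envelope sketch in Python. The function and all payoff numbers are hypothetical illustrations of the argument, not anything from the original discussion:

```python
# Toy model: an agent deciding whether to commit to punishing
# non-compliance. All numbers are made up for illustration.

def value_of_committing(p_comply, gain_if_comply, cost_of_punishing):
    """Expected value, to the agent, of adopting the punishment strategy.

    p_comply:          chance a target gives in *because of* the threat
    gain_if_comply:    what the agent gains when a target gives in
    cost_of_punishing: what actually carrying out the threat costs it
    """
    return p_comply * gain_if_comply - (1 - p_comply) * cost_of_punishing

# If people conditionally refuse, p_comply falls and the strategy
# hurts the agent overall, so a payoff-maximizing agent never adopts it.
for p_comply in (0.9, 0.5, 0.1):
    print(p_comply, value_of_committing(p_comply, 1.0, 1.0))
```

On these made-up numbers the commitment only pays above 50% compliance; conditional refusal is exactly the lever that pushes it below that line.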
Google Reader fetches every post and comment that is made on Less Wrong. Editing or deleting won’t remove it: all comments and posts that have ever been made are still there, saved by Google. You just have to add the right RSS feeds to Google Reader.
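For anyone who would rather keep a local copy than trust Google, here is a minimal sketch of the same idea in Python, using the third-party feedparser library. The feed URL is a placeholder, not Less Wrong’s actual feed address:

```python
# Periodically fetch an RSS feed and append its entries to a local
# file, so upstream edits or deletions can't touch your copy.
import json

import feedparser  # third-party: pip install feedparser

FEED_URL = "https://example.com/comments/.rss"  # placeholder URL

feed = feedparser.parse(FEED_URL)

with open("archive.jsonl", "a", encoding="utf-8") as f:
    for entry in feed.entries:
        f.write(json.dumps({
            "title": entry.get("title", ""),
            "link": entry.get("link", ""),
            "published": entry.get("published", ""),
        }) + "\n")
```

Run it on a schedule (cron, say) and diff the archive file to catch edits.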
OK, now what?
See, while I’m not sure about the Commitments and obviously I’m reasoning from partial data across the board, most of these look like aspects of Eliezer’s pre-2003 mistake(s). I thought of that but decided calling it a cover-up didn’t make much sense; he spent a lot of time explaining his mistake and how his later views on it motivated his posts on LW.
[edited slightly for clarity]
Deleting content that is no longer relevant is not the same thing as a cover-up. It might be best to keep copies of such content around, but there’s nothing inherently sinister about not doing so.
Citations needed.
I added some.
Updated link.
Reading it for the first time today, I’m amused by how much section 1.8 resembles my own Singularitarian conversion moment.
And boy, is this quote ever true of me: “I do my best thinking into a keyboard.”
It is quite funny how my story differs. See the banner on my homepage in 2005?
“Towards the Singularity and a Posthuman Future”
I was a believer. It seemed completely obvious that we’d soon see superhuman AI. When reading ‘Permutation City’ and ‘Diaspora’, I was bothered by how there was no AI, just emulations. That didn’t seem right.
I changed my mind. I now think that a lot of what I believed I knew was based on extrapolations of current trends mixed with pure speculation. That incredible amount of technological optimism just seems naive now. It all sounds really cool and convincing when formulated in English, but that’s not enough.
When reading ‘Permutation City’ and ‘Diaspora’, I was bothered by how there was no AI, just emulations. That didn’t seem right.

A common plot device—humans need human-like protagonists to identify with—or the story doesn’t sell.
Such scenarios then get “reified” in people’s minds, and a whole lot of nonsense results.
I only know of one cover-up operation, and that didn’t include material directly about Eliezer or the SIAI.
Hey, maybe this was the point of that exercise—a deliberately flawed cover-up, to make me underestimate how easily the SI can hide any facts they really want to keep secret!
I wouldn’t exactly call it a cover-up. It looks to me like the actual goal was to ensure that a particular subject wouldn’t develop further, by derailing any discussions about it into meta-discussions about censorship. Lots of noise was made, but no one ever published a sufficiently detailed description of the spell, so this did in fact succeed in averting a minor disaster.
Was this the “PUA Controversy”?
I don’t save them anywhere, and I never actively searched for negative content. It was either given to me by other people or I came across it by chance. That particular link comes from a comment David Pearce made on Facebook, under a post linking to the latest interview between Eliezer Yudkowsky and John Baez.
Do you think it is a bad idea to take a closer look at what is said about and by someone who is working on fooming AI?
When one researcher booted up a program he hoped would be AI-like, Yudkowsky said he believed there was a 5 percent chance the Singularity was about to happen

HAHA. What a dumbass.
Hmm. This certainly wasn’t helpful, and doesn’t score well on the comedy scale. Definitely downvoted.
Um what? Being cautious in one’s predictions is being a dumbass now?
It isn’t caution to overestimate. Being risk averse with an accurate probability assessment is not the same thing as having overestimated probabilities. (Although the issue of the general tendency to overestimate one’s confidence is certainly relevant.)
There is another sense in which a probability estimate can be cautious: by not being too close to 0 or 1.
Overestimating the probability of X is just underestimating the probability of not(X).
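A toy calculation, with hypothetical numbers, of the distinction these comments are drawing: risk aversion changes how you act on an accurate estimate, while “cautiously” inflating the estimate also misprices everything that depends on not(X):

```python
# Risk aversion vs. overestimation, with made-up numbers.

p_foom = 0.05  # assume this is the accurate estimate

# Acting cautiously on the accurate number: take precautions whenever
# p * cost_if_it_happens exceeds the cost of the precaution.
cost_if_it_happens = 1_000_000
cost_of_precaution = 10_000
print("precautions worth it:", p_foom * cost_if_it_happens > cost_of_precaution)

# "Cautiously" inflating the estimate instead is not free: since
# P(not X) = 1 - P(X), overstating X understates not(X), and every
# decision that leans on not(X) is now mispriced.
p_inflated = 0.50
print("implied P(no foom):", 1 - p_inflated)  # 0.50 instead of 0.95
```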