“And if Novamente should ever cross the finish line, we all die.”
And yet SIAI didn’t do anything to Ben Goertzel (except make him Director of Research for a time, which is kind of insane in my judgement, but obviously not in the sense you intend).
Ben Goertzel’s projects are knowably hopeless, so I didn’t oppose Tyler Emerson’s project too strongly from within SIAI’s then-Board of Directors; it was argued to have political benefits, and since I saw no noticeable x-risk I didn’t expend my own political capital to veto it, just sighed. Nowadays the Board would not vote for this.
And it is also true that, in the hypothetical counterfactual conditional where Goertzel’s creations work, we all die. I’d phrase the email message differently today to avoid any appearance of endorsing the probability, because today I understand better that most people have trouble mentally separating hypotheticals. But the hypothetical is still true in that counterfactual universe, if not in this one.
“Also, in the hypothetical counterfactual conditional where Goertzel’s creations work, we all die.”
What about the hypothetical counterfactual conditional where you run into some AGI software that you think will work? Should I assume a zero positive rate for ‘you think it works’?
“I’d phrase the email message differently today to avoid any appearance of endorsing the probability, because today I understand better that most people have trouble mentally separating hypotheticals.”
Really? So it is invalid to pose the hypothetical that, if someone has a project you think will work, you may think we are all going to die unless that project is stopped?
Did I claim they beat him up, or what? Ultimately, a more recent opinion, which I saw somewhere, is that Eliezer ended up considering Ben harmless, as in unlikely to achieve the result. I also see you guys really loving trolley problems, including extreme forms of them (3^^^3 dust specks in 3^^^3 eyes).
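For readers unfamiliar with the notation: 3^^^3 is Knuth’s up-arrow notation, where one arrow is exponentiation and each additional arrow iterates the previous operation. A minimal sketch of the definition (the function name `up_arrow` is mine, not anything from the thread):

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Knuth's up-arrow: a ↑^n b. With n=1 this is plain exponentiation;
    each extra arrow iterates the operation one level below it."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

# 3^3  = up_arrow(3, 1, 3) = 27
# 3^^3 = up_arrow(3, 2, 3) = 3**27 = 7625597484987
# 3^^^3 = up_arrow(3, 3, 3) = 3^^7625597484987 — a power tower of 3s
# over seven trillion levels tall, far too large to ever compute.
```

The point of the notation in the dust-specks argument is precisely that 3^^^3 dwarfs any physically meaningful quantity.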
Having it popularly told that your project is going to kill everyone is already a risk, given all the other nutjobs:
http://www.nature.com/news/2011/110822/full/476373a.html
Even if it is later atoned for by making you head of SI or something (with unclear motivation, which may well be creepy in nature).
See, I did not say he was definitely going to get killed or something. I said there was a risk. Yeah, sure, nothing happening to Ben Goertzel personally is proof positive that the risk is zero. Geez, why won’t you for once reason like this about AI risk, for example?
Ultimately: encounters with a nutjob* who may, after a presentation of the technical details, believe you are going to kill everyone are about as safe as making credible death threats against a normal person and his relatives and family, etc. Or even less safe. Neither results in a 100% probability of anything happening.
*Though of course the point may be made that he doesn’t believe the stuff he says he believes, or that a sane portion of his brain will reliably enact akrasia over the decision, or something.
The existence of third-party anti-technology terrorists adds something to the conversation beyond the risks FinalState can directly pose to SIAI-folk and vice versa. I’m curious about gwern’s response, especially, given his interest in Death Note, which describes a world where law enforcement can indirectly have people killed just by publishing their identifying information.
The Roko incident has absolutely nothing to do with this at all. Roko did not claim to be on the verge of creating an AGI.
Once again you’re spreading FUD about the SI. Presumably moderation will come eventually, no doubt amid some hue and cry about censoring contrarians.
The Roko incident allows one to evaluate the sanity of the people he’d be talking to.
Other relevant link:
http://acceleratingfuture.com/sl4/archive/0501/10613.html
“And if Novamente should ever cross the finish line, we all die.”
Ultimately, you can present your arguments, I can present my arguments, and then he can decide to talk to you guys, or not.
There is no contradiction here.
To clarify, by “kind of insane” I didn’t mean you personally, but was commenting on SIAI’s group rationality at that time.
Yes, the most that has ever happened to anyone who talked to EY about building an AGI is some mild verbal/textual abuse.
I agree with gwern’s assessment of your arguments.
EDIT: Also, I am not affiliated with the SI.