Did I claim they beat him up or something? Ultimately, the more recent opinion I saw somewhere is that Eliezer ended up considering Ben harmless, as in unlikely to achieve the result. I also see you guys really loving trolley problems, including extreme forms of them (with 3^^^3 dust specks in 3^^^3 eyes).
Having it popularly told that your project is going to kill everyone is already a risk, given all the other nutjobs out there:
http://www.nature.com/news/2011/110822/full/476373a.html
Even if it is later atoned for by making you head of SI or something (with unclear motivation, which may well be creepy in nature).
See, I did not say he was definitely going to get killed or something. I said there was a risk. Yeah, as if nothing happening to Ben Goertzel's person is proof positive that the risk is zero. Geez, why won't you for once reason like this about AI risk, for example?
Ultimately: encounters with a nutjob* who may, after presentation of the technical details, come to believe you are going to kill everyone are about as safe as making credible death threats against a normal person and his relatives, his family, etc. Or even less safe. Neither results in a 100% probability of anything happening.
*Though of course the point may be made that he doesn't believe the stuff he says he believes, or that a sane portion of his brain will reliably enact akrasia over the decision, or something.
The existence of third-party anti-technology terrorists adds something to the conversation beyond the risks FinalState can directly pose to SIAI-folk and vice versa. I’m curious about gwern’s response, especially, given his interest in Death Note, which describes a world where law enforcement can indirectly have people killed just by publishing their identifying information.