As I mentioned here, I’ve seen a presentation on Watson, and it looks to me like its architecture is compatible with recursive self-improvement (though that is not the immediate goal for it). Clippy does seem rather probable...
One caveat: I tend to overestimate risks. I overestimated the severity of y2k, and I’ve overestimated a variety of personal risks.
“I see that you’re trying to extrapolate human volition. Would you like some help?” converts the Earth into computronium
Soreff was probably alluding to User:Clippy, someone role-playing a non-FOOMed paperclip maximiser.
Though yours is good too :-)
Yes, I was indeed alluding to User:Clippy. Actually, I should have tweaked the reference, since it is the possibility of a paperclip maximiser that has FOOMed that really represents the threat.
Ah, thanks, that makes sense.