I started worrying about AI risks (or rather the risks of a bad Singularity in general) well before SI/LW. Here’s a 1997 post:
There is a very realistic chance that the Singularity may turn out to be undesirable to many of us. Perhaps it will be unstable and destroy all closely-coupled intelligence. Or maybe the only entity that emerges from it will have the "personality" of the Blight.
You can also see here that I was strongly influenced by Vernor Vinge's novels. I'd like to think that if I had read the same ideas in a dry academic paper, I would have been similarly affected, but I'm not sure how to check that, or whether it would have been more rational of me if I hadn't been.