I’m assuming you haven’t read this:
http://lesswrong.com/lw/k9/the_logical_fallacy_of_generalization_from/
I had not. And I will avoid that in the future. However, that has very little bearing on my overall post. Please ignore the single sentence that references works of fiction.
I’m not quite sure how to put this, but there are many other posts on the site which you seem unaware of.
True enough. I hadn’t read that one either, and, having joined only a few days ago, I have read very little of the content here. This seemed like a light, standalone topic to jump in on.
This second article, however, really does address the weaknesses in my thought process and clarifies the philosophical difficulty that the OP is concerned with.
And what about all of the framing implied by those works? And the dichotomy that you propose?
You shouldn’t just read it; think about how it has warped your perspective on AI risks. That’s the point.