I am of the opinion that you're probably right: AI will likely be the end of humanity. I am glad to see others pondering this risk.
However, I would like to mention that there are two plausible modes of that end coming about.
First is the "Terminator" future, and second is the "Wall-E" future. The risk that AI war machines will destroy humanity is a legitimate concern, given autonomous drones and other projects in development. But the other side has far more projects and progress: Siri, EmoSpark, automated factories, advanced RealDolls, and even basic things like predictive-text algorithms all point toward a future where people relate to the digital world rather than the real one and, instead of being killed off, simply fail to breed, fail to learn, fail to advance.
However, here is the more philosophical question, and my real point.
Organisms exist to bring their children to maturity. Species exist to evolve.
What if AI is simply the next evolution of humanity? What if AI is the "child" of humanity?
If humanity fails to get out of this solar system, then everything we are, and everything we ever were, is for nothing. It was all, from Gilgamesh to Hawking, a zero-sum game: nothing gained, all for nothing. But if we can make it out to the stars, then maybe it was all for something. Our glory and failings need not be lost.
So while I agree that it’s likely that AI will end humanity, it’s my opinion that A) it will probably be by “coddling” us to death, and B) either way, that’s okay.
I'm assuming you haven't read this:
http://lesswrong.com/lw/k9/the_logical_fallacy_of_generalization_from/
I had not. And I will avoid that in the future. However, that has very little bearing on my overall post. Please ignore the single sentence that references works of fiction.
I’m not quite sure how to put this, but there are many other posts on the site which you seem unaware of.
True enough. I hadn't read that one either, and, having joined only a few days ago, I have read very little of the content here. This seemed like a light, standalone topic in which to jump in.
This second article, however, really does address the weaknesses in my thought process and clarifies the philosophical difficulty the OP is concerned with.
Also all of the framing that is implied by those works? And the dichotomy you propose?
You shouldn't just read it; think about how it has warped your perspective on AI risks. That's the point.