To some extent, they already are. Google and Facebook have had measurable impacts on neural structures and human behavior. There are also products like EmoSpark that are designed to deliberately manipulate our emotional state. How well they do so remains an open question.
True enough. I hadn't read that one either; having joined only a few days ago, I've read very little of the content here. This seemed like a light, standalone topic to jump in on.
This second article, however, really does address the weaknesses in my thought process and clarifies the philosophical difficulty the OP is concerned with.
I had not, and I will avoid that in the future. However, it has very little bearing on my overall post; please ignore the single sentence that references works of fiction.
I am of the opinion that you're probably right: AI will likely be the end of humanity. I am glad to see others pondering this risk.
However, I would like to point out that there are two likely modes by which that end could come about.
First is the "Terminator" future; second is the "WALL-E" future. The risk that AI war machines will destroy humanity is a legitimate concern, given autonomous drones and other projects in development. But the other side has far more projects and progress: Siri, EmoSpark, automated factories, advanced RealDolls, even basic things like predictive-text algorithms all lead toward a future where people relate to the digital world rather than the real one and, instead of being killed off, simply fail to breed. Fail to learn, fail to advance.
However, here’s the more philosophical question, and point.
Organisms exist to bring their children to maturity. Species exist to evolve.
What if AI is simply the next stage in humanity's evolution? What if AI is the "children" of humanity?
If humanity fails to get out of this solar system, then everything we are and everything we ever were is for nothing. It was all, from Gilgamesh to Hawking, a zero-sum game: nothing gained, all for nothing. But if we can make it out to the stars, then maybe it was all for something; our glory and our failings need not be lost.
So while I agree that AI will likely end humanity, it's my opinion that A) it will probably be by "coddling" us to death, and B) either way, that's okay.
I haven't read all the comments on this post, and I am new to LW generally, so if I say anything that's already been gone over, bear with me.
The argument that abortion is bad on QALY grounds carries certain inherent assumptions. The first is that there's "room" in the system for additional people: if the addition of a new person subtracts from the quality of life of others, then that loss has to be factored in.
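To make that concrete, here is a back-of-envelope sketch in Python (every number is invented for illustration) of how the marginal person can be net-negative once crowding effects on everyone else are counted:

```python
# Hypothetical "room in the system" arithmetic. All numbers are made up;
# only the structure of the calculation matters.

new_person_qalys = 70 * 0.8  # 70 life-years at quality weight 0.8 = 56 QALYs

# If adding one person shaves even a sliver of quality from many others,
# the aggregate loss can dwarf the gain:
population = 10_000_000
quality_loss_per_person = 0.0001  # hypothetical crowding effect
remaining_years = 40
crowding_loss = population * quality_loss_per_person * remaining_years  # 40,000 QALYs

net_change = new_person_qalys - crowding_loss
print(net_change)  # negative: the marginal person lowers total QALYs
```

Whether the crowding term is anywhere near that large is an empirical question; the point is only that the QALY argument has to include it.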
Another aspect that must be factored into this analysis is somewhat more obscure: "moral hazard." "Slippery slope" is a fallacy; however, as noted here, under certain conditions it is a caution to be taken seriously. If abortion is banned because it reduces total QALYs, then a precedent has been set for authoritative intervention in personal choices for the purpose of increasing total QALYs. It would then make sense to, for instance, ban eating beef because of its inefficiency and health consequences, and so on, and so on, depending on how the adjustment for "quality" is calculated.
And this gets at the more important question when pondering QALYs: what calculus are we using for the "quality adjustment"? What is the coefficient for depression? How do you factor in personal differences? Is a year of the life of a man with a mean wife worth less than a year of the life of a man with a nice wife? Does pleasure factor in, and how? Fried food is pleasurable to many people; eating it raises their instantaneous quality of life but carries long-term costs in the "years."
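To show how much hangs on those coefficients, here is a minimal sketch (the weights are invented; real instruments such as the EQ-5D derive them from population surveys, and that derivation is exactly what is in dispute):

```python
def qalys(years, quality_weight):
    """QALYs = life-years times a quality weight in [0, 1]."""
    return years * quality_weight

# Two hypothetical lives. With depression weighted at 0.6, the long
# depressed life outscores the shorter happy one...
print(qalys(80, 0.6))  # 48.0
print(qalys(55, 0.8))  # 44.0

# ...but nudge the depression coefficient to 0.5 and the ranking flips:
print(qalys(80, 0.5))  # 40.0
```

The comparison reverses on a 0.1 change in a weight nobody knows how to measure.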
Additionally, the quality-of-life hit from banning abortion (and accepting a commensurate increase in adoptions) is not borne by the mother alone. Everyone in that system takes a quality-of-life hit from the chilling effect such a ban would likely have on the sexual climate, and from the additional worries that would inevitably attend every act of sexual congress (a full-term pregnancy is a far greater consequence than an abortion). If our goal is to maximize QALYs, then the QALY measure must include all of the variables of life.
Lacking such comprehensive (and impossible) in-depth, individual analysis, it would be possible to maximize QALYs by, for instance, putting everyone in prison, feeding them an "optimal life-extension diet" (which I assure you is not enjoyable), and keeping them away from toxins and hazards… But a population on full-time suicide watch seems suboptimal to me.
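As a toy illustration of that failure mode, here is a naive total-QALY maximizer (policies and numbers are invented) that duly selects the prison scenario because a small longevity gain outweighs a large quality loss:

```python
# Naive QALY maximization over hypothetical policies.
# Each entry: (average life-years, average quality weight) per person.
policies = {
    "status quo": (78, 0.85),                    # 66.3 QALYs per person
    "prison + life-extension diet": (95, 0.70),  # 66.5 QALYs per person
}

def qalys_per_person(years, quality_weight):
    return years * quality_weight

best = max(policies, key=lambda p: qalys_per_person(*policies[p]))
print(best)  # "prison + life-extension diet" wins by 0.2 QALYs per person
```

Nothing in the bare QALY objective rules this out; you have to smuggle the missing quality variables back in.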