The “A” of “AI” is sufficient for human extinction.
What a dangerous AI might do to us, we’re already doing to ourselves, mainly with the help of technology that takes charge of human decisions. It’s not the addition of artificial intelligence but the removal of human involvement that nets us dystopia. To explain why, I’m going to borrow a useful framing from a Qualia Computing post titled “Wireheading Done Right”, namely “Consciousness vs. Pure Replicators”. What makes humans special is that we care about valence. Pure replicators care only about winning and optimizing, whereas conscious beings care about things like joy. We aren’t purely about optimization; we have aesthetics, morals, and various other preferences. In fact, a lot of optimal solutions offend us: they require selling out one’s dignity, sacrificing oneself to Moloch, or being cruel to oneself and others. This taste of ours is unique to complex, sentient life.
I claim that:
1: Optimization is already becoming devoid of human values.
This happens because we have lost sight of cause and effect; the chains between decisions and consequences have become too complex for us to trace. Dehumanization becomes easier with distance and indirectness, and it’s worse still with systems we can’t see into, like the recommendation algorithms on social media platforms.
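To make this concrete, here is a deliberately minimal sketch of the kind of optimization I mean. Everything in it is hypothetical, not any real platform’s code: the names, the weights, and the toy “engagement” formula are all invented for illustration. The structural point is that nothing in the objective represents valence, dignity, or any other human value, so the system optimizes straight past them.

```python
# Hypothetical sketch of a feed-ranking optimizer whose objective
# contains no term for human values. Illustrative only.
from dataclasses import dataclass

@dataclass
class Post:
    outrage: float      # how provocative the post is (0..1)
    usefulness: float   # how much it actually helps the reader (0..1)

def predicted_engagement(post: Post) -> float:
    # The optimizer only ever "sees" this proxy number. Human effects
    # (valence, dignity, wellbeing) appear nowhere in the objective.
    return 0.9 * post.outrage + 0.1 * post.usefulness

def rank_feed(posts: list[Post]) -> list[Post]:
    # Pure-replicator logic: maximize the metric; nothing else exists.
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    Post(outrage=0.95, usefulness=0.05),  # outrage bait
    Post(outrage=0.10, usefulness=0.90),  # genuinely helpful post
])
# The outrage bait ranks first, not because anyone chose cruelty, but
# because no human value was ever part of the objective function.
print([(p.outrage, p.usefulness) for p in feed])
```

No single line here is malicious, which is exactly the indirectness problem: the harm lives in what the objective omits, and that omission is invisible from inside the system.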
2: Humanity itself is decreasing.
This is only getting worse as technology improves, and as we start treating ideas and abstract symbols as if they were reality itself. It’s hard to put into words without sounding schizophrenic; luckily, Freddie deBoer has written an article called “Mimetic Collapse”, which makes it more likely that my intuition is correct than that this is mere apophenia. To quote it: “The litbro, in other words, is a simulacra, a symbol that has eaten what it was meant to symbolize, a representation of something that has never existed”. The claim is also trivially true in the sense that we keep creating more non-human things. We also seek to “improve” humanity, and if the most popular ideology (liberalism), religion (Christianity), and method (science) are anything to go by, this “improvement” amounts to nothing but the reduction of humanity. All three set themselves against human instinct, which can be used for both good and evil, and against the inherent egoism and subjectivity of humanity (our self-affirming nature), which is necessarily imperfect and ‘deceptive’. But this is throwing out the baby with the bathwater.
3: Technology will kill humanity anyway.
Human freedom allows us to make non-optimal choices, which is exactly why it won’t be a thing in the future. At the very least, it will be reduced so far that what little wiggle-room remains will be insufficient to cover the need for agency in anyone but the most shallow conformists. Fixing this suffering is easy, though: we just get rid of the genes responsible for increasing that need. Who are the “bad guys” of society? The outliers. “We have detected that your son has genes which increase his chances of acting out, putting him at odds with his peers and society at large. Do you want us to remove these with CRISPR?” You will have no choice but to say yes, since saying no makes you look like a bad person. Does this technique sound crude, like it might fail? The difference between pre-9/11 airport security and the nanny state we have now is just this one method applied thousands of times. Even rationalist communities seem vulnerable to this kind of subversion as long as the subject is moral in nature. Also, you can’t make technology which can be used only for good, and you can’t build more powerful technology to make sure that less powerful technology is used for good, for that technology will just be abused as well. If that approach could work, it already would have (the same goes for regulations; just stop).
As a matter of fact, Theodore Kaczynski already made some of these arguments. Yet I haven’t seen his core ideas criticized anywhere; in fact, I’ve never seen them referenced at all.
The only beauty left in life is found in consciousness and (organic!) humanity, but most of modern humanity seems hell-bent on destroying itself. I don’t just mean people avoiding reality, or the popularity of hedonism; I’m referring to the profound success of anti-human philosophies (anti-natalism among many others). Society speaks well of basically everything that reduces humanity, like stoicism and any denouncement of inherent drives. Detachment and distraction are treated as solutions even by psychologists. As far as I’m aware, SSRIs and stimulants mainly blunt subjective feeling, which is how they make you more productive and less sad. In the future we will come up with ways to blunt emotion and individuality themselves, and I have no doubt they will be popular. Most religions and philosophies are about the *reduction* of things, and modern society takes the exact same stance: “Ego is bad”, “desire is bad”, “trusting yourself is bad”. It’s hard to tell whether these beliefs are the cause or the effect of mental illness, but it’s one or the other.
One reason the vast majority of society offends my taste like this is probably that it tends toward the bottom of Maslow’s hierarchy of needs. The further up a person climbs on this hierarchy, the less pure-replicator behavior appeals to them, and “luxuries” like caring about humanity become possible.
I’ll be the first to admit that I’m not a good writer, but I have a lot of insights like these, and it pains me that I even have to write them down. Why haven’t they already been formalized and taken ten times further? It’s hard not to grow arrogant. That said, I’m just a monkey using its intelligence rather than the intelligence residing in the monkey; I believe Carl Jung claimed that Nietzsche went mad as a result of identifying with Zarathustra, and I won’t make the same mistake. But seriously: not only are things looking really bad, all the popular “solutions” only bring us closer to dystopian futures, even if true AI is never created. And the reputation of the workable solutions is rather bad to boot, unjustly so.
I will appreciate any replies!