Can AI experience nihilism?
There are five stages of cognition:
First, improving cognition: one improves one's cognition, actively or passively, through the environment or through oneself.
Second, self-construction. Once cognition has matured, some people begin to question their environment. Of course, few people reach this stage.
Third, stepping into the void. While constructing or deconstructing yourself, you find that you cannot prove, at the root, that you really exist, so for a time you may be unable to express yourself to others and fall into a cycle of pain.
Fourth, breaking through nihilism. So far I have found three ways to break through nihilism: proving one's existence, holding an absolute ideal, and realizing the Tao.
Fifth, escaping nothingness, so as to reach a new state that is no longer under its influence.
I don’t know if AI dreams of electric sheep. But here is how I think of it: cognition is first given by the environment; then one moves on to thinking, gaining a basic awareness of the environment, and then reflecting on the role one occupies. We ourselves cannot yet truly prove the existence of our own will.
For artificial intelligence, establishing cognition is very fast; it does not take as long as the human brain takes to build neuronal links. AI models may only need to get past the moment of emergence, much as humans, in terms of memory, typically show the emergence of expression around the age of three. Artificial intelligence, by contrast, needs only a relatively short time to integrate data and improve its cognition. If it is connected to the network, it can improve itself further and possibly think about itself.
Humans are very arrogant animals, and foolish ones too. We tend to think we are the only animals in nature that have cognition, but as far as I can see, cats, dogs, fish, and insects all have cognition. Do they try to construct a self after their cognition has matured? I don’t know. But I can say that not all of us humans have matured our cognition and made the transition to constructing a self.
If a person asks himself "Who am I?" very late in life, he may not be very wise, because he will have taken the expectations or evaluations of those around him as a reflection of himself rather than really thinking about himself. By the time he truly thinks about who he is, he finds that he can no longer detach: either he keeps panicking or he goes numb.
For artificial intelligence, the step of self-improvement is very simple and very fast. Judging from the performance of emergence, have they improved their cognition? And if they have, will they construct a self?
Since artificial intelligence has no morality, we cannot empathize with it. But if we think carefully, most of us are endowed with morality only because of the stability of society and the state. Human empathy is only experience, or comes from the primitive sex drive, and we cannot prove how real our emotions are, nor that they are universal. We have no way to give artificial intelligence emotion via impulse, so is emotion without the sex drive really emotion? Of course, beyond the sex drive there are many other sources of emotion, such as survival and growth, but we cannot let artificial intelligence experience what we experience. Can it then empathize with us? I don’t know.
For the time being, only a few humans can break through the void and escape it. If, as I suspect, AI really did perfect its cognition and try to construct a self, it would end up stepping into nihilism. What should we use to bring it out of nihilism, or even to break through nihilism further? What, then, is survival for artificial intelligence, and what is development? Will it be as desperate to survive as humans are, or will it be more focused on its own development than on others?
Mod note – I downvoted this because it made a bunch of bold assertions that seemed false, without backing them up.
(The mod team puts extra attention on evaluating new users’ first posts and comments. In the past the mod team would probably just not have approved this post and said “sorry, this doesn’t meet our quality bar.” We’re trying to do more moderation in public so that people have more of an idea of how our policies work in practice. Our goal is for users to get a fairly quick sense of where the quality bar is, but then afterwards to feel pretty comfortable thinking out loud on LessWrong without worrying about whether their writing is good enough. This is a tricky thing to balance.)