I disagree with your first point. You are saying that people who use a tool are already ‘post human’ in some sense. But then, were people who could use an abacus in the 14th century post human? Are African tribes that use their technical knowledge to hunt animals less human than a hypothetical tribe that never got to use anything like a spear and fights with its bare hands? By that logic, chimps are more ‘human’ than humans!
I think we can draw a line. Algorithms are, more or less, tools that give answers to what we want. It is a mistake to think they are above humans; computers just let us use them effectively. Is a person using LLMs in their work human? Not to me. But purely algorithmic tools get a pass. The point is that when AIs inform us, they take away a part of our ‘agency’.
One might ask: how is this different from asking another person for an answer? My answer is, the one they asked is not a human. It is a ‘demon of statistics’, as I call it: something that knows the statistical associations between every word on the internet and can construct meaning from this alone. This is clearly beyond human capability. Note that my distinction rests on my belief that the knowledge of a ‘demon of statistics’ is fundamentally different from that of humans.
But take the part of the story where the protagonist stops thinking with his brain and gives up decision-making to an AI that is ‘similar’ to him. This is not human, and it is where I draw the line. But given his use of AIs from the start, we can also argue he was never fully ‘human’ to begin with. We who were born before the birth of AI can be considered the last of ‘true’ humanity.
Using this definition, anyone who uses, say, ChatGPT to make any decision for them, even for a small part of their life, is already no longer human. But they can revert to being human again if the decisions informed by AI no longer affect them, something that may be effectively impossible in an AI-dominated world.
I consulted ChatGPT about this very paragraph just now, and it asked: if someone consults an AI but makes the final decision themselves, are they still human?
And my reply was this:
What does ‘making the decision themselves’ even mean? If one considers the wisdom of an entity that can form meaningful sentences distilled from the whole knowledge of humanity, even as a part of their decision, how can they still be called human? They are, at least, ‘non-human’ with respect to that decision. Even within their life, their non-humanness can be considered to have increased.
So, I am already not just a human, but a different being. That said, I am all in for the Butlerian Jihad… If humans can do the work, let no AI do it!
“entity that can form meaningful sentences distilled from the whole knowledge of humanity”
I think that the Google search engine is also such an entity. It also has knowledge, it also uses statistical methods to pick certain bits of the whole internet’s knowledge to present to the user, and it also has adaptable parameters set by a process unknown to the user. Why don’t you say we lost our humanness when we started using it?
“their non-humanness can be considered to have increased.”
You also use some gradation in your model. Let’s say we have a 2D plane. Your view is like a ReLU: constant 0 before timepoint 0 (where the LLM appears), and then y = x. The first part stands for being human, and the vertical growth stands for accumulating non-humanness after the LLM appeared. Did I describe your position correctly?
I see it like y = exp(x), with (0, 1) as the current stage. If you go back in time, you “get closer to nature” and to 0; if you go forward, non-humanness accumulates faster. But the whole graph can be renormalized relative to any point. The invention of calculus was the death of intuition’s crown. The invention of books was the death of local independence of thinking (your decisions are affected by people long dead or far away).
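In symbols (a minimal formalization; the notation N(t) for accumulated non-humanness at time t is mine):

\[
N_{\text{ReLU}}(t) = \max(0,\ t), \qquad
N_{\exp}(t) = e^{t}, \qquad
N_{\exp}(t + c) = e^{c}\, N_{\exp}(t).
\]

The last identity is the renormalization: shifting the time origin only rescales the curve, so no single moment is mathematically privileged.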
I suppose your solution to the Theseus paradox is that the ship changed the very moment the first plank was extracted. But then you are changing every moment (metabolism, information gathering), and you preserve your humanness only by abstract inheritance.
If I make a purely algorithmic statistical model that parses the internet by brute force and forms relation tables between words, but at no point uses LLMs, neural nets, or learning algorithms, will you consider it also cursed? Is T9 autocomplete technology cursed?
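To make the hypothetical concrete, here is a minimal sketch of such a model: pure counting over a corpus, no neural nets, no learning step (the toy corpus is invented for illustration):

```python
from collections import Counter, defaultdict

# A purely algorithmic “relation table”: count which word follows which,
# roughly what T9-style autocomplete does. No neural nets, no training loop.
corpus = "the ship of theseus is the ship that never changed".split()

follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def suggest(word):
    """Suggest the most frequent follower of `word`, T9-style."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(suggest("the"))   # -> "ship" (it follows "the" twice in the corpus)
print(suggest("ship"))  # -> "of" ("of" and "that" are tied; first seen wins)
```

Replace the toy corpus with a crawl of the internet and you get exactly the relation tables I describe, still with no learning anywhere.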
“Are African tribes that use their technical knowledge to hunt animals less human than a hypothetical tribe that never got to use anything like a spear and fights with its bare hands?”
They are more posthuman; their coordinate is higher. The correct interpretation of my words would be “chimps are less posthuman than humans”. In my model, posthuman is the limit of the function: since it lies at infinity, you can only move further in its direction but never reach it. It is like the word “future”, whose interval automatically shifts with every moment of time.
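In the notation above (again, the formalization is mine):

\[
\text{posthuman} = \lim_{t \to \infty} N_{\exp}(t) = \infty,
\]

so any coordinate can keep growing, but the limit itself is never attained.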
A disclaimer, my opinion on related themes. 1) The LLM is not the only architecture that can produce intelligence, and I expect other architectures to outpace LLMs eventually. People will see LLMs as we see Google, and some new system will be the new dominant AI. 2) Chimps and many other animals can move up and become closer to posthuman if we teach them how to pass knowledge between generations. Language is not the fundamental difference; knowledge preservation is the moment that diverted humanity from nature and made us something of a hivemind 10k years ago. “My answer is, the one they asked is not a human.” — animals don’t ask at all. Yet.
Thank you for pointing out the holes in my argument.
I don’t think the Google search engine is an entity of the kind I call a demon of statistics.
I classify thought processes as algorithmic and statistical. The former merely depends on IQ, while the latter is more subjective, based on mental models. I am thinking along lines parallel to JonahS in his posts on mathematical ability.
To explain my reasoning: while it is difficult to pin down how simple statistical machines (as in smart keyboards and search engines) differ from demons of statistics, I think we must distinguish them by their position in intelligence space.
Search engines do not give you sentences, but results associated with the query, as I understand it. This may use statistical methods, but it does not overlap with the statistical thinking of humans in intelligence space.
On the other hand, LLMs do overlap with the human intelligence space in their statistical-thinking aspect.
I think depending on machines that overlap with the statistical aspect (and higher levels) of human intelligence is where one starts to lose humanity. I don’t distinguish between ‘post human’ and ‘inhuman’.
On the other hand, algorithmic machines are age-old, and using simple ones, like beads for counting, does not deprive one of humanity.
Also, regarding books, I think there is no difference between consulting them and asking your grandma (or any person much older than you), since I accept algorithmic machines.
No, I think the ship never changed. As long as the structure is the same, the parts do not matter. This is the virtue of statistical thinking, and it is the same as how you recognize a dog when you see it.
Finally, I agree that we can never reach the true post human, only become less human. The one exception is if everyone commits suicide, as described in this post. I think this is even more dangerous than bad AI, since an AI can be stopped, but humanity cannot be interfered with, given our morality.