“entity that can form meaningful sentences distilled by whole knowledge of humanity”
I think that the Google search engine is also such an entity. It also distills knowledge, also uses statistical methods to pick certain bits of the whole internet’s knowledge to present to the user, and also has adaptable parameters set by a process unknown to the user. Why don’t you say we lost humanness when we started using it?
“their non-humanness can be considered to have been increased.”
You also use some gradation in your model. Let’s say we have a 2D plane. Your view is like a ReLU: constant 0 before timepoint 0 (where the LLM appears), and then y = x. The first part stands for being human, and the linear growth stands for accumulating non-humanness after the LLM appeared. Have I described your position correctly?
I see it like y = exp(x), with (0, 1) as the current stage. If you go back in time, you “get closer to nature” and to 0; if you go forward, non-humanness accumulates faster. But the whole graph can be renormalized relative to any point. The invention of calculus dethroned intuition; the invention of books was the death of the local independence of thinking (your decisions became affected by people long dead or far away).
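The contrast between the two curves can be sketched in a few lines. The point the exponential model relies on is that exp is self-similar under time shifts — exp(t + c) = exp(c) · exp(t) — so any moment can be renormalized to be “the present,” whereas the ReLU model has a distinguished kink at the LLM’s appearance. A minimal illustrative sketch (function names are mine, not anyone’s terminology):

```python
import math

def relu_model(t):
    # ReLU view: humanness is constant 0 before the LLM appears (t = 0),
    # then non-humanness grows linearly (y = t).
    return max(0.0, t)

def exp_model(t):
    # Exponential view: non-humanness has always been accumulating,
    # just faster as t grows; t = 0 (the present) gives y = 1.
    return math.exp(t)

# Renormalization claim: shifting the exponential in time only rescales
# it, exp(t + c) == exp(c) * exp(t), so the curve looks the same from
# any vantage point chosen as "the present".
c = 3.0
for t in (-2.0, 0.0, 1.5):
    assert abs(exp_model(t + c) - math.exp(c) * exp_model(t)) < 1e-9

# The ReLU model has no such self-similarity: shifting it in time moves
# the kink (the "LLM moment"), which is a privileged point on the curve.
print(relu_model(-1.0), relu_model(2.0))  # 0.0 2.0
print(exp_model(0.0))                     # 1.0
```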
I suppose your solution to the Ship of Theseus paradox is that the ship changed the very moment the first plank was extracted. But then you are changing every moment (metabolism, information gathering), and you preserve your humanness only by abstract inheritance.
If I make a purely algorithmic statistical model that brute-force parses the internet and builds relation tables between words, but at no point uses LLM-style neural nets or learning algorithms, will you consider it cursed as well? Is T9 autocomplete technology cursed?
“Are African tribes that use their technical knowledge to hunt animals, less human than a hypothetical tribe that never got to use anything like a spear, and fight with their bare hands?”
They are more posthuman; their coordinate is higher. The correct interpretation of my words would be “chimps are less posthuman than humans”. In my model, posthuman is the limit of the function: since it lies at infinity, you can only move further in that direction but never reach it. It is like the word “future”, whose interval is automatically shifted at every moment of time.
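The “limit you never reach” claim can be made concrete: under the exponential model above, any finite threshold is eventually crossed, yet every point actually reached is itself finite, so “posthuman”, like “future”, always lies ahead. A small illustrative sketch (names are mine):

```python
import math

def coordinate(t):
    # Non-humanness coordinate under the exponential model
    # (a hypothetical illustration, not a precise claim).
    return math.exp(t)

# For any finite threshold M, some finite time crosses it
# (exp(t) > M as soon as t > log M)...
for M in (10.0, 1e6, 1e100):
    t = math.log(M) + 1.0
    assert coordinate(t) > M

# ...yet every coordinate reached is still finite, so the limit itself
# (infinity, "the posthuman") is never attained: from any point you can
# always move further.
t = 50.0
assert coordinate(t + 1.0) > coordinate(t)
assert math.isfinite(coordinate(t))
```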
Disclaimer: my opinions on related themes. 1) LLMs are not the only architecture that can produce intelligence, and I expect other architectures to outpace them eventually. People will see LLMs as we now see Google, and some new system will be the new dominant AI. 2) Chimps and many other animals can move up and become closer to posthuman if we teach them how to pass knowledge between generations. Language is not the fundamental difference; knowledge preservation is the moment that diverted humanity from nature and made us something of a hivemind 10,000 years ago. “My answer is, the one they asked is not a human.”—animals don’t ask at all. Yet.
Thank you for pointing out holes in my argument.

I don’t think the Google search engine is an entity that I would call a demon of statistics.
I classify thought processes as algorithmic and statistical. The former merely depends on IQ, while the latter is more subjective, based on mental models. I am thinking along lines parallel to JonahS in his posts on mathematical ability.
To explain my reasoning: while it is difficult to say how simple statistical machines (as in smart keyboards and search engines) differ from demons of statistics, we can distinguish them by their position in intelligence space.
Search engines do not give you sentences, but rather the results associated with a query, as I understand it. This may use statistical methods, but it does not overlap with the statistical thinking of humans in intelligence space.
On the other hand, LLMs do overlap with human intelligence space, in their statistical thinking aspect.
I think depending on machines that overlap with the statistical aspect (and higher levels) of human intelligence is where one starts to lose humanity. I don’t distinguish between ‘posthuman’ and ‘inhuman’.
On the other hand, algorithmic machines are age-old, and using simple ones, like beads for counting, does not deprive one of humanity.
Also, regarding books, I think there is no difference between consulting them and asking your grandma (or any person much older than you), since I accept algorithmic machines.
No, I think the ship never changed. As long as the structure is the same, the parts do not matter. This is the virtue of statistical thinking, and it is the same as how you recognize a dog when you see one.
Finally, I agree that we can never reach the true posthuman, only become less human. One exception is if everyone commits suicide as described in this post. I think this is even more dangerous than bad AI, since AI can be stopped, but humanity cannot be interfered with, given our morality.