I don’t think the Google search engine is the kind of entity I would call a demon of statistics.
I classify thought processes as algorithmic and statistical. The former merely depends on IQ, while the latter is more subjective, based on mental models. I am thinking along lines parallel to JonahS in his posts on mathematical ability.
To explain my reasoning: while it is difficult to say how simple statistical machines (smart keyboards, search engines) differ from demons of statistics, I think we must distinguish them by their position in intelligence space.
Search engines do not give you sentences, but the results associated with a query, as I understand it. This may use statistical methods, but it does not overlap with the statistical thinking of humans in intelligence space.
LLMs, on the other hand, do overlap with the human region of intelligence space, in their statistical-thinking aspect.
I think depending on machines that overlap with the statistical aspect (and higher levels) of human intelligence is where one starts to lose humanity. I don’t distinguish between ‘post-human’ and inhuman.
Algorithmic machines, by contrast, are age-old, and using simple ones like beads for counting does not deprive one of humanity.
Also, regarding books, I think there is no difference between consulting them and asking your grandma (or any person much older than you), since I accept algorithmic machines.
No, I think the ship never changed. As long as the structure is the same, the parts do not matter. This is the virtue of statistical thinking, and it is the same way you recognize a dog when you see one.
Finally, I agree that we can never reach the truly post-human, only become less human. One exception is if everyone commits suicide as described in this post. I think this is even more dangerous than bad AI, since AI can be stopped, but humanity cannot be interfered with, given our morality.
Thank you for pointing out holes in my argument.