No, superintelligent machines are not replacing humans, and they are not even competing with us.
I do not think the author has read Superintelligence.
In fact, these large language models are merely tools made so well that they manage to delude us.
Eliminativist philosophers would say approximately the same thing of the neural net in the brain.
I would be happy to hear an argument in favor of developing models of ‘conscious’ artificial intelligence. What would be its purpose, aside from proving that we can do it? But that is all it would be.
I believe consciousness is a prerequisite for moral agency. Determining what is and is not conscious is therefore a very important moral problem; I think Robert Wiblin summarizes it well:
Failing to recognise machine consciousness is one moral catastrophe scenario. But prematurely doing so just because we make machines that are extremely skilled at persuasive moral advocacy is another path to disaster.
https://twitter.com/robertwiblin/status/1536345842512035840