Thank you,
Strictly logically speaking, I agree with your reasoning, but it seems to me that an LLM cannot be sentient or have thoughts, even theoretically, and the burden of proof seems to rest strongly on whoever makes the opposite claim.
And for someone who does not know what an LLM is, it is of course easy to anthropomorphize it for obvious reasons (it can be designed to sound sentient or to express “thoughts”), and my feeling is that this post was a little bit about that.
Overall, I find the arguments I received after my first comment more convincing than the original post in making me feel what the problem could be.
As for the possibility of an LLM accelerating scientific progress towards agentic AI, I am skeptical, but I may be lacking imagination.
And again, nothing in the examples presented in the original post relates to this risk. It seems that the people who are worried are mostly trying to find examples where the “character” of the AI is strange (which in my opinion are mistaken worries caused by anthropomorphizing the AI), rather than examples where the AI is particularly “capable” at generating powerful reasoning or impressive “new ideas” (perhaps also because, at this stage, the best LLMs are far from being there).
I think that the “most” in the sentence “most philosophers and AI people do think that neural networks can be conscious if they run the right algorithm” is an overstatement, though I do not know to what extent.
I have no strong view on that, primarily because I think I lack some deep ML knowledge (I would weigh the views of ML experts far more heavily than those of philosophers on this topic).
Anyway, even accepting that neural networks can be conscious with the right algorithm, I think I disagree that “the fact that it’s a language model doesn’t seem relevant”. In an LLM, language is not only the final layer; there is also the fact that the objective of the algorithm is p(next words), so it is a specific kind of algorithm. My feeling is that a p(next words) algorithm cannot be sentient, and I think that most ML researchers would agree with that, though I am not sure.
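To make concrete what I mean by a p(next words) algorithm, here is a minimal sketch of the loop such a model runs, assuming the Hugging Face transformers library and the public GPT-2 checkpoint (the prompt text and the 20-token budget are just illustrative choices): at each step the model only produces a probability distribution over the next token and a token is sampled from it.

```python
# Minimal sketch of autoregressive next-token prediction
# (assumes `transformers` and the public "gpt2" checkpoint).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The burden of proof lies on"
input_ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits              # scores for every vocabulary token
        probs = torch.softmax(logits[0, -1], dim=-1)  # p(next token | text so far)
        next_id = torch.multinomial(probs, num_samples=1)  # sample one token
        input_ids = torch.cat([input_ids, next_id.unsqueeze(0)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Whatever else the network computes internally, everything it does is in service of that one conditional distribution, which is the specificity I am pointing at.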
I am also not sure about the “reasoning-capability” scale: even if an LLM is very close to human level for most of a conversation, or better than humans at some specific tasks (summaries, for example), that would not mean it is close to making a scientific breakthrough (on that I basically agree with AcurB’s comments some posts above).