Thank you.
> It’s unclear if this implies fundamental differences in how they work versus different specializations.
Correct. That article argues that LLMs are more powerful than humans at this skill, but not that they have different (implicit) goal functions or that their cognitive architecture is deeply different from a human's.
> Like, obviously it’s gonna be alien in some ways and human-like in other ways. Right
It has been said that since LLMs predict human output, they will, if sufficiently improved, behave in a quite human way.
> Can you say more about what you mean by “Where can I find a post
As part of a counterargument to that, we could find evidence that their logical structure is quite different from that of humans. I’d like to see such a write-up.
> Surely you would agree that if we were to do a cluster analysis of the cognition of all humans alive today + all LLMs, we’d end up with two distinct clusters (the LLMs and then humanity) right?
I agree, but I’d like to see some article or post arguing that.
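For concreteness, here is a minimal sketch of what such a cluster analysis could look like. Everything here is invented for illustration: the feature vectors standing in for "cognition", their dimensions, and the separation between the two groups are all assumptions, not measurements.

```python
# Hypothetical sketch: if each mind (human or LLM) could be embedded as a
# feature vector of cognitive measurements, a simple k-means with k=2
# should recover the two clusters claimed above. All data is made up.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Rows are "minds", columns are hypothetical cognitive measurements.
humans = rng.normal(loc=0.0, scale=1.0, size=(100, 8))  # invented human cluster
llms = rng.normal(loc=5.0, scale=1.0, size=(20, 8))     # invented LLM cluster
minds = np.vstack([humans, llms])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(minds)

# If the two-cluster claim holds, the learned labels should match the
# human/LLM split (k-means label order is arbitrary, so check both ways).
true = np.array([0] * 100 + [1] * 20)
agreement = max((labels == true).mean(), (labels != true).mean())
print(f"cluster/label agreement: {agreement:.2f}")
```

The hard part, of course, is not the clustering step but defining measurable features of cognition in the first place; the sketch only shows what the claimed result would look like if such features existed.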
Thank you. But being manipulative, silly, sycophantic, or nasty is pretty human. I am looking for hints of a fundamentally different cognitive architecture.
That is good, thank you.
What is the best argument that LLMs are shoggoths?
Yes, I agree with your point about most journalists. Still, I think well enough of the professors and AI developers that I mentioned to imagine that they would have a more positive attitude.
> He is also maybe just somewhat annoying to work with and interact with
I have heard that elsewhere as well. Still, I don’t really see that myself, whether in his public posting or in my limited interactions with him. He can be rough and on rare occasion has said things that could be considered personally disrespectful, but I didn’t think that people were that delicate.
> … advocating for a total stop… which might make AI lab… less likely to pay attention to him.
True, but I had thought better of those people. I would have thought that they could take criticism, especially from someone who inspired some of them into their career.
Thank you for your answers. Overall, I think you are right, though I don’t quite understand why.
[Question] Why aren’t Yudkowsky & Bostrom getting more attention now?
At the time of Hofstadter’s Singularity Summit talk, I wondered why he wasn’t “getting with the program”, and it became clear he was a mysterian: he believed, without being a dualist, that some things, like the mind, are ultimately impossible to understand or describe.
This 2023 interview shows that the new generation of AI has done more than change his mind about the potential of AI: it has struck at the core of his mysterianism.
> the human mind is not so mysterious and complex and impenetrably complex as I imagined it was when I was writing Gödel, Escher, Bach and writing I Am a Strange Loop.
Correct. That is why the original Turing Test is a sufficient-but-not-necessary test: It is meant to identify an AI that is definitively above human level.
What about a Turing Test variant in which such inquiries are banned?
That would be possible. Plenty of people don’t know much about this topic. If you had such a judge, do you think actually doing a Turing Test (or some variant) for ChatGPT is a good idea?
Nice! I am surprised we don’t hear more about attempts at a Turing Test, even if it is not quite there yet.
That looks pretty close to the level of passing a Turing Test to me. So is there a way of trying a full Turing Test, or something like it, perhaps building on the direction you show here?
Do you think there is a place for a Turing-like test that determines how close to human intelligence it is, even if it has not reached that level?
ChatGPT isn’t at that level.
That could well be. Do you think there is a place for a partial Turing Test, as in the Loebner Prize, to determine how close to human intelligence it is, even if it has not reached that level?
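For illustration, a partial score of this kind could be as simple as the fraction of judges who mistake the machine for a human. Here is a minimal sketch under that assumption; the `Session` structure, the invented verdicts, and the 0.5 threshold are my simplifications, not the actual Loebner Prize rules.

```python
# A minimal sketch of scoring a "partial" Turing Test: run many
# judge-vs-participant sessions and report the fraction of sessions in
# which the judge believed the machine was human. Verdicts are invented.
from dataclasses import dataclass

@dataclass
class Session:
    judge_said_human: bool  # the judge's verdict about the machine

def humanness_score(sessions: list[Session]) -> float:
    """Fraction of sessions in which the machine was judged human.

    0.5 is the classic chance-level pass threshold; values below it
    still measure how close the system is to passing.
    """
    return sum(s.judge_said_human for s in sessions) / len(sessions)

# Example: 7 of 20 invented judges were fooled.
sessions = [Session(judge_said_human=(i < 7)) for i in range(20)]
print(f"humanness score: {humanness_score(sessions):.2f}")  # 0.35
```

The appeal of a graded score like this is exactly the point above: it tracks progress toward human-level performance instead of giving only a pass/fail answer.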
What’s up with ChatGPT and the Turing Test?
I thought there was a great shortage of cadavers. How did they manage to get them for a non-medical school, indeed for use by non-students? Also, I am quite impressed that any course, particularly in the Bay Area, is $60 or free.
Nice! Is there also a list of AI-safety corporations and non-profits, with a short assessment of each where feasible: goals, techniques, leaders, number of employees, liveness, progress to date?
> maybe Gary Marcus-esque analysis of the pattern of LLM mistakes?
That is good. Can you recommend one?