Thank you.
> It’s unclear if this implies fundamental differences in how they work versus different specializations.
Correct. That article argues that LLMs are more powerful than humans in this skill, but not that they have different (implicit) goal functions or that their cognitive architecture is deeply different from a human's.
> Like, obviously it’s gonna be alien in some ways and human-like in other ways. Right
It has been said that since LLMs predict human output, they will, if sufficiently improved, behave in a quite human way.
> Can you say more about what you mean by “Where can I find a post
As part of a counterargument to that, we could find evidence that their logical structure is quite different from that of humans. I'd like to see such a write-up.
> Surely you would agree that if we were to do a cluster analysis of the cognition of all humans alive today + all LLMs, we’d end up with two distinct clusters (the LLMs and then humanity) right?
I agree, but I’d like to see some article or post arguing that.
Thank you. But being manipulative, silly, sycophantic, or nasty is pretty human. I am looking for hints of a fundamentally different cognitive architecture.
That is good, thank you.
Yes, I agree with your point about most journalists. Still, I think well enough of the professors and AI developers that I mentioned to imagine that they would have a more positive attitude.
> He is also maybe just somewhat annoying to work with and interact with
I have heard that elsewhere as well. Still, I don’t really see that myself, whether in his public posting or in my limited interactions with him. He can be rough and on rare occasion has said things that could be considered personally disrespectful, but I didn’t think that people were that delicate.
> … advocating for a total stop… which might make AI lab… less likely to pay attention to him.
True, but I had thought better of those people. I would have thought that they could take criticism, especially from someone who inspired some of them into their career.
Thank you for your answers. Overall, I think you are right, though I don’t quite understand.
At the time of Hofstadter’s Singularity Summit talk, I wondered why he wasn’t “getting with the program”, and it became clear he was a mysterian: he believed, without being a dualist, that some things, like the mind, are ultimately, basically, essentially impossible to understand or describe.
This 2023 interview shows that the new generation of AI has done more than change his mind about the potential of AI: it has struck at the core of his mysterianism.
> the human mind is not so mysterious and complex and impenetrably complex as I imagined it was when I was writing Gödel, Escher, Bach and writing I Am a Strange Loop.
Correct. That is why the original Turing Test is a sufficient-but-not-necessary test: It is meant to identify an AI that is definitively above human level.
What about a Turing Test variant in which such inquiries are banned?
That would be possible. Plenty of people don’t know much about this topic. If you had such a judge, do you think actually doing a Turing Test (or some variant) for ChatGPT would be a good idea?
Nice! I am surprised we don’t hear more about attempts at a Turing Test, even if it is not quite there yet.
That looks pretty close to the level of passing a Turing Test to me. So is there a way of trying a full Turing Test, or something like it, perhaps building on the direction you show here?
Do you think there is a place for a Turing-like test that determines how close to human intelligence it is, even if it has not reached that level?
ChatGPT isn’t at that level.
That could well be. Do you think there is a place for a partial Turing Test as in the Loebner Prize—to determine how close to human intelligence it is, even if it has not reached that level?
I thought there was a great shortage of cadavers. How did they manage to get them for a non-medical school, indeed for use by non-students? Also, I am quite impressed that any course, particularly in the Bay Area, is $60 or free.
Nice! Is there also a list of AI-safety corporations and non-profits, with a short assessment of each where feasible: goals, techniques, leaders, number of employees, liveness, progress to date?
I organized that, so let me say this:
Neither that online meetup nor the invitation to Vassar was officially affiliated with or endorsed by SSC. Any responsibility for inviting him is mine.
I have conversed with him a few times, as follows:
I met him in Israel around 2010. He was quite interesting, though he did try to get me to withdraw my retirement savings to invest with him. He was somewhat persuasive. During our conversation, he made some offensive statements, but I am perhaps less touchy about such things than the younger generation.
In 2012, he explained Acausal Trade to me, and that was the seed of this post. That discussion was quite sensible and I thank him for that.
A few years later, I invited him to speak at LessWrong Israel. At that time I thought him a mad genius—truly both. His talk was verging on incoherence, with flashes of apparent insight.
Before the online meetup in 2021, he insisted on a preliminary talk; he made statements that produced twinges of persuasiveness. (Introspecting on that is kind of interesting, actually.) I stayed with it for two or more hours before begging off, because it was fascinating in a way. I was able to analyze his techniques as Dark Arts. Apparently I am mature enough to shrug off such techniques.
His talk at my online meetup was even less coherent than the earlier ones, with multiple offensive elements. Indeed, I believe it was a mistake to have him on.
If I have offended anyone, I apologize, though I believe that letting someone speak is generally not something to be afraid of. But I wouldn’t invite him again.
Thank you. Can you point me to a page on FLI’s latest grants? What I found was from a few years back. Are there other organizations whose grants are worthy of attention?
Thank you. Can you link to some of the better publications by Wentworth, Turner, and yourself? I’ve found mentions of each of you online but I’m not finding a canonical source for the recommended items.
I found this about Steve Byrnes.
And this about Beth Barnes.
> maybe Gary Marcus-esque analysis of the pattern of LLM mistakes?
That is good. Can you recommend one?