Thank you for your answers!
Valentin Baltadzhiev
[Question] Can any LLM be represented as an Equation?
[Question] Can someone explain to me what went wrong with ChatGPT?
[Question] What experiment settles the Gary Marcus vs Geoffrey Hinton debate?
I think I understand, thank you. For reference, this is the tweet which sparked the question: https://twitter.com/RichardMCNgo/status/1735571667420598737
I was confused as to why you would necessarily need “understanding” and not just simple next-token prediction to do what ChatGPT does.
I think it does, thank you! In your model does a squirrel perform better than ChatGPT at practical problem solving simply because it was “trained” on practical problem solving examples and ChatGPT performs better on language tasks because it was trained on language? Or is there something fundamentally different between them?
I don’t really have a coherent answer to that but here it goes (before reading the spoiler): I don’t think the model understands anything about the real world because it never experienced the real world. It doesn’t understand why “a pink flying sheep” is a language construct and not something that was observed in the real world.
Reading my answer back, maybe we don’t have any understanding of the real world either; we have just come up with some patterns based on the qualia (tokens) that we have experienced (been trained on). Who is to say whether those patterns match some deeper truth or not? Maybe there is a vantage point from which our “understanding” will look like hallucinations.
I have a vague feeling that I understand the second part of your answer. Not sure though. In that model of yours, are the hallucinations of ChatGPT just the result of an imperfectly trained model? And can a model ever be trained to perfectly predict text?
Thanks for the answer it gave me some serious food for thought!
Okay, all of that makes sense. Could this mean that the model didn’t learn anything about the real world, but it learned something about the patterns of words that earn it a thumbs up from the RLHFers?
Thanks for the detailed answer! I think that helped
Does the following make sense:
We use language to talk about events and objects (emotions, trees, etc.). Since those are things that we have observed, our language will have some patterns that are related to the patterns of the world. However, the patterns in the language are not a perfect representation of the patterns in the world (we can talk about things falling away from our planet, about fire that consumes heat instead of producing it, etc.). An LLM trained on text then learns the patterns of the language but not the patterns of the world. Its “world” is only language, and that’s the only thing it can learn about.
Does the above sound true? What are the problems with it?
I am ignoring your point that neural networks can be trained on a host of other things, since there is little discussion around whether or not Midjourney “understands” the images it is generating. However, the same point should apply to other modalities as well.
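To make the claim above concrete, here is a minimal sketch (a toy bigram counter, nothing like a real LLM, and not anyone’s actual method): the only thing entering “training” is which token follows which, so the model will extend physically impossible sentences just as happily as true ones. The tiny corpus and names are purely illustrative assumptions.

```python
# Toy sketch: a "language model" whose entire training signal is text statistics.
# Nothing about the physical world enters the counts -- only token co-occurrence.
from collections import Counter, defaultdict

corpus = (
    "the sheep grazes on the hill . "
    "the pink flying sheep grazes on the cloud . "  # fluent, not physical
    "fire consumes heat on the hill . "             # fluent, physically wrong
).split()

# "Training": count how often each token follows each previous token.
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent next token seen after `token` in the corpus."""
    return follow_counts[token].most_common(1)[0][0]

# The model continues impossible sentences without complaint, because its
# only "world" is the distribution of tokens it was trained on.
print(predict_next("flying"))    # -> 'sheep'
print(predict_next("consumes"))  # -> 'heat'
```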
[Question] Why do we need an understanding of the real world to predict the next tokens in a body of text?
I love the idea that petertodd and Leilan are somehow interrelated with the archetypes of the trickster and the mother goddess inside GPT’s internals. I would love to see some work done on discovering other such archetypes, and the weird, seemingly random tokens that correlate with them. Things like a Sun God, a great evil snake, and a prophet seem to pop up in religions all over the place, so why not inside GPT as well?
Glad to hear that!
A list of all the deadlines in Biden’s Executive Order on AI
On the bright side, Connor Leahy from Conjecture is going to be at the summit, so there will be at least one strong voice for existential risk present there.
For what it’s worth, it’s probably a good thing that the Bing chatbot is like that. The overall attitude towards AI for the last few months has been one of unbridled optimism, and seeing a horribly aligned model in action might be a wake-up call for some, showing that the people deploying these models are unable to control them.
You make interesting points. What about the other examples of the ToM task (the agent writing the false label themselves, or having been told by a trusted friend what is actually in the bag)?
Thanks for your answer. Would it be fair to say that both of them are oversimplifying the other’s position and that they are both, to some extent, right?