It has “beliefs” regarding which word should follow another, and any other belief, opinion or knowledge is an incidental outcome of that. Do you think it’s impossible to improve on LLMs by making the underlying engine more tuned in to truth per se?
No, I think it’s absolutely possible, at least theoretically; I’m not sure what it would take to actually do it, of course. But that’s my point: somewhere in the space of possible LLMs there exists an “always gives you the wisest, most truthful response” model that does exactly the same thing, predicting the next token. As long as the prediction is always that of the next token that would appear in the wisest, most truthful response!
Which is different to predicting a token on the basis of the statistical regularities in the training data. An LLM that works that way is relatively poor at reliably outputting truth, so a version of the SP argument goes through.
I think in the limit of infinite, truthful training data, with sufficient abstraction, it would not necessarily be different. We too form our beliefs from “training data” after all; we’re just highly multimodal and smart enough to know the distinction between a science textbook and a fantasy novel. An LLM maybe doesn’t have that distinction perfectly clear, though it does grasp it to some extent.
There’s no evidence that we do so based solely on token prediction, so that’s irrelevant.
I just don’t really understand in what way “token prediction” is anything less than “literally any possible function from the domain of all possible observations to the domain of all possible actions”, at least if your “tokens” cover the space of possible things you might want to do or say extensively enough.
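To make that concrete, here is a minimal sketch (a toy lookup table in Python, not any real model or library; all names are made up): viewed from the outside, “next-token prediction” is just some function from contexts to tokens, and nothing in that interface rules out the mapping being a maximally truthful one.

```python
# Hypothetical illustration: next-token prediction seen as a function from an
# observation (the context so far) to an action (the emitted token).
from typing import Dict, Tuple

# Toy conditional distribution P(next_token | context); values are invented.
TOY_MODEL: Dict[Tuple[str, ...], Dict[str, float]] = {
    ("the", "sky", "is"): {"blue": 0.9, "green": 0.1},
    ("water", "boils", "at"): {"100 degrees C": 0.95, "50 degrees C": 0.05},
}

def predict_next(context: Tuple[str, ...]) -> str:
    """Return the most probable next token for a given context."""
    dist = TOY_MODEL.get(context, {"<unknown>": 1.0})
    return max(dist, key=dist.get)

# The same interface could, in principle, encode an "always truthful" responder:
# nothing about the mapping context -> token forces it to reproduce statistical
# quirks of the training data rather than the truth.
print(predict_next(("the", "sky", "is")))      # -> blue
print(predict_next(("water", "boils", "at")))  # -> 100 degrees C
```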
I think a significant part of the problem is not the LLM’s trouble distinguishing truth from fiction; it’s rather convincing it, through your prompt, that the output you want is the former and not the latter.