Depends on what you include in the definition of LLM.
The NN itself? Sure, it can, with the caveat of hardware and software limitations: we aren't dealing with EXACT math here. Floating-point rounding and the non-deterministic order of completion in parallel computation will introduce slight differences from run to run, even though the underlying math stays the same.
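To make the rounding point concrete, here is a toy Python illustration (not LLM code, just plain floating-point arithmetic): summing the same numbers in a different order can produce a slightly different result, which is exactly what happens when parallel hardware reduces a sum in a non-deterministic order.

```python
import random

# Floating-point addition is not associative, so the order in which
# a sum is accumulated can change the result in the last few bits.
values = [random.uniform(-1e6, 1e6) for _ in range(100_000)]

forward = sum(values)
backward = sum(reversed(values))

print(forward == backward)      # often False
print(abs(forward - backward))  # tiny, but nonzero
```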
The system that preprocesses information, feeds it into the NN, and postprocesses the NN output into readable form? That is trickier, given that these usually involve some form of randomness; otherwise the LLM's output would be exactly the same given exactly the same inputs, and that is generally frowned upon as not very AI-like behavior. But if the system uses a pseudo-random generator for that, it can also be described in mathematical terms, provided you know the generator's seed.
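A minimal sketch of that idea, using a made-up three-token vocabulary (the tokens and probabilities are purely illustrative): once the seed is fixed, the "random" sampling is a pure function of its inputs and fully reproducible.

```python
import random

# Hypothetical token set and probabilities, just for illustration.
tokens = ["cat", "dog", "fish"]
weights = [0.5, 0.3, 0.2]

def sample_tokens(seed, n=5):
    # A pseudo-random generator with a known seed: its entire output
    # sequence is determined by that seed.
    rng = random.Random(seed)
    return [rng.choices(tokens, weights)[0] for _ in range(n)]

print(sample_tokens(42))  # same seed -> identical "random" output
print(sample_tokens(42))
print(sample_tokens(7))   # different seed -> different output
```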
If they use a non-deterministic source for their randomness, then no. But that is rarely required and makes the system really difficult to debug, so I doubt it.
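For contrast, a sketch of what a non-deterministic source looks like: os.urandom pulls from OS entropy, so there is no seed to record and no way to replay or express the sequence as a function of its inputs.

```python
import os

# An OS entropy source has no seed you can record, so its output
# cannot be reproduced across runs or written as a closed formula.
print(os.urandom(8))  # different on every run of the program
print(os.urandom(8))  # and even differs between calls in the same run
```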