This means that LLMs can inadvertently learn to replicate these biases in their outputs.
Or perhaps the network learns to place more trust in tokens that it already "thought about" during generation.
How is this possible? We are only running inference.
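One way to see why inference alone can produce this effect: during autoregressive decoding the model's own outputs are appended to the context, so later steps attend to them just like any prompt tokens, with no weight update involved. Below is a minimal sketch of that loop, using a Hugging Face GPT-2 checkpoint purely for illustration (the checkpoint and prompt are my assumptions, not anything specified in this text).

```python
# Minimal sketch: greedy autoregressive decoding. The weights never change;
# what changes is the context, because every generated token is fed back in
# and can be attended to by subsequent steps.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # illustrative checkpoint
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

input_ids = tokenizer("Let's think step by step.", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):
        logits = model(input_ids).logits                          # forward pass only
        next_id = logits[:, -1, :].argmax(dim=-1, keepdim=True)   # greedy pick
        input_ids = torch.cat([input_ids, next_id], dim=-1)       # feed it back as context

print(tokenizer.decode(input_ids[0]))
```

Under this view, the "thinking" tokens matter not because anything is learned at inference time, but because they sit in the context window where attention can weight them.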