Suppose you are writing a simulation. You keep optimizing it, hardcoding some things, handling different cases more efficiently, and so on. One day your simulation becomes efficient enough that you can run a big enough grid for long enough, and life develops in it. Then intelligent life. Then they try to figure out the physics of their universe, and they succeed! But, oh wait: their description is extremely short yet completely computationally intractable.
Can you say that they have actually figured out what kind of universe they are in, or should you wait until they discover another million lines’ worth of optimizations for it? Should you create giant sparkling letters saying “congratulations, you figured it out,” or wait for a more efficient formulation?
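To make the “short but intractable” contrast concrete, here is a minimal sketch, assuming the simulated universe is something like Conway’s Game of Life (an illustrative choice on my part, not anything specified above): the entire fundamental law fits in a dozen lines of naive code, while a simulator fast enough to actually run a big grid for long enough (Hashlife, caching, hardcoded special cases) can be orders of magnitude more code.

```python
# A toy "complete physics": Conway's Game of Life, written as naively as
# possible. This is the short-but-intractable description -- correct, but
# hopeless at universe scale without a heavily optimized simulator.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """One tick of the universe. grid is a 2D array of 0s and 1s."""
    # Count the eight neighbors of every cell (toroidal wrap-around).
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1)
        for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    # The entire fundamental law: birth on 3 neighbors, survival on 2 or 3.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(grid.dtype)

# Usage: a glider on a tiny grid -- fine here, intractable at scale.
universe = np.zeros((8, 8), dtype=np.uint8)
universe[1, 2] = universe[2, 3] = universe[3, 1] = universe[3, 2] = universe[3, 3] = 1
for _ in range(4):
    universe = step(universe)
```

The inhabitants who discover `step` have, in this sense, found the whole physics; everything else in your codebase is just about running it fast.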
This should probably only be attempted with a clear and huge warning that it’s an LLM-authored comment. Because LLMs are good at matching style without matching the content, such a comment could end up exploiting users’ heuristics, which are calibrated only for human levels of honesty / reliability / non-bullshitting.
Also, check out this comment about how conditioning on the karma score can give you hallucinated strong evidence:
https://www.lesswrong.com/posts/PQaZiATafCh7n5Luf/gwern-s-shortform?commentId=smBq9zcrWaAavL9G7