This is not the post where I intended to discuss this question; I just want to register disagreement here: you want a useful LLM, not an LLM that produces all possible completions of your idea, but one that produces a useful completion of it. So you want an LLM whose outputs are at least partially weighted by their usefulness (as ChatGPT's are), which implies consequentialism.
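To make "weighted by usefulness" concrete, here is a minimal sketch of one such mechanism: best-of-n reranking, where several completions are sampled and the one a scorer rates most useful is returned. This is an illustration, not ChatGPT's actual pipeline (which bakes the preference into the weights via RLHF); `generate_candidates` and `usefulness_score` are hypothetical stubs standing in for a real sampler and a learned reward model.

```python
import random

# Hypothetical stand-ins: a real system would call an LLM sampler
# and a learned reward model here.
def generate_candidates(prompt: str, n: int) -> list[str]:
    """Sample n completions of the prompt (stubbed with canned text)."""
    return [f"{prompt} ... completion #{i}" for i in range(n)]

def usefulness_score(prompt: str, completion: str) -> float:
    """Score how useful a completion is (stubbed with random noise)."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    """Return the candidate the scorer rates most useful.

    The selection step is where consequentialism enters: completions
    are ranked by a prediction about their effects (usefulness),
    not merely by how probable they are as text.
    """
    candidates = generate_candidates(prompt, n)
    return max(candidates, key=lambda c: usefulness_score(prompt, c))

if __name__ == "__main__":
    print(best_of_n("Explain why the sky is blue."))
```

RLHF-style training applies the same selection pressure inside the model itself, so the deployed system samples high-usefulness completions directly instead of filtering them after the fact; either way, outputs are chosen for their consequences.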