Are GPT-n systems more likely to:
1. Learn superhuman cognition to predict tokens better, yet accurately express human cognitive failings in their simulacra because those failings are part of their learned “world model”; or
2. Learn human-level cognition to predict tokens better, including human cognitive failings?