Well, it’s definitely related to the other topic this week: https://www.lesswrong.com/posts/gP8tvspKG79RqACTn/modern-transformers-are-agi-and-human-level
Apparently “human level” isn’t too hard to reach, at least in the domain of question-and-answer. I wonder how often the ‘gotcha’ questions that current LLMs usually fail on could be solved by even 50% of Americans...
Yup. I feel similarly about “human values”. The values of specific people are great. Humanity’s declared preferences are contradictory and incoherent. Humanity’s revealed preferences are awful.