Mainly things that we would never think of: those are fruitful for AI and not for us.
Things that are useful for us but not for AI include investigating gaps in tokenization, hiding things from the AI, and things that are hard to explain or judge, because we probably ought to trust AI researchers less than human researchers when it comes to good faith.
That seems correct, but I don’t think any of those are unworthy of investigating with AI, despite the relatively higher bar.