Gemini 1206 Exp has a 2 million token context window; even if that isn't the effective context, it probably performs much better in that regard than GPT-4o and the like. I haven't tested it yet because I don't want to get rate-limited on AI Studio in case they monitor that.
Frankly, the "shorter" conversations I had, at a few tens of thousands of tokens, were already noticeably more consistent than before, e.g. it could still reference earlier responses much later in the conversation.