Google doesn’t seem interested in serving large models until it has a rock-solid solution to the “if you ask the model to say something horrible, it will oblige” problem.
I think that’s the right call. Anecdotal bad outputs would probably go viral and create a media firestorm, with the stochastic-parrots Twitter crowd beating them over the head along the way. Not sure you can ever get it perfect, but they should probably get close before releasing it publicly.
At the same time, a good math-solving chatbot could be really useful for math-averse people, even with brittle performance. I’m not sure it’s worth the risk, but it might be worth considering.
You’ll also get people complaining that it’ll help students cheat, because testing is more important than education to people involved in the education system.
I think that’s unfair.
Students to whom learning is more important than test results won’t cheat either way. Students to whom test results are more important than learning will cheat if it’s easy and reluctantly fall back on actually learning the material if they have to. Educators who care whether their students learn will prefer the latter outcome.
(It is sometimes also true that educators care more about testing than teaching. But I don’t think that’s anything like the only reason why they will complain about things that make it very easy for students to cheat.)
Students also might reason (maybe correctly) that if AI is already better than most humans will ever be in their lifetime, why exactly are they spending all this time on things like symbolic manipulation and arithmetic by hand anyway?
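And that kind of by-hand work was reliably automated long before LLMs. A minimal sketch using sympy, an off-the-shelf Python computer algebra system (picked purely for illustration):

    # Symbolic manipulation that students drill by hand, done by an
    # ordinary computer algebra system (sympy).
    from sympy import symbols, solve, expand, factor

    x = symbols("x")

    # Solve a quadratic symbolically: x**2 - 5*x + 6 = 0  ->  [2, 3]
    print(solve(x**2 - 5*x + 6, x))

    # Expand a product, then recover the factored form.
    print(expand((x + 1) * (x + 2)))  # x**2 + 3*x + 2
    print(factor(x**2 + 3*x + 2))     # (x + 1)*(x + 2)

So the student’s question isn’t really about whether machines can do the mechanics; it’s about whether drilling the mechanics still builds intuition worth having.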