I literally noted that GPT-J, which was trained on said 7GB of math (assuming that number is right), usually fails at ‘2 + 2 =’. People can do several-digit addition without pencil and paper: ‘763 + 119 =’ probably doesn’t require pencil and paper to get ‘882’. We do need pencil and paper for many-step algorithms, but this isn’t one. ‘Dumb’ computers do 64-bit addition trivially (along with algebra, calculus, etc.). I haven’t seen specialized math models, but I’m dumbfounded that general models don’t do math way better.
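For contrast, here’s a minimal C sketch of what “trivially” means on conventional hardware: the addition below compiles down to roughly a single ADD instruction, with no model or lookup involved (the specific values are just the example from above).

```c
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* 64-bit addition is one machine instruction on most CPUs. */
    uint64_t a = 763, b = 119;
    printf("%llu\n", (unsigned long long)(a + b)); /* prints 882 */
    return 0;
}
```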
I haven’t tried coding using ‘AI’ tools, so I have no real opinion on how well they compare to basic autocomplete.