Another interesting fact is that without any NN, but using the rest of the approach from the paper, their method gets 18⁄30 correct. The NN boosts this to 25⁄30. The prior SOTA was 10⁄30 (also without a NN).
So arguably about half of the improvement over prior SOTA comes just from the non-AI components.
Hmm it might be questionable to suggest that it is “non-AI” though? It’s based on symbolic and algebraic deduction engines and afaict it sounds like it might be the sort of thing that used to be very much mainstream “AI” i.e. symbolic AI + some hard-coded human heuristics?
Sure, just seems like a very non-central example of AI from the typical perspective of LW readers.
Interesting! How does the non-AI portion work?