[Linkpost] Faith and Fate: Limits of Transformers on Compositionality
I thought it’d be especially interesting to get critiques/discussion from the LW crowd, because the paper’s claims seem antithetical to a lot of beliefs people here hold, mostly around just how capable and cognizant transformers are or can be.
The authors argue that transformers are subject to compounding errors on long reasoning chains: if each step has some independent, nonzero probability of error, the probability of a fully correct final answer decays exponentially with the chain’s length.
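To make the compounding-error argument concrete, here is a minimal sketch (my own illustration, not code from the paper) assuming each reasoning step independently succeeds with probability q:

```python
# Minimal illustration of compounding errors: if each step of a reasoning
# chain independently succeeds with probability q, the whole n-step chain
# is correct with probability q**n, which decays exponentially in n.

def chain_success_probability(q: float, n: int) -> float:
    """Probability that all n steps are correct, assuming independence."""
    return q ** n

for n in (1, 10, 50, 100):
    print(f"{n:>3} steps: {chain_success_probability(0.99, n):.3f}")
# Even at 99% per-step accuracy, a 100-step chain is right only ~37% of the time.
```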
From the abstract: “In an attempt to demystify Transformers, we investigate the limits of these models across three representative compositional tasks—multi-digit multiplication, logic grid puzzles, and a classic dynamic programming problem. These tasks require breaking problems down into sub-steps and synthesizing these steps into a precise answer. We formulate compositional tasks as computation graphs to systematically quantify the level of complexity, and break down reasoning steps into intermediate sub-procedures. Our empirical findings suggest that Transformers solve compositional tasks by reducing multi-step compositional reasoning into linearized subgraph matching, without necessarily developing systematic problem solving skills.”
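For intuition about the computation-graph framing, here is a toy sketch (my own construction, not the paper’s code) of the graph for 2-digit × 2-digit schoolbook multiplication; the paper quantifies task complexity with statistics of graphs like this, e.g. depth and node count:

```python
# Each key is an intermediate quantity; its list holds the values it depends on.
# Inputs a0, a1, b0, b1 are the digits of the two operands.
graph = {
    "p00": ["a0", "b0"],        # single-digit partial products
    "p01": ["a1", "b0"],
    "p10": ["a0", "b1"],
    "p11": ["a1", "b1"],
    "row0": ["p00", "p01"],     # first partial row (absorbing carries)
    "row1": ["p10", "p11"],     # second partial row, shifted one digit
    "answer": ["row0", "row1"], # final summation
}

def depth(node: str, g: dict) -> int:
    """Length of the longest dependency chain ending at `node`."""
    if node not in g:           # raw input digits have depth 0
        return 0
    return 1 + max(depth(d, g) for d in g[node])

print(depth("answer", graph))   # -> 3; depth and size grow fast with digit count
```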