This is correct, although I’m specifically interested in the case of Go AI because I think it’s important to understand neural networks that ‘plan’, as well as those that merely ‘perceive’ (the latter being the main focus of most interpretability work, with some notable exceptions).