I have been wanting to curate this for a long time. Since AlphaStar seemed really powerful at the time, it was useful to read an analysis of where it goes wrong: I felt that the building placement was an excellent concrete example of what a lack of causal reasoning really means. Not only is this useful for thinking about AlphaStar, the same weaknesses apply to GPT, which we have been discussing a lot lately: it only takes a bit of playing around with, say, AI Dungeon before this becomes very obvious.