Curated. This post was first considered for curation when it first came out (many months ago), and it fell through the cracks for various reasons. Kaj Sotala was interested in curating it now, in part to compare/contrast it with various discussions of GPT-3.
Due to some time-zone issues, I'm curating it now, and Kaj will respond with more thoughts when he gets a chance.
I have been wanting to curate this for a long time. Since AlphaStar seemed really powerful at the time, it was valuable to read an analysis of where it goes wrong: I felt the building placement was an excellent concrete example of what a lack of causal reasoning really means. The post isn't only useful for thinking about AlphaStar; the same weaknesses apply to GPT, which we have been discussing a lot lately, and it only takes a bit of playing around with, say, AI Dungeon before they become very obvious.