Yes, I’m assuming cumulatively-calculated reward. In general this is a fairly standard assumption (rewards being defined at every timestep is part of the definition of MDPs and POMDPs, and given that, I don’t see much advantage in delaying computing the reward until the end of the episode). For agents like AlphaGo, observing these rewards obviously won’t be very helpful, though, since those rewards are all 0 until the last timestep. But in general I expect rewards to occur multiple times per episode when training advanced agents, especially as episodes get longer.
Hmm, I was surprised when it turned out that AlphaStar did not use any reward shaping and just got a reward at the end of each game/episode, and may have over-updated on that.
If you “expect rewards to occur multiple times per episode when training advanced agents” then sure, I understand your suggestion in light of that.
ETA: It occurs to me that your idea can be applied even if rewards are only available at the end of each episode. Just concatenate several episodes together into larger episodes during training; then, within a single concatenated episode, the rewards from the earlier episodes can be shown to the agent during the later episodes.
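To make the concatenation idea concrete, here is a minimal sketch of what this could look like. Everything here is hypothetical and not from any actual implementation: I'm assuming episodes are stored as lists of (observation, action, reward) steps with a nonzero reward only at the final step, and the helper name `concatenate_episodes` and the choice of appending prior rewards onto the observation vector are just illustrative.

```python
import numpy as np

def concatenate_episodes(episodes, k=3):
    """Hypothetical sketch: merge groups of k sparse-reward episodes into
    longer training episodes, augmenting each step's observation with the
    terminal rewards of the episodes that came earlier in the same group."""
    merged = []
    for i in range(0, len(episodes), k):
        block = episodes[i:i + k]
        prior_rewards = []  # terminal rewards of earlier episodes in this block
        combined = []
        for ep in block:
            for obs, action, reward in ep:
                # Show the agent the rewards from earlier episodes in the
                # concatenation, zero-padded to a fixed length of k - 1.
                padded = prior_rewards + [0.0] * (k - 1 - len(prior_rewards))
                combined.append((np.append(obs, padded), action, reward))
            prior_rewards.append(ep[-1][2])  # this episode's end-of-episode reward
        merged.append(combined)
    return merged
```

Under these assumptions, an agent trained on the merged episodes would get to condition on observed rewards partway through its (concatenated) episode, even though each underlying game only yields a reward at its end.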