How does DQN work in an environment where reward is always -1

Submitted by 删除回忆录丶 on 2021-01-05 07:14:05

Question


Given that the OpenAI Gym environment MountainCar-v0 ALWAYS returns -1.0 as a reward (even when the goal is achieved), I don't understand how DQN with experience replay converges, yet I know it does, because I have working code that proves it. By working, I mean that when I train the agent, the agent quickly (within 300-500 episodes) learns how to solve the MountainCar problem. Below is an example from my trained agent.

It is my understanding that ultimately there needs to be a "sparse reward" that is found. Yet as far as I can see from the OpenAI Gym code, there is never any reward other than -1. It feels more like a "no reward" environment.

What almost answers my question, but in fact does not: when the task is completed quickly, the return (sum of rewards) of the episode is larger. So if the car never finds the flag, the return is -1000. If the car finds the flag quickly, the return might be -200. The reason this does not answer my question is that with DQN and experience replay, those returns (-1000, -200) are never present in the experience replay memory. All the memory has are tuples of the form (state, action, reward, next_state), and of course tuples are pulled from memory at random, not episode-by-episode.
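
For concreteness, here is a minimal sketch of the kind of replay memory I am describing (the class and the names are purely illustrative, not taken from any particular library):

import random
from collections import deque

class ReplayMemory:
    # Illustrative buffer: it only stores (state, action, reward, next_state)
    # tuples and samples them uniformly, with no notion of episodes.
    def __init__(self, capacity=10000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size):
        # Tuples are drawn at random across episode boundaries, so episode
        # returns such as -1000 or -200 never appear anywhere in here.
        return random.sample(self.buffer, batch_size)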

Another element of this particular OpenAI Gym environment is that the Done state is returned on either of two occasions: hitting the flag (yay) or timing out after some number of steps (boo). However, the agent treats both the same, accepting the reward of -1. Thus as far as the tuples in memory are concerned, both events look identical from a reward standpoint.

So, I don't see anything in the memory that indicates that the episode was performed well.

And thus, I have no idea why this DQN code is working for MountainCar.


Answer 1:


The reason this works is that in Q-learning, your model is trying to estimate the SUM (technically the discounted sum) of all future rewards for each possible action. In MountainCar you get a reward of -1 on every step until you win, so if you do manage to win you end up collecting less negative reward than usual. For example, your total score after winning might be -160 instead of -200, so your model starts predicting higher Q-values for actions that have historically led to winning the game.
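
As a quick numeric illustration of that point (plain Python, with a discount factor of 0.99 chosen only for the example):

# Discounted sum of a stream of -1 rewards: the shorter the episode,
# the less negative the value the Q-function should converge towards.
def discounted_return(num_steps, gamma=0.99):
    return sum(-1.0 * gamma ** t for t in range(num_steps))

print(discounted_return(160))  # roughly -80.0  (reached the flag quickly)
print(discounted_return(200))  # roughly -86.6  (timed out)

With discounting the gap is smaller than the raw -160 vs -200, but the ordering is the same, and that ordering is all the argmax over actions needs.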




Answer 2:


You are right: there is no direct association between the replay memory and the episode return. The Q-value in DQN is, at each step, an estimate of the expected return of each action. The training signal is the difference between the observed (bootstrapped) return and that estimate, i.e. the TD error.

Giving -1 for every non-goal step is a trick that pushes the agent towards actions that finish the episode sooner. Because the Q-value is an action value, at each step the model predicts a value for every possible move, and the policy (usually greedy or epsilon-greedy) picks the action with the largest one. Imagine that, from some state, going backwards takes 200 steps to finish the episode while going forwards takes only 100; the corresponding Q-values (without discounting) are -200 and -100 respectively. How does the model come to know these values? Through repeated episodes of trial and error: it is trained to minimise the difference between the real return and its prediction, i.e. the TD error.

With a uniformly sampled experience replay, all transitions are drawn with equal probability. With prioritized experience replay, transitions with a high estimated error can be reused more often: the priorities are usually proportional to the TD error, i.e. the gap between the target Q-value and the model's current prediction. A larger priority means a more surprising experience, and replaying those more often helps speed up training.

You can check the idea in Prioritized Experience Replay, Schaul et al., 2016.
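
A rough sketch of that sampling rule (the function name, the exponent alpha and the epsilon constant are illustrative choices, not the paper's exact implementation, which also adds importance-sampling weights):

import numpy as np

def sample_prioritized(td_errors, batch_size, alpha=0.6, eps=1e-6):
    # Sample transition indices with probability proportional to
    # |TD error|^alpha, as in proportional prioritized replay.
    priorities = (np.abs(td_errors) + eps) ** alpha
    probs = priorities / priorities.sum()
    return np.random.choice(len(td_errors), size=batch_size, p=probs)

# Transitions with surprising outcomes (large |TD error|) are replayed more often.
td_errors = np.array([0.1, 0.1, 5.0, 0.2])
print(sample_prioritized(td_errors, batch_size=2))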




Answer 3:


It might help to look at a reduced problem. Consider:

States:
───┬───┬───┬───┬───┐
...│L2 │L1 │ S │ R1│
───┴───┴───┴───┴───┘

Actions:
left or right

r = -1 for all states
episode terminates upon reaching R1 or after 2 steps

Memory:

  1. (S, left, -1, L1) non-terminal
  2. (S, right, -1, R1) terminal
  3. (L1, left, -1, L2) terminal
  4. (L1, right, -1, S) terminal

The obvious but important thing to note is that although the rewards are all identical, the states and actions are not. This information allows us to reason about the next state given the current one. Let's look at the targets that we are updating towards (with no discount):

  1. Q(S, left ) --> -1 + max{a}Q(L1, a)
  2. Q(S, right) --> -1
  3. Q(L1, left ) --> -1
  4. Q(L1, right) --> -1

In this contrived example, only transition 1 presents an extra source of instability. But over time, as the action values at L1 converge (once transitions 3 and 4 have been sampled often enough), so does the target for transition 1. At that point, when we encounter transition 1 again, we have the better estimate Q(S, left) --> -1 + -1 = -2.

It is not enough to look only at the rewards when asking how DQN learns from its memory, since it also uses the next observation to form its current best estimate of the action value at the next step (relevant code), effectively linking everything together and slowly tallying up the rewards, albeit in a much less stable way than tabular Q-learning.
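
To watch this bootstrapping happen, here is a small tabular sketch of the reduced problem above (tabular Q-learning rather than a DQN, with illustrative constants, just to keep it short):

import random

# States of the reduced problem; the episode starts at S and ends on
# reaching R1 or after 2 steps. The reward is always -1.
STATES = ["L2", "L1", "S", "R1"]
ACTIONS = ["left", "right"]

def step(state, action):
    idx = STATES.index(state) + (1 if action == "right" else -1)
    idx = max(0, min(idx, len(STATES) - 1))
    next_state = STATES[idx]
    return next_state, -1.0, next_state == "R1"

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma = 0.5, 1.0  # learning rate; no discount, as in the targets above

for _ in range(500):
    state, t = "S", 0
    while True:
        action = random.choice(ACTIONS)          # explore uniformly
        next_state, reward, reached_goal = step(state, action)
        t += 1
        done = reached_goal or t >= 2            # goal or 2-step timeout
        target = reward if done else reward + gamma * max(
            Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (target - Q[(state, action)])
        if done:
            break
        state = next_state

print(Q[("S", "right")])  # converges to -1: reach R1 immediately
print(Q[("S", "left")])   # converges to -2: one step, then timeout

Even though every stored reward is -1, the two action values at S end up different, because the bootstrap term links each transition to what happens next.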


As an exercise, consider extending this further and putting the terminal state at R2. Then it is easy to see that max{a}Q(S, a) = -2: it takes the same number of steps to reach R2 as it does to simply time out, so it does not matter what we do (unless we start closer to R2). Bump the number of timeout steps up, however, and the agent should head towards R2 again. In other words, MountainCar also works because the timeout is set to a number of steps larger than what is needed to reach the goal; that path eventually propagates its (negative, but better) values back to the initial state.

While this point holds for other environments too, in those the agent can at least learn to get closer to the goal before timing out, provided it is rewarded for that progress. That is not the case with the reward design of the MountainCar environment.




Answer 4:


In DQN you learn the Q-function, which basically approximates your return. In the replay memory you store (s, a, s', r) tuples and keep re-training your Q-function on them. If, for a given tuple, you performed well (you reached the flag quickly), then you will keep re-experiencing it by reusing the tuple for training, because the Q-function is higher for that tuple.

In any case, experience replay usually works better for any problem, not just for MountainCar.




Answer 5:


You say:

those returns (-1000, -200) are never present in the experience replay memory.

What is present in the replay memory is a (state, action, reward, next_state) tuple together with a done flag that tells you the episode is finished. See the OpenAI baselines deepq example:

# Store transition in the replay buffer.
replay_buffer.add(obs, action, rew, new_obs, float(done))

In the training update, if float(done) == 1 then the future Q-value is ignored, so the bootstrap term at the end of the episode is 0. If that happens on step 200, the total return for the episode will be -200; if it happens on step 1000, the total return will be -1000.

To put it a different way: if, completely at random, the episode ended on step 200, an agent that had been performing as badly as a random policy would have a Q-value (expected future return) of about -800. If the episode then ended, the TD error would be +799, representing the positive surprise of losing only one more -1.
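
Concretely, the update that this flag feeds into looks roughly like the following (a simplified sketch of a one-step DQN target, not the exact baselines code):

import numpy as np

def td_targets(rewards, next_q_values, dones, gamma=0.99):
    # The (1 - done) factor zeroes out the bootstrap term on the transition
    # that ends the episode, so the target there is just the -1 reward.
    return rewards + gamma * np.max(next_q_values, axis=1) * (1.0 - dones)

rewards = np.array([-1.0, -1.0])
next_q  = np.array([[-150.0, -140.0],    # mid-episode transition
                    [-150.0, -140.0]])   # episode-ending transition
dones   = np.array([0.0, 1.0])
print(td_targets(rewards, next_q, dones))  # [-139.6, -1.0]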

I note that the code you linked to does not seem to use the done flag in the replay buffer. Instead it relies on a final state of s_ == None to signify the end of the episode. Take that code out, and the agent won't learn.

if s_ is None:
    t[a] = r
else:
    t[a] = r + GAMMA * numpy.amax(p_[i])


Source: https://stackoverflow.com/questions/54371272/how-does-dqn-work-in-an-environment-where-reward-is-always-1
