Off-Policy Actor-Critic Algorithms

   

This post extends my learning about Actor-Critic algorithms to the off-policy setting.

Deep Deterministic Policy Gradients

Deep Deterministic Policy Gradients (DDPG) is an extension of the DQN algorithm that can learn control policies for continuous action spaces. DDPG is an Actor-Critic algorithm, so it learns both a policy and a value function (Q function). Like DQN, DDPG makes use of an experience replay buffer and frozen target networks to stabilize training. The critic used in DDPG, however, differs from the critic used in DQN in two key ways. First, the critic in DDPG takes both the state and the action as input. Second, the critic does not output a Q-value for every possible action (otherwise there would be infinitely many outputs!); instead, it has a single output neuron that produces the Q-value for the given state-action pair.
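
To make the difference concrete, here is a minimal sketch of what such a critic might look like. This is my own illustration in PyTorch with made-up layer sizes, not the exact architecture from the paper:

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """Q(s, a): takes the state and action together and outputs a single Q-value."""
    def __init__(self, state_dim, action_dim, hidden_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # one output neuron: the Q-value for this (s, a) pair
        )

    def forward(self, state, action):
        # Unlike DQN, the action is an input to the network, not an index into the output layer.
        return self.net(torch.cat([state, action], dim=-1))
```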

So how does training work? Training the critic network in DDPG is very similar to how it is trained in DQN. Training the actor, on the other hand, relies on the Deterministic Policy Gradient Theorem proved by Silver et al. in 2014:

$$\nabla_{\theta^\mu} J \approx \mathbb{E}_{s}\!\left[\nabla_a Q(s, a \mid \theta^Q)\big|_{a = \mu(s \mid \theta^\mu)} \, \nabla_{\theta^\mu} \mu(s \mid \theta^\mu)\right]$$

Notice that we are taking the gradient of the Q function with respect to the actions $a$. The intuition is as follows. The critic can evaluate the action that the actor proposes from a particular state. By making small changes to that action, the critic tells us whether the new action is an improvement over the previous one. If the new action does have a higher Q-value, the gradient is used to update the parameters of the actor in that direction.
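
In code, this amounts to maximizing $Q(s, \mu(s))$ by gradient ascent on the actor parameters, letting autodiff chain $\nabla_a Q$ through the actor. A minimal sketch, assuming a critic like the one above plus a hypothetical deterministic `actor` network, a batch of `states`, and an `actor_optimizer`:

```python
# Actor update: increase the critic's estimate of Q(s, mu(s)).
# Minimizing -Q is gradient ascent on Q.
actor_loss = -critic(states, actor(states)).mean()

actor_optimizer.zero_grad()
actor_loss.backward()  # backprop computes dQ/da * da/dtheta, i.e. the deterministic policy gradient
actor_optimizer.step()
```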

It is also important to note that, unlike actor-critic algorithms such as A2C and PPO, the actor in DDPG maps states directly (i.e., deterministically) to actions rather than outputting a distribution over actions. Since the actor isn't sampling actions, how do we actually get exploration? One method is to add Gaussian noise or Ornstein-Uhlenbeck process noise to the deterministic action. Another is to adaptively perturb the parameters of the actor network.
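
A sketch of the first approach, with placeholder values for the noise scales and action bounds:

```python
import numpy as np

def gaussian_exploration(action, sigma=0.1, low=-1.0, high=1.0):
    """Add zero-mean Gaussian noise to the deterministic action, then clip to the valid range."""
    return np.clip(action + np.random.normal(0.0, sigma, size=action.shape), low, high)

class OrnsteinUhlenbeckNoise:
    """Temporally correlated noise of the kind used in the original DDPG paper."""
    def __init__(self, size, mu=0.0, theta=0.15, sigma=0.2):
        self.mu, self.theta, self.sigma = mu, theta, sigma
        self.state = np.full(size, mu)

    def sample(self):
        # Mean-reverting random walk: drift back toward mu plus Gaussian perturbation.
        self.state += self.theta * (self.mu - self.state) + self.sigma * np.random.randn(*self.state.shape)
        return self.state
```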

DDPG Algorithm

  1. For episode = 1, 2, …
  2.    For t = 1, …, T
  3.        Select action according to policy and noise process: $a_t = \mu(s_t \mid \theta^\mu) + \mathcal{N}_t$
  4.        Execute action $a_t$ and observe reward $r_t$ and next state $s_{t+1}$
  5.        Store transition $(s_t, a_t, r_t, s_{t+1})$ in replay buffer
  6.        Sample minibatch of $N$ transitions $(s_i, a_i, r_i, s_{i+1})$ from replay buffer
  7.        Calculate targets: $y_i = r_i + \gamma Q'(s_{i+1}, \mu'(s_{i+1} \mid \theta^{\mu'}) \mid \theta^{Q'})$
  8.        Calculate the loss: $L = \frac{1}{N}\sum_i \left(y_i - Q(s_i, a_i \mid \theta^Q)\right)^2$
  9.        Update the critic network parameters: $\theta^Q \leftarrow \theta^Q - \alpha_Q \nabla_{\theta^Q} L$
  10.        Approximate the policy gradient: $\nabla_{\theta^\mu} J \approx \frac{1}{N}\sum_i \nabla_a Q(s_i, a \mid \theta^Q)\big|_{a=\mu(s_i \mid \theta^\mu)} \nabla_{\theta^\mu}\mu(s_i \mid \theta^\mu)$
  11.        Update the policy parameters: $\theta^\mu \leftarrow \theta^\mu + \alpha_\mu \nabla_{\theta^\mu} J$
  12.        Update the target networks (sketched below): $\theta^{Q'} \leftarrow \tau\theta^Q + (1-\tau)\theta^{Q'}$, $\theta^{\mu'} \leftarrow \tau\theta^\mu + (1-\tau)\theta^{\mu'}$
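
Steps 7–9 and 12 translate roughly into the following sketch, assuming target networks `critic_target` and `actor_target`, a sampled minibatch of tensors, and hyperparameters `gamma` and `tau` (the `dones` mask for episode boundaries is my addition):

```python
import torch
import torch.nn.functional as F

with torch.no_grad():
    # Step 7: bootstrap targets using the frozen target networks.
    target_q = critic_target(next_states, actor_target(next_states))
    y = rewards + gamma * (1.0 - dones) * target_q

# Steps 8-9: regress the critic toward the targets.
critic_loss = F.mse_loss(critic(states, actions), y)
critic_optimizer.zero_grad()
critic_loss.backward()
critic_optimizer.step()

# Step 12: Polyak-average the target networks toward the online networks.
def soft_update(online, target, tau=0.005):
    for p, p_targ in zip(online.parameters(), target.parameters()):
        p_targ.data.mul_(1.0 - tau).add_(tau * p.data)

soft_update(critic, critic_target)
soft_update(actor, actor_target)
```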

DDPG Results

Twin Delayed DDPG

Although DDPG is capable of solving challenging continuous control tasks, training can be very difficult in practice. Twin Delayed DDPG (TD3) uses a few tricks that greatly improve performance (sketched in code after the list):

  1. Target policy smoothing
  2. Clipped Double Q learning
  3. Delaying update of policy and target networks
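
The first two tricks only change how the critic's targets are computed. A minimal sketch, assuming two target critics and placeholder values for the noise scale, clip range, action bounds, and `gamma`/`dones`:

```python
import torch

with torch.no_grad():
    # 1. Target policy smoothing: perturb the target action with clipped noise.
    noise = (torch.randn_like(actions) * 0.2).clamp(-0.5, 0.5)
    next_actions = (actor_target(next_states) + noise).clamp(-1.0, 1.0)

    # 2. Clipped double Q-learning: use the smaller of the two target critics
    #    to reduce overestimation bias in the bootstrap target.
    q1 = critic1_target(next_states, next_actions)
    q2 = critic2_target(next_states, next_actions)
    y = rewards + gamma * (1.0 - dones) * torch.min(q1, q2)
```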

TD3 Algorithm

  1. For episode = 1, 2, …
  2.    For t = 1, …, T
  3.        Select action according to policy and noise process: $a_t = \mu(s_t \mid \theta^\mu) + \epsilon$, $\epsilon \sim \mathcal{N}(0, \sigma)$
  4.        Execute action $a_t$ and observe reward $r_t$ and next state $s_{t+1}$
  5.        Store transition $(s_t, a_t, r_t, s_{t+1})$ in replay buffer
  6.        Sample minibatch of $N$ transitions $(s_i, a_i, r_i, s_{i+1})$ from replay buffer
  7.        Add clipped noise to the target action: $\tilde{a}_{i+1} = \mu'(s_{i+1} \mid \theta^{\mu'}) + \epsilon$, $\epsilon \sim \mathrm{clip}(\mathcal{N}(0, \tilde{\sigma}), -c, c)$
  8.        Calculate targets: $y_i = r_i + \gamma \min_{j=1,2} Q'_j(s_{i+1}, \tilde{a}_{i+1} \mid \theta^{Q'_j})$
  9.        Calculate the loss: $L = \frac{1}{N}\sum_i \sum_{j=1,2} \left(y_i - Q_j(s_i, a_i \mid \theta^{Q_j})\right)^2$
  10.        Update the critic network parameters: $\theta^{Q_j} \leftarrow \theta^{Q_j} - \alpha_Q \nabla_{\theta^{Q_j}} L$
  11.        If $t \bmod d = 0$ then:
  12.           Approximate the policy gradient: $\nabla_{\theta^\mu} J \approx \frac{1}{N}\sum_i \nabla_a Q_1(s_i, a \mid \theta^{Q_1})\big|_{a=\mu(s_i \mid \theta^\mu)} \nabla_{\theta^\mu}\mu(s_i \mid \theta^\mu)$
  13.           Update the policy parameters: $\theta^\mu \leftarrow \theta^\mu + \alpha_\mu \nabla_{\theta^\mu} J$
  14.           Update the target networks: $\theta^{Q'_j} \leftarrow \tau\theta^{Q_j} + (1-\tau)\theta^{Q'_j}$, $\theta^{\mu'} \leftarrow \tau\theta^\mu + (1-\tau)\theta^{\mu'}$
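
Steps 11–14 implement the third trick: the critics are updated every iteration, while the actor and the target networks are updated only every $d$-th iteration (the TD3 paper uses $d = 2$). A sketch, reusing the hypothetical names and the `soft_update` helper from the DDPG sketch above, where `step` counts training iterations:

```python
policy_delay = 2  # d in the pseudocode above

if step % policy_delay == 0:
    # Steps 12-13: the policy gradient uses only the first critic.
    actor_loss = -critic1(states, actor(states)).mean()
    actor_optimizer.zero_grad()
    actor_loss.backward()
    actor_optimizer.step()

    # Step 14: Polyak-average all target networks toward the online networks.
    for online, target in [(critic1, critic1_target),
                           (critic2, critic2_target),
                           (actor, actor_target)]:
        soft_update(online, target, tau=0.005)
```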

TD3 Results

Takeaways

  • TD3 works much better than the original DDPG algorithm
  • Even using a subset of the modifications like delaying the update of the policy improves learning (not shown)
  • I didn’t investigate how decaying the exploration rate over time might affect algorithm convergence, so I might want to look into that at some point

References

  1. Playing Atari with Deep Reinforcement Learning
  2. Deterministic Policy Gradient Algorithms
  3. Continuous control with deep reinforcement learning
  4. Parameter Space Noise for Exploration
  5. Addressing Function Approximation Error in Actor-Critic Methods
