QR-DQN
Quantile Regression DQN (QR-DQN) builds on Deep Q-Network (DQN) and makes use of quantile regression to explicitly model the distribution over returns, instead of predicting only the mean return as DQN does.
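At the core of QR-DQN is the quantile Huber loss from the paper linked in the notes below. The following is a minimal PyTorch sketch of that loss for intuition only; the function name, tensor shapes, and reduction choices are illustrative assumptions and do not mirror the sb3_contrib internals.

import torch

def quantile_huber_loss(current_quantiles, target_quantiles, kappa=1.0):
    # current_quantiles, target_quantiles: tensors of shape (batch_size, n_quantiles)
    n_quantiles = current_quantiles.shape[1]
    # Quantile midpoints tau_i = (2 * i + 1) / (2 * N)
    taus = (torch.arange(n_quantiles, dtype=torch.float32, device=current_quantiles.device) + 0.5) / n_quantiles
    # Pairwise TD errors delta_ij = target_j - current_i, shape (batch, N, N)
    td_errors = target_quantiles.unsqueeze(1) - current_quantiles.unsqueeze(2)
    # Element-wise Huber loss with threshold kappa
    abs_td = td_errors.abs()
    huber = torch.where(abs_td <= kappa, 0.5 * td_errors ** 2, kappa * (abs_td - 0.5 * kappa))
    # Asymmetric quantile weights |tau_i - 1{delta_ij < 0}|
    weights = torch.abs(taus.view(1, -1, 1) - (td_errors.detach() < 0).float())
    # Sum over the predicted-quantile dimension, average over target quantiles and batch
    return (weights * huber).mean(dim=2).sum(dim=1).mean()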
Available Policies

Policy | Description
---|---
MlpPolicy | alias of QRDQNPolicy
CnnPolicy | Policy class for QR-DQN when using images as input.
MultiInputPolicy | Policy class for QR-DQN when using dict observations as input.
Notes
Original paper: https://arxiv.org/abs/1710.10044
Distributional RL (C51): https://arxiv.org/abs/1707.06887
Further reference: https://github.com/amy12xx/ml_notes_and_reports/blob/master/distributional_rl/QRDQN.pdf
Can I use?
Recurrent policies: ❌
Multi processing: ✔️
Gym spaces:
Space | Action | Observation
---|---|---
Discrete | ✔️ | ✔️
Box | ❌ | ✔️
MultiDiscrete | ❌ | ✔️
MultiBinary | ❌ | ✔️
Dict | ❌ | ✔️
Example
import gymnasium as gym
from sb3_contrib import QRDQN
env = gym.make("CartPole-v1", render_mode="human")
policy_kwargs = dict(n_quantiles=50)
model = QRDQN("MlpPolicy", env, policy_kwargs=policy_kwargs, verbose=1)
model.learn(total_timesteps=10_000, log_interval=4)
model.save("qrdqn_cartpole")
del model # remove to demonstrate saving and loading
model = QRDQN.load("qrdqn_cartpole")
obs, _ = env.reset()
while True:
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    env.render()
    if terminated or truncated:
        obs, _ = env.reset()
Results
Results on Atari environments (10M steps, Pong and Breakout) and classic control tasks, using 3 and 5 seeds.
The complete learning curves are available in the associated PR.
Note
The QR-DQN implementation was validated against the Intel Coach one, which roughly compares to the original paper results (we trained the agent with a smaller budget).
Environments | QR-DQN | DQN
---|---|---
Breakout | 413 +/- 21 | ~300
Pong | 20 +/- 0 | ~20
CartPole | 386 +/- 64 | 500 +/- 0
MountainCar | -111 +/- 4 | -107 +/- 4
LunarLander | 168 +/- 39 | 195 +/- 28
Acrobot | -73 +/- 2 | -74 +/- 2
How to replicate the results?
Clone the RL-Zoo fork and checkout the branch feat/qrdqn:
git clone https://github.com/ku2482/rl-baselines3-zoo/
cd rl-baselines3-zoo/
git checkout feat/qrdqn
Run the benchmark (replace $ENV_ID by the envs mentioned above):
python train.py --algo qrdqn --env $ENV_ID --eval-episodes 10 --eval-freq 10000
Plot the results:
python scripts/all_plots.py -a qrdqn -e Breakout Pong -f logs/ -o logs/qrdqn_results
python scripts/plot_from_file.py -i logs/qrdqn_results.pkl -latex -l QR-DQN
Parameters
- class sb3_contrib.qrdqn.QRDQN(policy, env, learning_rate=5e-05, buffer_size=1000000, learning_starts=100, batch_size=32, tau=1.0, gamma=0.99, train_freq=4, gradient_steps=1, replay_buffer_class=None, replay_buffer_kwargs=None, optimize_memory_usage=False, target_update_interval=10000, exploration_fraction=0.005, exploration_initial_eps=1.0, exploration_final_eps=0.01, max_grad_norm=None, stats_window_size=100, tensorboard_log=None, policy_kwargs=None, verbose=0, seed=None, device='auto', _init_setup_model=True)[source]
Quantile Regression Deep Q-Network (QR-DQN). Paper: https://arxiv.org/abs/1710.10044
Default hyperparameters are taken from the paper and are tuned for Atari games (except for the learning_starts parameter); an illustrative override for a smaller task is sketched after the parameter list below.
- Parameters:
policy (QRDQNPolicy) – The policy model to use (MlpPolicy, CnnPolicy, …)
env (Env | VecEnv | str) – The environment to learn from (if registered in Gym, can be str)
learning_rate (float | Callable[[float], float]) – The learning rate, it can be a function of the current progress remaining (from 1 to 0)
buffer_size (int) – size of the replay buffer
learning_starts (int) – how many steps of the model to collect transitions for before learning starts
batch_size (int) – Minibatch size for each gradient update
tau (float) – the soft update coefficient (“Polyak update”, between 0 and 1) default 1 for hard update
gamma (float) – the discount factor
train_freq (int | tuple[int, str]) – Update the model every train_freq steps. Alternatively pass a tuple of frequency and unit like (5, "step") or (2, "episode").
gradient_steps (int) – How many gradient steps to do after each rollout (see train_freq and n_episodes_rollout). Setting it to -1 means to do as many gradient steps as steps done in the environment during the rollout.
replay_buffer_class (type[ReplayBuffer] | None) – Replay buffer class to use (for instance HerReplayBuffer). If None, it will be automatically selected.
replay_buffer_kwargs (dict[str, Any] | None) – Keyword arguments to pass to the replay buffer on creation.
optimize_memory_usage (bool) – Enable a memory efficient variant of the replay buffer at a cost of more complexity. See https://github.com/DLR-RM/stable-baselines3/issues/37#issuecomment-637501195
target_update_interval (int) – update the target network every target_update_interval environment steps.
exploration_fraction (float) – fraction of the entire training period over which the exploration rate is reduced
exploration_initial_eps (float) – initial value of random action probability
exploration_final_eps (float) – final value of random action probability
max_grad_norm (float | None) – The maximum value for the gradient clipping (if None, no clipping)
stats_window_size (int) – Window size for the rollout logging, specifying the number of episodes to average the reported success rate, mean episode length, and mean reward over
tensorboard_log (str | None) – the log location for tensorboard (if None, no logging)
policy_kwargs (dict[str, Any] | None) – additional arguments to be passed to the policy on creation. See QR-DQN Policies
verbose (int) – the verbosity level: 0 no output, 1 info, 2 debug
seed (int | None) – Seed for the pseudo random generators
device (device | str) – Device (cpu, cuda, …) on which the code should be run. Setting it to auto, the code will be run on the GPU if possible.
_init_setup_model (bool) – Whether or not to build the network at the creation of the instance
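A hedged sketch of overriding a few of the Atari-tuned defaults for a small classic-control task, as mentioned above; all values below are placeholders chosen for illustration, not tuned settings.

from sb3_contrib import QRDQN

# Illustrative only: the Atari defaults are usually too conservative for
# small classic-control tasks, so a few values are overridden here.
model = QRDQN(
    "MlpPolicy",
    "CartPole-v1",
    learning_rate=1e-3,          # larger learning rate than the Atari default
    buffer_size=100_000,         # smaller replay buffer
    learning_starts=1_000,
    target_update_interval=250,  # update the target network more often
    exploration_fraction=0.2,
    exploration_final_eps=0.05,
    policy_kwargs=dict(n_quantiles=25),
    verbose=1,
)
model.learn(total_timesteps=50_000)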
- collect_rollouts(env, callback, train_freq, replay_buffer, action_noise=None, learning_starts=0, log_interval=None)
Collect experiences and store them into a ReplayBuffer.
- Parameters:
env (VecEnv) – The training environment
callback (BaseCallback) – Callback that will be called at each step (and at the beginning and end of the rollout)
train_freq (TrainFreq) – How much experience to collect by doing rollouts of the current policy. Either TrainFreq(<n>, TrainFrequencyUnit.STEP) or TrainFreq(<n>, TrainFrequencyUnit.EPISODE) with <n> being an integer greater than 0.
action_noise (ActionNoise | None) – Action noise that will be used for exploration. Required for deterministic policies (e.g. TD3). This can also be used in addition to the stochastic policy for SAC.
learning_starts (int) – Number of steps before learning for the warm-up phase.
replay_buffer (ReplayBuffer)
log_interval (int | None) – Log data every log_interval episodes
- Returns:
- Return type:
RolloutReturn
- get_env()
Returns the current environment (can be None if not defined).
- Returns:
The current environment
- Return type:
VecEnv | None
- get_parameters()
Return the parameters of the agent. This includes parameters from different networks, e.g. critics (value functions) and policies (pi functions).
- Returns:
Mapping from names of the objects to PyTorch state-dicts.
- Return type:
dict[str, dict]
- get_vec_normalize_env()
Return the VecNormalize wrapper of the training env if it exists.
- Returns:
The VecNormalize env.
- Return type:
VecNormalize | None
- learn(total_timesteps, callback=None, log_interval=4, tb_log_name='QRDQN', reset_num_timesteps=True, progress_bar=False)[source]
Return a trained model.
- Parameters:
total_timesteps (int) – The total number of samples (env steps) to train on
callback (None | Callable | list[BaseCallback] | BaseCallback) – callback(s) called at every step with state of the algorithm.
log_interval (int) – for on-policy algos (e.g., PPO, A2C, …) this is the number of training iterations (i.e., log_interval * n_steps * n_envs timesteps) before logging; for off-policy algos (e.g., TD3, SAC, …) this is the number of episodes before logging.
tb_log_name (str) – the name of the run for TensorBoard logging
reset_num_timesteps (bool) – whether or not to reset the current timestep number (used in logging)
progress_bar (bool) – Display a progress bar using tqdm and rich.
self (SelfQRDQN)
- Returns:
the trained model
- Return type:
SelfQRDQN
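As a hedged illustration of the learn() arguments documented above (run names, paths, and timestep counts are arbitrary), training can be logged to TensorBoard and continued without resetting the timestep counter:

from sb3_contrib import QRDQN

model = QRDQN("MlpPolicy", "CartPole-v1", tensorboard_log="./qrdqn_tensorboard/", verbose=1)
model.learn(total_timesteps=20_000, log_interval=10, tb_log_name="first_run")
# Continue training the same model; keep the timestep counter for logging
model.learn(total_timesteps=20_000, log_interval=10, tb_log_name="second_run", reset_num_timesteps=False)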
- classmethod load(path, env=None, device='auto', custom_objects=None, print_system_info=False, force_reset=True, **kwargs)
Load the model from a zip-file.
Warning: load re-creates the model from scratch, it does not update it in-place! For an in-place load use set_parameters instead.
- Parameters:
path (str | Path | BufferedIOBase) – path to the file (or a file-like) where to load the agent from
env (Env | VecEnv | None) – the new environment to run the loaded model on (can be None if you only need prediction from a trained model) has priority over any saved environment
device (device | str) – Device on which the code should run.
custom_objects (dict[str, Any] | None) – Dictionary of objects to replace upon loading. If a variable is present in this dictionary as a key, it will not be deserialized and the corresponding item will be used instead. Similar to custom_objects in keras.models.load_model. Useful when you have an object in the file that can not be deserialized.
print_system_info (bool) – Whether to print system info from the saved model and the current system info (useful to debug loading issues)
force_reset (bool) – Force call to reset() before training to avoid unexpected behavior. See https://github.com/DLR-RM/stable-baselines3/issues/597
kwargs – extra arguments to change the model when loading
- Returns:
new model instance with loaded parameters
- Return type:
SelfBaseAlgorithm
- load_replay_buffer(path, truncate_last_traj=True)
Load a replay buffer from a pickle file.
- Parameters:
path (str | Path | BufferedIOBase) – Path to the pickled replay buffer.
truncate_last_traj (bool) – When using HerReplayBuffer with online sampling: If set to True, we assume that the last trajectory in the replay buffer was finished (and truncate it). If set to False, we assume that we continue the same trajectory (same episode).
- Return type:
None
- property logger: Logger
Getter for the logger object.
- predict(observation, state=None, episode_start=None, deterministic=False)[source]
Get the policy action from an observation (and optional hidden state). Includes sugar-coating to handle different observations (e.g. normalizing images).
- Parameters:
observation (ndarray | dict[str, ndarray]) – the input observation
state (tuple[ndarray, ...] | None) – The last hidden states (can be None, used in recurrent policies)
episode_start (ndarray | None) – The last masks (can be None, used in recurrent policies); this corresponds to the beginning of episodes, where the hidden states of the RNN must be reset.
deterministic (bool) – Whether or not to return deterministic actions.
- Returns:
the model’s action and the next hidden state (used in recurrent policies)
- Return type:
tuple[ndarray, tuple[ndarray, …] | None]
- save(path, exclude=None, include=None)
Save all the attributes of the object and the model parameters in a zip-file.
- Parameters:
path (str | Path | BufferedIOBase) – path to the file where the rl agent should be saved
exclude (Iterable[str] | None) – name of parameters that should be excluded in addition to the default ones
include (Iterable[str] | None) – name of parameters that might be excluded but should be included anyway
- Return type:
None
- save_replay_buffer(path)
Save the replay buffer as a pickle file.
- Parameters:
path (str | Path | BufferedIOBase) – Path to the file where the replay buffer should be saved. If path is a str or pathlib.Path, the path is automatically created if necessary.
- Return type:
None
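A hedged sketch combining save_replay_buffer and load_replay_buffer so that off-policy training can resume without refilling the buffer; file names, buffer size, and timestep counts are illustrative.

import gymnasium as gym

from sb3_contrib import QRDQN

model = QRDQN("MlpPolicy", "CartPole-v1", buffer_size=50_000)
model.learn(total_timesteps=5_000)
model.save("qrdqn_cartpole")
model.save_replay_buffer("qrdqn_cartpole_buffer")  # written as a pickle file

# Later, possibly in a new process: restore the model and its buffer, then keep training
model = QRDQN.load("qrdqn_cartpole", env=gym.make("CartPole-v1"))
model.load_replay_buffer("qrdqn_cartpole_buffer")
model.learn(total_timesteps=5_000, reset_num_timesteps=False)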
- set_env(env, force_reset=True)
Checks the validity of the environment, and if it is coherent, set it as the current environment. Furthermore, wrap any non-vectorized env into a vectorized one. Checked parameters: observation_space, action_space.
- Parameters:
env (Env | VecEnv) – The environment for learning a policy
force_reset (bool) – Force call to reset() before training to avoid unexpected behavior. See issue https://github.com/DLR-RM/stable-baselines3/issues/597
- Return type:
None
- set_logger(logger)
Setter for the logger object.
Warning
When passing a custom logger object, this will overwrite tensorboard_log and verbose settings passed to the constructor.
- Parameters:
logger (Logger)
- Return type:
None
- set_parameters(load_path_or_dict, exact_match=True, device='auto')
Load parameters from a given zip-file or a nested dictionary containing parameters for different modules (see get_parameters).
- Parameters:
load_path_or_iter – Location of the saved data (path or file-like, see save), or a nested dictionary containing nn.Module parameters used by the policy. The dictionary maps object names to a state-dictionary returned by torch.nn.Module.state_dict().
exact_match (bool) – If True, the given parameters should include parameters for each module and each of their parameters, otherwise raises an Exception. If set to False, this can be used to update only specific parameters.
device (device | str) – Device on which the code should run.
load_path_or_dict (str | dict[str, Tensor])
- Return type:
None
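A hedged sketch of the get_parameters / set_parameters round trip described above, copying weights from one instance to another in place; the environment choice is illustrative.

from sb3_contrib import QRDQN

source = QRDQN("MlpPolicy", "CartPole-v1")
target = QRDQN("MlpPolicy", "CartPole-v1")

params = source.get_parameters()  # dict mapping module names to PyTorch state-dicts
target.set_parameters(params, exact_match=True)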
- set_random_seed(seed=None)
Set the seed of the pseudo-random generators (python, numpy, pytorch, gym, action_space)
- Parameters:
seed (int | None)
- Return type:
None
QR-DQN Policies
- sb3_contrib.qrdqn.MlpPolicy
alias of QRDQNPolicy
- class sb3_contrib.qrdqn.policies.QRDQNPolicy(observation_space, action_space, lr_schedule, n_quantiles=200, net_arch=None, activation_fn=<class 'torch.nn.modules.activation.ReLU'>, features_extractor_class=<class 'stable_baselines3.common.torch_layers.FlattenExtractor'>, features_extractor_kwargs=None, normalize_images=True, optimizer_class=<class 'torch.optim.adam.Adam'>, optimizer_kwargs=None)[source]
Policy class with quantile and target networks for QR-DQN.
- Parameters:
observation_space (Space) – Observation space
action_space (Discrete) – Action space
lr_schedule (Callable[[float], float]) – Learning rate schedule (could be constant)
n_quantiles (int) – Number of quantiles
net_arch (list[int] | None) – The specification of the network architecture.
activation_fn (type[Module]) – Activation function
features_extractor_class (type[BaseFeaturesExtractor]) – Features extractor to use.
features_extractor_kwargs (dict[str, Any] | None) – Keyword arguments to pass to the features extractor.
normalize_images (bool) – Whether to normalize images or not, dividing by 255.0 (True by default)
optimizer_class (type[Optimizer]) – The optimizer to use, th.optim.Adam by default
optimizer_kwargs (dict[str, Any] | None) – Additional keyword arguments, excluding the learning rate, to pass to the optimizer
- forward(obs, deterministic=True)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
- Parameters:
obs (Tensor | dict[str, Tensor])
deterministic (bool)
- Return type:
Tensor
- set_training_mode(mode)[source]
Put the policy in either training or evaluation mode. This affects certain modules, such as batch normalisation and dropout.
- Parameters:
mode (bool) – if true, set to training mode, else set to evaluation mode
- Return type:
None
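In practice these constructor arguments are usually reached through policy_kwargs rather than by instantiating the policy class directly. A hedged sketch follows; the quantile count, network architecture, and activation below are arbitrary examples.

import torch.nn as nn

from sb3_contrib import QRDQN

policy_kwargs = dict(
    n_quantiles=100,      # number of quantiles of the return distribution
    net_arch=[256, 256],  # hidden layer sizes of the quantile network
    activation_fn=nn.ReLU,
)
model = QRDQN("MlpPolicy", "CartPole-v1", policy_kwargs=policy_kwargs)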
- class sb3_contrib.qrdqn.CnnPolicy(observation_space, action_space, lr_schedule, n_quantiles=200, net_arch=None, activation_fn=<class 'torch.nn.modules.activation.ReLU'>, features_extractor_class=<class 'stable_baselines3.common.torch_layers.NatureCNN'>, features_extractor_kwargs=None, normalize_images=True, optimizer_class=<class 'torch.optim.adam.Adam'>, optimizer_kwargs=None)[source]
Policy class for QR-DQN when using images as input.
- Parameters:
observation_space (Space) – Observation space
action_space (Discrete) – Action space
lr_schedule (Callable[[float], float]) – Learning rate schedule (could be constant)
n_quantiles (int) – Number of quantiles
net_arch (list[int] | None) – The specification of the network architecture.
activation_fn (type[Module]) – Activation function
features_extractor_class (type[BaseFeaturesExtractor]) – Features extractor to use.
normalize_images (bool) – Whether to normalize images or not, dividing by 255.0 (True by default)
optimizer_class (type[Optimizer]) – The optimizer to use, th.optim.Adam by default
optimizer_kwargs (dict[str, Any] | None) – Additional keyword arguments, excluding the learning rate, to pass to the optimizer
features_extractor_kwargs (dict[str, Any] | None)
- class sb3_contrib.qrdqn.MultiInputPolicy(observation_space, action_space, lr_schedule, n_quantiles=200, net_arch=None, activation_fn=<class 'torch.nn.modules.activation.ReLU'>, features_extractor_class=<class 'stable_baselines3.common.torch_layers.CombinedExtractor'>, features_extractor_kwargs=None, normalize_images=True, optimizer_class=<class 'torch.optim.adam.Adam'>, optimizer_kwargs=None)[source]
Policy class for QR-DQN when using dict observations as input.
- Parameters:
observation_space (Space) – Observation space
action_space (Discrete) – Action space
lr_schedule (Callable[[float], float]) – Learning rate schedule (could be constant)
n_quantiles (int) – Number of quantiles
net_arch (list[int] | None) – The specification of the network architecture.
activation_fn (type[Module]) – Activation function
features_extractor_class (type[BaseFeaturesExtractor]) – Features extractor to use.
normalize_images (bool) – Whether to normalize images or not, dividing by 255.0 (True by default)
optimizer_class (type[Optimizer]) – The optimizer to use, th.optim.Adam by default
optimizer_kwargs (dict[str, Any] | None) – Additional keyword arguments, excluding the learning rate, to pass to the optimizer
features_extractor_kwargs (dict[str, Any] | None)