Recurrent PPO
Implementation of recurrent policies for the Proximal Policy Optimization (PPO) algorithm. Other than adding support for recurrent policies (LSTM here), the behavior is the same as in SB3’s core PPO algorithm.
Available Policies
Policy | Description
---|---
MlpLstmPolicy | alias of RecurrentActorCriticPolicy
CnnLstmPolicy | alias of RecurrentActorCriticCnnPolicy
MultiInputLstmPolicy | alias of RecurrentMultiInputActorCriticPolicy
Notes
Can I use?
Recurrent policies: ✔️
Multi processing: ✔️
Gym spaces:
Space | Action | Observation
---|---|---
Discrete | ✔️ | ✔️
Box | ✔️ | ✔️
MultiDiscrete | ✔️ | ✔️
MultiBinary | ✔️ | ✔️
Dict | ❌ | ✔️
Example
Note
It is particularly important to pass the lstm_states and episode_start arguments to the predict() method, so the cell and hidden states of the LSTM are correctly updated.
import numpy as np
from sb3_contrib import RecurrentPPO
from stable_baselines3.common.evaluation import evaluate_policy

model = RecurrentPPO("MlpLstmPolicy", "CartPole-v1", verbose=1)
model.learn(5000)

vec_env = model.get_env()
mean_reward, std_reward = evaluate_policy(model, vec_env, n_eval_episodes=20, warn=False)
print(mean_reward)

model.save("ppo_recurrent")
del model  # remove to demonstrate saving and loading
model = RecurrentPPO.load("ppo_recurrent")

obs = vec_env.reset()
# cell and hidden state of the LSTM
lstm_states = None
num_envs = 1
# Episode start signals are used to reset the lstm states
episode_starts = np.ones((num_envs,), dtype=bool)
while True:
    action, lstm_states = model.predict(obs, state=lstm_states, episode_start=episode_starts, deterministic=True)
    obs, rewards, dones, info = vec_env.step(action)
    episode_starts = dones
    vec_env.render("human")
Results
Report on environments with masked velocity (with and without framestack) can be found here: https://wandb.ai/sb3/no-vel-envs/reports/PPO-vs-RecurrentPPO-aka-PPO-LSTM-on-environments-with-masked-velocity--VmlldzoxOTI4NjE4
RecurrentPPO was evaluated against PPO on:
PendulumNoVel-v1
LunarLanderNoVel-v2
CartPoleNoVel-v1
MountainCarContinuousNoVel-v0
CarRacing-v0
How to replicate the results?
Clone the repo for the experiment:
git clone https://github.com/DLR-RM/rl-baselines3-zoo
cd rl-baselines3-zoo
Run the benchmark (replace $ENV_ID by the envs mentioned above):
python train.py --algo ppo_lstm --env $ENV_ID --eval-episodes 10 --eval-freq 10000
Parameters
- class sb3_contrib.ppo_recurrent.RecurrentPPO(policy, env, learning_rate=0.0003, n_steps=128, batch_size=128, n_epochs=10, gamma=0.99, gae_lambda=0.95, clip_range=0.2, clip_range_vf=None, normalize_advantage=True, ent_coef=0.0, vf_coef=0.5, max_grad_norm=0.5, use_sde=False, sde_sample_freq=-1, target_kl=None, stats_window_size=100, tensorboard_log=None, policy_kwargs=None, verbose=0, seed=None, device='auto', _init_setup_model=True)[source]
Proximal Policy Optimization algorithm (PPO) (clip version) with support for recurrent policies (LSTM).
Based on the original Stable Baselines 3 implementation.
Introduction to PPO: https://spinningup.openai.com/en/latest/algorithms/ppo.html
- Parameters:
policy (ActorCriticPolicy) – The policy model to use (MlpLstmPolicy, CnnLstmPolicy, MultiInputLstmPolicy, …)
env (Env | VecEnv | str) – The environment to learn from (if registered in Gym, can be str)
learning_rate (float | Callable[[float], float]) – The learning rate, it can be a function of the current progress remaining (from 1 to 0)
n_steps (int) – The number of steps to run for each environment per update (i.e. batch size is n_steps * n_env where n_env is number of environment copies running in parallel)
batch_size (int | None) – Minibatch size
n_epochs (int) – Number of epochs when optimizing the surrogate loss
gamma (float) – Discount factor
gae_lambda (float) – Factor for trade-off of bias vs variance for Generalized Advantage Estimator
clip_range (float | Callable[[float], float]) – Clipping parameter, it can be a function of the current progress remaining (from 1 to 0).
clip_range_vf (None | float | Callable[[float], float]) – Clipping parameter for the value function, it can be a function of the current progress remaining (from 1 to 0). This is a parameter specific to the OpenAI implementation. If None is passed (default), no clipping will be done on the value function. IMPORTANT: this clipping depends on the reward scaling.
normalize_advantage (bool) – Whether or not to normalize the advantage
ent_coef (float) – Entropy coefficient for the loss calculation
vf_coef (float) – Value function coefficient for the loss calculation
max_grad_norm (float) – The maximum value for the gradient clipping
target_kl (float | None) – Limit the KL divergence between updates, because the clipping is not enough to prevent large updates (see issue #213: https://github.com/hill-a/stable-baselines/issues/213). By default, there is no limit on the KL divergence.
stats_window_size (int) – Window size for the rollout logging, specifying the number of episodes to average the reported success rate, mean episode length, and mean reward over
tensorboard_log (str | None) – the log location for tensorboard (if None, no logging)
policy_kwargs (dict[str, Any] | None) – additional arguments to be passed to the policy on creation. See RecurrentPPO Policies and the sketch after this parameter list.
verbose (int) – the verbosity level: 0 no output, 1 info, 2 debug
seed (int | None) – Seed for the pseudo random generators
device (device | str) – Device (cpu, cuda, …) on which the code should be run. Setting it to auto, the code will be run on the GPU if possible.
_init_setup_model (bool) – Whether or not to build the network at the creation of the instance
use_sde (bool) – Whether to use generalized State Dependent Exploration (gSDE) instead of action noise exploration (default: False)
sde_sample_freq (int) – Sample a new noise matrix every n steps when using gSDE. Default: -1 (only sample at the beginning of the rollout)
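As a minimal sketch (the hyperparameter values below are illustrative, not tuned recommendations), the LSTM architecture can be configured through policy_kwargs:

```python
from sb3_contrib import RecurrentPPO

# Minimal sketch: configure the LSTM via policy_kwargs.
# The values below are illustrative, not tuned recommendations.
model = RecurrentPPO(
    "MlpLstmPolicy",
    "CartPole-v1",
    n_steps=128,
    learning_rate=3e-4,
    policy_kwargs=dict(
        lstm_hidden_size=128,  # hidden units per LSTM layer (default: 256)
        n_lstm_layers=1,       # number of stacked LSTM layers
    ),
    verbose=1,
)
model.learn(total_timesteps=10_000)
```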
- collect_rollouts(env, callback, rollout_buffer, n_rollout_steps)[source]
Collect experiences using the current policy and fill a RolloutBuffer. The term rollout here refers to the model-free notion and should not be used with the concept of rollout used in model-based RL or planning.
- Parameters:
env (VecEnv) – The training environment
callback (BaseCallback) – Callback that will be called at each step (and at the beginning and end of the rollout)
rollout_buffer (RolloutBuffer) – Buffer to fill with rollouts
n_rollout_steps (int) – Number of experiences to collect per environment
- Returns:
True if function returned with at least n_rollout_steps collected, False if callback terminated rollout prematurely.
- Return type:
bool
- dump_logs(iteration=0)
Write log.
- Parameters:
iteration (int) – Current logging iteration
- Return type:
None
- get_env()
Returns the current environment (can be None if not defined).
- Returns:
The current environment
- Return type:
VecEnv | None
- get_parameters()
Return the parameters of the agent. This includes parameters from different networks, e.g. critics (value functions) and policies (pi functions).
- Returns:
Mapping from names of the objects to PyTorch state-dicts.
- Return type:
dict[str, dict]
- get_vec_normalize_env()
Return the VecNormalize wrapper of the training env if it exists.
- Returns:
The VecNormalize env.
- Return type:
VecNormalize | None
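A short sketch (assuming a recent SB3/gymnasium setup and a VecNormalize-wrapped training env; otherwise this getter returns None):

```python
import gymnasium as gym
from stable_baselines3.common.vec_env import DummyVecEnv, VecNormalize

from sb3_contrib import RecurrentPPO

# Sketch: wrap the training env in VecNormalize so that the wrapper can be
# retrieved later (e.g. to save or sync normalization statistics).
vec_env = DummyVecEnv([lambda: gym.make("Pendulum-v1")])
vec_env = VecNormalize(vec_env, norm_obs=True, norm_reward=True)

model = RecurrentPPO("MlpLstmPolicy", vec_env, verbose=0)
print(model.get_vec_normalize_env() is not None)  # True
```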
- learn(total_timesteps, callback=None, log_interval=1, tb_log_name='RecurrentPPO', reset_num_timesteps=True, progress_bar=False)[source]
Return a trained model.
- Parameters:
total_timesteps (int) – The total number of samples (env steps) to train on
callback (None | Callable | list[BaseCallback] | BaseCallback) – callback(s) called at every step with state of the algorithm.
log_interval (int) – for on-policy algos (e.g., PPO, A2C, …) this is the number of training iterations (i.e., log_interval * n_steps * n_envs timesteps) before logging; for off-policy algos (e.g., TD3, SAC, …) this is the number of episodes before logging.
tb_log_name (str) – the name of the run for TensorBoard logging
reset_num_timesteps (bool) – whether or not to reset the current timestep number (used in logging)
progress_bar (bool) – Display a progress bar using tqdm and rich.
self (SelfRecurrentPPO)
- Returns:
the trained model
- Return type:
SelfRecurrentPPO
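For instance, a sketch of a typical learn() call with periodic evaluation (EvalCallback and Monitor come from stable_baselines3; the frequencies below are arbitrary):

```python
import gymnasium as gym
from stable_baselines3.common.callbacks import EvalCallback
from stable_baselines3.common.monitor import Monitor

from sb3_contrib import RecurrentPPO

model = RecurrentPPO("MlpLstmPolicy", "CartPole-v1", verbose=1)

# Separate evaluation env, wrapped in Monitor so episode statistics are recorded.
eval_env = Monitor(gym.make("CartPole-v1"))
eval_callback = EvalCallback(eval_env, eval_freq=2_000, n_eval_episodes=5, deterministic=True)

model.learn(total_timesteps=20_000, callback=eval_callback, tb_log_name="RecurrentPPO")
```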
- classmethod load(path, env=None, device='auto', custom_objects=None, print_system_info=False, force_reset=True, **kwargs)
Load the model from a zip-file.
Warning: load re-creates the model from scratch, it does not update it in-place! For an in-place load use set_parameters instead.
- Parameters:
path (str | Path | BufferedIOBase) – path to the file (or a file-like) where to load the agent from
env (Env | VecEnv | None) – the new environment to run the loaded model on (can be None if you only need prediction from a trained model) has priority over any saved environment
device (device | str) – Device on which the code should run.
custom_objects (dict[str, Any] | None) – Dictionary of objects to replace upon loading. If a variable is present in this dictionary as a key, it will not be deserialized and the corresponding item will be used instead. Similar to custom_objects in keras.models.load_model. Useful when you have an object in the file that can not be deserialized.
print_system_info (bool) – Whether to print system info from the saved model and the current system info (useful to debug loading issues)
force_reset (bool) – Force call to reset() before training to avoid unexpected behavior. See https://github.com/DLR-RM/stable-baselines3/issues/597
kwargs – extra arguments to change the model when loading
- Returns:
new model instance with loaded parameters
- Return type:
SelfBaseAlgorithm
- property logger: Logger
Getter for the logger object.
- predict(observation, state=None, episode_start=None, deterministic=False)
Get the policy action from an observation (and optional hidden state). Includes sugar-coating to handle different observations (e.g. normalizing images).
- Parameters:
observation (ndarray | dict[str, ndarray]) – the input observation
state (tuple[ndarray, ...] | None) – The last hidden states (can be None, used in recurrent policies)
episode_start (ndarray | None) – The last masks (can be None, used in recurrent policies); this corresponds to the beginning of episodes, where the hidden states of the RNN must be reset.
deterministic (bool) – Whether or not to return deterministic actions.
- Returns:
the model’s action and the next hidden state (used in recurrent policies)
- Return type:
tuple[ndarray, tuple[ndarray, …] | None]
- save(path, exclude=None, include=None)
Save all the attributes of the object and the model parameters in a zip-file.
- Parameters:
path (str | Path | BufferedIOBase) – path to the file where the rl agent should be saved
exclude (Iterable[str] | None) – name of parameters that should be excluded in addition to the default ones
include (Iterable[str] | None) – name of parameters that might be excluded but should be included anyway
- Return type:
None
- set_env(env, force_reset=True)
Checks the validity of the environment and, if it is coherent, sets it as the current environment. Furthermore, wraps any non-vectorized env into a vectorized one. Checked parameters: observation_space, action_space.
- Parameters:
env (Env | VecEnv) – The environment for learning a policy
force_reset (bool) – Force call to reset() before training to avoid unexpected behavior. See issue https://github.com/DLR-RM/stable-baselines3/issues/597
- Return type:
None
- set_logger(logger)
Setter for the logger object.
Warning
When passing a custom logger object, this will overwrite the tensorboard_log and verbose settings passed to the constructor.
- Parameters:
logger (Logger)
- Return type:
None
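A sketch of attaching a custom logger via the configure helper from stable_baselines3.common.logger (the output path and formats are arbitrary):

```python
from stable_baselines3.common.logger import configure

from sb3_contrib import RecurrentPPO

model = RecurrentPPO("MlpLstmPolicy", "CartPole-v1", verbose=1)

# Log to stdout and CSV under an arbitrary folder; note that this
# overrides the tensorboard_log / verbose settings from the constructor.
new_logger = configure("/tmp/recurrent_ppo_logs", ["stdout", "csv"])
model.set_logger(new_logger)
model.learn(5_000)
```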
- set_parameters(load_path_or_dict, exact_match=True, device='auto')
Load parameters from a given zip-file or a nested dictionary containing parameters for different modules (see get_parameters).
- Parameters:
load_path_or_dict (str | dict[str, Tensor]) – Location of the saved data (path or file-like, see save), or a nested dictionary containing nn.Module parameters used by the policy. The dictionary maps object names to a state-dictionary returned by torch.nn.Module.state_dict().
exact_match (bool) – If True, the given parameters should include parameters for each module and each of their parameters, otherwise raises an Exception. If set to False, this can be used to update only specific parameters.
device (device | str) – Device on which the code should run.
- Return type:
None
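For example, a minimal sketch of an in-place round trip together with get_parameters():

```python
from sb3_contrib import RecurrentPPO

model = RecurrentPPO("MlpLstmPolicy", "CartPole-v1")

# Snapshot the current parameters (dict of state-dicts), train, then restore in-place.
params = model.get_parameters()
model.learn(1_000)
model.set_parameters(params, exact_match=True)
```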
- set_random_seed(seed=None)
Set the seed of the pseudo-random generators (python, numpy, pytorch, gym, action_space)
- Parameters:
seed (int | None)
- Return type:
None
RecurrentPPO Policies
- sb3_contrib.ppo_recurrent.MlpLstmPolicy
alias of RecurrentActorCriticPolicy
- class sb3_contrib.common.recurrent.policies.RecurrentActorCriticPolicy(observation_space, action_space, lr_schedule, net_arch=None, activation_fn=<class 'torch.nn.modules.activation.Tanh'>, ortho_init=True, use_sde=False, log_std_init=0.0, full_std=True, use_expln=False, squash_output=False, features_extractor_class=<class 'stable_baselines3.common.torch_layers.FlattenExtractor'>, features_extractor_kwargs=None, share_features_extractor=True, normalize_images=True, optimizer_class=<class 'torch.optim.adam.Adam'>, optimizer_kwargs=None, lstm_hidden_size=256, n_lstm_layers=1, shared_lstm=False, enable_critic_lstm=True, lstm_kwargs=None)[source]
Recurrent policy class for actor-critic algorithms (has both policy and value prediction). To be used with A2C, PPO and the likes. It assumes that both the actor and the critic LSTM have the same architecture.
- Parameters:
observation_space (Space) – Observation space
action_space (Space) – Action space
lr_schedule (Callable[[float], float]) – Learning rate schedule (could be constant)
net_arch (list[int] | dict[str, list[int]] | None) – The specification of the policy and value networks.
activation_fn (type[Module]) – Activation function
ortho_init (bool) – Whether to use or not orthogonal initialization
use_sde (bool) – Whether to use State Dependent Exploration or not
log_std_init (float) – Initial value for the log standard deviation
full_std (bool) – Whether to use (n_features x n_actions) parameters for the std instead of only (n_features,) when using gSDE
use_expln (bool) – Use expln() function instead of exp() to ensure a positive standard deviation (cf paper). It allows to keep variance above zero and prevent it from growing too fast. In practice, exp() is usually enough.
squash_output (bool) – Whether to squash the output using a tanh function, this allows to ensure boundaries when using gSDE.
features_extractor_class (type[BaseFeaturesExtractor]) – Features extractor to use.
features_extractor_kwargs (dict[str, Any] | None) – Keyword arguments to pass to the features extractor.
share_features_extractor (bool) – If True, the features extractor is shared between the policy and value networks.
normalize_images (bool) – Whether to normalize images or not, dividing by 255.0 (True by default)
optimizer_class (type[Optimizer]) – The optimizer to use, th.optim.Adam by default
optimizer_kwargs (dict[str, Any] | None) – Additional keyword arguments, excluding the learning rate, to pass to the optimizer
lstm_hidden_size (int) – Number of hidden units for each LSTM layer.
n_lstm_layers (int) – Number of LSTM layers.
shared_lstm (bool) – Whether the LSTM is shared between the actor and the critic (in that case, only the actor gradient is used). By default, the actor and the critic have two separate LSTMs. See the sketch after this parameter list.
enable_critic_lstm (bool) – Use a separate LSTM for the critic.
lstm_kwargs (dict[str, Any] | None) – Additional keyword arguments to pass to the LSTM constructor.
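As referenced above, a sketch of sharing a single LSTM between actor and critic through policy_kwargs (sharing requires disabling the critic's own LSTM):

```python
from sb3_contrib import RecurrentPPO

# Sketch: one LSTM shared by actor and critic (only the actor gradient is
# used for it), instead of the default separate critic LSTM.
model = RecurrentPPO(
    "MlpLstmPolicy",
    "CartPole-v1",
    policy_kwargs=dict(shared_lstm=True, enable_critic_lstm=False),
)
```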
- evaluate_actions(obs, actions, lstm_states, episode_starts)[source]
Evaluate actions according to the current policy, given the observations.
- Parameters:
obs (Tensor) – Observation.
actions (Tensor) – Actions.
lstm_states (RNNStates) – The last hidden and memory states for the LSTM.
episode_starts (Tensor) – Whether the observations correspond to new episodes or not (we reset the lstm states in that case).
- Returns:
estimated value, log likelihood of taking those actions and entropy of the action distribution.
- Return type:
tuple[Tensor, Tensor, Tensor]
- forward(obs, lstm_states, episode_starts, deterministic=False)[source]
Forward pass in all the networks (actor and critic)
- Parameters:
obs (Tensor) – Observation.
lstm_states (RNNStates) – The last hidden and memory states for the LSTM.
episode_starts (Tensor) – Whether the observations correspond to new episodes or not (we reset the lstm states in that case).
deterministic (bool) – Whether to sample or use deterministic actions
- Returns:
action, value and log probability of the action
- Return type:
tuple[Tensor, Tensor, Tensor, RNNStates]
- get_distribution(obs, lstm_states, episode_starts)[source]
Get the current policy distribution given the observations.
- Parameters:
obs (Tensor) – Observation.
lstm_states (tuple[Tensor, Tensor]) – The last hidden and memory states for the LSTM.
episode_starts (Tensor) – Whether the observations correspond to new episodes or not (we reset the lstm states in that case).
- Returns:
the action distribution and new hidden states.
- Return type:
tuple[Distribution, tuple[Tensor, …]]
- predict(observation, state=None, episode_start=None, deterministic=False)[source]
Get the policy action from an observation (and optional hidden state). Includes sugar-coating to handle different observations (e.g. normalizing images).
- Parameters:
observation (ndarray | dict[str, ndarray]) – the input observation
state (tuple[ndarray, ...] | None) – The last hidden and memory states for the LSTM (can be None, used in recurrent policies).
episode_start (ndarray | None) – Whether the observations correspond to new episodes or not (we reset the lstm states in that case).
deterministic (bool) – Whether or not to return deterministic actions.
- Returns:
the model’s action and the next hidden state (used in recurrent policies)
- Return type:
tuple[ndarray, tuple[ndarray, …] | None]
- predict_values(obs, lstm_states, episode_starts)[source]
Get the estimated values according to the current policy given the observations.
- Parameters:
obs (Tensor) – Observation.
lstm_states (tuple[Tensor, Tensor]) – The last hidden and memory states for the LSTM.
episode_starts (Tensor) – Whether the observations correspond to new episodes or not (we reset the lstm states in that case).
- Returns:
the estimated values.
- Return type:
Tensor
- sb3_contrib.ppo_recurrent.CnnLstmPolicy
alias of RecurrentActorCriticCnnPolicy
- class sb3_contrib.common.recurrent.policies.RecurrentActorCriticCnnPolicy(observation_space, action_space, lr_schedule, net_arch=None, activation_fn=<class 'torch.nn.modules.activation.Tanh'>, ortho_init=True, use_sde=False, log_std_init=0.0, full_std=True, use_expln=False, squash_output=False, features_extractor_class=<class 'stable_baselines3.common.torch_layers.NatureCNN'>, features_extractor_kwargs=None, share_features_extractor=True, normalize_images=True, optimizer_class=<class 'torch.optim.adam.Adam'>, optimizer_kwargs=None, lstm_hidden_size=256, n_lstm_layers=1, shared_lstm=False, enable_critic_lstm=True, lstm_kwargs=None)[source]
CNN recurrent policy class for actor-critic algorithms (has both policy and value prediction). Used by A2C, PPO and the likes.
- Parameters:
observation_space (Space) – Observation space
action_space (Space) – Action space
lr_schedule (Callable[[float], float]) – Learning rate schedule (could be constant)
net_arch (list[int] | dict[str, list[int]] | None) – The specification of the policy and value networks.
activation_fn (type[Module]) – Activation function
ortho_init (bool) – Whether to use or not orthogonal initialization
use_sde (bool) – Whether to use State Dependent Exploration or not
log_std_init (float) – Initial value for the log standard deviation
full_std (bool) – Whether to use (n_features x n_actions) parameters for the std instead of only (n_features,) when using gSDE
use_expln (bool) – Use expln() function instead of exp() to ensure a positive standard deviation (cf paper). It allows to keep variance above zero and prevent it from growing too fast. In practice, exp() is usually enough.
squash_output (bool) – Whether to squash the output using a tanh function, this allows to ensure boundaries when using gSDE.
features_extractor_class (type[BaseFeaturesExtractor]) – Features extractor to use.
features_extractor_kwargs (dict[str, Any] | None) – Keyword arguments to pass to the features extractor.
share_features_extractor (bool) – If True, the features extractor is shared between the policy and value networks.
normalize_images (bool) – Whether to normalize images or not, dividing by 255.0 (True by default)
optimizer_class (type[Optimizer]) – The optimizer to use, th.optim.Adam by default
optimizer_kwargs (dict[str, Any] | None) – Additional keyword arguments, excluding the learning rate, to pass to the optimizer
lstm_hidden_size (int) – Number of hidden units for each LSTM layer.
n_lstm_layers (int) – Number of LSTM layers.
shared_lstm (bool) – Whether the LSTM is shared between the actor and the critic. By default, only the actor has a recurrent network.
enable_critic_lstm (bool) – Use a separate LSTM for the critic.
lstm_kwargs (dict[str, Any] | None) – Additional keyword arguments to pass to the LSTM constructor.
- sb3_contrib.ppo_recurrent.MultiInputLstmPolicy
alias of RecurrentMultiInputActorCriticPolicy
- class sb3_contrib.common.recurrent.policies.RecurrentMultiInputActorCriticPolicy(observation_space, action_space, lr_schedule, net_arch=None, activation_fn=<class 'torch.nn.modules.activation.Tanh'>, ortho_init=True, use_sde=False, log_std_init=0.0, full_std=True, use_expln=False, squash_output=False, features_extractor_class=<class 'stable_baselines3.common.torch_layers.CombinedExtractor'>, features_extractor_kwargs=None, share_features_extractor=True, normalize_images=True, optimizer_class=<class 'torch.optim.adam.Adam'>, optimizer_kwargs=None, lstm_hidden_size=256, n_lstm_layers=1, shared_lstm=False, enable_critic_lstm=True, lstm_kwargs=None)[source]
Multi-input recurrent policy class for actor-critic algorithms (has both policy and value prediction). Used by A2C, PPO and the likes.
- Parameters:
observation_space (Space) – Observation space
action_space (Space) – Action space
lr_schedule (Callable[[float], float]) – Learning rate schedule (could be constant)
net_arch (list[int] | dict[str, list[int]] | None) – The specification of the policy and value networks.
activation_fn (type[Module]) – Activation function
ortho_init (bool) – Whether to use or not orthogonal initialization
use_sde (bool) – Whether to use State Dependent Exploration or not
log_std_init (float) – Initial value for the log standard deviation
full_std (bool) – Whether to use (n_features x n_actions) parameters for the std instead of only (n_features,) when using gSDE
use_expln (bool) – Use expln() function instead of exp() to ensure a positive standard deviation (cf paper). It allows to keep variance above zero and prevent it from growing too fast. In practice, exp() is usually enough.
squash_output (bool) – Whether to squash the output using a tanh function, this allows to ensure boundaries when using gSDE.
features_extractor_class (type[BaseFeaturesExtractor]) – Features extractor to use.
features_extractor_kwargs (dict[str, Any] | None) – Keyword arguments to pass to the features extractor.
share_features_extractor (bool) – If True, the features extractor is shared between the policy and value networks.
normalize_images (bool) – Whether to normalize images or not, dividing by 255.0 (True by default)
optimizer_class (type[Optimizer]) – The optimizer to use, th.optim.Adam by default
optimizer_kwargs (dict[str, Any] | None) – Additional keyword arguments, excluding the learning rate, to pass to the optimizer
lstm_hidden_size (int) – Number of hidden units for each LSTM layer.
n_lstm_layers (int) – Number of LSTM layers.
shared_lstm (bool) – Whether the LSTM is shared between the actor and the critic. By default, only the actor has a recurrent network.
enable_critic_lstm (bool) – Use a separate LSTM for the critic.
lstm_kwargs (dict[str, Any] | None) – Additional keyword arguments to pass to the LSTM constructor.
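To close, a minimal sketch of using MultiInputLstmPolicy with a Dict observation space; the toy environment below is hypothetical and only serves to illustrate the wiring:

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces

from sb3_contrib import RecurrentPPO


class ToyDictEnv(gym.Env):
    """Hypothetical environment returning a Dict observation."""

    def __init__(self):
        self.observation_space = spaces.Dict(
            {
                "position": spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32),
                "velocity": spaces.Box(low=-1.0, high=1.0, shape=(2,), dtype=np.float32),
            }
        )
        self.action_space = spaces.Discrete(2)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        return self.observation_space.sample(), {}

    def step(self, action):
        # Random dynamics: enough to exercise the policy, not a real task.
        return self.observation_space.sample(), 0.0, False, False, {}


model = RecurrentPPO("MultiInputLstmPolicy", ToyDictEnv(), verbose=1)
model.learn(1_000)
```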