Augmented Random Search (ARS) is a simple reinforcement learning algorithm that performs a direct random search over policy parameters. It can be surprisingly effective compared to more sophisticated algorithms. In the original paper the authors showed that linear policies trained with ARS were competitive with deep reinforcement learning on the MuJoCo locomotion tasks.
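The core update is simple to state: perturb the flat parameter vector in random directions, evaluate each direction with both a positive and a negative perturbation, and step along the best-performing directions, scaled by the standard deviation of the collected rewards. A minimal NumPy sketch of one such update (an illustration of the algorithm, not SB3's implementation; `evaluate` stands in for a full episode rollout):

```python
import numpy as np

def ars_update(theta, evaluate, n_delta=8, n_top=4,
               step_size=0.02, delta_std=0.05, rng=None):
    """One ARS update step over a flat parameter vector (illustrative sketch)."""
    if rng is None:
        rng = np.random.default_rng()
    deltas = rng.standard_normal((n_delta, theta.size))
    # Evaluate each random direction with a + and a - perturbation
    r_plus = np.array([evaluate(theta + delta_std * d) for d in deltas])
    r_minus = np.array([evaluate(theta - delta_std * d) for d in deltas])
    # Keep only the n_top directions with the best reward in either direction
    top = np.argsort(np.maximum(r_plus, r_minus))[-n_top:]
    reward_std = np.concatenate([r_plus[top], r_minus[top]]).std() + 1e-8
    # Step along the kept directions, weighted by their reward differences
    step = (r_plus[top] - r_minus[top]) @ deltas[top]
    return theta + step_size / (n_top * reward_std) * step
```

Repeated on a simple objective, this update climbs toward the optimum without ever computing a gradient, which is the whole appeal of the method.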

SB3's implementation allows for linear policies without bias or squashing function. It also allows for training MLP policies, which include linear policies with bias and squashing functions as a special case.

Normally one wants to train ARS with several seeds to properly evaluate its performance.


ARS multi-processing differs from the classic Stable-Baselines3 multi-processing: it runs n environments in parallel, but asynchronously. This asynchronous multi-processing is considered experimental and does not fully support callbacks: the on_step() event is called artificially after the evaluation episodes are over.
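The idea behind asynchronous candidate evaluation can be sketched with plain Python multiprocessing (an illustration only, not SB3's actual AsyncEval): candidate parameter vectors are sent to worker processes, each worker runs its rollouts independently, and the episodic returns are gathered once all evaluations finish.

```python
from multiprocessing import Pool

def evaluate_candidate(weights):
    # Stand-in for a full episode rollout with the candidate policy;
    # here a toy objective whose optimum is at 0.5 for every weight.
    return -sum((w - 0.5) ** 2 for w in weights)

def evaluate_all(candidates, n_workers=2):
    # Each worker process evaluates different candidates in parallel;
    # results come back in the same order as the candidates.
    with Pool(n_workers) as pool:
        return pool.map(evaluate_candidate, candidates)
```

Because each candidate rollout is independent of the others, this evaluation step is embarrassingly parallel, which is why ARS scales well across processes.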

Available Policies


LinearPolicy

alias of ARSLinearPolicy


MlpPolicy

alias of ARSPolicy


Can I use?

  • Recurrent policies: ❌

  • Multi processing: ✔️ (cf. example)

  • Gym spaces:

Example

from sb3_contrib import ARS

# Policy can be LinearPolicy or MlpPolicy
model = ARS("LinearPolicy", "Pendulum-v1", verbose=1)
model.learn(total_timesteps=10000, log_interval=4)
model.save("ars_pendulum")

With experimental asynchronous multi-processing:

from sb3_contrib import ARS
from sb3_contrib.common.vec_env import AsyncEval

from stable_baselines3.common.env_util import make_vec_env

env_id = "CartPole-v1"
n_envs = 2

model = ARS("LinearPolicy", env_id, n_delta=2, n_top=1, verbose=1)
# Create env for asynchronous evaluation (run in different processes)
async_eval = AsyncEval([lambda: make_vec_env(env_id) for _ in range(n_envs)], model.policy)

model.learn(total_timesteps=200_000, log_interval=4, async_eval=async_eval)


Results

Replicating the results from the original paper, which used the MuJoCo benchmarks, with the same parameters as the paper and 8 seeds.




HalfCheetah-v3: 4398 +/- 320

Swimmer-v3: 241 +/- 51

Hopper-v3: 3320 +/- 120

How to replicate the results?

Clone RL-Zoo and checkout the branch feat/ars

git clone https://github.com/DLR-RM/rl-baselines3-zoo
cd rl-baselines3-zoo/
git checkout feat/ars

Run the benchmark. The following snippet trains 8 seeds in parallel:

for ENV_ID in Swimmer-v3 HalfCheetah-v3 Hopper-v3
do
  for SEED_NUM in {1..8}
  do
    python train.py --algo ars --env $ENV_ID --eval-episodes 10 --eval-freq 10000 -n 20000000 --seed $SEED_NUM &
    sleep 1
  done
done

Plot the results:

python scripts/all_plots.py -a ars -e HalfCheetah Swimmer Hopper -f logs/ -o logs/ars_results -max 20000000
python scripts/plot_from_file.py -i logs/ars_results.pkl -l ARS


class sb3_contrib.ars.ARS(policy, env, n_delta=8, n_top=None, learning_rate=0.02, delta_std=0.05, zero_policy=True, alive_bonus_offset=0, n_eval_episodes=1, policy_kwargs=None, stats_window_size=100, tensorboard_log=None, seed=None, verbose=0, device='cpu', _init_setup_model=True)[source]

Augmented Random Search (paper: https://arxiv.org/abs/1803.07055). Reference implementations: the authors' original implementation, a C++/Cuda implementation, and a 150 LOC NumPy implementation.

  • policy (BasePolicy) – The policy to train, can be an instance of ARSPolicy, or a string from [“LinearPolicy”, “MlpPolicy”]

  • env (Env | VecEnv | str) – The environment to train on, may be a string if registered with gym

  • n_delta (int) – How many random perturbations of the policy to try at each update step.

  • n_top (int | None) – How many of the top deltas to use in each update step. Defaults to n_delta

  • learning_rate (float | Callable[[float], float]) – Float or schedule for the step size

  • delta_std (float | Callable[[float], float]) – Float or schedule for the exploration noise

  • zero_policy (bool) – Boolean determining if the passed policy should have its weights zeroed before training.

  • alive_bonus_offset (float) – Constant added to the reward at each step, used to cancel out alive bonuses.

  • n_eval_episodes (int) – Number of episodes to evaluate each candidate.

  • policy_kwargs (Dict[str, Any] | None) – Keyword arguments to pass to the policy on creation

  • stats_window_size (int) – Window size for the rollout logging, specifying the number of episodes to average the reported success rate, mean episode length, and mean reward over

  • tensorboard_log (str | None) – String with the directory to put tensorboard logs in (None for no logging)

  • seed (int | None) – Random seed for the training

  • verbose (int) – Verbosity level: 0 no output, 1 info, 2 debug

  • device (device | str) – Torch device to use for training, defaults to “cpu”

  • _init_setup_model (bool) – Whether or not to build the network at the creation of the instance
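Both learning_rate and delta_std accept either a constant float or a schedule, i.e. a callable mapping the remaining training progress (from 1.0 at the start down to 0.0 at the end) to a value. A minimal linear schedule in the style commonly used with Stable-Baselines3 (the helper name is illustrative, not part of the library):

```python
def linear_schedule(initial_value: float):
    """Return a schedule that decays linearly with training progress."""
    def schedule(progress_remaining: float) -> float:
        # progress_remaining goes from 1.0 (start) to 0.0 (end of training)
        return progress_remaining * initial_value
    return schedule
```

Passing, for instance, learning_rate=linear_schedule(0.02) would then decay the step size linearly from 0.02 to 0 over the course of training.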

evaluate_candidates(candidate_weights, callback, async_eval)[source]

Evaluate each candidate.

  • candidate_weights (Tensor) – The candidate weights to be evaluated.

  • callback (BaseCallback) – Callback that will be called at each step (or after evaluation in the multiprocess version)

  • async_eval (AsyncEval | None) – The object for asynchronous evaluation of candidates.


The episodic return for each candidate.

Return type:

Tensor

get_env()

Returns the current environment (can be None if not defined).


The current environment

Return type:

VecEnv | None


get_parameters()

Return the parameters of the agent. This includes parameters from different networks, e.g. critics (value functions) and policies (pi functions).


Mapping of from names of the objects to PyTorch state-dicts.

Return type:

Dict[str, Dict]


get_vec_normalize_env()

Return the VecNormalize wrapper of the training env if it exists.


The VecNormalize env.

Return type:

VecNormalize | None

learn(total_timesteps, callback=None, log_interval=1, tb_log_name='ARS', reset_num_timesteps=True, async_eval=None, progress_bar=False)[source]

Return a trained model.

  • total_timesteps (int) – The total number of samples (env steps) to train on

  • callback (None | Callable | List[BaseCallback] | BaseCallback) – callback(s) called at every step with state of the algorithm.

  • log_interval (int) – The number of timesteps before logging.

  • tb_log_name (str) – the name of the run for TensorBoard logging

  • reset_num_timesteps (bool) – whether or not to reset the current timestep number (used in logging)

  • async_eval (AsyncEval | None) – The object for asynchronous evaluation of candidates.

  • progress_bar (bool) – Display a progress bar using tqdm and rich.

  • self (SelfARS) –


the trained model

Return type:

SelfARS

classmethod load(path, env=None, device='auto', custom_objects=None, print_system_info=False, force_reset=True, **kwargs)

Load the model from a zip-file. Warning: load re-creates the model from scratch, it does not update it in-place! For an in-place load use set_parameters instead.

  • path (str | Path | BufferedIOBase) – path to the file (or a file-like) where to load the agent from

  • env (Env | VecEnv | None) – the new environment to run the loaded model on (can be None if you only need prediction from a trained model) has priority over any saved environment

  • device (device | str) – Device on which the code should run.

  • custom_objects (Dict[str, Any] | None) – Dictionary of objects to replace upon loading. If a variable is present in this dictionary as a key, it will not be deserialized and the corresponding item will be used instead. Similar to custom_objects in keras.models.load_model. Useful when you have an object in file that can not be deserialized.

  • print_system_info (bool) – Whether to print system info from the saved model and the current system info (useful to debug loading issues)

  • force_reset (bool) – Force call to reset() before training to avoid unexpected behavior.

  • kwargs – extra arguments to change the model when loading


new model instance with loaded parameters

Return type:


property logger: Logger

Getter for the logger object.

predict(observation, state=None, episode_start=None, deterministic=False)

Get the policy action from an observation (and optional hidden state). Includes sugar-coating to handle different observations (e.g. normalizing images).

  • observation (ndarray | Dict[str, ndarray]) – the input observation

  • state (Tuple[ndarray, ...] | None) – The last hidden states (can be None, used in recurrent policies)

  • episode_start (ndarray | None) – The last masks (can be None, used in recurrent policies); this corresponds to the beginning of episodes, where the hidden states of the RNN must be reset.

  • deterministic (bool) – Whether or not to return deterministic actions.


the model’s action and the next hidden state (used in recurrent policies)

Return type:

Tuple[ndarray, Tuple[ndarray, …] | None]

save(path, exclude=None, include=None)

Save all the attributes of the object and the model parameters in a zip-file.

  • path (str | Path | BufferedIOBase) – path to the file where the rl agent should be saved

  • exclude (Iterable[str] | None) – name of parameters that should be excluded in addition to the default ones

  • include (Iterable[str] | None) – name of parameters that might be excluded but should be included anyway

Return type:

None

set_env(env, force_reset=True)

Checks the validity of the environment and, if it is coherent, sets it as the current environment. Any non-vectorized env is wrapped into a vectorized one. Checked parameters: observation_space, action_space.

Return type:

None

set_logger(logger)

Setter for the logger object.


When passing a custom logger object, this will overwrite tensorboard_log and verbose settings passed to the constructor.


logger (Logger) –

Return type:

None

set_parameters(load_path_or_dict, exact_match=True, device='auto')[source]

Load parameters from a given zip-file or a nested dictionary containing parameters for different modules (see get_parameters).

  • load_path_or_iter – Location of the saved data (path or file-like, see save), or a nested dictionary containing nn.Module parameters used by the policy. The dictionary maps object names to a state-dictionary returned by torch.nn.Module.state_dict().

  • exact_match (bool) – If True, the given parameters should include parameters for each module and each of their parameters, otherwise raises an Exception. If set to False, this can be used to update only specific parameters.

  • device (device | str) – Device on which the code should run.

  • load_path_or_dict (str | Dict[str, Dict]) –

Return type:

None

set_random_seed(seed=None)

Set the seed of the pseudo-random generators (python, numpy, pytorch, gym, action_space).


seed (int | None) –

Return type:

None

ARS Policies

class sb3_contrib.ars.policies.ARSPolicy(observation_space, action_space, net_arch=None, activation_fn=<class 'torch.nn.modules.activation.ReLU'>, with_bias=True, squash_output=True)[source]

Policy network for ARS.

  • observation_space (Space) – The observation space of the environment

  • action_space (Space) – The action space of the environment

  • net_arch (List[int] | None) – Network architecture; defaults to a 2-layer MLP with 64 hidden nodes.

  • activation_fn (Type[Module]) – Activation function

  • with_bias (bool) – If set to False, the layers will not learn an additive bias

  • squash_output (bool) – For continuous actions, whether the output is squashed or not using a tanh() function. If not squashed with tanh the output will instead be clipped.
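The difference between squashing and clipping can be illustrated with a toy linear policy head (a hypothetical helper, not part of the library): tanh bounds the action smoothly, while clipping saturates it sharply at the action-space limits.

```python
import numpy as np

def linear_action(obs, weights, squash_output=True, low=-1.0, high=1.0):
    """Toy linear policy head: squash with tanh or clip to the action bounds."""
    raw = weights @ obs
    if squash_output:
        return np.tanh(raw)            # smooth, always strictly inside (-1, 1)
    return np.clip(raw, low, high)     # hard clipping at the action bounds
```

Squashing keeps the mapping differentiable and bounded everywhere, while clipping leaves the policy linear inside the bounds but flat (and insensitive to parameter changes) once saturated.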


forward(obs)

Defines the computation performed at every call.

Should be overridden by all subclasses.


Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.


obs (Tensor | Dict[str, Tensor]) –

Return type:

Tensor

LinearPolicy

alias of ARSLinearPolicy


MlpPolicy

alias of ARSPolicy