C++-based high-performance parallel environment execution engine for general RL environments.



EnvPool is a highly parallel reinforcement learning environment execution engine that significantly outperforms existing environment executors. With a design dedicated to the RL use case, it leverages a general asynchronous execution model implemented with a C++ thread pool for environment execution.

Here are EnvPool's highlights:

  • Compatible with the OpenAI Gym and DeepMind dm_env APIs;
  • Manages a pool of envs and interacts with them through batched APIs by default;
  • Both synchronous and asynchronous execution APIs;
  • Easy C++ developer API for adding new envs;
  • 1 million Atari frames per second of simulation with 256 CPU cores, ~13x the throughput of Python subprocess-based vector envs;
  • ~3x the throughput of Python subprocess-based vector envs on a low-resource setup such as 12 CPU cores;
  • Compared with existing GPU-based solutions (Brax / Isaac Gym), EnvPool is a general solution for speeding up parallel execution of many kinds of RL environments;
  • Compatible with some existing RL libraries, e.g., Tianshou.

Installation

PyPI

EnvPool is currently hosted on PyPI. It requires Python >= 3.7.

You can simply install EnvPool with the following command:

$ pip install envpool

After installation, open a Python console and type

import envpool
print(envpool.__version__)

If no error occurs, you have successfully installed EnvPool.

From Source

Please refer to the guideline.

Documentation

The tutorials and API documentation are hosted on envpool.readthedocs.io.

The example scripts are under the examples/ folder.

Supported Environments

We are in the process of open-sourcing all available envs from our internal version; stay tuned.

  • Atari via ALE
  • Single/multi-player ViZDoom
  • Classic RL envs, including CartPole, MountainCar, ...

Benchmark Results

We perform our benchmarks with the ALE Atari environment (with environment wrappers) on different hardware setups, including a TPUv3-8 virtual machine (VM) with 96 CPU cores and 2 NUMA nodes, and an NVIDIA DGX-A100 with 256 CPU cores and 8 NUMA nodes. Baselines include 1) a naive Python for-loop; 2) the most popular RL environment parallelization method, Python subprocess-based vector envs, e.g., gym.vector_env; 3) Sample Factory, to our knowledge the fastest RL environment executor before EnvPool.

We report EnvPool performance in sync mode, async mode, and NUMA + async mode, compared with the baselines on different numbers of workers (i.e., numbers of CPU cores). As the results show, EnvPool achieves significant improvements over the baselines in all settings. On the high-end setup, EnvPool achieves 1 million frames per second on 256 CPU cores, which is 13.3x the gym.vector_env baseline. On a typical PC setup with 12 CPU cores, EnvPool's throughput is 2.8x that of gym.vector_env.
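
For a rough sense of how such numbers can be measured, the sketch below times the async API with a random policy. It is a simplified stand-in for examples/benchmark.py; the environment name, num_envs, and batch_size here are arbitrary choices, not the benchmark's actual configuration.

import time

import envpool
import numpy as np

# arbitrary example configuration; the real benchmark script sweeps these values
env = envpool.make("Pong-v5", env_type="gym", num_envs=64, batch_size=16)
action_num = env.action_space.n

env.async_reset()  # start stepping all envs
steps, start = 0, time.time()
while steps < 100_000:
    obs, rew, done, info = env.recv()
    env_id = info["env_id"]
    env.send(np.random.randint(action_num, size=len(env_id)), env_id)
    steps += len(env_id)
print("env steps per second:", steps / (time.time() - start))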

Our benchmark script is in examples/benchmark.py. The detailed configurations of the four systems are:

  • Personal laptop: 12 core Intel(R) Core(TM) i7-8750H CPU @ 2.20GHz
  • TPU-VM: 96 core Intel(R) Xeon(R) CPU @ 2.00GHz
  • Apollo: 96 core AMD EPYC 7352 24-Core Processor
  • DGX-A100: 256 core AMD EPYC 7742 64-Core Processor
Highest FPS           Laptop (12)   TPU-VM (96)   Apollo (96)   DGX-A100 (256)
For-loop              4,876         3,817         4,053         4,336
Subprocess            18,249        42,885        19,560        79,509
Sample Factory        27,035        192,074       262,963       639,389
EnvPool (sync)        40,791        175,938       159,191       470,170
EnvPool (async)       50,513        352,243       410,941       845,537
EnvPool (numa+async)  /             367,799       458,414       1,060,371

API Usage

The following content shows both the synchronous and asynchronous API usage of EnvPool. You can also run the full script at examples/env_step.py.

Synchronous API

import envpool
import numpy as np

# make gym env
env = envpool.make("Pong-v5", env_type="gym", num_envs=100)
# or use envpool.make_gym(...)
obs = env.reset()  # should be (100, 4, 84, 84)
act = np.zeros(100, dtype=int)
obs, rew, done, info = env.step(act)

In synchronous mode, envpool closely resembles openai-gym/dm-env: it has reset and step functions with the same meaning. There is one exception, though: in envpool, batched interaction is the default. Therefore, when creating the envpool, there is a num_envs argument that denotes how many envs you would like to run in parallel.

env = envpool.make("Pong-v5", env_type="gym", num_envs=100)

The first dimension of action passed to the step function should be equal to num_envs.

act = np.zeros(100, dtype=int)

You don't need to manually reset an environment when its done flag is true; all envs in envpool have auto-reset enabled by default.
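
Putting the pieces together, a plain rollout loop only ever calls reset once; the per-env return bookkeeping below is illustrative and not part of the EnvPool API.

import envpool
import numpy as np

env = envpool.make("Pong-v5", env_type="gym", num_envs=100)
obs = env.reset()
returns = np.zeros(100)  # illustrative per-env episode returns

for _ in range(1000):
    act = np.zeros(100, dtype=int)  # replace with your policy
    obs, rew, done, info = env.step(act)
    returns += rew
    # envs with done == True have already been auto-reset internally,
    # so only our own bookkeeping needs resetting
    if done.any():
        print("finished episode returns:", returns[done])
        returns[done] = 0.0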

Asynchronous API

import envpool
import numpy as np

# make asynchronous env: 64 envs in total, interact with 16 of them at a time
env = envpool.make("Pong-v5", env_type="gym", num_envs=64, batch_size=16)
env.async_reset()  # send the initial reset signal to all envs
while True:
    obs, rew, done, info = env.recv()
    env_id = info["env_id"]
    action = np.random.randint(env.action_space.n, size=len(env_id))
    env.send(action, env_id)

In asynchronous mode, the step function is split into two parts, namely the send and recv functions. send takes two arguments: a batch of actions and the corresponding env_id that each action should be sent to. Unlike step, send does not wait for the envs to execute and return the next state; it returns immediately after the actions are fed to the envs (which is why it is called async mode).

env.send(action, env_id)

To get the "next states", we need to call the recv function. However, recv does not guarantee that you will get back the "next states" of the envs you just called send on. Instead, whichever envs finish execution first get received first.

state = env.recv()

Besides num_envs, there is one more argument, batch_size. While num_envs defines how many envs in total are managed by the envpool, batch_size defines the number of envs involved each time we interact with the envpool. For example, with 64 envs executing in the envpool, each send and recv call interacts with a batch of 16 envs.

envpool.make("Pong-v5", env_type="gym", num_envs=64, batch_size=16)
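
As a concrete sketch of how the two arguments interact, the loop below keeps 64 envs busy while each recv/send call only touches a batch of 16. The per-env bookkeeping indexed by env_id is illustrative, not part of the API.

import envpool
import numpy as np

env = envpool.make("Pong-v5", env_type="gym", num_envs=64, batch_size=16)
action_num = env.action_space.n
returns = np.zeros(64)  # illustrative bookkeeping, indexed by env_id

env.async_reset()
for _ in range(1000):
    obs, rew, done, info = env.recv()  # a batch of 16 envs
    env_id = info["env_id"]            # which envs this batch came from
    returns[env_id] += rew
    # log/store the finished returns here if needed, then clear them;
    # the corresponding episodes have already been auto-reset
    returns[env_id[done]] = 0.0
    action = np.random.randint(action_num, size=len(env_id))
    env.send(action, env_id)           # only these 16 envs get new actions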

There are other configurable arguments for envpool.make; please check out the envpool interface introduction.
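
For example, several arguments that appear elsewhere on this page (seeding, Atari-specific wrappers, image size) can be combined in a single make call. Availability and defaults depend on the environment family, so treat this as a sketch and consult the interface docs.

import envpool

# the arguments below are taken from examples elsewhere on this page;
# which of them apply depends on the environment family
env = envpool.make(
    "Breakout-v5",
    env_type="gym",
    num_envs=8,
    seed=0,
    episodic_life=True,
    reward_clip=True,
    stack_num=4,
    img_height=84,
    img_width=84,
)
print(env.observation_space, env.action_space)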

Contributing

EnvPool is still under development. More environments are going to be added, and we always welcome contributions to help make EnvPool better. If you would like to contribute, please check out our contribution guideline.

License

EnvPool is released under the Apache 2.0 license.

Other third-party source code and data are under their corresponding licenses.

We do not include their source code or data in this repo.

Citing EnvPool

If you find EnvPool useful, please cite it in your publications.

[Coming soon!]

Disclaimer

This is not an official Sea Limited or Garena Online Private Limited product.

Comments
  • [Feature Request] Mujoco integration


    https://github.com/openai/gym/tree/master/gym/envs/mujoco

    Env List:

    • [x] Ant-v4 (#74)
    • [x] HalfCheetah-v4 (#75)
    • [x] Hopper-v4 (#76)
    • [x] Humanoid-v4 (#77)
    • [x] HumanoidStandup-v4 (#78)
    • [x] InvertedDoublePendulum-v4 (@Benjamin-eecs, #83)
    • [x] InvertedPendulum-v4 (#79)
    • [x] Pusher-v4 (#82)
    • [x] Reacher-v4 (#81)
    • [x] Swimmer-v4 (#80)
    • [x] Walker2d-v4 (@Benjamin-eecs, #86)
    • [x] add other options to align with gym (#93)

    Road Map:

    • [x] Get comfortable with current codebase, go through https://envpool.readthedocs.io/en/latest/pages/env.html and add a toy environment by yourself locally;
    • [x] Download Mujoco and run on your local machine [1] [5], try with different env settings and see the actual behavior;
    • [x] Go through their code [1] [2] (I think it's better to go through both openai and deepmind versions, but only use deepmind's solution as reference), understand their ctype APIs and what we can use to bind with EnvPool APIs [3];
    • [x] Integrate only one game and let it work;
    • [x] Add some unit tests (good to submit the first PR here);
    • [x] Integrate other environments (submit another PR) and related tests.

    Resources:

    1. https://github.com/openai/mujoco-py
    2. https://github.com/deepmind/dm_control/tree/master/dm_control/mujoco
    3. https://github.com/deepmind/mujoco/blob/main/doc/programming.rst
    4. It is quite similar to the Atari games we have already integrated: https://github.com/mgbellemare/Arcade-Learning-Environment
    5. First install gym and mujoco, then run with
    import gym
    env = gym.make("Ant-v3")
    env.reset()
    for _ in range(10):
      env.step(env.action_space.sample())
      env.render()
    
    6. https://github.com/ikostrikov/gym_dmc/blob/master/compare.py (a checker script)
  • [BUG] `Atlantis-v5` does not reset life counter.


    Describe the bug

    When all the lives are exhausted in Atlantis-v5, making an additional step does not reset the life counter, whereas in Breakout-v5 it does. I am not sure if this is something particular with the Atlantis environment though.

    To Reproduce

    Steps to reproduce the behavior.

    Please try to provide a minimal example to reproduce the bug. Error messages and stack traces are also helpful.

    Please use the markdown code blocks for both code and stack traces.

    import envpool
    import numpy as np
    
    num_envs = 1
    print("making Atlantis-v5")
    envs = envpool.make(
        "Atlantis-v5",
        env_type="gym",
        num_envs=num_envs,
        episodic_life=True,
        reward_clip=True,
    )
    envs.reset()
    for i in range(10000):
        _, _, next_done, info = envs.step(np.random.randint(0, envs.action_space.n, num_envs))
        if info["lives"].sum() == 0:
            print(f"step={i}, lives is", info["lives"].sum())
            break
    
    _, _, next_done, info = envs.step(np.random.randint(0, envs.action_space.n, num_envs))
    print(f"step={i+1}, lives is", info["lives"].sum())
    print(f"notice how step={i+1} does not reset the life counter in Atlantis")
    
    print("making Atlantis-v5")
    envs = envpool.make(
        "Breakout-v5",
        env_type="gym",
        num_envs=num_envs,
        episodic_life=True,
        reward_clip=True,
    )
    envs.reset()
    for i in range(10000):
        _, _, next_done, info = envs.step(np.random.randint(0, envs.action_space.n, num_envs))
        if info["lives"].sum() == 0:
            print(f"step={i}, lives is", info["lives"].sum())
            break
    
    _, _, next_done, info = envs.step(np.random.randint(0, envs.action_space.n, num_envs))
    print(f"step={i+1}, lives is", info["lives"].sum())
    print(f"notice how step={i+1} does reset the life counter in Breakout")
    
    making Atlantis-v5
    step=1148, lives is 0
    step=1149, lives is 0
    notice how step=1149 does not reset the life counter in Atlantis
    making Breakout-v5
    step=123, lives is 0
    step=124, lives is 5
    notice how step=124 does reset the life counter in Breakout
    

    Expected behavior

    If all the lives are exhausted, making an additional step should reset the life counter.

    System info

    Describe the characteristic of your environment:

    • Describe how the library was installed (pip, source, ...)
    • Python version
    • Versions of any other relevant libraries
    import envpool, numpy, sys
    print(envpool.__version__, numpy.__version__, sys.version, sys.platform)
    
    >>> print(envpool.__version__, numpy.__version__, sys.version, sys.platform)
    0.6.2.post2 1.23.1 3.8.11 (default, Oct  9 2021, 12:06:05) 
    [GCC 10.3.0] linux
    

    Reason and Possible fixes

    Maybe this is

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    • [x] I have read the documentation (required)
    • [x] I have provided a minimal working example to reproduce the bug (required)
  • [BUG] Can't install on python 3.9 / 3.10 on macOS


    I tried to install envpool on python 3.9.5 and 3.10.0 with pip install envpool and got the following in both cases:

    ERROR: Could not find a version that satisfies the requirement envpool (from versions: none)
    ERROR: No matching distribution found for envpool
    

    I haven't checked other python versions though.

  • [BUG] Make bazel-test error on main branch


    Describe the bug

    make bazel-test errors out when building the main branch of envpool.

    To Reproduce

    $ git clone https://github.com/sail-sg/envpool.git
    $ make bazel-test
    
    ./envpool/core/xla.h:20:10: fatal error: cuda_runtime_api.h: No such file or directory
       20 | #include <cuda_runtime_api.h>
    

    Expected behavior

    No error.

    Screenshots

    [screenshot: envpool_bug_0]

    System info

    0.6.3.post1 1.23.1 3.8.10 (default, Jun 22 2022, 20:18:18) 
    [GCC 9.4.0] linux
    

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    • [x] I have read the documentation (required)
    • [x] I have provided a minimal working example to reproduce the bug (required)
  • [BUG] Reward is not deterministic after seeding the env


    Describe the bug

    I use envpool to make HalfCheetah-v3 with a fixed seed, but the rewards are not the same across several runs. Specifically, only the reward returned by the first env is non-deterministic; the other envs are fine. If num_envs is small, this bug does not occur.

    To Reproduce

    import envpool
    import numpy as np
    
    def random_rollout():
        np.random.seed(0)
        n = 32
        envs = envpool.make_gym('HalfCheetah-v3', num_envs=n, seed=123)
        envs.reset()
        rew_sum = 0
        for _ in range(10):
            action = np.random.rand(n, envs.action_space.shape[0])
            obs, rew, done, info = envs.step(action)
            rew_sum += rew
        envs.close()
        return rew_sum
    
    
    if __name__ == "__main__":
        a = random_rollout()
        b = random_rollout()
        print(a - b)
    

    Output:

    [-0.01131058  0.          0.          0.          0.          0.
      0.          0.          0.          0.          0.          0.
      0.          0.          0.          0.          0.          0.
      0.          0.          0.          0.          0.          0.
      0.          0.          0.          0.          0.          0.
      0.          0.        ]
    

    Expected behavior

    The reward should be deterministic after seeding.

    System info

    Describe the characteristic of your environment:

    • envpool version: '0.6.0'
    • envpool is installed via pip
    • Python version: 3.8.10
    0.6.0 1.21.5 3.8.10 (default, Jun  4 2021, 15:09:15) 
    [GCC 7.5.0] linux
    

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    • [x] I have read the documentation (required)
    • [x] I have provided a minimal working example to reproduce the bug (required)
  • [Feature Request] ACME Integration


    https://github.com/deepmind/acme

    Road Map:

    @TianyiSun316

    • [ ] Go through ACME codebase and integrate vector_env to the available algorithms;
    • [ ] Write Atari examples;
    • [ ] Check Atari performance: Pong and Breakout;
    • [ ] Submit PR;

    @LeoGuo98

    • [ ] Do some experiments on sample efficiency (you can actually try different libraries: ACME, tianshou, or sb3; this doesn't depend on the previous items)

    Resources:

    • tianshou: #51
    • stable-baselines3: #39
    • cleanrl: #48, #53

    cc @zhongwen

  • Atari option for repeat_action_probability


    The -v5 Gym Atari environments have sticky actions enabled by default (with repeat_action_probability=0.25, see here). This makes it impossible to replicate the original results from several key papers, especially the DQN Nature paper.

    Would it be possible to add an option to the Atari environment options that lets the user change repeat_action_probability to a different value? I believe that internally this can be accomplished by forwarding the argument to either gym.make or the ALE constructor.
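
    For reference, this is how the setting is exposed on the gym/ALE side that the issue refers to (a sketch assuming ale-py is installed; the name of a corresponding envpool option, if one is added, may differ):

    import gym

    # ALE's v5 registrations expose the sticky-action probability as a kwarg;
    # setting it to 0.0 recovers the deterministic behaviour of the
    # original *NoFrameskip-v4 setups used in the DQN-era papers
    env = gym.make("ALE/Breakout-v5", repeat_action_probability=0.0)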

  • [BUG] Segfault when batch size is larger than 255 on Atari environments


    Describe the bug

    Segfault when batch size is larger than 255 on Atari environments

    MuJoCo environment seems to work well.

    To Reproduce

    Steps to reproduce the behavior.

    import time
    
    import envpool
    import numpy as np
    
    batch_size = 256  # set to 255 works
    
    env = envpool.make_gym(
        "Breakout-v5",
        stack_num=1,
        num_envs=batch_size * 2,
        batch_size=batch_size,
        use_inter_area_resize=False,
        img_width=88,
        img_height=88,
        num_threads=0,
        thread_affinity_offset=0,
    )
    action = np.array(
        [env.action_space.sample() for _ in range(batch_size)]
    )
    
    counter = 0
    
    env.async_reset()
    
    last_time = time.time()
    while True:
        obs, rew, done, info = env.recv()
    
        env_id = info["env_id"]
        env.send(action, env_id)
    
        counter += batch_size
        if counter >= 100000:
            cur_time = time.time()
            print("TPS", counter / (cur_time - last_time))
    
            counter = 0
            last_time = cur_time
    
    
    [1]    2959596 segmentation fault (core dumped)  python test_envpool.py
    

    Expected behavior

    Can run with large batch size, like 1024, 2048, etc.

    System info

    Describe the characteristic of your environment:

    • Describe how the library was installed (pip, source, ...)
    • Python version
    • Versions of any other relevant libraries
    import envpool, numpy, sys
    print(envpool.__version__, numpy.__version__, sys.version, sys.platform)
    
    0.6.1.post1 1.21.2 3.8.12 (default, Oct 12 2021, 13:49:34) 
    [GCC 7.5.0] linux
    

    Additional context

    Setting the batch size to 1024 sometimes works and sometimes segfaults randomly:

    1024
    TPS 49611.30131772514
    TPS 57661.12695997062
    TPS 52648.235412990536
    TPS 52059.6945247295
    [1]    2971074 segmentation fault (core dumped)  python test_envpool.py
    

    Reason and Possible fixes

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    • [x] I have read the documentation (required)
    • [x] I have provided a minimal working example to reproduce the bug (required)
  • [BUG] no Acc-v3 environment


    Describe the bug

    A clear and concise description of what the bug is.

    When using tianshou, there's no Acc-v3 environment.

    File "test_dqn_acc.py", line 246, in Acc_tain() File "test_dqn_acc.py", line 112, in Acc_tain args.task, num_envs=args.training_num, env_type="gym" File "/home/zhulin/.conda/envs/mytorch/lib/python3.7/site-packages/envpool/registration.py", line 43, in make f"{task_id} is not supported, envpool.list_all_envs() may help." AssertionError: Acc-v3 is not supported, envpool.list_all_envs() may help.

  • [BUG] Breakout-v5 Performance Regression


    Describe the bug

    PPO can no longer reach a 400 game score in Breakout-v5 within 10M steps of training (same hyperparameters), as it can in BreakoutNoFrameskip-v4.


    To Reproduce

    Run the https://wandb.ai/costa-huang/cleanRL/runs/26k4q5jo/code?workspace=user-costa-huang to reproduce envpool's results and https://wandb.ai/costa-huang/cleanRL/runs/1ngqmz96/code?workspace=user-costa-huang to reproduce BreakoutNoFrameskip-v4 results.

    Expected behavior

    PPO should reach a 400 game score in Breakout-v5 within 10M steps of training.

    System info

    Describe the characteristic of your environment:

    • Describe how the library was installed (pip, source, ...)
    • Python version
    • Versions of any other relevant libraries
    import envpool, numpy, sys
    print(envpool.__version__, numpy.__version__, sys.version, sys.platform)
    
    >>> import envpool, numpy, sys
    >>> print(envpool.__version__, numpy.__version__, sys.version, sys.platform)
    0.4.3 1.21.5 3.9.5 (default, Jul 19 2021, 13:27:26)
    [GCC 10.3.0] linux
    

    Reason and Possible fixes

    I ran gym's ALE/Breakout-v5 as well and also saw a regression, as shown below, but on closer inspection that was because ALE/Breakout-v5 uses the full action space by default (14 discrete actions), whereas envpool's Breakout-v5 uses the minimal 4 discrete actions. So I have no idea why the regression happens with envpool...
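
    A quick way to confirm the action-space difference described above (a diagnostic sketch, assuming both envpool and gym with ale-py are installed):

    import envpool
    import gym

    # envpool's Breakout-v5 uses the minimal action set,
    # while gym's ALE/Breakout-v5 defaults to the full action set
    print(envpool.make_gym("Breakout-v5", num_envs=1).action_space.n)
    print(gym.make("ALE/Breakout-v5").action_space.n)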


    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    • [x] I have read the documentation (required)
    • [x] I have provided a minimal working example to reproduce the bug (required)
  • Add CleanRL examples: PPO solve Pong in 5 mins


    Kudos to this repo! This PR adds a CleanRL example. Interestingly, after increasing num_envs to 32, I was able to solve Pong in 10 mins :D


    See the tracked experiment in costa-huang/cleanRL/runs/3rx432mj

    See also https://github.com/vwxyzjn/cleanrl/pull/100

  • [Feature Request] Minecraft Integration


    Motivation

    A large-scale environment used for foundation-model training, in desperate need of a sampling speedup; it currently runs at 40-50 FPS per env.

    Resource

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
  • [BUG] Incompatible with latest gym normalize wrappers


    Describe the bug

    A clear and concise description of what the bug is.

    To Reproduce

    Steps to reproduce the behavior.

    Please try to provide a minimal example to reproduce the bug. Error messages and stack traces are also helpful.

    Please use the markdown code blocks for both code and stack traces.

    import numpy as np
    import envpool
    import gym
    
    envs = envpool.make(
        "HalfCheetah-v4",
        env_type="gym",
        num_envs=4,
    )
    envs.num_envs = 4
    envs.single_action_space = envs.action_space
    envs.single_observation_space = envs.observation_space
    envs.is_vector_env = True
    envs = gym.wrappers.ClipAction(envs)
    envs = gym.wrappers.NormalizeObservation(envs)
    envs = gym.wrappers.TransformObservation(envs, lambda obs: np.clip(obs, -10, 10))
    envs = gym.wrappers.NormalizeReward(envs)
    envs = gym.wrappers.TransformReward(envs, lambda reward: np.clip(reward, -10, 10))
    obs = envs.reset()
    envs.step(np.array([envs.action_space.sample() for _ in range(envs.num_envs)]))
    
    Traceback (most recent call last):
      File "/home/costa/Documents/go/src/github.com/vwxyzjn/envpool-cleanrl/bug.py", line 22, in <module>
        envs.step(np.array([envs.action_space.sample() for _ in range(envs.num_envs)]))
      File "/home/costa/.cache/pypoetry/virtualenvs/envpool-cleanrl-uAHoRI5J-py3.9/lib/python3.9/site-packages/gym/core.py", line 532, in step
        step_returns = self.env.step(action)
      File "/home/costa/.cache/pypoetry/virtualenvs/envpool-cleanrl-uAHoRI5J-py3.9/lib/python3.9/site-packages/gym/wrappers/normalize.py", line 149, in step
        self.env.step(action), True, self.is_vector_env
      File "/home/costa/.cache/pypoetry/virtualenvs/envpool-cleanrl-uAHoRI5J-py3.9/lib/python3.9/site-packages/gym/core.py", line 493, in step
        step_returns = self.env.step(action)
      File "/home/costa/.cache/pypoetry/virtualenvs/envpool-cleanrl-uAHoRI5J-py3.9/lib/python3.9/site-packages/gym/wrappers/normalize.py", line 77, in step
        obs, rews, terminateds, truncateds, infos = step_api_compatibility(
      File "/home/costa/.cache/pypoetry/virtualenvs/envpool-cleanrl-uAHoRI5J-py3.9/lib/python3.9/site-packages/gym/utils/step_api_compatibility.py", line 178, in step_api_compatibility
        return step_to_new_api(step_returns, is_vector_env)
      File "/home/costa/.cache/pypoetry/virtualenvs/envpool-cleanrl-uAHoRI5J-py3.9/lib/python3.9/site-packages/gym/utils/step_api_compatibility.py", line 59, in step_to_new_api
        and not infos["_TimeLimit.truncated"][i]
    KeyError: '_TimeLimit.truncated'
    

    Expected behavior

    It would be great if envpool is compatible with the gym normalize wrappers or the other way around.

    System info

    Describe the characteristic of your environment:

    • Describe how the library was installed (pip, source, ...)
    • Python version
    • Versions of any other relevant libraries
    import envpool, numpy, sys
    print(envpool.__version__, numpy.__version__, sys.version, sys.platform)
    
    0.6.3.post1 1.22.4 3.9.5 (default, Jul 19 2021, 13:27:26) 
    [GCC 10.3.0] linux
    

    Reason and Possible fixes

    I think the reason is that the new gym.wrappers.NormalizeReward wrapper expects to see something like _TimeLimit.truncated in the info dict...
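
    One possible user-side workaround, until either side changes, is a thin adapter that injects the key the compatibility layer looks for. This is only a sketch of the idea (the wrapper name is made up), and it assumes no episode is actually ended by a time limit:

    import gym
    import numpy as np

    class EnvPoolTruncationShim(gym.Wrapper):
        """Hypothetical adapter that adds the `_TimeLimit.truncated` info key
        expected by gym's step-API compatibility layer."""

        def step(self, action):
            obs, rew, done, info = self.env.step(action)
            # pretend no episode was truncated by a time limit
            info["_TimeLimit.truncated"] = np.zeros(len(done), dtype=bool)
            return obs, rew, done, info

    # apply it right around the envpool env, before the normalize wrappers:
    # envs = EnvPoolTruncationShim(envs)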

    Checklist

    • [x] I have checked that there is no similar issue in the repo (required)
    • [x] I have read the documentation (required)
    • [x] I have provided a minimal working example to reproduce the bug (required)
  • [Feature Request] add cmake build method, make it easier for adding envpool as a third-party dependency in c++ environment


    Motivation

    Thanks for this nice library, which aims to provide a general solution for speeding up RL environment parallelization. Currently, envpool treats Python users as first-class citizens, ignoring and sacrificing the C++ user experience.

    Solution

    Add a CMake build method to make it easier to add envpool as a third-party dependency in a C++ environment, for users who are not familiar with Bazel or in situations where using Bazel is infeasible.

  • [Feature Request] Customized Robotic Manipulation Environment


    Motivation

    We have a project that wants to run a large number of robotic manipulation environments from robogym in parallel to train an RL algorithm. I have taken a look at the docs and I know that for a customized environment we need to add C++ code and build from source following this guide. Also, among the gym and dm_control environments added so far, I haven't found any robotic arm or hand environment example. There is a lot to add for a robotic manipulation environment, including kinematics and different control modes, if we start from C++ and port all the Python code from robogym. I am wondering if you have tested or considered adding support for robotic manipulation tasks. Do you have any suggestions on how to start adding a complicated robotic manipulation task, e.g., shadowhand manipulation?

    Thanks for creating such a useful tool and codebase for parallel training in simulation!

    Solution

    Could you add features to ease transferring other gym-based, complicated robotic manipulation tasks into envpool, or provide guidance on how to effectively build a customized robotic manipulation task in envpool beyond what is in the docs?

    Additional context

    Robogym: a repo developed by OpenAI with an API similar to gym's, but including more robotic manipulation tasks, e.g., shadowhand: https://github.com/openai/robogym

    Checklist

    • [x] I have checked that there is no similar issue in the repo. There is one for customized environments, but my request is specifically about robotic manipulation environments.
  • [BUG] prevent using recv() without send()


    Hello,

    I just found that if I mistakenly run recv() without send(), the program goes into what looks like an infinite loop.

    This can very easily happen in Jupyter, in an interactive console, or because of a program bug.

    Would it be better to add a timeout inside the recv() function?

    Best regards.

  • [BUG] Using `env.reset()` and `env.async_reset()` results in illegal instructions


    Describe the bug

    Using env.reset() together with env.async_reset() results in illegal instructions. This warrants a better error message.

    To Reproduce

    import envpool
    import numpy as np
    
    # make asynchronous
    num_envs = 64
    batch_size = 16
    env = envpool.make("Pong-v5", env_type="gym", num_envs=num_envs, batch_size=batch_size)
    action_num = env.action_space.n
    env.reset()
    env.async_reset()  # send the initial reset signal to all envs
    while True:
        obs, rew, done, info = env.recv()
        env_id = info["env_id"]
        action = np.random.randint(action_num, size=batch_size)
        env.send(action, env_id)
        print(action)
    
    [0 4 0 3 1 1 2 1 0 3 0 3 3 4 4 5]
    [2 1 0 2 1 2 4 2 5 0 1 0 2 4 2 2]
    [1 5 2 0 5 4 1 2 0 1 2 0 0 3 2 0]
    [5 5 4 3 5 0 3 5 5 5 2 5 0 2 0 2]
    [1 3 5 3 4 2 4 1 1 5 3 5 1 5 5 1]
    [5 4 4 1 4 3 2 0 4 1 1 5 2 4 5 5]
    [1 0 1 0 1 3 3 3 2 0 0 3 4 5 0 2]
    Illegal Instruction! 69
    Illegal Instruction! 8a
    Illegal Instruction! 2
    Illegal Instruction! 2
    Illegal Instruction! 2
    Illegal Instruction! 2
    Illegal Instruction! 2
    Illegal Instruction! [4 1 4 1 2 1 2 2 3 2 4 0 5 4 0 2]
    2
    Illegal Instruction! e8
    Illegal Instruction! 30
    Illegal Instruction! 2
    Illegal Instruction! 38
    Illegal Instruction! 84
    Illegal Instruction! a
    Illegal Instruction! b9
    [1 0 1 4 4 5 3 3 2 1 1 4 3 1 0 0]
    [2 5 2 1 0 2 5 0 2 3 1 4 4 0 3 3]
    [4 0 3 2 5 2 2 4 4 0 2 2 4 4 0 3]
    Illegal Instruction! b2
    [3 2 3 0 2 1 0 1 4 2 1 0 3 3 2 3]
    [4 5 2 3 1 0 2 5 5 2 2 2 5 5 5 4]
    [4 1 2 0 5 2 0 2 4 1 4 2 4 0 4 4]
    [0 4 2 0 5 4 4 1 2 4 0 0 1 0 4 4]
    [2 1 5 4 3 3 2 5 2 2 0 4 5 4 5 0]
    [2 0 5 2 3 3 2 4 1 2 4 2 0 0 0 0]
    [0 3 1 2 0 5 2 5 1 3 0 3 2 0 4 5]
    [1 0 0 3 2 3 2 1 2 4 0 4 4 4 4 2]
    [4 2 5 5 4 1 3 2 4 4 4 0 1 3 0 5]
    [5 5 0 3 3 4 0 1 3 4 4 3 2 1 5 3]
    [2 1 2 2 4 4 2 4 4 3 3 5 2 1 1 3]
    Illegal Instruction! 2
    

    Expected behavior

    A better error message should be shown, e.g., stating that env.reset() and env.async_reset() cannot be used at the same time.

    • [x] I have checked that there is no similar issue in the repo (required)
    • [x] I have read the documentation (required)
    • [x] I have provided a minimal working example to reproduce the bug (required)