Nimble: Physics Engine for Deep Learning

Stanford Nimble Logo

Build Status

Stanford Nimble

pip3 install nimblephysics

** BETA SOFTWARE **

Use physics as a non-linearity in your neural network. A single timestep, nimble.timestep(world, state, controls), is a valid PyTorch function.

Forward pass illustration

We support an analytical backwards pass, that works even through contact and friction.

Backpropagation illustration

It's as easy as:

from nimble import timestep

# Everything is a PyTorch Tensor, and this is differentiable!!
next_state = timestep(world, current_state, control_forces)

Nimble started life as a fork of the popular DART physics engine, with analytical gradients and a PyTorch binding added. We've worked hard to maintain as much backwards compatibility as we can, so many simulations that worked in DART should translate directly to Nimble.
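
Here is a slightly fuller sketch of the same pattern, with hedged assumptions: the URDF path is hypothetical, the world is assumed to contain only that one skeleton, and the state vector is assumed to be the skeleton's positions followed by its velocities.

import torch
import nimblephysics as nimble

world = nimble.simulation.World()
world.setGravity([0, -9.81, 0])
skel = world.loadSkeleton("./my_robot.urdf")  # hypothetical path

# Assumed state layout: [positions, velocities]; one control force per DOF
dofs = skel.getNumDofs()
init_state = torch.zeros(2 * dofs, requires_grad=True)
controls = torch.zeros(dofs)

# Every call to nimble.timestep is a differentiable PyTorch op,
# so gradients flow back through the whole rollout.
state = init_state
for _ in range(100):
    state = nimble.timestep(world, state, controls)

loss = state.sum()
loss.backward()
print(init_state.grad)  # d(loss) / d(initial state)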

Check out our website for more information.

Comments
  • Issue with Rajagopal example

    Issue with Rajagopal example

    Hi!

    Thanks a lot for the amazing work! When I tried to run the rajagopal.py example, I encountered the error below:

    ❯ python rajagopal.py
    Msg [NameManager::issueNewName] (default) The name [Joint_rot_z] is a duplicate, so it has been renamed to [Joint_rot_z(1)]
    Msg [NameManager::issueNewName] (default) The name [hip_r_z] is a duplicate, so it has been renamed to [hip_r_z(1)]
    Msg [NameManager::issueNewName] (default) The name [hip_l_z] is a duplicate, so it has been renamed to [hip_l_z(1)]
    Msg [NameManager::issueNewName] (default) The name [back_z] is a duplicate, so it has been renamed to [back_z(1)]
    Msg [NameManager::issueNewName] (default) The name [acromial_r_z] is a duplicate, so it has been renamed to [acromial_r_z(1)]
    Msg [NameManager::issueNewName] (default) The name [acromial_l_z] is a duplicate, so it has been renamed to [acromial_l_z(1)]
    Traceback (most recent call last):
      File "rajagopal.py", line 12, in <module>
        world.addSkeleton(skel)
    TypeError: addSkeleton(): incompatible function arguments. The following argument types are supported:
        1. (self: nimblephysics_libs._nimblephysics.simulation.World, skeleton: nimblephysics_libs._nimblephysics.dynamics.Skeleton) -> str
    
    Invoked with: <nimblephysics_libs._nimblephysics.simulation.World object at 0x7f95df2df880>, <nimblephysics_libs._nimblephysics.biomechanics.OpenSimFile object at 0x7f95df2df8b8>
    

    I have the version 0.6.1 installed via pip. How can I solve this problem?
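
    For reference, the traceback suggests addSkeleton() received the whole OpenSimFile rather than the Skeleton it wraps. Assuming the parser entry point is nimble.biomechanics.OpenSimParser.parseOsim and the parsed result exposes a skeleton attribute (both worth verifying against your installed version; the path below is hypothetical), the fix would look something like:

        import nimblephysics as nimble

        world = nimble.simulation.World()
        # Parse the .osim file, then add the parsed Skeleton (not the OpenSimFile
        # wrapper) to the world.
        osim_file = nimble.biomechanics.OpenSimParser.parseOsim("./Rajagopal2015.osim")
        world.addSkeleton(osim_file.skeleton)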

  • How to reset the joint DOF

    How to reset the joint DOF

    Hi,

    I am loading a humanoid from a URDF file. It seems that we cannot directly define a ball joint in URDF, so I wonder: can we load a URDF with a 1-DOF joint and then change that joint to 3 DOFs after loading the model into Nimble?

    Thanks

  • Joint degree of freedom

    Joint degree of freedom

    Hi,

    The object is described via generalized coordinates. I wonder how to get the number of DOFs of the joint attached to each link? The question may be very basic... So I also wonder if there are any documents, besides the examples in the current repository, that introduce the basic functions?

    Thanks
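
    For reference, the Skeleton API is inherited from DART, so something like the following should enumerate each joint and its DOF count. This is a hedged sketch assuming the usual DART accessors are exposed in the Python bindings; the URDF path is hypothetical:

        import nimblephysics as nimble

        world = nimble.simulation.World()
        skel = world.loadSkeleton("./humanoid.urdf")

        # Per joint: name and number of degrees of freedom
        for i in range(skel.getNumJoints()):
            joint = skel.getJoint(i)
            print(joint.getName(), joint.getNumDofs())

        # Per link (body node), via its parent joint
        for i in range(skel.getNumBodyNodes()):
            body = skel.getBodyNode(i)
            print(body.getName(), body.getParentJoint().getNumDofs())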

  • getMultipleContactInverseDynamicsOverTime

    getMultipleContactInverseDynamicsOverTime

    Hi,

    I am able to run all the examples related to inverse dynamics, but I fail to call the "getMultipleContactInverseDynamicsOverTime" function. Specifically, I pass the arguments following the documented requirements, but the program still indicates the input format is wrong. So I am wondering if anyone has used this function successfully?

    Thanks

  • segmentation fault when calling `getJoints()`

    segmentation fault when calling `getJoints()`

    The following is the behavior that I noticed in Python:

        import nimblephysics as nimble

        world = nimble.simulation.World()
        skel: nimble.dynamics.Skeleton = world.loadSkeleton(
            "data/sdf/atlas/ground.urdf"
        )
        skel.getJoints()  # segmentation fault happens here
    
  • `integrateVelocitiesFromImpulses` and `integratePositions`

    `integrateVelocitiesFromImpulses` and `integratePositions`

    Round 3 Review

    • Add default arg for integrateVelocitiesFromImpulses.

    Round 2 Review

    • Add python bindings.

    Round 1 Review

    • Separate position integration from impulse velocity integration.
    • Create functions for velocity (impulse) integration and position integration.
  • Drop frames in excess of 100fps on C++ end, before hitting web GUI

    Drop frames in excess of 100fps on C++ end, before hitting web GUI

    If we try to display a 1000fps simulation in real time in the browser, right now the C++ will blindly attempt to send JSON packets to the browser at 1000fps. That's too much for the browser to handle, and it slows everything down. So the C++ should rate-limit itself to 100fps, and silently drop/batch updates that arrive faster than that on the C++ side.
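
    A minimal sketch of the intended drop logic, written in Python only for illustration (the real change belongs on the C++ side, in the code that pushes JSON packets to the GUI):

        import time

        class FrameRateLimiter:
            """Drop (or batch) updates that arrive faster than max_fps."""

            def __init__(self, max_fps: float = 100.0):
                self.min_interval = 1.0 / max_fps
                self.last_sent = float("-inf")

            def should_send(self) -> bool:
                now = time.monotonic()
                if now - self.last_sent >= self.min_interval:
                    self.last_sent = now
                    return True
                return False  # silently drop this frame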

  • dart_layer needs update setForces -> setExternalForces

    dart_layer needs update setForces -> setExternalForces

    File "/home/yannis/anaconda3/envs/pytorch/lib/python3.7/site-packages/diffdart/dart_layer.py", line 96, in dart_layer return DartLayer.apply(world, pos, vel, torque, pointer) # type: ignore File "/home/yannis/anaconda3/envs/pytorch/lib/python3.7/site-packages/diffdart/dart_layer.py", line 35, in forward world.setForces(torque.detach().numpy()) AttributeError: 'diffdart_libs._diffdart.simulation.World' object has no attribute 'setForces'

  • renderTrajectoryLines produces jittery lines

    renderTrajectoryLines produces jittery lines

    If you call gui.stateMachine().renderTrajectoryLines(...) multiple times with the same input, you'll get visually different results each time. This indicates to me some kind of data corruption or race condition hidden in here that we need to ferret out.

  • Capsule-Floor penetration

    Capsule-Floor penetration

    We're seeing penetration in Yannis's demos: https://github.com/iexarchos/DiffDART_DDP-iLQR_opt

    Specifically, HalfCheetah and Reacher

    This is probably an issue with capsule-box collisions.

  • Support differentiating through "Constraint Force Mixing"

    Support differentiating through "Constraint Force Mixing"

    Our LCPs are only guaranteed to be solvable if A is positive semi-definite and there are no force bounds. To increase the stability of our LCPs, we can use a trick called "Constraint Force Mixing" (CFM). In practice, this means multiplying the elements on the diagonal of A by 1.0 + eps, where eps is some small positive value. This ensures A isn't singular and, more generally, reduces how close A is to being singular.

    You can turn constraint force mixing on and off with void World::setConstraintForceMixingEnabled(bool enable). Currently, CFM is turned off by default, because our Jacobians don't support it.

    This ticket is about supporting it.

    The matrix A is computed in dart/constraint/BoxedLcpConstraintSolver.cpp, in the method solveConstrainedGroup(). The A matrix it computes is in Open Dynamics Engine format: row-major order, where each row's length is rounded up to the nearest multiple of 4 (to allow vectorization) and the padding entries are ignored. This code calls out to the individual constraints to populate A: for each constraint, it applies a unit impulse and then measures the change in relative velocity at the constraint. The method that applies the CFM is ContactConstraint::getVelocityChange(), towards the bottom.

    Supporting this in our differentiation means tracking all the CFM constants for each element of the diagonal of A, and storing them for later. These are constants w.r.t. differentiation, but we need to ensure that we scale A's diagonal, and the gradient of A's diagonal, by these constants wherever A is computed. A sketch of the scaling is below.
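
    A minimal sketch of the diagonal scaling being described, ignoring the ODE row-padding format:

        import numpy as np

        def apply_cfm(A: np.ndarray, eps: float = 1e-9) -> np.ndarray:
            # Constraint Force Mixing: scale the diagonal of the LCP matrix A
            # by (1 + eps) to push it away from singularity.
            A_cfm = A.copy()
            A_cfm[np.diag_indices_from(A_cfm)] *= (1.0 + eps)
            return A_cfm

        # For the gradient bookkeeping described above, the same (1 + eps) factors
        # would also scale the gradient of A's diagonal wherever A is recomputed.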

  • Wrong interpretation of OpenSim polynomial curves?

    Wrong interpretation of OpenSim polynomial curves?

    I used AddBiomechanics to process data with this model. The model is based on the Rajagopal OpenSim model, but I modified the knee joint definitions to use polynomials instead of SimmSplines. It looks great in the Nimble viewer but wrong in the OpenSim GUI, suggesting a mismatch between OpenSim and Nimble Physics. When I replace the polynomials with the original SimmSplines, it looks good, confirming that the polynomials are the problem and are probably being interpreted incorrectly by Nimble Physics.

    For reference, here is the OpenSim PolynomialFunction class. If I were to guess, I would first check whether you use the same order for the coefficients: OpenSim uses decreasing order.
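
    To make the suspected mismatch concrete: the two coefficient-order conventions give different values for the same coefficient list. A small illustration with made-up coefficients (np.polyval expects decreasing order):

        import numpy as np

        coeffs = [2.0, -1.0, 0.5]
        x = 0.3

        # Decreasing order (what the reporter says OpenSim uses):
        #   f(x) = 2*x^2 - 1*x + 0.5
        f_decreasing = np.polyval(coeffs, x)

        # Increasing order (constant term first):
        #   f(x) = 2 - 1*x + 0.5*x^2
        f_increasing = np.polyval(coeffs[::-1], x)

        print(f_decreasing, f_increasing)  # 0.38 vs 1.745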

  • Confusing Gradients on a Simple Scene

    Confusing Gradients on a Simple Scene

    Confusing Gradient Output

    I got confusing output gradients from Nimble on a simple scene. The scene consists of two balls with the same mass making a fully elastic collision. In this scene, Nimble gives gradients that are inconsistent with the analytical gradients.

    Scene description

    [Scene illustration: two balls on a horizontal axis]

    Two balls are allowed to move only horizontally. There is no friction or gravity. The two balls have the same mass of 1 kg and the same radius r = 0.1 m. In the beginning, the left ball at x1 = 0 (shown in blue) moves at v0 = 1 m/s to the right, while the right ball at x2 = 0.52 m (shown in green) has velocity u0 = 0. Since there is no friction, the blue ball moves at constant velocity until the two balls collide at t = (x2 - x1 - 2r) / v0 = 0.32 s. The two balls then exchange speeds, since they have the same mass and the collision is fully elastic: the blue ball stays still while the green ball moves at 1 m/s to the right. At t = T = 1 s, the green ball should be at xf = 1.2 m.

    Gradients computation

    It is easy to show that the analytical form of xf as a function of (x1, x2, v0, u0) is: xf = v0 T + x1 + 2r

    So the analytical gradient of xf w.r.t. (x1, x2, v0, u0) is (1, 0, 1, 0). However, the gradients output by Nimble are (0.75, 0.25, 0.91, 0.08), which is obviously inconsistent with the analytical gradients.

    Reproduce

    System configuration:

    • OS: Ubuntu 20.04 LTS
    • CPU: AMD Ryzen Threadripper 3970X 32-Core Processor
    • GPU: NVIDIA GeForce RTX 3090
    • Nimblephysics version: 0.8.38
    • Pytorch version: 1.13.0
    • Python: 3.9.13

    Source code:

    import nimblephysics as nimble
    import torch
    
    
    def create_ball(radius, color):
        ball = nimble.dynamics.Skeleton()
        sphereJoint, sphereBody = ball.createTranslationalJoint2DAndBodyNodePair() 
    
        sphereShape = sphereBody.createShapeNode(nimble.dynamics.SphereShape(radius))
        sphereVisual = sphereShape.createVisualAspect()
        sphereVisual.setColor([i / 255.0 for i in color])
        sphereShape.createCollisionAspect()
        sphereBody.setFrictionCoeff(0.0)
        sphereBody.setRestitutionCoeff(1.0)
        sphereBody.setMass(1)
    
        return ball
    
    
    def create_world():
        world = nimble.simulation.World()
        world.setGravity([0, 0, 0]) # No gravity
    
        radius = 0.1
        world.addSkeleton(create_ball(radius, [68, 114, 196]))
        world.addSkeleton(create_ball(radius, [112, 173, 71]))
    
        return world
    
    
    def simulate_and_backward(world, x1, x2, v0, u0):
        # Ball 1 is initialized to be at x1 on the x-axis, with velocity v0.
        # Ball 2 is initialized to be at x2 on the x-axis, with velocity u0.
        # The zeros below mean that the vertical positions and velocities are all zero.
    # So the balls would only move in the horizontal direction.
        init_state = torch.tensor([x1, 0, x2, 0, v0, 0, u0, 0], requires_grad=True)
        control_forces = torch.zeros(4) # No external forces
        total_simulation_time = 1.0 # simulate for 1 second
        num_time_steps = 1000       # split into 1000 discrete small time steps
        # Each time step has length 0.001 seconds
        world.setTimeStep(total_simulation_time / num_time_steps)
        state = init_state
        states = [state]
        for i in range(num_time_steps):
            state = nimble.timestep(world, state, control_forces)
            states.append(state)
    
        # xf is the final x-coordinate of ball 2
        xf = state[2]
        xf.backward()
    
        # The gradients on the y-axis are irrelevant, so we exclude them.
        grad = (init_state.grad)[0:8:2] 
        print(f"xf = {xf.detach().item()}")
        print(f"gradients of xf = {grad}")
    
        return states
    
    
    if __name__ == "__main__":
        world = create_world()
        gui = nimble.NimbleGUI(world)
        gui.serve(8080)
        states = simulate_and_backward(world, x1=0, x2=0.52, v0=1, u0=0)
        gui.loopStates(states)
        input()
        gui.stopServing()
    

    Execution results:

    xf = 1.1989999809264922
    gradients of xf = tensor([0.7500, 0.2500, 0.9197, 0.0803])
    
  • SliderJoint support in the OpenSim parser

    SliderJoint support in the OpenSim parser

    Is your feature request related to a problem? Please describe. When trying to import the Tug of War osim model, I got a segmentation fault due to the SliderJoint not being supported by the osim parser (I was using v0.8.34 in Python 3.8). Side note: I got a segmentation fault instead of an error message saying that the joint is not yet implemented.

    Describe the solution you'd like

    • Support the OpenSim SliderJoint with the dart/dynamics/PrismaticJoint
    • A normal error message instead of a segmentation fault when trying to parse an unsupported joint.

    Describe alternatives you've considered None

    Additional context I would expect that implementing the SliderJoint (PrismaticJoint) is quite easy, since I expect the syntax is basically the same as the already-supported PinJoint (`RevoluteJoint`).

  • gui not working, mac OS

    gui not working, mac OS

    Environment

    • Nimble physics version: master, v0.8.34
    • OS name and version: macOS
    • Browser : Safari, Firefox

    Expected Behavior

    Showing the cheetah GUI in the web browser with some random motion

    Current Behavior

    1. The states update loop runs correctly.

    2. I get this warning while running: Warning [BodyNode.cpp:619] [BodyNode] A negative or zero mass [0] is set to BodyNode [h_pelvis_aux2]

    3. But the GUI in the browser shows only an empty page (no matter whether Safari or Firefox is used); JavaScript is enabled in my browser.

    loaded page source : page_html

    javascript : bundle

    Code to Reproduce

    Used the example code from https://nimblephysics.org/docs/quick-start.html

    Archive.zip

  • Body node setScale() leads to incorrect origin of the child link

    Body node setScale() leads to incorrect origin of the child link

    Bug Report

    Environment

    Ubuntu 18.04, GCC 7.4.0

    Expected Behavior

    Current Behavior

    When I use setScale(), only the shape of the current node is scaled, while the origin of the child link ends up at the origin (all zeros).

    Steps to Reproduce

    Code to Reproduce

    robot_path = "~.urdf" world: nimble.simulation.World = nimble.simulation.World() skel: nimble.dynamics.Skeleton = world.loadSkeleton(robot_path) world.getBodyNodeByIndex(node).setScale([1, 0.8, 1])

  • add getLocalVertices()

    add getLocalVertices()


    Before creating a pull request

    • [ ] Document new methods and classes
    • [ ] Format new code files using clang-format

    Before merging a pull request

    • [ ] Set version target by selecting a milestone on the right side
    • [ ] Summarize this change in CHANGELOG.md
    • [ ] Add unit test(s) for this change