GitHub C++ Trending (daily)

The latest build: 2024-06-16. Source of data: GitHubTrendingRSS.

NVIDIA® TensorRT is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.


TensorRT Open Source Software

This repository contains the Open Source Software (OSS) components of NVIDIA TensorRT. It includes the sources for TensorRT plugins and ONNX parser, as well as sample applications demonstrating usage and capabilities of the TensorRT platform. These open source software components are a subset of the TensorRT General Availability (GA) release with some extensions and bug-fixes.
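
To give a flavor of how the OSS ONNX parser and the TensorRT builder fit together, here is a minimal, hedged C++ sketch (not one of the repository's official samples) that parses an ONNX model and writes out a serialized engine. It uses the standard TensorRT C++ headers (NvInfer.h, NvOnnxParser.h); the model.onnx and model.engine paths are placeholders for illustration only.

#include <NvInfer.h>
#include <NvOnnxParser.h>
#include <fstream>
#include <iostream>
#include <memory>

// Minimal logger required by the TensorRT builder and ONNX parser.
class Logger : public nvinfer1::ILogger {
  void log(Severity severity, const char* msg) noexcept override {
    if (severity <= Severity::kWARNING) std::cerr << msg << std::endl;
  }
};

int main() {
  Logger logger;
  auto builder = std::unique_ptr<nvinfer1::IBuilder>(nvinfer1::createInferBuilder(logger));
  auto network = std::unique_ptr<nvinfer1::INetworkDefinition>(builder->createNetworkV2(0));
  auto parser = std::unique_ptr<nvonnxparser::IParser>(nvonnxparser::createParser(*network, logger));

  // "model.onnx" is a placeholder path used only for illustration.
  if (!parser->parseFromFile("model.onnx", static_cast<int>(nvinfer1::ILogger::Severity::kWARNING))) {
    std::cerr << "Failed to parse ONNX model" << std::endl;
    return 1;
  }

  auto config = std::unique_ptr<nvinfer1::IBuilderConfig>(builder->createBuilderConfig());
  auto engine = std::unique_ptr<nvinfer1::IHostMemory>(builder->buildSerializedNetwork(*network, *config));

  // Persist the serialized engine so the TensorRT runtime can deserialize it later.
  std::ofstream("model.engine", std::ios::binary)
      .write(static_cast<const char*>(engine->data()), engine->size());
  return 0;
}

The bundled sample applications cover the complete, supported workflow, including error handling and runtime inference.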

Need enterprise support? NVIDIA global support is available for TensorRT with the NVIDIA AI Enterprise software suite. Check out NVIDIA LaunchPad for free access to a set of hands-on labs with TensorRT hosted on NVIDIA infrastructure.

Join the TensorRT and Triton community and stay current on the latest product updates, bug fixes, content, best practices, and more.

Prebuilt TensorRT Python Package

We provide the TensorRT Python package for an easy installation.
To install:

pip install tensorrt

You can skip the Build section to enjoy TensorRT with Python.

Build

Prerequisites

To build the TensorRT-OSS components, you will first need the following software packages.

TensorRT GA build

  • TensorRT v10.0.1.6
    • Available from direct download links listed below

System Packages

Optional Packages

Downloading TensorRT Build

  1. Download TensorRT OSS

    git clone -b main https://github.com/nvidia/TensorRT TensorRT
    cd TensorRT
    git submodule update --init --recursive
  2. (Optional - if not using TensorRT container) Specify the TensorRT GA release build path

    If using the TensorRT OSS build container, TensorRT libraries are preinstalled under /usr/lib/x86_64-linux-gnu and you may skip this step.

    Else download and extract the TensorRT GA build from NVIDIA Developer Zone with the direct links below:

    Example: Ubuntu 20.04 on x86-64 with cuda-12.4

    cd ~/Downloads
    tar -xvzf TensorRT-10.0.1.6.Linux.x86_64-gnu.cuda-12.4.tar.gz
    export TRT_LIBPATH=`pwd`/TensorRT-10.0.1.6

    Example: Windows on x86-64 with cuda-12.4

    Expand-Archive -Path TensorRT-10.0.1.6.Windows10.win10.cuda-12.4.zip
    $env:TRT_LIBPATH="$pwd\TensorRT-10.0.1.6\lib"

Setting Up The Build Environment

For Linux platforms, we recommend that you generate a docker container for building TensorRT OSS as described below. For native builds, please install the prerequisite System Packages.

  1. Generate the TensorRT-OSS build container.

    The TensorRT-OSS build container can be generated using the supplied Dockerfiles and build scripts. The build containers are configured for building TensorRT OSS out-of-the-box.

    Example: Ubuntu 20.04 on x86-64 with cuda-12.4 (default)

    ./docker/build.sh --file docker/ubuntu-20.04.Dockerfile --tag tensorrt-ubuntu20.04-cuda12.4

    Example: Rockylinux8 on x86-64 with cuda-12.4

    ./docker/build.sh --file docker/rockylinux8.Dockerfile --tag tensorrt-rockylinux8-cuda12.4

    Example: Ubuntu 22.04 cross-compile for Jetson (aarch64) with cuda-12.4 (JetPack SDK)

    ./docker/build.sh --file docker/ubuntu-cross-aarch64.Dockerfile --tag tensorrt-jetpack-cuda12.4

    Example: Ubuntu 22.04 on aarch64 with cuda-12.4

    ./docker/build.sh --file docker/ubuntu-22.04-aarch64.Dockerfile --tag tensorrt-aarch64-ubuntu22.04-cuda12.4
  2. Launch the TensorRT-OSS build container.

    Example: Ubuntu 20.04 build container

    ./docker/launch.sh --tag tensorrt-ubuntu20.04-cuda12.4 --gpus all

    NOTE:
    1. Use the --tag corresponding to the build container generated in Step 1.
    2. NVIDIA Container Toolkit is required for GPU access (running TensorRT applications) inside the build container.
    3. sudo password for Ubuntu build containers is 'nvidia'.
    4. Specify port number using --jupyter <port> for launching Jupyter notebooks.

Building TensorRT-OSS

  • Generate Makefiles and build.

    Example: Linux (x86-64) build with default cuda-12.4

     cd $TRT_OSSPATH
     mkdir -p build && cd build
     cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out
     make -j$(nproc)

    Example: Linux (aarch64) build with default cuda-12.4

     cd $TRT_OSSPATH
     mkdir -p build && cd build
     cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_aarch64-native.toolchain
     make -j$(nproc)

    Example: Native build on Jetson (aarch64) with cuda-12.4

     cd $TRT_OSSPATH
     mkdir -p build && cd build
     cmake .. -DTRT_LIB_DIR=$TRT_LIBPATH -DTRT_OUT_DIR=`pwd`/out -DTRT_PLATFORM_ID=aarch64 -DCUDA_VERSION=12.4
     CC=/usr/bin/gcc make -j$(nproc)

    NOTE: C compiler must be explicitly specified via CC= for native aarch64 builds of protobuf.

    Example: Ubuntu 22.04 Cross-Compile for Jetson (aarch64) with cuda-12.4 (JetPack)

     cd $TRT_OSSPATH
     mkdir -p build && cd build
     cmake .. -DCMAKE_TOOLCHAIN_FILE=$TRT_OSSPATH/cmake/toolchains/cmake_aarch64.toolchain -DCUDA_VERSION=12.4 -DCUDNN_LIB=/pdk_files/cudnn/usr/lib/aarch64-linux-gnu/libcudnn.so -DCUBLAS_LIB=/usr/local/cuda-12.4/targets/aarch64-linux/lib/stubs/libcublas.so -DCUBLASLT_LIB=/usr/local/cuda-12.4/targets/aarch64-linux/lib/stubs/libcublasLt.so -DTRT_LIB_DIR=/pdk_files/tensorrt/lib
     make -j$(nproc)
    Example: Native builds on Windows (x86) with cuda-12.4

     cd $TRT_OSSPATH
     mkdir -p build
     cd build
     cmake .. -DTRT_LIB_DIR="$env:TRT_LIBPATH" -DCUDNN_ROOT_DIR="$env:CUDNN_PATH" -DTRT_OUT_DIR="$pwd\\out"
     msbuild TensorRT.sln /property:Configuration=Release -m:$env:NUMBER_OF_PROCESSORS

    NOTE:
    1. The default CUDA version used by CMake is 12.4.0. To override this, for example to 11.8, append -DCUDA_VERSION=11.8 to the cmake command.

  • Required CMake build arguments are:

    • TRT_LIB_DIR: Path to the TensorRT installation directory containing libraries.
    • TRT_OUT_DIR: Output directory where generated build artifacts will be copied.
  • Optional CMake build arguments:

    • CMAKE_BUILD_TYPE: Specify if binaries generated are for release or debug (contain debug symbols). Values consist of [Release] | Debug
    • CUDA_VERSION: The version of CUDA to target, for example [11.7.1].
    • CUDNN_VERSION: The version of cuDNN to target, for example [8.6].
    • PROTOBUF_VERSION: The version of Protobuf to use, for example [3.0.0]. Note: Changing this will not configure CMake to use a system version of Protobuf, it will configure CMake to download and try building that version.
    • CMAKE_TOOLCHAIN_FILE: The path to a toolchain file for cross compilation.
    • BUILD_PARSERS: Specify if the parsers should be built, for example [ON] | OFF. If turned OFF, CMake will try to find precompiled versions of the parser libraries to use in compiling samples. First in ${TRT_LIB_DIR}, then on the system. If the build type is Debug, then it will prefer debug builds of the libraries before release versions if available.
    • BUILD_PLUGINS: Specify if the plugins should be built, for example [ON] | OFF. If turned OFF, CMake will try to find a precompiled version of the plugin library to use in compiling samples. First in ${TRT_LIB_DIR}, then on the system. If the build type is Debug, then it will prefer debug builds of the libraries before release versions if available.
    • BUILD_SAMPLES: Specify if the samples should be built, for example [ON] | OFF.
    • GPU_ARCHS: GPU (SM) architectures to target. By default we generate CUDA code for all major SMs. Specific SM versions can be specified here as a quoted space-separated list to reduce compilation time and binary size. Table of compute capabilities of NVIDIA GPUs can be found here. Examples:
      • NVidia A100: -DGPU_ARCHS="80"
      • Tesla T4, GeForce RTX 2080: -DGPU_ARCHS="75"
      • Titan V, Tesla V100: -DGPU_ARCHS="70"
      • Multiple SMs: -DGPU_ARCHS="80 75"
    • TRT_PLATFORM_ID: Bare-metal build (unlike containerized cross-compilation). Currently supported options: x86_64 (default).

References

TensorRT Resources

Known Issues

An open-source C++ library developed and used at Facebook.


Folly: Facebook Open-source Library

Support Ukraine - Help Provide Humanitarian Aid to Ukraine.

What is folly?


Folly (acronymed loosely after Facebook Open Source Library) is a library of C++17 components designed with practicality and efficiency in mind. Folly contains a variety of core library components used extensively at Facebook. In particular, it's often a dependency of Facebook's other open source C++ efforts and a place where those projects can share code.

It complements (as opposed to competing against) offerings such as Boost and of course std. In fact, we embark on defining our own component only when something we need is either not available, or does not meet the needed performance profile. We endeavor to remove things from folly if or when std or Boost obsoletes them.

Performance concerns permeate much of Folly, sometimes leading to designs that are more idiosyncratic than they would otherwise be (see e.g. PackedSyncPtr.h, SmallLocks.h). Good performance at large scale is a unifying theme in all of Folly.

Check it out in the intro video

Explain Like I'm 5: Folly

Logical Design

Folly is a collection of relatively independent components, some as simple as a few symbols. There is no restriction on internal dependencies, meaning that a given folly module may use any other folly components.

All symbols are defined in the top-level namespace folly, except of course macros. Macro names are ALL_UPPERCASE and should be prefixed with FOLLY_. Namespace folly defines other internal namespaces such as internal or detail. User code should not depend on symbols in those namespaces.

Folly has an experimental directory as well. This designation connotes primarily that we feel the API may change heavily over time. This code, typically, is still in heavy use and is well tested.

Physical Design

At the top level Folly uses the classic "stuttering" scheme folly/folly used by Boost and others. The first directory serves as an installation root of the library (with possible versioning a la folly-1.0/), and the second is to distinguish the library when including files, e.g. #include <folly/FBString.h>.
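
As a small, hedged illustration of these conventions (not taken from the folly documentation), the sketch below includes two headers from the flat folly/ directory and uses symbols from the top-level folly namespace; it assumes folly and its dependencies are already installed and on your include/link paths.

#include <folly/FBString.h>
#include <folly/small_vector.h>
#include <iostream>

int main() {
  // folly::fbstring (from folly/FBString.h) is a drop-in replacement for std::string.
  folly::fbstring greeting = "hello from folly";

  // folly::small_vector stores a few elements inline before falling back to the heap.
  folly::small_vector<int, 4> values = {1, 2, 3};
  values.push_back(4);

  std::cout << greeting << ", " << values.size() << " values" << std::endl;
  return 0;
}

Linking such a program typically requires -lfolly plus folly's transitive dependencies (glog, gflags, double-conversion, and friends), which getdeps.py or your package manager provides.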

The directory structure is flat (mimicking the namespace structure), i.e. we don't have an elaborate directory hierarchy (it is possible this will change in future versions). The subdirectory experimental contains files that are used inside folly and possibly at Facebook but not considered stable enough for client use. Your code should not use files in folly/experimental lest it may break when you update Folly.

The folly/folly/test subdirectory includes the unittests for all components, usually named ComponentXyzTest.cpp for each ComponentXyz.*. The folly/folly/docs directory contains documentation.

What's in it?

Because of folly's fairly flat structure, the best way to see what's in it is to look at the headers in top level folly/ directory. You can also check the docs folder for documentation, starting with the overview.

Folly is published on GitHub at https://github.com/facebook/folly.

Build Notes

Because folly does not provide any ABI compatibility guarantees from commit to commit, we generally recommend building folly as a static library.

folly supports gcc (5.1+), clang, or MSVC. It should run on Linux (x86-32, x86-64, and ARM), iOS, macOS, and Windows (x86-64). The CMake build is only tested on some of these platforms; at a minimum, we aim to support macOS and Linux (on the latest Ubuntu LTS release or newer).

getdeps.py

This script is used by many of Meta's OSS tools. It will download and build all of the necessary dependencies first, and will then invoke cmake etc to build folly. This will help ensure that you build with relevant versions of all of the dependent libraries, taking into account what versions are installed locally on your system.

It's written in python so you'll need python3.6 or later on your PATH. It works on Linux, macOS and Windows.

The settings for folly's cmake build are held in its getdeps manifest build/fbcode_builder/manifests/folly, which you can edit locally if desired.

Dependencies

If on Linux or MacOS (with homebrew installed) you can install system dependencies to save building them:

# Clone the repo
git clone https://github.com/facebook/folly
# Install dependencies
cd folly
sudo ./build/fbcode_builder/getdeps.py install-system-deps --recursive

If you'd like to see the packages before installing them:

./build/fbcode_builder/getdeps.py install-system-deps --dry-run --recursive

On other platforms, or on Linux without system dependencies, getdeps.py will mostly download and build them for you during the build step.

Some of the dependencies getdeps.py uses and installs are:

  • a version of boost compiled with C++14 support.
  • googletest is required to build and run folly's tests.

Build

As noted above, getdeps.py will download and build all of the necessary dependencies first, and will then invoke cmake etc to build folly. This helps ensure that you build with relevant versions of all of the dependent libraries, taking into account what versions are installed locally on your system. It requires python 3.6+ to be on your path.

# Clone the repo
git clone https://github.com/facebook/folly
cd folly
# Build, using system dependencies if available
python3 ./build/fbcode_builder/getdeps.py --allow-system-packages build

It puts output in its scratch area:

  • installed/folly/lib/libfolly.a: Library

You can also specify a --scratch-path argument to control the location of the scratch directory used for the build. You can find the default scratch install location from logs or with python3 ./build/fbcode_builder/getdeps.py show-inst-dir.

There are also --install-dir and --install-prefix arguments to provide some more fine-grained control of the installation directories. However, given that folly provides no compatibility guarantees between commits we generally recommend building and installing the libraries to a temporary location, and then pointing your project's build at this temporary location, rather than installing folly in the traditional system installation directories. e.g., if you are building with CMake you can use the CMAKE_PREFIX_PATH variable to allow CMake to find folly in this temporary installation directory when building your project.

If you want to invoke cmake again to iterate, there is a helpful run_cmake.py script output in the scratch build directory. You can find the scratch build directory from logs or with python3 ./build/fbcode_builder/getdeps.py show-build-dir.

Run tests

By default getdeps.py will build the tests for folly. To run them:

cd folly
python3 ./build/fbcode_builder/getdeps.py --allow-system-packages test

build.sh/build.bat wrapper

build.sh can be used on Linux and MacOS; on Windows, use the build.bat script instead. It's a wrapper around getdeps.py.

Build with cmake directly

If you don't want to let getdeps invoke cmake for you, note that building the tests is disabled by default as part of the CMake all target. To build the tests, specify -DBUILD_TESTS=ON to CMake at configure time.

NB if you want to invoke cmake again to iterate on a getdeps.py build, there is a helpful run_cmake.py script output in the scratch-path build directory. You can find the scratch build directory from logs or with python3 ./build/fbcode_builder/getdeps.py show-build-dir.

Running tests with ctest also works if you cd to the build dir, e.g. (cd $(python3 ./build/fbcode_builder/getdeps.py show-build-dir) && ctest)

Finding dependencies in non-default locations

If you have boost, gtest, or other dependencies installed in a non-default location, you can use the CMAKE_INCLUDE_PATH and CMAKE_LIBRARY_PATH variables to make CMake also look for header files and libraries in non-standard locations. For example, to also search the directories /alt/include/path1 and /alt/include/path2 for header files and the directories /alt/lib/path1 and /alt/lib/path2 for libraries, you can invoke cmake as follows:

cmake \
  -DCMAKE_INCLUDE_PATH=/alt/include/path1:/alt/include/path2 \
  -DCMAKE_LIBRARY_PATH=/alt/lib/path1:/alt/lib/path2 ...

Ubuntu LTS, CentOS Stream, Fedora

Use the getdeps.py approach above. We test in CI on Ubuntu LTS, and occasionally on other distros.

If you find the set of system packages is not quite right for your chosen distro, you can specify distro version specific overrides in the dependency manifests (e.g. https://github.com/facebook/folly/blob/main/build/fbcode_builder/manifests/boost ). You could probably make it work on most recent Ubuntu/Debian or Fedora/Redhat derived distributions.

At time of writing (Dec 2021) there is a build break on GCC 11.x based systems in lang_badge_test. If you don't need badge functionality you can work around by commenting it out from CMakeLists.txt (unfortunately fbthrift does need it)

Windows (Vcpkg)

Note that many tests are disabled for folly Windows builds; you can see them in the log from the cmake configure step, or by looking for WINDOWS_DISABLED in CMakeLists.txt.

That said, getdeps.py builds work on Windows and are tested in CI.

If you prefer, you can try Vcpkg. folly is available in Vcpkg and releases may be built via vcpkg install folly:x64-windows.

You may also use vcpkg install folly:x64-windows --head to build against main.

macOS

getdeps.py builds work on macOS and are tested in CI; however, if you prefer, you can try one of the macOS package managers.

Homebrew

folly is available as a Formula and releases may be built via brew install folly.

You may also use folly/build/bootstrap-osx-homebrew.sh to build against main:

 ./folly/build/bootstrap-osx-homebrew.sh

This will create a build directory _build in the top-level.

MacPorts

Install the required packages from MacPorts:

 sudo port install \
   boost \
   cmake \
   gflags \
   git \
   google-glog \
   libevent \
   libtool \
   lz4 \
   lzma \
   openssl \
   snappy \
   xz \
   zlib

Download and install double-conversion:

 git clone https://github.com/google/double-conversion.git
 cd double-conversion
 cmake -DBUILD_SHARED_LIBS=ON .
 make
 sudo make install

Download and install folly with the parameters listed below:

 git clone https://github.com/facebook/folly.git
 cd folly
 mkdir _build
 cd _build
 cmake ..
 make
 sudo make install

A flexible, high-performance 3D simulator for Embodied AI research.



Habitat-Sim

A high-performance physics-enabled 3D simulator with support for:

The design philosophy of Habitat is to prioritize simulation speed over the breadth of simulation capabilities. When rendering a scene from the Matterport3D dataset, Habitat-Sim achieves several thousand frames per second (FPS) running single-threaded and reaches over 10,000 FPS multi-process on a single GPU. Habitat-Sim simulates a Fetch robot interacting in ReplicaCAD scenes at over 8,000 steps per second (SPS), where each step involves rendering 1 RGBD observation (128×128 pixels) and rigid-body dynamics for 1/30sec.

Habitat-Sim is typically used with Habitat-Lab, a modular high-level library for end-to-end experiments in embodied AI -- defining embodied AI tasks (e.g. navigation, instruction following, question answering), training agents (via imitation or reinforcement learning, or no learning at all as in classical SensePlanAct pipelines), and benchmarking their performance on the defined tasks using standard metrics.

Questions or Comments? Join the AI Habitat community discussions forum.

Habitat Demo

https://user-images.githubusercontent.com/2941091/126080914-36dc8045-01d4-4a68-8c2e-74d0bca1b9b8.mp4


Table of contents

  1. Citing Habitat
  2. Installation
  3. Testing
  4. Documentation
  5. Datasets
  6. External Contributions
  7. License

Citing Habitat

If you use the Habitat platform in your research, please cite the Habitat 1.0, Habitat 2.0, and Habitat 3.0 papers:

@misc{puig2023habitat3,
  title = {Habitat 3.0: A Co-Habitat for Humans, Avatars and Robots},
  author = {Xavi Puig and Eric Undersander and Andrew Szot and Mikael Dallaire Cote and Ruslan Partsey and Jimmy Yang and Ruta Desai and Alexander William Clegg and Michal Hlavac and Tiffany Min and Theo Gervet and Vladimir Vondrus and Vincent-Pierre Berges and John Turner and Oleksandr Maksymets and Zsolt Kira and Mrinal Kalakrishnan and Jitendra Malik and Devendra Singh Chaplot and Unnat Jain and Dhruv Batra and Akshara Rai and Roozbeh Mottaghi},
  year = {2023},
  archivePrefix = {arXiv},
}

@inproceedings{szot2021habitat,
  title = {Habitat 2.0: Training Home Assistants to Rearrange their Habitat},
  author = {Andrew Szot and Alex Clegg and Eric Undersander and Erik Wijmans and Yili Zhao and John Turner and Noah Maestre and Mustafa Mukadam and Devendra Chaplot and Oleksandr Maksymets and Aaron Gokaslan and Vladimir Vondrus and Sameer Dharur and Franziska Meier and Wojciech Galuba and Angel Chang and Zsolt Kira and Vladlen Koltun and Jitendra Malik and Manolis Savva and Dhruv Batra},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year = {2021}
}

@inproceedings{habitat19iccv,
  title = {Habitat: {A} {P}latform for {E}mbodied {AI} {R}esearch},
  author = {Manolis Savva and Abhishek Kadian and Oleksandr Maksymets and Yili Zhao and Erik Wijmans and Bhavana Jain and Julian Straub and Jia Liu and Vladlen Koltun and Jitendra Malik and Devi Parikh and Dhruv Batra},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year = {2019}
}

Habitat-Sim also builds on work contributed by others. If you use contributed methods/models, please cite their works. See the External Contributions section for a list of what was externally contributed and the corresponding work/citation.

Installation

Habitat-Sim can be installed in 4 ways:

  1. Via Conda - Recommended method for most users. Stable release and nightly builds.
  2. [Experimental] Via PIP - pip install . to compile the latest headless build with Bullet. Read build instructions and common build issues.
  3. Via Docker - Updated approximately once per year for the Habitat Challenge. Read habitat-docker-setup.
  4. Via Source - For active development. Read build instructions and common build issues.

[Recommended] Conda Packages

Habitat is under active development, and we advise users to restrict themselves to stable releases. Starting with v0.1.4, we provide conda packages for each release.

  1. Preparing conda env

    Assuming you have conda installed, let's prepare a conda env:

    # We require python>=3.9 and cmake>=3.10
    conda create -n habitat python=3.9 cmake=3.14.0
    conda activate habitat
  2. conda install habitat-sim

    Pick one of the options below depending on your system/needs:

    • To install on machines with an attached display:

      conda install habitat-sim -c conda-forge -c aihabitat
    • To install on headless machines (i.e. without an attached display, e.g. in a cluster) and machines with multiple GPUs (this parameter relies on EGL and thus does not work on MacOS):

      conda install habitat-sim headless -c conda-forge -c aihabitat
    • [Most common scenario] To install habitat-sim with bullet physics

      conda install habitat-sim withbullet -c conda-forge -c aihabitat
    • Note: Build parameters can be chained together. For instance, to install habitat-sim with physics on headless machines:

      conda install habitat-sim withbullet headless -c conda-forge -c aihabitat

Conda packages for older versions can be installed by explicitly specifying the version, e.g. conda install habitat-sim=0.1.6 -c conda-forge -c aihabitat.

We also provide a nightly conda build for the main branch. However, this should only be used if you need a specific feature not yet in the latest release version. To get the nightly build of the latest main, simply swap -c aihabitat for -c aihabitat-nightly.

Testing

  1. Let's download some 3D assets using our python data download utility:

    • Download (testing) 3D scenes

      python -m habitat_sim.utils.datasets_download --uids habitat_test_scenes --data-path /path/to/data/

      Note that these testing scenes do not provide semantic annotations. If you would like to test the semantic sensors via example.py, please use the data from the Matterport3D dataset (see Datasets).

    • Download example objects

      python -m habitat_sim.utils.datasets_download --uids habitat_example_objects --data-path /path/to/data/
  2. Interactive testing: Use the interactive viewer included with Habitat-Sim in either C++ or python:

    # C++
    # ./build/viewer if compiling locally
    habitat-viewer /path/to/data/scene_datasets/habitat-test-scenes/skokloster-castle.glb

    # Python
    # NOTE: depending on your choice of installation, you may need to add '/path/to/habitat-sim' to your PYTHONPATH.
    # e.g. from 'habitat-sim/' directory run 'export PYTHONPATH=$(pwd)'
    python examples/viewer.py --scene /path/to/data/scene_datasets/habitat-test-scenes/skokloster-castle.glb

    You should be able to control an agent in this test scene. Use W/A/S/D keys to move forward/left/backward/right and arrow keys or mouse (LEFT click) to control gaze direction (look up/down/left/right). Try to find the picture of a woman surrounded by a wreath. Have fun!

  3. Physical interactions: Habitat-sim provides rigid and articulated dynamics simulation via integration with Bullet physics. Try it out now with our interactive viewer functionality in C++ or python.

    First, download our fully interactive ReplicaCAD apartment dataset (140 MB):

    # NOTE: by default, data will be downloaded into habitat-sim/data/. Optionally modify the data path by adding: `--data-path /path/to/data/`
    # with conda install
    python -m habitat_sim.utils.datasets_download --uids replica_cad_dataset
    # with source (from inside habitat_sim/)
    python src_python/habitat_sim/utils/datasets_download.py --uids replica_cad_dataset
    • Alternatively, 105 scene variations with pre-baked lighting are available via --uids replica_cad_baked_lighting (480 MB).

    Then load a ReplicaCAD scene in the viewer application with physics enabled. If you modified the data path above, also modify it in viewer calls below.

    # C++
    # ./build/viewer if compiling locally
    habitat-viewer --enable-physics --dataset data/replica_cad/replicaCAD.scene_dataset_config.json -- apt_1

    # Python
    # NOTE: habitat-sim/ directory must be on your `PYTHONPATH`
    python examples/viewer.py --dataset data/replica_cad/replicaCAD.scene_dataset_config.json --scene apt_1
    • Using scenes with pre-baked lighting instead? Use --dataset data/replica_cad_baked_lighting/replicaCAD_baked.scene_dataset_config.json --scene Baked_sc1_staging_00

    The viewer application outputs the full list of keyboard and mouse interface options to the console at runtime.

    Quickstart Example:

    • WASD to move
    • LEFT click and drag the mouse to look around
    • press SPACE to toggle simulation off/on (default on)
    • press 'm' to switch to "GRAB" mouse mode
    • now LEFT or RIGHT click and drag to move objects or open doors/drawers and release to drop the object
    • with an object gripped, scroll the mouse wheel to:
      • (default): move it closer or farther away
      • (+ALT): rotate object fixed constraint frame (yaw)
      • (+CTRL): rotate object fixed constraint frame (pitch)
      • (+ALT+CTRL): rotate object fixed constraint frame (roll)
  4. Non-interactive testing (e.g. for headless systems): Run the example script:

    python /path/to/habitat-sim/examples/example.py --scene /path/to/data/scene_datasets/habitat-test-scenes/skokloster-castle.glb

    The agent will traverse a particular path and you should see the performance stats at the very end, something like this: 640 x 480, total time: 3.208 sec. FPS: 311.7.

    To reproduce the benchmark table from Habitat ICCV'19 run examples/benchmark.py --scene /path/to/mp3d_example/17DRP5sb8fy/17DRP5sb8fy.glb.

    Additional arguments to example.py are provided to change the sensor configuration, print statistics of the semantic annotations in a scene, compute action-space shortest path trajectories, and set other useful functionality. Refer to the example.py and demo_runner.py source files for an overview.

    Load a specific MP3D or Gibson house: examples/example.py --scene path/to/mp3d/house_id.glb.

    We have also provided an example demo for reference.

    To run a physics example in python (after building with "Physics simulation via Bullet"):

    python examples/example.py --scene /path/to/data/scene_datasets/habitat-test-scenes/skokloster-castle.glb --enable_physics

    Note that in this mode the agent will be frozen and oriented toward the spawned physical objects. Additionally, --save_png can be used to output agent visual observation frames of the physical scene to the current directory.

Common testing issues

  • If you are running on a remote machine and experience display errors when initializing the simulator, e.g.

     X11: The DISPLAY environment variable is missing
     Could not initialize GLFW

    ensure you do not have DISPLAY defined in your environment (run unset DISPLAY to undefine the variable)

  • If you see libGL errors like:

     X11: The DISPLAY environment variable is missing
     Could not initialize GLFW

    chances are your libGL is located at a non-standard location. See e.g. this issue.

Documentation

Browse the online Habitat-Sim documentation.

Check out our ECCV tutorial series for a hands-on quickstart experience.

Can't find the answer to your question? Try asking the developers and community on our Discussions forum.

Datasets

How to use common supported datasets with Habitat-Sim.

External Contributions

  • If you use the noise model from PyRobot, please cite their technical report.

    Specifically, the noise model used for the noisy control functions named pyrobot_* and defined in src_python/habitat_sim/agent/controls/pyrobot_noisy_controls.py

  • If you use the Redwood Depth Noise Model, please cite their paper.

    Specifically, the noise model defined in src_python/habitat_sim/sensors/noise_models/redwood_depth_noise_model.py and src/esp/sensor/RedwoodNoiseModel.*

License

Habitat-Sim is MIT licensed. See the LICENSE for details.

The WebGL demo and demo scripts use: