Intel® RealSense™ Developer Documentation

The ROS wrapper allows using Intel RealSense Depth Cameras (D400, SR300 and L500 series) and the T265 Tracking Camera with ROS.
To run in a ROS2 environment, please switch to the eloquent branch.

SDK Supported Version

To check which LibRealSense version is supported, please refer to the release notes

ROS Camera nodes Examples

For a list of camera nodes, please refer to the launch files on GitHub

Check also the article Integrating the Intel RealSense D435 with ROS

1-Starting Camera Node

To start the camera node in ROS:

roslaunch realsense2_camera rs_camera.launch

This will stream all camera sensors and publish on the appropriate ROS topics.

Other stream resolutions and frame rates can optionally be provided as parameters to the 'rs_camera.launch' file.
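A possible invocation overriding stream settings (parameter names follow the <stream_type>_width, <stream_type>_height and <stream_type>_fps convention described under Launch parameters below; the exact values shown here are only an illustration):

```shell
# Request depth at 640x480 @ 30fps and color at 1280x720 @ 30fps.
# If the device does not support a requested combination, that stream
# will not be published.
roslaunch realsense2_camera rs_camera.launch \
    depth_width:=640 depth_height:=480 depth_fps:=30 \
    color_width:=1280 color_height:=720 color_fps:=30
```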

An RViz visualization of the coloured 3D point cloud from the depth ROS topic.

2-PointCloud visualization

This example demonstrates how to start the camera node and make it publish point cloud using the pointcloud option.

roslaunch realsense2_camera rs_camera.launch filters:=pointcloud

Then open rviz to watch the pointcloud:

The following example demonstrates using rgbd, which builds an ordered pointcloud from the streams (as opposed to the un-ordered pointcloud published by default). Running it requires installing the rgbd package (as described in the file).

roslaunch realsense2_camera rs_rgbd.launch 

This example opens rviz and shows the camera model with its different coordinate systems together with the pointcloud.

roslaunch realsense2_camera rs_d435_camera_with_model.launch

3-Align Depth

This example shows how to start the camera node and align the depth stream to other available streams, such as color or infrared.

roslaunch realsense2_camera rs_camera.launch align_depth:=true

You can also run the example rs_aligned_depth.launch

As can be seen from the image below, Aligned Topics are now available

4-Multiple Cameras

The following example shows how to start the camera node and stream from two cameras using rs_multiple_devices.launch.


Multiple cameras are currently not supported for the T265.

roslaunch realsense2_camera rs_multiple_devices.launch serial_no_camera1:=<serial number of the first camera> serial_no_camera2:=<serial number of the second camera>

The camera serial numbers should be provided via the 'serial_no_camera1' and 'serial_no_camera2' parameters. One way to get the serial numbers is the rs-enumerate-devices tool.

rs-enumerate-devices | grep Serial

Another way of obtaining the serial number is to connect the camera alone, run

roslaunch realsense2_camera rs_camera.launch

and looking for the serial number in the log printed to screen under "[INFO][...]Device Serial No:".

Another way to use multiple cameras is running each from a different terminal. Make sure you set a different namespace for each camera using the "camera" argument:

roslaunch realsense2_camera rs_camera.launch camera:=cam_1 serial_no:=<serial number of the first camera>
roslaunch realsense2_camera rs_camera.launch camera:=cam_2 serial_no:=<serial number of the second camera>

5-T265 Example

To start the camera node in ROS:

roslaunch realsense2_camera rs_t265.launch

This will stream all camera sensors and publish on the appropriate ROS topics.

Check the T265 topics table for further information.

To visualize the pose output and frames in RViz, start:

roslaunch realsense2_camera demo_t265.launch

An additional example demonstrates 3D reconstruction with the T265:

roslaunch realsense2_camera rs_rtabmap.launch 

6-2D occupancy map

The Occupancy ROS package can be used to generate a 2D occupancy map based on depth images, for example from an Intel(R) RealSense(TM) Depth Camera D435 (or D415), and poses, for example from an Intel(R) RealSense(TM) Tracking Camera T265. Internally, it uses a 3D representation to transform point clouds into a common reference frame, using T265 poses, and to accumulate information over time. The accumulated measurements are mapped to a probability of a cell being free (0) to occupied (100). This output can be used for robot navigation.
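The accumulation step can be sketched in a few lines of Python. This is a simplified illustration of the idea, not the occupancy package's actual implementation; the log-odds increments are assumed values chosen for the example:

```python
import math

# Sketch: accumulate hit/miss observations per 2D grid cell in log-odds,
# then report occupancy as 0 (free) .. 100 (occupied), the same value
# convention as ROS occupancy maps. L_HIT/L_MISS are assumed increments.
L_HIT, L_MISS = 0.85, -0.4

class OccupancyGrid2D:
    def __init__(self):
        self.log_odds = {}   # (ix, iy) -> accumulated log-odds

    def update(self, cell, hit):
        delta = L_HIT if hit else L_MISS
        self.log_odds[cell] = self.log_odds.get(cell, 0.0) + delta

    def probability(self, cell):
        # logistic function maps log-odds back to [0, 1], scaled to 0..100;
        # an unobserved cell stays at log-odds 0, i.e. probability 50.
        p = 1.0 / (1.0 + math.exp(-self.log_odds.get(cell, 0.0)))
        return int(round(100 * p))

grid = OccupancyGrid2D()
for _ in range(5):
    grid.update((3, 4), hit=True)    # repeatedly observed occupied
grid.update((0, 0), hit=False)       # observed free once
```

Repeated consistent observations push a cell's probability toward 0 or 100, which is what makes the accumulated map usable for navigation.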

Please refer to the build instructions at Occupancy


source devel/setup.bash
roslaunch occupancy occupancy_live_rviz.launch

Expected output:

For convenience, we recommend the following mechanical mounting for T265 + D435:
the STL file is located here, together with the corresponding extrinsics.

7-Object Analytics

Object Analytics (OA) is a ROS wrapper for real-time object detection, localization and tracking. These packages aim to provide real-time object analysis over RGB-D camera inputs, enabling ROS developers to easily create advanced robotics features such as intelligent collision avoidance and semantic SLAM. It consumes sensor_msgs::PointCloud2 data delivered by an RGB-D camera and publishes topics with object messages: object analytics in the 3D camera coordinate system.

OA keeps integrating with various "state-of-the-art" algorithms.

For further information please check ros_object_analytics and ros2_object_analytics

8-Open from File

The following example allows using a rosbag file, saved by Intel RealSense Viewer, instead of a camera. It can be used for testing and repetition of the same sequence.

roslaunch realsense2_camera rs_from_file.launch 

Published Topics

The published topics differ according to the device and parameters.
See the table below for Topics supported according to SKUs.
For a full list of topics type 'rostopic list' with your camera connected.

D400 Stereo Depth Cameras

| Topic | Camera Model |
| --- | --- |
| /camera/color/image_raw | Applies to all D400 cameras with RGB, i.e. D415, D435, D435i, D455 |
| /camera/depth/image_rect_raw | All D400 |
| /camera/extrinsics/depth_to_color, /camera/extrinsics/depth_to_infra1, /camera/extrinsics/depth_to_infra2 | All D400, after enabling color/infra1/infra2 |
| /camera/infra1/image_rect_raw (Left IR Imager) | All D400, after enabling infra1 |
| /camera/infra2/image_rect_raw (Right IR Imager) | All D400, after enabling infra2 |
| /camera/depth/color/points | All D400, after enabling pointcloud |
| /camera/gyro/sample (See Note), /camera/accel/sample (See Note) | Applies to all D400 cameras with IMU: D435i, D455 |
| /diagnostics | All D400 |

*Note: /color refers to the stream coming out and /rgb_camera refers to the device itself.
*Note: /depth refers to the stream coming out and /stereo_module refers to the device itself.
*Note: after setting the parameter unite_imu_method, these topics are replaced with /camera/imu. /gyro and /accel refer to the streams coming out of the device and /motion_module to the device itself.

L500 Lidar Depth Cameras

| Topic | Camera Model |
| --- | --- |
| /camera/infra/image_raw (IR Imager) | after enabling infra |
| /camera/gyro/sample (See Note), /camera/accel/sample (See Note) | |

*Note: after setting the parameter unite_imu_method, these topics are replaced with /camera/imu. /gyro and /accel refer to the streams coming out of the device and /motion_module to the device itself.

T265 Tracking Camera

| Topic | Camera Model |
| --- | --- |
| /camera/fisheye1/image_raw, /camera/fisheye2/image_raw | after enabling fisheye1, fisheye2 |
| /camera/gyro/sample (See Note), /camera/accel/sample (See Note) | |

*Note: after setting the parameter unite_imu_method, these topics are replaced with /camera/imu.

Compression packages

The compression packages apply to all camera modules (SKUs).

  • Compressed image topics require the compressed-image-transport package installed, e.g. ros-kinetic-compressed-image-transport.
  • Compressed depth topics require the compressed-depth-image-transport package installed, e.g. ros-kinetic-compressed-depth-image-transport.
  • Theora topics require the theora-image-transport package installed, e.g. ros-kinetic-theora-image-transport.

/camera prefix can be modified

The "/camera" prefix in the topics is the default parameter and can be changed - check tf_prefix parameter in the table below.
rs_multiple_devices.launch is an example for multiple cameras and modifying the camera prefix accordingly.

Launch parameters



serial_no

Attach to the device with the given serial number.

Default: Attach to a randomly available RealSense device.


usb_port_id

Attach to the device with the given USB port ID, e.g. 4-1, 4-2.

Default: Ignore USB port when choosing a device.


device_type

Attach to a device whose name includes the given device_type regular expression pattern.

Default: Ignore device type.
device_type:=d435 will match d435 and d435i.
device_type:=d435(?!i) will match d435 but not d435i.
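Since device_type is a regular expression, the d435 vs d435(?!i) behaviour above can be illustrated in plain Python. This is an illustration of regex semantics only; the case-insensitive search against the device name is an assumption for the example, not the wrapper's exact code:

```python
import re

# Illustration: treat the device_type parameter as a regex pattern and
# search it, case-insensitively, within the reported device name.
def device_type_matches(pattern, device_name):
    return re.search(pattern, device_name, re.IGNORECASE) is not None

print(device_type_matches("d435", "Intel RealSense D435i"))       # True
print(device_type_matches("d435(?!i)", "Intel RealSense D435i"))  # False
```

The negative lookahead (?!i) is what excludes the D435i while still matching the D435.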


rosbag_filename

Publish topics from a rosbag file.


initial_reset

Allows resetting the device before using it; recommended, for example, when a glitch has prevented the device from being properly closed.

If set to true, the device will reset prior to usage.


align_depth

If set to true, additional topics will be published with all images aligned to the depth image.

The topics are of the form /camera/aligned_depth_to_color/image_raw etc.


filters

The options, separated by commas: colorizer, pointcloud (see details in Note-1).

allow_no_texture_points

The depth FOV and the texture FOV are not identical. By default, the pointcloud is limited to the section of depth containing the texture. You can get a full-depth pointcloud, coloring the regions beyond the texture with zeros, by setting allow_no_texture_points to true.


enable_sync

Gathers the closest frames from the different sensors (infrared, color and depth) to be sent with the same timetag.

This happens automatically when filters such as pointcloud are enabled.

<stream_type>_width, <stream_type>_height, <stream_type>_fps

<stream_type> can be any of infra, color, fisheye, depth, gyro, accel, pose.
Sets the required format of the device. If the specified combination of parameters is not available on the device, the stream will not be published.

Setting a value to 0 will choose the first format in the inner list (i.e. consistent between runs but not defined).
Note: for gyro, accel and pose, only the _fps option is meaningful.


enable_<stream_name>

Chooses whether to enable the specified stream. <stream_name> can be any of infra1, infra2, color, depth, fisheye, fisheye1, fisheye2, gyro, accel, pose.

Default is true.


tf_prefix

Allows changing the frames' ID prefix per camera.

By default, all frame IDs have the same prefix: camera_.


base_frame_id

Defines the frame_id that all static transformations refer to.


odom_frame_id

Defines the origin coordinate system in ROS convention (X-Forward, Y-Left, Z-Up). The pose topic reports the pose relative to that system.

The rest of the frame IDs can be found in nodelet.launch.xml.


unite_imu_method

Cameras such as the D435i, D455 and T265 have a built-in IMU producing two separate streams: gyro, which reports angular velocity, and accel, which reports linear acceleration, each with its own frequency (see details in Note-2).

By default, two corresponding topics are available, each with only the relevant fields of the sensor_msgs::Imu message filled out.


clip_distance

Removes from the depth image all values above the given value (meters).

Disable by giving a negative value (default).
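The clipping semantics can be sketched in a few lines of Python. This is an illustration of the behaviour described above, not the wrapper's actual code; clipped pixels are assumed to be zeroed out:

```python
# Sketch of clip_distance behaviour: depth values (in meters) above the
# threshold are removed (set to 0); a negative threshold disables clipping.
def clip_depth(depth_row, clip_distance):
    if clip_distance < 0:        # negative value: clipping disabled
        return list(depth_row)
    return [d if d <= clip_distance else 0.0 for d in depth_row]

clip_depth([0.5, 2.0, 7.5], 4.0)   # the 7.5 m reading is removed
```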

linear_accel_cov, angular_velocity_cov

Sets the variance given to the IMU readings.

For the T265, these values are modified by the internal confidence value.


hold_back_imu_for_frames

Image processing takes time, so there is a gap between the moment an image arrives at the wrapper and the moment it is published to the ROS environment. During this time, IMU messages keep arriving, which may result in an image with an earlier timestamp being published after an IMU message with a later timestamp.

Setting hold_back_imu_for_frames to true holds the IMU messages back while the images are processed and then publishes them all in a burst, thus keeping the order of publication the same as the order of arrival.
Note that in either case, the timestamp in each message's header reflects the time of its origin.
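The hold-back mechanism can be sketched as a small buffering state machine. This is a minimal illustration of the publication-ordering idea, not the wrapper's implementation; the class and method names are invented for the example:

```python
# Sketch: while an image is being processed, IMU messages are buffered;
# once the image is published, the held IMU messages follow in a burst,
# so publication order matches arrival order.
class ImuHoldback:
    def __init__(self):
        self.buffer = []            # IMU messages held back
        self.published = []         # publication log, in order
        self.processing_image = False

    def image_arrived(self):
        self.processing_image = True

    def imu_arrived(self, stamp):
        if self.processing_image:
            self.buffer.append(("imu", stamp))     # held back
        else:
            self.published.append(("imu", stamp))  # published immediately

    def image_done(self, stamp):
        self.published.append(("image", stamp))    # image goes out first
        self.published.extend(self.buffer)         # then the IMU burst
        self.buffer.clear()
        self.processing_image = False

node = ImuHoldback()
node.image_arrived()     # image (stamp 0) arrives, processing starts
node.imu_arrived(1)      # IMU messages arrive during processing...
node.imu_arrived(2)
node.image_done(0)       # ...and are released after the image
```

Even though the IMU messages were ready first, the image (which arrived first) is published first, matching the behaviour described above.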


topic_odom_in

Applies to T265: adds wheel odometry information through this topic.

The code refers only to the twist.linear field in the message.


calib_odom_file

Applies to T265: to include odometry input, it must be given a configuration file.

Explanations can be found here. The calibration is done in the ROS coordinate system.


publish_tf

A Boolean parameter: whether to publish TF at all.

Default is True.


tf_publish_rate

A double parameter: positive values mean dynamic transform publication at the specified rate; all other values mean static transform publication.

Default is 0.


publish_odom_tf

A Boolean parameter: if True, publishes the TF from odom_frame to pose_frame.

Default is True.


(1) See detail description of filters at Post-processing filters

  • colorizer: will color the depth image. An RGB image will be published on the depth topic instead of the 16-bit depth values.
  • pointcloud: will add a pointcloud topic /camera/depth/color/points. The texture of the pointcloud can be modified in rqt_reconfigure (see below) or using the parameters: pointcloud_texture_stream and pointcloud_texture_index. Run 'rqt_reconfigure' to see available values for these parameters.

(2) Setting unite_imu_method creates a new topic, imu, that replaces the default gyro and accel topics. The imu topic is published at the rate of the gyro. All the fields of the Imu message under the imu topic are filled out.

  • linear_interpolation: every gyro message is paired with an accel message interpolated to the gyro's timestamp.
  • copy: every gyro message is paired with the most recent accel message.
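The linear_interpolation option can be sketched in plain Python. This is a simplified illustration of the pairing logic under the assumption that both streams are sorted by timestamp and the gyro timestamps fall inside the accel time range; it is not the wrapper's actual code:

```python
# Sketch of unite_imu_method=linear_interpolation: each gyro sample gets
# an accel value linearly interpolated to the gyro's timestamp, producing
# one combined imu entry per gyro message (published at the gyro rate).
def unite_linear_interpolation(gyro, accel):
    """gyro: [(t, angular_vel)], accel: [(t, linear_accel)], sorted by t."""
    imu = []
    for t, w in gyro:
        # find the pair of accel samples bracketing the gyro timestamp
        for (t0, a0), (t1, a1) in zip(accel, accel[1:]):
            if t0 <= t <= t1:
                frac = (t - t0) / (t1 - t0)
                imu.append((t, w, a0 + frac * (a1 - a0)))
                break
    return imu

# Two gyro samples between two accel samples: the accel values at
# t=1.0 and t=1.5 are interpolated from the bracketing measurements.
imu = unite_linear_interpolation(
    gyro=[(1.0, 0.1), (1.5, 0.2)],
    accel=[(0.0, 0.0), (2.0, 4.0)],
)
```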

Dynamic Reconfigure Parameters

The following command allows changing camera control values using rqt_reconfigure.

rosrun rqt_reconfigure rqt_reconfigure

Installation Guidelines

Please refer to GitHub for installation instructions and ROS distributions
A summary of using the RealSense with ROS can be found on the official ROS RealSense Wiki page
