rs-multicam sample
Overview
The multicam sample demonstrates the ability to use the SDK for streaming and rendering multiple devices simultaneously.
Expected Output
The application opens and renders a mosaic view of all automatically-selected streams provided by the connected devices. (The selection is device-tailored and takes into account USB-type constraints.)
Each tile displays a unique stream produced by a specific camera. The stream name appears at the top left.
In the following snapshot we use five Intel® RealSense™ devices to produce the mosaic: a D435i, a D415, an SR300, and two T265 tracking cameras. These generate:
- Three Depth streams (D435i, D415, SR300)
- Three Color streams (D435i, D415, SR300)
- Two Pose streams (T265x2)
- Four Fisheye streams (T265x2)
- Six IMU streams (D435i, T265x2)
Altogether, the mosaic comprises 18 simultaneous live feeds.
Code Overview
As with any SDK application we include the Intel RealSense Cross Platform API:
#include <librealsense2/rs.hpp> // Include RealSense Cross Platform API
In this example we will also use the auxiliary library example.hpp:
#include "example.hpp" // Include short list of convenience functions for rendering
example.hpp lets us easily open a new window and prepare textures for rendering.
The first object we use is a window, which will be used to display the images from all the cameras.
// Create a simple OpenGL window for rendering:
window app(1280, 960, "CPP Multi-Camera Example");
The window class resides in example.hpp and lets us easily open a new window and prepare textures for rendering.
Next, we define the objects to be used in the example.
rs2::context ctx; // Create librealsense context for managing devices
rs2::colorizer colorizer; // Utility class to convert depth data to RGB
std::vector<rs2::pipeline> pipelines;
The rs2::context encapsulates all of the devices and sensors and provides some additional functionality. We employ the rs2::colorizer to convert depth data to RGB format.
In the example we use multiple rs2::pipeline objects, each controlling the lifetime of a single hardware device.
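One piece of that additional functionality is device-change notification. The sample as shown does not use it, but a minimal sketch of subscribing to hot-plug events through the shared context could look like this (the callback body is a placeholder):
// Get notified when a device is connected or disconnected
ctx.set_devices_changed_callback([](rs2::event_information& info)
{
    // info.get_new_devices() lists devices that were just connected;
    // info.was_removed(dev) reports whether a given device was unplugged
});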
The example's flow starts with listing and activating all the connected Intel® RealSense™ devices:
// Start a streaming pipe per each connected device
for (auto&& dev : ctx.query_devices())
{
rs2::pipeline pipe(ctx);
rs2::config cfg;
cfg.enable_device(dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER));
pipe.start(cfg);
pipelines.emplace_back(pipe);
}
First, we allocate a rs2::pipeline object per recognized device. Note that we share the single rs2::context object between all rs2::pipeline instances.
rs2::pipeline pipe(ctx);
To map a specific device to its newly-allocated pipeline, we define a rs2::config object and assign it the device's serial number.
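For reference, these are the relevant lines from the loop above:
rs2::config cfg;
cfg.enable_device(dev.get_info(RS2_CAMERA_INFO_SERIAL_NUMBER));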
Then we request the rs2::pipeline to start streaming and produce frames:
pipe.start(cfg);
Since we do not specify explicit stream requests, each device is configured internally to run a set of predefined stream profiles recommended for that specific device.
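Had we wanted explicit control over the streams, we could have filled the rs2::config with specific stream requests before starting the pipeline. A sketch under illustrative assumptions (serial_number stands for the string retrieved above, and not every device supports every profile):
// Request specific streams instead of the device-recommended defaults
rs2::config cfg;
cfg.enable_device(serial_number);
cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);  // 640x480 depth at 30 fps
cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_RGB8, 30); // 640x480 color at 30 fps
pipe.start(cfg);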
After adding the devices, we begin the application's main loop:
while (app)
On every application cycle, we traverse the registered devices and retrieve all the available frames:
// Collect the new frames from all the connected devices
std::vector<rs2::frame> new_frames;
for (auto &&pipe : pipelines)
{
rs2::frameset fs;
if (pipe.poll_for_frames(&fs))
{
for (rs2::frame& f : fs)
new_frames.emplace_back(f);
}
}
Each rs2::pipeline produces a synchronized collection of frames for all the streams configured for its allocated device. These are contained in a rs2::frameset object.
The rs2::frameset itself is a wrapper for a composite_frame, which can hold more than a single type of frame.
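Besides plain iteration, rs2::frameset offers typed accessors for the common stream types. A minimal sketch of both access styles, assuming fs holds a frameset as in the loop above:
// Typed access: returns an empty frame if that stream is not present
rs2::depth_frame depth = fs.get_depth_frame();
rs2::video_frame color = fs.get_color_frame();
// Generic access: iterate over every frame the composite holds
for (const rs2::frame& f : fs)
{
    rs2_stream type = f.get_profile().stream_type(); // e.g. RS2_STREAM_DEPTH
}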
To minimize the impact on the UI, we use a non-blocking frame-polling method:
if (pipe.poll_for_frames(&fs))
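The blocking alternative, wait_for_frames, would stall the render loop until the next frameset from that particular pipeline arrives, which is undesirable when multiplexing several devices. For contrast, a sketch of the blocking call:
// Blocks the calling thread until a frameset arrives (or a timeout expires)
rs2::frameset fs = pipe.wait_for_frames();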
In order to simplify the presentation, we split those rs2::frameset containers into separate frames and store them in a standard C++ container for later use:
for (rs2::frame& f : fs)
new_frames.emplace_back(f);
The depth data is delivered as uint16_t values which cannot be rendered directly, so we use the rs2::colorizer to convert the depth representation into a human-readable RGB map:
// Convert the newly-arrived frames to render-friendly format
for (const auto& frame : new_frames)
{
render_frames[frame.get_profile().unique_id()] = colorizer.process(frame);
}
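The render_frames container is assumed here to be a map keyed by each stream profile's unique id, so a newer frame simply replaces the previous one for the same stream. A plausible declaration:
std::map<int, rs2::frame> render_frames; // latest frame per unique stream id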
Finally, we send the collected frames to update the OpenGL mosaic:
app.show(render_frames);