Moving beyond green screens and stationary, expensive, hard-to-use setups
This repository contains VCL's evolving toolset for (multi-) RGB-D sensor 3D capturing, streaming and recording, initially presented in [1]. It is research-oriented but flexible and optimized software that can be, and has been, used in the context of:
- Live Tele-presence [2] in Augmented VR or Mixed/Augmented Reality settings
- Performance Capture [3]
- Free Viewpoint Video (FVV)
- Immersive Applications (i.e. events and/or gaming) [4]
- Motion Capture [5]
The toolset is designed as a distributed system in which a number of processing units each manage and collect data from a single sensor via a headless application. A set of sensors is orchestrated by a UI application that is also the delivery point of the connected sensor streams. Communication is handled by a broker, typically co-hosted with the controlling application, although this is not required.
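Purely for illustration, the broker-mediated orchestration described above follows a publish/subscribe pattern, which can be sketched as a minimal in-process broker (the class, topic names and message fields below are hypothetical and not part of the toolset's actual API):

```python
from collections import defaultdict

class Broker:
    """Minimal in-process pub/sub broker -- an illustrative stand-in for the
    real message broker; all names here are hypothetical."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self.subscribers[topic]:
            callback(message)

# A headless "sensor" unit publishes frames; the UI application subscribes
# and becomes the delivery point of the stream.
broker = Broker()
received = []
broker.subscribe("sensor/0/frames", received.append)  # UI-side collector
broker.publish("sensor/0/frames", {"seq": 1, "depth": b"...", "color": b"..."})
print(received[0]["seq"])  # 1
```

In the real system the broker runs as a separate process on the network, so each headless sensor application and the UI application only need the broker's address, not each other's.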
We currently support only Intel RealSense D415 sensors; support for the Azure Kinect DK is coming soon.
RGB-D Sensor | Compatibility |
---|---|
Intel RealSense D415 | ✔️ |
Microsoft Azure Kinect DK | ⏳ |
- Multi-sensor live streaming and recording (no inherent restriction on the number of sensors apart from the available resources, i.e. system processing power and/or switch bandwidth)
- Multi-sensor spatial alignment (currently supporting only four sensors, via an adaptation of [6])
- Multi-sensor temporal alignment via the LAN-based Precision Time Protocol (PTP -- IEEE 1588-2002)
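For reference, PTP estimates each slave clock's offset from the master via a Sync/Delay_Req timestamp exchange. The standard calculation, which assumes a symmetric network path (as PTP itself does), can be sketched as:

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Standard PTP offset/delay estimate from one Sync/Delay_Req exchange.

    t1: master sends Sync          t2: slave receives Sync
    t3: slave sends Delay_Req      t4: master receives Delay_Req
    Assumes the one-way network delay is the same in both directions.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2  # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2   # mean one-way path delay
    return offset, delay

# Example (times in ms): slave clock runs 5 ms ahead, one-way delay is 2 ms.
offset, delay = ptp_offset_and_delay(t1=100.0, t2=107.0, t3=110.0, t4=107.0)
print(offset, delay)  # 5.0 2.0
```

Once each slave knows its offset, timestamps from all sensors can be mapped onto the master's timeline, which is what makes multi-sensor frame grouping possible over a plain LAN.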
Check our releases; a major release is coming in October 2019.
Please consult the Wiki for instructions on how to assemble, deploy and use the Volumetric Capture system.
If you use the system or find this work useful, please cite:
@inproceedings{sterzentsenko2018low,
title={A low-cost, flexible and portable volumetric capturing system},
author={Sterzentsenko, Vladimiros and Karakottas, Antonis and Papachristou, Alexandros and Zioulis, Nikolaos and Doumanoglou, Alexandros and Zarpalas, Dimitrios and Daras, Petros},
booktitle={2018 14th International Conference on Signal-Image Technology \& Internet-Based Systems (SITIS)},
pages={200--207},
year={2018},
organization={IEEE}
}
We currently ship binaries only for the Windows platform, supporting Windows 10.
[1] Sterzentsenko, V., Karakottas, A., Papachristou, A., Zioulis, N., Doumanoglou, A., Zarpalas, D. and Daras, P., 2018, November. A low-cost, flexible and portable volumetric capturing system. In 2018 14th International Conference on Signal-Image Technology & Internet-Based Systems (SITIS) (pp. 200-207). IEEE.
[2] Alexiadis, D.S., Chatzitofis, A., Zioulis, N., Zoidi, O., Louizis, G., Zarpalas, D. and Daras, P., 2016. An integrated platform for live 3D human reconstruction and motion capturing. IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), 27(4), pp.798-813.
[3] Alexiadis, D.S., Zioulis, N., Zarpalas, D. and Daras, P., 2018. Fast deformable model-based human performance capture and FVV using consumer-grade RGB-D sensors. Pattern Recognition (PR), 79, pp.260-278.
[4] Zioulis, N., Alexiadis, D., Doumanoglou, A., Louizis, G., Apostolakis, K., Zarpalas, D. and Daras, P., 2016, September. 3D tele-immersion platform for interactive immersive experiences between remote users. In 2016 IEEE International Conference on Image Processing (ICIP) (pp. 365-369). IEEE.
[5] Chatzitofis, A., Zarpalas, D., Kollias, S. and Daras, P., 2019. DeepMoCap: Deep Optical Motion Capture Using Multiple Depth Sensors and Retro-Reflectors. Sensors, 19(2), p.282.
[6] Papachristou, A., Zioulis, N., Zarpalas, D., and Daras, P., 2018. Markerless structure-based multi-sensor calibration for free viewpoint video capture, International Conference on Computer Graphics, Visualization and Computer Vision (WSCG).