3D LiDAR and Stereo Fusion using Stereo Matching Network with Conditional Cost Volume Normalization

Disclaimer: The code links provided for this paper are external links. Science Nest takes no responsibility for the accuracy, legality, or content of these links. By downloading the code, you agree to comply with the terms of use set out by the code's author(s).

Please contact us in case of a broken link.

Authors Hou-Ning Hu, Chieh Hubert Lin, Min Sun, Wei-Chen Chiu, Tsun-Hsuan Wang, Yi-Hsuan Tsai
Journal/Conference Name IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Paper Category
Paper Abstract The complementary characteristics of active and passive depth sensing techniques motivate the fusion of the LiDAR sensor and stereo camera for improved depth perception. Instead of directly fusing estimated depths across LiDAR and stereo modalities, we take advantage of the stereo matching network with two enhanced techniques, Input Fusion and Conditional Cost Volume Normalization (CCVNorm), on the LiDAR information. The proposed framework is generic and closely integrated with the cost volume component that is commonly utilized in stereo matching neural networks. We experimentally verify the efficacy and robustness of our method on the KITTI Stereo and Depth Completion datasets, obtaining favorable performance against various fusion strategies. Moreover, we demonstrate that, with a hierarchical extension of CCVNorm, the proposed method brings only slight overhead to the stereo matching network in terms of computation time and model size. Project page: https://zswang666.github.io/Stereo-LiDAR-CCVNorm-Project-Page/
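
To make the conditional cost volume normalization idea concrete, below is a minimal, hypothetical sketch in Python (PyTorch) of how per-pixel scale and shift parameters could be selected from discretized LiDAR disparities and applied to a normalized stereo cost volume. The module name, tensor shapes, and the fallback handling for pixels without LiDAR measurements are illustrative assumptions, not the authors' exact implementation.

# Hypothetical sketch of CCVNorm-style conditional normalization of a stereo cost
# volume, assuming a PyTorch setting with a cost volume of shape (N, C, D, H, W)
# and a sparse LiDAR disparity map of shape (N, H, W).
import torch
import torch.nn as nn


class ConditionalCostVolumeNorm(nn.Module):
    """Normalize a 4D cost volume and modulate it with per-pixel scale/shift
    vectors chosen by the (discretized) LiDAR disparity at that pixel."""

    def __init__(self, num_features, max_disp):
        super().__init__()
        # Plain 3D batch norm without affine parameters; the affine part is conditional.
        self.bn = nn.BatchNorm3d(num_features, affine=False)
        # One (gamma, beta) pair per discrete disparity value; index max_disp is
        # reserved for pixels with no LiDAR measurement (unconditional fallback).
        self.gamma = nn.Embedding(max_disp + 1, num_features)
        self.beta = nn.Embedding(max_disp + 1, num_features)
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)
        self.max_disp = max_disp

    def forward(self, cost_volume, lidar_disp):
        # cost_volume: (N, C, D, H, W); lidar_disp: (N, H, W) sparse disparities,
        # zero where no LiDAR point is available.
        normalized = self.bn(cost_volume)
        disp_idx = lidar_disp.round().long().clamp(0, self.max_disp - 1)
        disp_idx = torch.where(lidar_disp > 0, disp_idx,
                               torch.full_like(disp_idx, self.max_disp))
        gamma = self.gamma(disp_idx).permute(0, 3, 1, 2).unsqueeze(2)  # (N, C, 1, H, W)
        beta = self.beta(disp_idx).permute(0, 3, 1, 2).unsqueeze(2)
        return gamma * normalized + beta


if __name__ == "__main__":
    norm = ConditionalCostVolumeNorm(num_features=32, max_disp=192)
    cost = torch.randn(1, 32, 48, 64, 128)   # toy cost volume
    lidar = torch.zeros(1, 64, 128)
    lidar[:, ::8, ::8] = 30.0                # sparse fake LiDAR disparities
    out = norm(cost, lidar)
    print(out.shape)                         # torch.Size([1, 32, 48, 64, 128])

The key design point the abstract highlights is that the conditioning attaches directly to the cost volume stage of a stereo matching network, so the LiDAR signal influences matching rather than being fused with a separately estimated depth map.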
Date of publication 2019
Code Programming Language Python
Comment
