Development of Naïve Algorithm to Identify Objects on Road and Measuring Distance using RANSAC Algorithm

Authors

  • Shreya Jaiswal, Research Scholar, SHEAT College of Engineering, Varanasi, Uttar Pradesh, India
  • Sonam Singh, Assistant Professor, SHEAT College of Engineering, Varanasi, Uttar Pradesh, India

Keywords:

Advanced Driver Assistance Systems, Autonomous Vehicles, Obstacle Avoidance, Tailgating Detection, Accident Prevention, KITTI, nuScenes, Lyft Level 5

Abstract

Depth or distance estimation approaches that rely on deep learning need a substantial volume of data, and maintaining domain invariance is a hurdle. This study therefore presents a streamlined and efficient alternative: a single-view geometric approach that exploits the geometric characteristics of road lane markers to estimate distances accurately. It integrates seamlessly with the lane and vehicle recognition components of an already established Advanced Driver Assistance System (ADAS). Our technique is novel in two respects: first, it uses cross-ratios of the lane boundaries to estimate the horizon; second, it computes an Inverse Perspective Mapping (IPM) and the camera elevation from a given lane width and the detected horizon. The distance to each vehicle on the road is then determined by back-projecting its image point along a ray that intersects the reconstructed road plane. For evaluation, we used ground-truth data as the reference standard and compared our system on the image datasets against the most advanced deep-learning-based monocular depth prediction methods. The results on three publicly available datasets show that the proposed approach consistently maintains a root mean square error (RMSE) between 6.10 and 7.31. It outperforms the other algorithms on two of the datasets but lags behind one deep learning approach on KITTI.
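To make the geometry concrete, the following minimal Python sketch (not the authors' implementation) illustrates the pipeline under a flat-road, pinhole-camera assumption with negligible roll and small pitch: a RANSAC line fit stands in for the lane-marker handling implied by the title, the camera height is recovered from a nominal 3.5 m lane width and the estimated horizon row, and a vehicle's distance is obtained by back-projecting its ground-contact row onto the road plane. All function names and numerical parameters (focal length, horizon row, pixel measurements) are hypothetical.

import numpy as np


def ransac_line(points, n_iters=200, inlier_tol=2.0, rng=None):
    """Robustly fit a 2-D line a*x + b*y + c = 0 to candidate lane-marker pixels."""
    rng = np.random.default_rng(0) if rng is None else rng
    points = np.asarray(points, dtype=float)
    best_line, best_inliers = None, None
    for _ in range(n_iters):
        i, j = rng.choice(len(points), size=2, replace=False)
        d = points[j] - points[i]
        norm = np.hypot(d[0], d[1])
        if norm < 1e-9:
            continue
        a, b = -d[1] / norm, d[0] / norm              # unit normal of the candidate line
        c = -(a * points[i, 0] + b * points[i, 1])
        dist = np.abs(points @ np.array([a, b]) + c)  # point-to-line distances
        inliers = dist < inlier_tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_line, best_inliers = (a, b, c), inliers
    return best_line, best_inliers


def camera_height_from_lane(lane_width_m, lane_width_px, row, horizon_row, fx, fy):
    """Camera elevation from one lane-width observation at image row `row`."""
    depth = fx * lane_width_m / lane_width_px   # lateral scale: w_px = fx * W / Z
    return depth * (row - horizon_row) / fy     # flat road: Z = fy * h / (v - v_h)


def distance_on_road(bottom_row, horizon_row, cam_height_m, fy):
    """Back-project a vehicle's ground-contact row onto the reconstructed road plane."""
    if bottom_row <= horizon_row:
        raise ValueError("point is above the horizon, not on the road plane")
    return fy * cam_height_m / (bottom_row - horizon_row)


if __name__ == "__main__":
    fx = fy = 720.0       # assumed focal lengths (pixels)
    horizon_row = 375.0   # e.g. estimated from cross-ratios of the lane boundaries

    # Toy lane-marker pixels (u, v) with a few off-line outliers for the RANSAC fit.
    marker_px = np.array([[u, 0.55 * u + 300.0 + (6.0 if u % 40 == 0 else 0.0)]
                          for u in range(200, 520, 8)])
    _, inliers = ransac_line(marker_px)
    print(f"lane-line inliers: {int(inliers.sum())}/{len(marker_px)}")

    # A 3.5 m lane observed 490 px wide at row 600 fixes the camera height.
    h = camera_height_from_lane(3.5, 490.0, 600.0, horizon_row, fx, fy)
    z = distance_on_road(520.0, horizon_row, h, fy)
    print(f"camera height ~ {h:.2f} m, vehicle distance ~ {z:.2f} m")

Under these assumptions, a single lane-width observation fixes the camera height, after which every ground-contact row below the horizon maps directly to a metric distance, which is the back-projection step described in the abstract.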


References

Y. Cao, Z. Wu, and C. Shen. Estimating Depth From Monocular Images as Classification Using Deep Fully Convolutional Residual Networks. IEEE Transactions on Circuits and Systems for Video Technology, 28(11):3174–3182, November 2018.

D. Eigen, C. Puhrsch, and R. Fergus. Depth Map Prediction from a Single Image using a Multi-Scale Deep Network. In Advances in Neural Information Processing Systems (NIPS), 2014.

H. Fu, M. Gong, C. Wang, K. Batmanghelich, and D. Tao. Deep Ordinal Regression Network for Monocular Depth Estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

R. Garg, B. G. Vijay Kumar, and I. D. Reid. Unsupervised CNN for Single View Depth Estimation: Geometry to the Rescue. In European Conference on Computer Vision (ECCV), 2016.

A. Geiger, P. Lenz, C. Stiller, and R. Urtasun. Vision meets robotics: The KITTI dataset. International Journal of Robotics Research, 32:1231–1237, 2013.

C. Godard, O. Mac Aodha, and G. J. Brostow. Unsupervised Monocular Depth Estimation with Left-Right Consistency. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

K. He, X. Zhang, S. Ren, and J. Sun. Deep Residual Learning for Image Recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.

R. Kesten, M. Usman, J. Houston, T. Pandya, K. Nadhamuni, A. Ferreira, M. Yuan, B. Low, A. Jain, P. Ondruska, S. Omari, S. Shah, A. Kulkarni, A. Kazakova, C. Tao, L. Platinsky, W. Jiang, and V. Shet. Lyft Level 5 Dataset. https://level5.lyft.com/dataset/, 2019.

M. Khader and S. Cherian. An Introduction to Automotive LIDAR (Texas Instruments). Technical report, 2018.

H. Khan, A. Rafaqat, A. Hassan, A. Ali, W. Kazmi, and A. Zaheer. Lane detection using lane boundary marker network under road geometry constraints. In IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass, CO, 2020.

J. N. Kundu, P. K. Uppala, A. Pahuja, and R. V. Babu. AdaDepth: Unsupervised Content Congruent Adaptation for Depth Estimation. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.

L. Ladicky, C. Hane, and M. Pollefeys. Learning the Matching Function. CoRR, abs/1502.0, 2015.

Z. Li and N. Snavely. MegaDepth: Learning Single-View Depth Prediction from Internet Photos. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2041–2050, 2018.

J. Pan, M. Hebert, and T. Kanade. Inferring 3D layout of building facades from a single image. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2918–2926, 2015.

X. Pan, J. Shi, P. Luo, X. Wang, and X. Tang. Spatial As Deep: Spatial CNN for Traffic Scene Understanding. In AAAI Conference on Artificial Intelligence, pages 7276–7283, 2018.

S. Ren, K. He, R. Girshick, and J. Sun. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 91–99. Curran Associates, Inc., 2015.

TuSimple. TuSimple Velocity Estimation Challenge. https://github.com/TuSimple/tusimple-benchmark/tre, 2017.

J. Uhrig, N. Schneider, L. Schneider, U. Franke, T. Brox, and A. Geiger. Sparsity Invariant CNNs. In International Conference on 3D Vision (3DV), 2017.

B. Li, C. Shen, Y. Dai, A. van den Hengel, and M. He. Depth and surface normal estimation from monocular images using regression on deep features and hierarchical CRFs. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1119–1127, June 2015.

C. Wang, J. M. Zhu, and S. Lucey. Learning Depth from Monocular Videos using Direct Methods. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.


Published

14-03-2024

Issue

Section

Research Articles

How to Cite

Development of Naïve Algorithm to Identify Objects on Road and Measuring Distance using RANSAC Algorithm. (2024). International Journal of Scientific Research in Science and Technology, 11(2), 956-964. https://ijsrst.com/index.php/home/article/view/IJSRST24112175
