Journal of Changsha University of Science & Technology (Natural Science)
Target detection method for decision level fusion of LIDAR and camera
Author:
Affiliation:

(1. School of Traffic and Transportation Engineering, Changsha University of Science & Technology, Changsha 410114, China; 2. Hunan Key Laboratory of Smart Roadway and Cooperative Vehicle-Infrastructure Systems, Changsha University of Science & Technology, Changsha 410114, China; 3. School of Computer & Communication Engineering, Changsha University of Science & Technology, Changsha 410114, China; 4. Dongfeng USharing Technology Co., Ltd., Wuhan 430058, China)

Author biography:

Corresponding author:

LONG Kejun (1974—) (ORCID: 0000-0002-5659-9855), male, professor, mainly engaged in research on transportation planning and management. E-mail: longkejun@csust.edu.cn

CLC number:

U495

Fund projects:

National Natural Science Foundation of China (52172313); Science and Technology Innovation Program of Hunan Province (2020RC4048); National Key Research and Development Program of China (2018YFB1600905-4); Key Research and Development Program in Key Areas of the Hunan Provincial Department of Science and Technology (2019SK2171)



Abstract:

[Purposes] The detection data formats of LIDAR and camera sensors are not unified, their resolutions differ, and data-level and feature-level fusion are computationally complex; therefore, a decision-level target fusion detection method is proposed. [Methods] The installation positions of the LIDAR and the camera are jointly calibrated to transform the detection results of the two sensors into a common coordinate system. The Hungarian algorithm is then used to match the LIDAR point-cloud detection boxes with the camera image detection boxes; a threshold on the overlapping area of the boxes is set, and the position, type, and other attributes of each target are obtained. [Findings] Real-vehicle tests show that selecting different intersection-over-union (IoU) thresholds according to the aspect ratio of the detection boxes improves the recognition accuracy for vehicles and pedestrians by 3.3% and 5.3%, respectively. The proposed fusion method is further verified on the public KITTI dataset: its detection accuracy reaches 75.42%, 69.71%, and 63.71% under three difficulty levels, which is higher than that of the existing commonly used fusion methods. [Conclusions] The threshold on the overlapping area of the detection boxes from the two sensors has a large influence on the decision-level fusion results, and selecting different thresholds according to the aspect ratio of the detection boxes effectively improves the recognition accuracy for vehicles and pedestrians. The decision-level fusion method accurately matches the detections of the LIDAR and the camera and effectively improves target detection accuracy.
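As a reading aid only (not code from the paper), the sketch below illustrates the two computational steps summarized in the abstract: projecting LIDAR detection boxes into the image plane using the jointly calibrated extrinsic and intrinsic parameters, and matching the projected boxes to camera detection boxes with the Hungarian algorithm under an IoU threshold. Python with NumPy and SciPy is assumed; the threshold values and the aspect-ratio rule are hypothetical placeholders, since the abstract does not give the paper's actual values.

# Minimal illustrative sketch (not the authors' code) of decision-level fusion
# of LIDAR and camera detections. Assumes Python with NumPy and SciPy; the IoU
# thresholds and the aspect-ratio rule are hypothetical placeholders.
import numpy as np
from scipy.optimize import linear_sum_assignment

def project_lidar_box(corners_lidar, K, R, t):
    """Project the 8 corners (8x3 array) of a 3D LIDAR box into the image plane
    and return the enclosing 2D box (x1, y1, x2, y2). K is the 3x3 camera
    intrinsic matrix; (R, t) is the LIDAR-to-camera extrinsic transform obtained
    from joint calibration of the two sensors' installation positions."""
    pts_cam = R @ corners_lidar.T + t.reshape(3, 1)   # 3x8 points in camera frame
    pix = K @ pts_cam                                  # 3x8 homogeneous pixel coords
    pix = pix[:2] / pix[2:3]                           # divide by depth (assumes z > 0)
    x1, y1 = pix.min(axis=1)
    x2, y2 = pix.max(axis=1)
    return (x1, y1, x2, y2)

def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def iou_threshold(box):
    """Hypothetical aspect-ratio rule: tall, narrow boxes (pedestrian-like) use a
    looser threshold than wide boxes (vehicle-like). Placeholder values only."""
    w, h = box[2] - box[0], box[3] - box[1]
    return 0.3 if h > 1.5 * w else 0.5

def fuse(lidar_boxes, camera_boxes):
    """Match projected LIDAR boxes to camera boxes with the Hungarian algorithm.
    Returns index pairs (i, j) whose overlap passes the threshold; the fused
    target then takes its position from the LIDAR and its class from the camera."""
    if len(lidar_boxes) == 0 or len(camera_boxes) == 0:
        return []
    cost = np.array([[1.0 - iou(lb, cb) for cb in camera_boxes] for lb in lidar_boxes])
    rows, cols = linear_sum_assignment(cost)           # minimizes total (1 - IoU)
    return [(i, j) for i, j in zip(rows, cols)
            if iou(lidar_boxes[i], camera_boxes[j]) >= iou_threshold(camera_boxes[j])]

Using 1 - IoU as the assignment cost turns the maximum-overlap matching into the minimization problem expected by SciPy's Hungarian solver; assigned pairs whose overlap still falls below the threshold are discarded rather than fused.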

Cite this article:

LONG Kejun, YU Juan, FEI Yi, et al. Target detection method for decision level fusion of LIDAR and camera[J]. Journal of Changsha University of Science & Technology (Natural Science), 2024, 21(1): 133-140.

History:
  • Received: 2022-03-10
  • Published online: 2024-03-18