3D motion estimation based on pitch and azimuth from respective camera and laser rangefinder sensing

Authors
Hoang, Van-Dung; Cáceres Hernández, Danilo; Le, My-Ha; Jo, Kang-Hyun
Format
Article
Status
publishedVersion
Description

This paper proposes a new method for estimating the 3D motion of a vehicle under a car-like structured motion model, using an omnidirectional camera and a laser rangefinder. In recent years, most conventional research on vision-based motion estimation has assumed planar motion in order to reduce the number of required parameters and the computational cost. In real outdoor terrain, however, vehicle motion does not satisfy this assumption. In contrast, the proposed method uses a single corresponding image point together with the motion orientation to estimate the vehicle's motion in 3D. To reduce the number of required parameters and speed up computation, the vehicle is assumed to move under the car-like structured motion model. The system consists of a camera and a laser rangefinder mounted on the vehicle. The laser rangefinder is used to estimate the motion orientation and the absolute translation of the vehicle. A one-point correspondence from the omnidirectional image is then combined with the motion orientation and absolute translation to estimate the two rotation components (yaw and pitch angles) and the three translation components (Tx, Ty, and Tz). Real experiments on sloping terrain demonstrate the accuracy of vehicle localization with the proposed method: the position error at the end of the traveled path is 1.1% for our method versus 5.1% for one-point RANSAC.
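The per-frame motion described in the abstract (a rotation from yaw and pitch angles plus a translation Tx, Ty, Tz, with roll omitted under the car-like motion model) can be chained into a vehicle trajectory. The sketch below is only an illustration of that composition, not the paper's implementation; the axis convention (y up, z forward) and all function names are assumptions.

```python
import numpy as np

def rotation_yaw_pitch(yaw, pitch):
    """Frame-to-frame rotation from yaw (about the vertical y axis) and
    pitch (about the lateral x axis); roll is dropped under the assumed
    car-like structured motion model."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    R_yaw = np.array([[ cy, 0.0,  sy],
                      [0.0, 1.0, 0.0],
                      [-sy, 0.0,  cy]])
    R_pitch = np.array([[1.0, 0.0, 0.0],
                        [0.0,  cp, -sp],
                        [0.0,  sp,  cp]])
    return R_yaw @ R_pitch

def accumulate_pose(R_w, t_w, yaw, pitch, translation):
    """Chain one relative motion (yaw, pitch, [Tx, Ty, Tz]) onto the
    world pose (R_w, t_w) to localize the vehicle along its path."""
    t_w = t_w + R_w @ np.asarray(translation)
    R_w = R_w @ rotation_yaw_pitch(yaw, pitch)
    return R_w, t_w

# Example: three straight steps of 1 m forward with no rotation.
R_w, t_w = np.eye(3), np.zeros(3)
for _ in range(3):
    R_w, t_w = accumulate_pose(R_w, t_w, 0.0, 0.0, [0.0, 0.0, 1.0])
```

Accumulating relative motions this way is how a per-frame estimate turns into the end-of-travel position whose error the abstract reports.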

Publication Year
2018
Language
eng
Topic
Vehicles
Cameras
Three-dimensional displays
Trajectory
Motion estimation
Geometry
Roads
Repository
RI de Documento Digitales de Acceso Abierto de la UTP
Get full text
https://ieeexplore.ieee.org/abstract/document/6696433/authors
http://ridda2.utp.ac.pa/handle/123456789/5078
Rights
embargoedAccess
License