— In this paper we focus on what meaningful 2D perceptual information can be extracted from a 3D LiDAR point cloud. Prior work [1], [2], [3] has demonstrated that the depth, height, and local surface normal values of 3D data are useful features for improving object detection with Deep Neural Networks (DNNs). We therefore propose to organise the LiDAR points as three maps: a dense depth map, a height map, and a surface normal map. Specifically, given a pair consisting of an RGB image and a sparse depth map projected from the LiDAR point cloud, we propose a parameter self-adaptive method to upgrade the sparse depth map to a dense one, which is then passed to a convex optimisation framework for global enhancement. The height map is obtained by reprojecting each pixel of the dense depth map into 3D coordinates and recording its height value; the surface normal map is obtained by a trilateral filter constructed from the depth map and the RGB image. Finally, we validate our framework on both the KITTI tracking dataset and the Midd...
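To make the reprojection step concrete, the sketch below back-projects a dense depth map into 3D camera coordinates with a pinhole model and records per-pixel height; it also shows a simple gradient-based normal estimate from the back-projected points. This is an illustrative sketch only: the intrinsics (`fx`, `fy`, `cx`, `cy`) and camera mounting height `cam_height` are assumed values, and the paper's actual normal map uses a trilateral filter over the depth map and RGB image rather than raw depth gradients.

```python
import numpy as np

def height_map_from_depth(depth, fx, fy, cx, cy, cam_height=1.65):
    """Back-project each pixel of a dense depth map into 3D camera
    coordinates and record its height above the ground plane.
    fx, fy, cx, cy and cam_height are illustrative assumptions,
    not values from the paper."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
    z = depth.astype(float)
    # Pinhole back-projection; y grows downward in image coordinates
    y = (v - cy) * z / fy
    # Height above ground: camera mounting height minus downward offset
    return cam_height - y

def normals_from_depth(depth, fx, fy, cx, cy):
    """Estimate per-pixel surface normals from depth alone via finite
    differences of the back-projected point cloud (a simpler stand-in
    for the trilateral filter described in the text)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
    z = depth.astype(float)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1)          # (h, w, 3) point cloud
    d_du = np.gradient(pts, axis=1)             # tangent along image columns
    d_dv = np.gradient(pts, axis=0)             # tangent along image rows
    n = np.cross(d_du, d_dv)                    # normal = cross of tangents
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-12
    return n
```

For a fronto-parallel plane (constant depth), the estimated normal points straight along the optical axis, which is a useful sanity check for the back-projection geometry.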