Capturing the Environment with a Lidar-equipped Helmet

A New Mobile Mapping Approach

The Dynamic Mapping Group at LIESMARS, Wuhan University in China, has designed and implemented a compact wearable mobile mapping system called the WHU-Helmet to explore the next generation of mobile laser scanning.

High-precision point clouds are widely used for applications such as highway construction, indoor mapping, forest inventory and powerline corridor mapping. Mobile laser scanning is a leading technique for collecting such point clouds. Over the past few decades, airborne/UAV-based, car-borne and handheld or backpack systems have been increasingly used for a variety of purposes. With the development of unmanned aerial vehicles (UAVs or ‘drones’) and robotics, lightweight, low-cost and flexible mobile laser scanning systems are moving into the mainstream. However, UAV- and robot-based mobile laser scanning systems have limited capabilities in difficult environments such as GNSS-denied areas and construction sites. Wearable mobile laser scanning systems combine low cost, miniaturization, flexibility and integrated wireless communication (e.g. 5G), which is why they represent the next generation of mobile laser scanning. The Dynamic Mapping Group at LIESMARS, Wuhan University in China, led by Prof Bisheng Yang, has designed and implemented the WHU-Helmet – a compact wearable mobile mapping system – to explore this next generation of mobile laser scanning.

Figure 1: Data capture using the WHU-Helmet.

WHU-Helmet

The WHU-Helmet integrates a Lidar sensor, a monocular camera, an IMU, an optional GNSS receiver and a high-performance edge computing unit (see Figure 2). The specifications of the WHU-Helmet are listed in Table 1.

First, the point cloud and imagery are fused at the geometric and semantic levels (Li et al., 2019; Yang and Chen, 2015). Second, multi-sensor fusion simultaneous localization and mapping (SLAM) technology (Cadena et al., 2016; Wu et al., 2020) is applied to calculate the position and orientation of the system in real time and to generate a high-precision three-dimensional point cloud. Lastly, thanks to the integration of 5G, the point clouds are transmitted to the control centre in real time. The key techniques of the WHU-Helmet are briefly described below.
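To make this three-stage workflow concrete, the following minimal sketch mimics the per-scan data flow in Python/NumPy. Every function name, data value and computation here is a hypothetical placeholder for illustration only, not the WHU-Helmet software.

```python
import numpy as np

# Illustrative stand-ins for the three stages; all names and logic
# are hypothetical placeholders, not the WHU-Helmet API.
def fuse_point_cloud_and_image(scan, image):
    # Stage 1: enrich each Lidar point with image-derived attributes
    # (here simply the mean grey value, as a stand-in for real fusion).
    grey = np.full((len(scan), 1), image.mean())
    return np.hstack([scan, grey])

def slam_update(pose, fused_scan):
    # Stage 2: a real SLAM back end would estimate the pose from
    # Lidar/IMU/camera data; here a given pose georeferences the scan.
    pts = fused_scan[:, :3] @ pose[:3, :3].T + pose[:3, 3]
    return pose, np.hstack([pts, fused_scan[:, 3:]])

def transmit(points):
    # Stage 3: stand-in for the real-time 5G uplink to the control centre.
    print(f"uplinked {len(points)} points")

pose = np.eye(4)                       # body-to-world pose (placeholder)
scan = np.random.rand(1000, 3) * 10.0  # dummy Lidar scan (N x 3, metres)
image = np.random.rand(480, 640)       # dummy camera frame
pose, cloud = slam_update(pose, fuse_point_cloud_and_image(scan, image))
transmit(cloud)
```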

Table 1: WHU-Helmet specifications.

Multi-sensor Self-calibration

Extrinsic calibration of the Lidar, camera and IMU determines the transformation matrices between the sensor coordinate systems, which is the basis of point cloud–image fusion and multi-sensor SLAM. A flexible and accurate self-calibration method has been developed for this purpose (Li et al., 2020).
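As a minimal illustration of what such a calibration yields (not the authors' code; all numeric values below are placeholders), the sketch shows how a 4x4 homogeneous extrinsic transform moves Lidar points into the camera frame, and how per-pair extrinsics compose across the sensor chain:

```python
import numpy as np

def make_transform(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Placeholder extrinsics (in practice these come from self-calibration):
# T_cam_lidar maps points from the Lidar frame into the camera frame.
T_cam_lidar = make_transform(np.eye(3), np.array([0.05, -0.02, 0.10]))
# T_lidar_imu maps points from the IMU frame into the Lidar frame.
T_lidar_imu = make_transform(np.eye(3), np.array([0.00, 0.03, -0.08]))

# Extrinsics compose by matrix multiplication: IMU -> Lidar -> camera.
T_cam_imu = T_cam_lidar @ T_lidar_imu

# Apply an extrinsic to an N x 3 block of Lidar points (homogeneous form).
points_lidar = np.array([[1.0, 0.2, 0.5],
                         [2.4, -0.7, 1.1]])
points_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
points_cam = (T_cam_lidar @ points_h.T).T[:, :3]
print(points_cam)
```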

Point Cloud/Image Deep Fusion

Fusion of the point cloud and imagery is the basis of scene understanding and of multi-sensor fusion-based SLAM. In the WHU-Helmet, this fusion is implemented deeply at the semantic level.
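One widely used way to realize semantic-level fusion (assumed here purely for illustration; the article does not detail the WHU-Helmet's method) is to project each Lidar point into a semantically segmented camera image and inherit the label of the pixel it lands on. The sketch below assumes a pinhole camera with intrinsic matrix K, points already expressed in the camera frame via the extrinsics above, and a dummy label map:

```python
import numpy as np

def label_points(points_cam, K, label_map):
    """Attach a per-pixel semantic label to each 3D point in the camera frame.

    points_cam: (N, 3) points already transformed into the camera frame.
    K: (3, 3) pinhole intrinsic matrix.
    label_map: (H, W) integer semantic segmentation of the image.
    """
    H, W = label_map.shape
    valid = points_cam[:, 2] > 0            # keep points in front of the camera
    uv = (K @ points_cam[valid].T).T
    uv = uv[:, :2] / uv[:, 2:3]             # perspective division
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    inside = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    labels = np.full(len(points_cam), -1)   # -1 = no label (outside image)
    idx = np.flatnonzero(valid)[inside]
    labels[idx] = label_map[v[inside], u[inside]]
    return labels

# Dummy data: intrinsics, a 480 x 640 label image and two points.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
label_map = np.zeros((480, 640), dtype=int)
label_map[200:280, 280:360] = 3             # e.g. class 3 = 'tree trunk'
pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.5, 4.0]])
print(label_points(pts, K, label_map))      # -> [3 0]
```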

Multi-sensor Fusion-based SLAM

SLAM based on the fusion of multiple sensors (e.g. Lidar, camera and IMU) can overcome the failure of any single sensor in complex environments, such as rapidly changing lighting, fast motion, poor texture or geometric degeneration of the scene. This fusion is what enables a mobile mapping system to obtain high-precision three-dimensional point clouds in challenging environments over long-term or repeated operation.
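To give a flavour of the role the IMU plays in such a fusion, the sketch below implements textbook IMU dead-reckoning between two Lidar scans; the predicted pose would serve as the motion prior that initializes Lidar scan matching when vision or geometry alone degrades. This is generic kinematic propagation with biases assumed already corrected, not the WHU-Helmet's SLAM pipeline:

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def exp_so3(phi):
    """Rodrigues' formula: rotation vector -> rotation matrix."""
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        return np.eye(3)
    axis = phi / angle
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def propagate_imu(R, p, v, gyro, accel, dt):
    """One step of IMU dead-reckoning in the world frame.

    R: (3, 3) body-to-world rotation; p, v: world-frame position and velocity.
    gyro, accel: bias-corrected body-frame angular rate and acceleration.
    """
    a_world = R @ accel + GRAVITY            # world-frame acceleration
    p = p + v * dt + 0.5 * a_world * dt**2
    v = v + a_world * dt
    R = R @ exp_so3(gyro * dt)               # integrate the angular increment
    return R, p, v

# Predict the pose at the next Lidar scan from 200 Hz IMU samples, then
# hand (R, p) to scan matching as its initial guess.
R, p, v = np.eye(3), np.zeros(3), np.zeros(3)
for _ in range(20):                          # 0.1 s between scans at 200 Hz
    gyro = np.array([0.0, 0.0, 0.3])         # placeholder measurements
    accel = np.array([0.1, 0.0, 9.81])
    R, p, v = propagate_imu(R, p, v, gyro, accel, dt=1 / 200)
print(p, v)
```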

Figure 2: Configuration of the WHU-Helmet system.

Applications

The centimetre-level point clouds captured by the WHU-Helmet, shown in Figures 3-6, demonstrate its potential for numerous applications such as forest inventory, building information modelling (BIM), tunnel engineering and heritage documentation.

Figure 3: Forest inventory point cloud imagery.

Figure 4: WHU-Helmet-derived point cloud for BIM.

Figure 5: Lidar imagery of an underground tunnel.

Figure 6: Point cloud imagery for heritage documentation.

Further Reading

Cadena, C., Carlone, L., Carrillo, H., Latif, Y., Scaramuzza, D., Neira, J., Reid, I., Leonard, J.J., 2016. Past, present, and future of simultaneous localization and mapping: Toward the robust-perception age. IEEE Transactions on Robotics 32, 1309-1332.

Li, J., Yang, B., Chen, C., Habib, A., 2019. NRLI-UAV: Non-rigid registration of sequential raw laser scans and images for low-cost UAV Lidar point cloud quality improvement. ISPRS Journal of Photogrammetry and Remote Sensing 158, 123-145.

Li, J., Yang, B., Chen, C., Wu, W., Zhang, L., 2020. Aerial-triangulation aided boresight calibration for a low-cost UAV-Lidar system. ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences 5.

Wu, W., Chen, C., Li, J., Cong, Y., Yang, B., 2020. Segment-Based Lidar Odometry for Less Structured Outdoor Scenes. The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 43, 535-540.

Yang, B., Chen, C., 2015. Automatic registration of UAV-borne sequent images and Lidar data. ISPRS Journal of Photogrammetry and Remote Sensing 101, 262-274.

About the Authors

The authors Bisheng Yang, Jianping Li and Weitong Wu are affiliated with the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS) at Wuhan University in Wuhan, China.
