3D Visualisation of Lidar Data

State of the Art and Proposed System

Over recent years the geo-community has exploited advances in hardware, software and memory to develop tools for visualising remotely sensed data. The 3D nature of Lidar data, together with its capacity for integration with other high-resolution remotely sensed data, makes it ideal for 3D visualisation, particularly in an immersive environment. The authors review the state of the art in (geo-)visualisation techniques and propose a system, effectively an algorithm warehouse, for an immersive visualisation interface for Lidar data.

The search is on in 3D-GIS for an interface allowing 3D queries and geo-processing operations. Attempts have been made to conceptualise the topological requirements of such a GIS for urban areas. Software packages such as Erdas Imagine Virtual GIS and ESRI ArcGIS have already incorporated 2.5D visualisation, while Virtual Reality Modelling Language (VRML)/X3D technology has taken the third dimension to the World Wide Web.

Display Methods
A visualisation interface gains popularity if the user is able to interact with it, change its parameters and obtain various outputs. Geo-visualisation, the visualisation of terrain on the computer, has been used effectively to generate models for development planning, fire propagation, climate, air quality, public safety, communication networks and traffic. Today interactive visualisation tools are being developed to aid decision-makers. Visual display methods may be broadly classified as:
- 2D plan view, represented by toposheets, thematic maps or digital elevation models; depicting the third dimension by height value or class, however, leads to loss of other information
- perspective 3D view, achieved by projecting an object onto a view plane and then mapping that plane onto the display screen (a minimal projection sketch follows below)
- stereoscopic view, based on binocular vision
- virtual reality, allowing a user to interact with a computer-simulated environment.
In what follows we focus on stereoscopic viewing and virtual reality.
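To make the perspective 3D view concrete, the sketch below projects 3D points onto a view plane with a simple pinhole model and then maps that plane onto screen pixels. The focal length, screen size and synthetic cube are illustrative assumptions, not part of any particular visualisation package.

```python
# Minimal sketch of a perspective 3D view: camera-space points are projected
# onto a view plane (pinhole model) and then mapped onto screen coordinates.
# Focal length and screen size below are illustrative values only.
import numpy as np

def perspective_project(points_xyz, focal_length=1.0, screen_size=(800, 600)):
    """Project Nx3 camera-space points onto a 2D screen.

    Assumes the camera looks down the +Z axis; points with z <= 0 lie
    behind the viewer and are dropped.
    """
    pts = np.asarray(points_xyz, dtype=float)
    pts = pts[pts[:, 2] > 0]

    # Perspective division: x' = f*x/z, y' = f*y/z (view-plane coordinates).
    x = focal_length * pts[:, 0] / pts[:, 2]
    y = focal_length * pts[:, 1] / pts[:, 2]

    # Map the view plane onto pixel coordinates, origin at the screen centre.
    w, h = screen_size
    screen_x = w / 2 + x * w / 2
    screen_y = h / 2 - y * h / 2   # flip y so 'up' stays up on screen
    return np.column_stack([screen_x, screen_y])

if __name__ == "__main__":
    cube = np.array([[x, y, z] for x in (-1, 1) for y in (-1, 1) for z in (4, 6)])
    print(perspective_project(cube))
```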

Stereoscopy
Stereoscopic viewing is made possible by an arrangement whereby the right-eye image is presented to the right eye and the left-eye image to the left. The main techniques (see Figure 1) are (1) colour multiplexed, which makes use of anaglyph glasses, (2) polarisation multiplexed, which makes use of orthogonally polarised images, (3) time multiplexed, which makes use of shutter glasses switching at 50-60Hz, (4) time and polarisation multiplexed, a mixture of the two preceding methods, and (5) localisation multiplexed, which uses photographs of the terrain taken at different angles.

Chromostereoscopy is based on the dispersion of light: the eye may perceive objects of different colours, though situated at the same distance, as lying at differing depths. The glasses combine a high-dispersion prism with a low-dispersion prism. Autostereoscopy is based on lenticular imaging: an array of long, narrow lenses built onto the display screen sends the interleaved images in different directions, so that the observer can see a 3D image without glasses or other add-ons. However, display resolution is halved and the image cannot readily be interpreted without the lenticular plate. A recent technique, IRIS-3D (www.iris3d.com), uses a dual-channel projection system enabling each of the two stereo views to be displayed at full resolution; most importantly, cross-talk between the images can be eliminated.
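The colour-multiplexed technique can be illustrated with a short sketch: for red-cyan anaglyph glasses, the red channel is taken from the left-eye view and the green and blue channels from the right-eye view. The channel assignment follows the common red-cyan convention and the input arrays here are synthetic stand-ins; ghosting correction and the other multiplexing methods are outside the scope of this sketch.

```python
# Sketch of the colour-multiplexed (anaglyph) technique: red from the
# left-eye image, green/blue from the right-eye image (red-cyan convention).
import numpy as np

def make_anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """Combine two HxWx3 uint8 stereo views into a red-cyan anaglyph."""
    if left_rgb.shape != right_rgb.shape:
        raise ValueError("left and right images must have the same shape")
    anaglyph = np.empty_like(left_rgb)
    anaglyph[..., 0] = left_rgb[..., 0]     # red channel from the left-eye view
    anaglyph[..., 1:] = right_rgb[..., 1:]  # green and blue from the right-eye view
    return anaglyph

if __name__ == "__main__":
    # Random arrays stand in for the two stereo views in this example.
    left = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    right = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    print(make_anaglyph(left, right).shape)
```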

Virtual Reality
Most virtual-reality environments are based primarily on visual cues, displayed either on a computer screen or through stereoscopic displays, though some include additional cues such as sound. With the advent of gaming engines, virtual-reality programming has become a popular market within the computer industry, and the implementation of gaming engines in geospatial software has to some extent enabled immersive visualisation. Within a few years, monitors with 3D functionality might become consumer goods, set to replace LCD flat panels and CRT screens. SeeReal Technologies of Dresden, Germany, has come up with monitors that allow 3D viewing with the naked eye. Developments in processors, storage capacity, memory and graphics hardware are furthering this capability.

Geo-visualisation
Broadly speaking, the requirements of geo-visualisation are speed, accuracy, interpretability, faithfulness and mensuration. But given the continual development of ever higher-resolution sensors and the resulting volumes of geo-spatial data, is 3D visualisation feasible? Geo-scientific visualisation is now considered to encompass the development of theory, tools and methods for the visualisation of spatial data. The Center for Geoinformation GmbH (CeGi, www.cegi.de) has shown that virtual reality may be applied in many areas (Figure 2). Geo-visualisation may be divided into five classes: static, dynamic, interactive, animated and immersive. Software developers in the field have effectively leveraged scientific visualisation techniques, and landscape visualisation in 2.5D has been attempted in most geo-visualisation software.

Lidar
Three-dimensional immersive visualisation of Lidar data would be helpful not only for checking data quality but also for applications such as atmospheric science, bathymetric-data collection, law enforcement, telecommunications and disaster-management strategies. Though large volumes of data are being produced, fully automatic interpretation and mensuration methods do not seem an imminent possibility; 3D visualisation could instead lead to effective data mining and information extraction within a human-machine interface. The room-sized Cave Automatic Virtual Environment (CAVE) used by Fujisaki et al., in which the user can measure forest stands within the virtual environment, remains the only illustration of immersive visualisation using Lidar data.

Proposed System
There is a need for efficient and accurate processing of Lidar data within an immersive environment. We propose a system which can import and visualise raw Lidar data, use photographs to aid visualisation, create 3D visual models, enable editing and delineation of features, and generate immersive fly-through scenes and animations (Figure 3). The aim is to design a black box that takes Lidar data and photographs as input and outputs a 3D model; the Lidar data and aerial photographs would provide both height and texture information. The model would be tested for visualisation ergonomics using existing and emergent visualisation methods, and the possibility of integrating it into a 3D-GIS environment would finally be explored.

The data would first be procured and understood using commercial visualisation software (Erdas or Microstation), and existing algorithms such as the TIN and draped TIN would be evaluated for visualising Lidar data; a sketch of the TIN step is given below. A new algorithm would then be explored to address deficiencies in existing methods, the aim being a rendering method that enhances speed and improves the interpretability of the data. The end product might be a GIS toolkit as an extension of existing GIS software. How might the system be quantified or benchmarked? This is difficult: one solution might be to quantify the visualisation experience against specific benchmarks.
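The sketch below shows what the TIN evaluation step might look like: a triangulated irregular network is built from the planimetric (x, y) coordinates of a Lidar point cloud and rendered as a 3D surface. The synthetic point cloud stands in for raw Lidar returns; reading real LAS files and draping aerial imagery onto the surface are not shown, and this is not the proposed system itself, only an illustration of the existing TIN approach it would evaluate.

```python
# Minimal TIN sketch: triangulate Lidar points in the horizontal plane
# (2.5D TIN) and render the resulting surface.
import numpy as np
from scipy.spatial import Delaunay
import matplotlib.pyplot as plt

def build_tin(points_xyz: np.ndarray) -> Delaunay:
    """Triangulate Nx3 Lidar points on their (x, y) coordinates."""
    return Delaunay(points_xyz[:, :2])

if __name__ == "__main__":
    # Synthetic 'terrain': random ground points over a smooth height field.
    rng = np.random.default_rng(0)
    xy = rng.uniform(0, 100, size=(2000, 2))
    z = 10 * np.sin(xy[:, 0] / 20) + 5 * np.cos(xy[:, 1] / 15)
    pts = np.column_stack([xy, z])

    tin = build_tin(pts)

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.plot_trisurf(pts[:, 0], pts[:, 1], pts[:, 2],
                    triangles=tin.simplices, cmap="terrain", linewidth=0.1)
    ax.set_title("TIN built from a synthetic Lidar point cloud")
    plt.show()
```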

Concluding Remarks
In the near future it may become possible to take measurements and digitise features within the immersive environment. Spatial planning and decision-support systems (SPDSS), web-based 3D display and Lidargrammetry are waiting in the wings. With the aid of emergent technologies the user will soon be able to visualise terrain immersively, discover patterns and perform data mining, all with the naked eye.

Further Reading
• CeGi, 2003, Virtual Regions along the Rhine and the Ruhr 2006 (in German; original title: Machbarkeitsstudie ‘Virtuelle Regionen an Rhein und Ruhr 2006’). Center for Geoinformation GmbH: feasibility study on behalf of the state government of North Rhine-Westphalia, Dortmund.
• Fujisaki, I., Evans, D. L., Moorhead, R. J., Irby, D. W., Aragh, M. J. M. and Roberts, S. D., 2003, Lidar-based Forest Visualisation: Modelling Forest Stands and User Studies. In: ASPRS Annual Conference, Fairbanks, Alaska.
• Nielsen, E., 2006, Stereoscopic Viewing Technologies: SeeReal Technologies. GIM International, vol. 20, no. 5, May.
• Ostnes, R., Abbott, V. and Lavender, S., 2004, Visualisation Techniques: An Overview, Parts 1 and 2. The Hydrographic Journal.
• Zlatanova, S., 2000, 3D GIS for Urban Development. ITC, The Netherlands. ISBN 90-6164-178-0.
