Airborne Lidar and photogrammetric accuracy

An insight into terminology and practices

Many surveying professionals – and contract specifications – use the term ‘accuracy’ without fully defining it, according to our technical editor Huibert-Jan Lekkerkerk. Using the ASPRS and IHO specifications as a basis, in this article he dives deeper into the relevant terminology and methods.

Over the years, many articles on airborne Lidar and photogrammetric surveys have been published in GIM International. As technical editor, one standard question I ask the authors is ‘What was the accuracy?’. The responses vary not only in value, but also in how accuracy is quoted. The definition of accuracy is not only relevant to articles but also in contract specifications. This article gives an overview of terminology and methods, using the ASPRS and IHO specifications as a basis.

Many people (and contract specifications) use the term ‘accuracy’ without fully defining it. In statistics, the term is not well defined and can mean different things to different people. Statistical terminology distinguishes three main types of errors: systematic errors, random errors and blunders. Blunders (sometimes also called ‘spikes’) are errors so large and/or obvious that they are usually removed from the dataset before further statistical processing, and are thus not relevant to specifications.

Systematic error or bias

A systematic error, or bias, is an error that follows a certain rule. When the rule is known, the error can be removed. Geodetic parameters are an important source of systematic error in all surveys: incorrect geodetic parameters will shift all surveyed coordinates. The survey may still look internally consistent between objects, but in fact be in completely the wrong place. Once we determine the size (rule) of a systematic error, the results can be corrected.

Many specifications consider the systematic error to be ‘near zero’ after careful calibration and installation. However, no matter how well the installation and calibration were performed, a small residual error usually remains. Such residual errors can be found in small offset errors between, for example, GNSS and sensor, but also in the boresight calibration for Lidar, the inertial measurement unit (IMU) alignment or the camera parameters in photogrammetry.

Figure 1: A-priori horizontal error computation for Lidar. (Image courtesy: ASPRS)

Random error

The random error is usually defined as the error that remains after all blunders and systematic errors have been removed. A random error is mainly influenced by the environment and the instruments. GNSS is a good example: due to satellite movement, atmospheric conditions and receiver electronics, each position will deviate by a certain amount from the true, average value.

In surveying, we often consider the random error to be ‘normally distributed’. This effectively means that the error follows a pattern which only seems random over a short amount of time. If we take enough measurements, we should find that the average of all measurements is equal (or very close) to the true position, provided there are no systematic errors present (and all blunders have been removed). Another important aspect of the normal distribution is that we can predict how far the errors will deviate from this average. This is called the ‘uncertainty’ of the measurement and is generally stated in terms of the standard deviation (or sigma, σ) of the measurements.

In the normal distribution, this uncertainty relates to how many measurements fall within a certain distance of the average. Take for example an RTK dGNSS system with a standard deviation of 10mm + 1ppm. The definition of this random error indicates that there is a fixed error (10mm) and one that is distance-dependent (parts per million [ppm], or mm/km). At 10km from the base station, this computes to 10 + 1 x 10 = 20mm. The normal distribution now tells us that (rounded) 68% of all our measurements will be within one standard deviation, or 20mm, of the average, and that (rounded) 95% will be within two standard deviations, or 40mm. The 95% value is also called the ‘confidence level’, as it indicates how many of our measurements fall within two standard deviations of the average.
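The error budget above can be sketched in a few lines of Python. The 10mm + 1ppm figures follow the worked example in the text; the function name itself is illustrative:

```python
# A minimal sketch of the RTK dGNSS error budget from the text: a fixed
# component of 10mm plus a distance-dependent component of 1ppm (1mm/km).
# The 1-sigma (~68%) and 2-sigma (~95%) multipliers follow from the
# normal distribution.

def rtk_sigma_mm(baseline_km, fixed_mm=10.0, ppm=1.0):
    """One-sigma uncertainty in mm at a given distance from the base station."""
    return fixed_mm + ppm * baseline_km  # 1ppm equals 1mm per km

sigma = rtk_sigma_mm(10.0)  # 10km from the base station
print(sigma)      # 20.0 -> 68% of measurements within 20mm of the average
print(2 * sigma)  # 40.0 -> 95% confidence level (40mm)
```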

Figure 2: Definition of ASPRS accuracy classes (NVA = Non-vegetated, VVA = Vegetated).

Root mean square error

In the real world, it is almost impossible to measure the true average – not only due to small residual errors, but also because the true average (real position) is itself measured and thus not precisely known. Therefore, many specifications use a term like root mean square error (RMSE). This is effectively a combination of the (unknown) systematic error or bias and the (unknown) standard deviation as found in a real survey, when the results are compared to, for example, independent ground control points (GCPs). RMSE is comparable to the standard deviation, and would even be identical if there were no bias in the measurements. However, as both the GCPs and the measurements have their own (residual) errors, the RMSE is generally slightly more pessimistic than the standard deviation itself. The RMSE (after removal of blunders) is usually what is meant by ‘accuracy’ in a specification.
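As a sketch of how RMSE differs from the standard deviation, the following Python fragment compares measured heights against GCP values; all numbers are invented for illustration:

```python
import math

def rmse(measured, reference):
    """Root mean square error of measurements against independent GCP values."""
    residuals = [m - r for m, r in zip(measured, reference)]
    return math.sqrt(sum(v * v for v in residuals) / len(residuals))

# Hypothetical heights (m): the measurements carry a ~4cm bias plus small noise.
gcp = [10.00, 12.00, 11.00, 13.00]
measured = [10.05, 12.03, 11.04, 13.04]
print(round(rmse(measured, gcp), 3))  # 0.041 -> ~41mm RMSE
```

Because the residuals share a common bias, the RMSE (about 41mm) is much larger than the standard deviation of the residuals alone (about 7mm), illustrating why RMSE is the more pessimistic figure when a bias is present.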

The RMSE corresponds to roughly 68% of the measurements, as it is based on the one standard deviation (sigma) level. Vertical accuracy is quoted as a 2RMSE value by the 2014 Positional Accuracy Standards for Digital Geospatial Data of the American Society for Photogrammetry and Remote Sensing (ASPRS), i.e. at a 95% confidence level. For Lidar bathymetry, which follows the standards of the International Hydrographic Organization (IHO) (namely its S-44 Standards for Hydrographic Surveys), the term ‘total vertical uncertainty’ (TVU) is equivalent to the 2RMSE value.

Figure 3: A-priori computation in AMUST software for multibeam echosounder against IHO S44 Special Order.

Positional accuracy

Whereas vertical accuracy is a one-dimensional number, positional accuracy consists of two dimensions: longitude / X / E and latitude / Y / N. An RMSE could be quoted for each dimension separately, but most clients are more interested in ‘how far’ the measured point is from the real coordinate. This distance between the true and measured position is the distance root mean square (DRMS). The DRMS can be computed from the standard deviations in both horizontal directions and indicates a circle within which the real-world position should fall. The confidence level for the DRMS differs from the RMS(E) value: 1DRMS represents a 63-68% confidence level (around 66% on average) and 2DRMS indicates 95-98% confidence. Standards differ in how they approach positional accuracy: ASPRS uses a 2RMSE value for the X and Y directions separately as well as a radial 2RMSEr equivalent to 2DRMS, while the IHO uses the term ‘total horizontal uncertainty’ (THU), which is effectively a 2DRMS value. The confidence levels are not identical to a single RMSE but can be considered similar enough for practical purposes.
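Given one-sigma values in X and Y, the DRMS computation can be sketched as follows; the 3cm and 4cm sigmas are invented for illustration:

```python
import math

def drms(sigma_x, sigma_y):
    """Distance RMS: radius of the circle combining both horizontal sigmas."""
    return math.sqrt(sigma_x ** 2 + sigma_y ** 2)

r = drms(0.03, 0.04)  # hypothetical 1-sigma values in metres
print(round(r, 2))      # 0.05 -> roughly 63-68% of positions inside this circle
print(round(2 * r, 2))  # 0.1  -> 2DRMS, roughly 95-98% confidence
```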

Accuracy in specifications

With accuracy defined, we now need to turn to specifications. As stated, both the IHO and the ASPRS have standards which are applicable to Lidar (both) and photogrammetric (ASPRS) surveys. The IHO works with ‘orders’ of accuracy for safety of navigation surveys and has a selection matrix that can be used to create specifications for all other types of surveys. The ASPRS defines ‘classes’ of accuracy in a similar way but distinguishes between vegetated and non-vegetated land. The latter is of course less of an issue under water.

Both standards focus on the random error component. The IHO states that systematic error should be minimized but does not quote a number. The ASPRS standards advise the user to limit systematic error to 25% of the overall 2RMSE values.
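The ASPRS advice on limiting the bias can be expressed as a simple check; the function name and the numbers in the example are illustrative, not taken from the standard:

```python
def systematic_error_ok(bias, rmse_2sigma):
    """ASPRS advice: keep the systematic error within 25% of the 2RMSE value."""
    return abs(bias) <= 0.25 * rmse_2sigma

print(systematic_error_ok(0.02, 0.10))  # True: 2cm bias within a 10cm 2RMSE class
print(systematic_error_ok(0.04, 0.10))  # False: the bias exceeds the 25% advice
```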

A-priori uncertainty

Ultimately, the accuracy of a survey can only be determined after the survey data has been processed. However, it is unwise to embark on a survey if it can be predicted that the required accuracy cannot be met with the intended survey design and sensors. Testing whether a chosen configuration will meet the requirements, based on the survey design, is called ‘a-priori uncertainty estimation’ or simulation.

The tools to compute this uncertainty are relatively limited. The ASPRS includes a simple mathematical model in its specification, but this is limited to positional accuracy and contains only a few parameters. An internet search turns up some software, albeit aimed more at planning than at a-priori uncertainty estimation. Similarly, for bathymetric surveys there are software packages for a-priori computation for multibeam echosounders, but these do not cover bathymetric Lidar. No a-priori models for photogrammetric surveys were found.

Figure 4: Top: Coastal bathymetry of St. Thomas, US Virgin Islands, mapped using Lidar and presented in false color (purple indicating deep areas, orange indicating shallow). Land areas are shown with satellite imagery. Left: High-altitude topobathymetric Lidar data collected by Woolpert. Right: Illustration of how multisensor data is efficiently collected with real-time quality control. (Image courtesy, respectively: USGS, Woolpert/USACE/JALBTCX, and Teledyne Geospatial)

A-posteriori accuracy

While a-priori estimates are scarce, accuracy determination after the survey is commonplace for Lidar and photogrammetry. All major software vendors include statistical tools which give a variety of statistical parameters including computed accuracy. What is important to realize is that most parameters relate to the so-called ‘internal’ reliability of the results. That is, they describe how measurements are related to other measurements in the same survey. They generally do not represent systematic errors (which are part of the external reliability) very well.

External reliability, including systematic errors, can be tested using independent (extra) test points. These are similar to GCPs but, unlike regular GCPs, should not be part of the original adjustment of the data. They should preferably be measured using an independent and different technique. This also means that they should not receive corrections from the same (RTK) base station but should, for example, be derived from RINEX data processing or land survey techniques. Ideally, some of these test points are supplied by the client as part of the specification, to prevent systematic errors in, for example, a base station setup from propagating into both the control points and the original measurements. The combination of internal and external reliability can demonstrate the RMSE (or THU/TVU) values for the overall survey accuracy.

Conclusion

Accuracy is not very well defined. Using terminology like systematic error and random error or RMS(E) makes it clearer what is meant. A measurement can only be called accurate if the requirements for systematic error and random error have been met. The final results should include tests for internal and external reliability of the data. To prevent survey results that do not meet the specifications, an a-priori uncertainty estimation can be performed.

Figure 5: IHO S44 bathymetric orders, also applicable to Lidar bathymetry.