The Challenges of Collaborative Mapping - 17/04/2014
GIM Interviews Dr Peggy Agouris
The growing web phenomenon known as ‘user-generated content’ is impacting the collection of geospatial data in ways never before imagined as a result of volunteered geographic information (VGI). Data is being provided voluntarily by individuals at a prodigious rate, but what are the implications when it comes to standards and accuracy? GIM International spoke to Dr Peggy Agouris, acting dean of the College of Science at George Mason University in the USA, to find out more.
What are your current research interests?
My research focuses on the automation of processes for spatiotemporal information extraction – including digital imagery, change detection and the integration of remote sensing and digital image processing and analysis within geospatial information systems. In general, my interests lie in how to make sense of imagery collected through various media and sources, and how to extract useful information from it. Considering our community’s pedigree, accuracy is a key issue. The other key issue is the automation of processes to extract useful information from images, video and other sensory data, in order to address the challenge of steadily increasing data availability coupled with a shrinking workforce.
What are the latest trends and developments in these areas?
From digital image processing and analysis to remote sensing, spatiotemporal information modelling and management, geospatial information systems and photogrammetry, all of these sub-disciplines of our field have progressed significantly individually, advancing our ability to extract geospatial information from various sources. However, it is arguably the emergence of volunteered geographic information (VGI) which is having the most substantial transformative effect in our field by affecting all these sub-disciplines simultaneously.
As open-source VGI continues to gain popularity, the user community and data contributions are growing too. OpenStreetMap, for example, has become a base layer for several mapping applications. However, because of the lack of cartographic standards, we have to question the accuracy of the database; we should be asking not whether but rather how we will be able to use this vector data for more geopositionally sensitive applications, like GPS navigation, in future.
What has your research into this trend revealed?
In a paper published last year with my colleagues Roberto Canavosio-Zuzelski and Peter Doucette, entitled ‘A Photogrammetric Approach for Assessing Positional Accuracy of OpenStreetMap Roads’, we took a photogrammetric approach to determining the positional accuracy of OSM road features using stereo imagery and a vector adjustment model. The OSM database provides a unique, dynamic environment to use as the test subject for this research, because its underlying purpose of providing open-source mapping by the people, for the people emphasises the importance of knowing how good the data is.
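As an illustrative sketch only – not the vector adjustment model used in the paper – the positional accuracy of OSM road features can be summarised by comparing matched vertices against reference coordinates and computing a root-mean-square error. The coordinates below are hypothetical:

```python
import math

def positional_rmse(osm_points, ref_points):
    """Root-mean-square positional error between matched point pairs,
    in the same units as the input coordinates."""
    assert len(osm_points) == len(ref_points) and osm_points
    sq_errors = [
        (ox - rx) ** 2 + (oy - ry) ** 2
        for (ox, oy), (rx, ry) in zip(osm_points, ref_points)
    ]
    return math.sqrt(sum(sq_errors) / len(sq_errors))

# Hypothetical matched road vertices (metres, local planar coordinates):
# OSM-digitised positions vs. photogrammetrically measured reference positions.
osm = [(100.0, 200.0), (150.0, 250.0), (200.0, 300.0)]
ref = [(102.0, 199.0), (149.0, 252.5), (203.0, 298.0)]

print(f"RMSE: {positional_rmse(osm, ref):.2f} m")  # prints "RMSE: 2.90 m"
```

In practice the hard part is the matching itself – establishing which reference measurement corresponds to which OSM vertex – which is where stereo imagery and an adjustment model come in.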
OSM contributors are mostly voluntary non-professionals who have an interest in mapping a local area they are familiar with. In addition, there are no cartographic or data quality standards in place to ensure that all contributors ‘map’ in a similar fashion or adhere to any specific equipment requirements (e.g. GPS receivers), field collection procedures, image mensuration standards or map accuracy standards.
In the past, mapping information was collected by experts according to specifications and standards. Now, non-expert users are collecting valuable information, but our understanding of the accuracy of this information is limited. Nonetheless, this is valuable information, and our challenge is to figure out how best to integrate expert and non-expert information for the benefit of all users.
Researchers in this field, myself included, are conducting research to see what we can learn and what conclusions we can draw. Integration will not be easy, but it is a most interesting and challenging subject.

Last updated: 20/11/2019