Three colleagues and I have been wrestling for two years with how best to deliver a new version of our GIS textbook. The three previous editions have been successful, selling 80,000 copies and being translated into five languages. The challenge we face is that everything is changing so rapidly that it would be easy to become out of date or even irrelevant. Advancing technology is at the heart of the problem (and the opportunity), but its consequences are manifested in many different ways.
For example, publishers are transitioning to a different publishing model with different staff, using digital versions of books to minimise the second-hand market in printed books. Obtaining explicit copyright permission for images, to avoid legal challenges, is now mandatory – even if the originator has died! Meanwhile, competing online materials (of widely differing quality) are available from many sources, including those created to underpin massive open online courses (MOOCs).
We decided that our response should continue to focus on the long-lasting scientific principles that underpin the use of GI systems. But beyond that continuity, we have had to take account of many other factors. That has led us to replace ‘GIS’ in the title with ‘GISS’ – Geographic Information Science and Systems. The systemic characteristics of GI, and the choice of assumptions plugged into our models and software, matter more than ever. Last year, parts of the UK (and elsewhere) suffered major flooding with catastrophic consequences for families and businesses. The public reaction forced the government to change some policies and provide additional funds for flood assessment and protection. Modelling of likely scenarios using GI was an important input. However, a hugely experienced expert has just published a paper claiming that estimates of economic risk produced using the official model of flood damage are exaggerated by a factor of between four and five. How do we assess the likely quality of such GI-based modelling?