Tests of Positional Accuracy
Objectives of Lecture:
- Tests for well-defined points
- Examples: Digital Chart of the World, TIGER in Seattle
- French BD-Topo
- Positional Accuracy Handbook
- "Terrain nominale" or how to figure out what to
test
- Linework and Fuzzy features
A Basic Procedure
- Get a source of higher accuracy (or just another independent source)
- High tech option: GPS in the field, surveying records
- Other sources: some other database (smaller area of coverage?)
- Identify "same" points on both (some art to this...)
- Place both sources in same projection, datum, etc.
- Tabulate the differences in X and Y (a spreadsheet will do)
- Report: mean bias (systematic error), standard deviation, RMSE, 95% confidence...
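The tabulation and reporting steps above can be sketched as follows. This is a minimal illustration, not a prescribed implementation: the coordinate pairs are made-up example data, and the function name is mine. Both sources are assumed to be in the same projection and datum, with the "same" points already matched.

```python
import math

# Hypothetical matched points: (x, y) from the dataset under test and from
# the higher-accuracy source, in metres, same projection and datum.
tested    = [(101.2, 200.8), (150.4, 249.1), (199.7, 301.0), (251.1, 349.5)]
reference = [(100.0, 200.0), (150.0, 250.0), (200.0, 300.0), (250.0, 350.0)]

def accuracy_report(tested, reference):
    """Tabulate dX, dY; report mean bias, standard deviation, RMSE per axis."""
    n = len(tested)
    dx = [t[0] - r[0] for t, r in zip(tested, reference)]
    dy = [t[1] - r[1] for t, r in zip(tested, reference)]
    report = {}
    for axis, d in (("x", dx), ("y", dy)):
        bias = sum(d) / n                                   # systematic error
        sd = math.sqrt(sum((v - bias) ** 2 for v in d) / (n - 1))
        rmse = math.sqrt(sum(v * v for v in d) / n)         # includes the bias
        report[axis] = {"bias": bias, "sd": sd, "rmse": rmse}
    return report
```

Note that RMSE folds the bias and the spread together; reporting bias and standard deviation separately (as the notes suggest) shows whether the error is a uniform shift or random scatter.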
Some issues:
- How many points? (20-30 should do; 20 is the minimum for a 95% statement...)
- Distribution (points on the edge get higher weight...)
- some proportion (20%?) in each quadrant
- more uniform than clustered (average spacing 10% of the diagonal?)
- Bias of selection
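The quadrant guideline above is easy to check mechanically. A rough sketch (the function name and the equal-quadrant split of the bounding box are my assumptions, not part of any standard's wording):

```python
def quadrant_shares(points):
    """Share of test points in each quadrant of the data's bounding box.

    A rough check on the guideline that some minimum proportion (~20%)
    of the test points should fall in each quadrant.
    """
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    cx = (min(xs) + max(xs)) / 2          # centre of the bounding box
    cy = (min(ys) + max(ys)) / 2
    counts = {"NE": 0, "NW": 0, "SE": 0, "SW": 0}
    for x, y in points:
        counts[("N" if y >= cy else "S") + ("E" if x >= cx else "W")] += 1
    n = len(points)
    return {k: v / n for k, v in counts.items()}
```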
Geospatial Positioning Accuracy Standards
The rules developed over the years for this kind of testing
(see below) have been incorporated into the National Standard
for Spatial Data Accuracy (NSSDA), adopted by the FGDC in 1998
as Part 3 of its Geospatial Positioning Accuracy Standards.
The other parts deal with Geodetic Control networks (the work
of the Geodetic Control Committee) and with Reporting
Methodology (which applies ONLY to points - still!). Citations
to the NSSDA seem to come from USGS and Minnesota more than
anywhere else. (Is anyone noticing?)
The core idea in Part 1:
Horizontal: The reporting standard in the horizontal
component is the radius of a circle of uncertainty, such that
the true or theoretical location of the point falls within that
circle 95-percent of the time.
Part 3 implements the test and specifies two reports:
Tested __XX.xx__ (meters, feet) horizontal accuracy at
95% confidence level
when this specific product was tested, or the following if the
tests were applied to another product made by the same
procedures:
Compiled to meet __XX.xx__ (meters, feet) horizontal accuracy
at 95% confidence level
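The 95% figure reported in either statement comes from the RMSE via the NSSDA accuracy statistic; in the common case where RMSE_x and RMSE_y are about equal, the NSSDA uses Accuracy_r = 2.4477 * RMSE_x = 1.7308 * RMSE_r. A sketch of that computation (function name is mine):

```python
import math

def nssda_horizontal_accuracy(dx, dy):
    """NSSDA horizontal accuracy at the 95% confidence level.

    dx, dy: coordinate differences (test minus reference), e.g. in metres.
    Uses the NSSDA formula for the case RMSE_x ~= RMSE_y:
        Accuracy_r = 1.7308 * RMSE_r, where RMSE_r = sqrt(RMSE_x^2 + RMSE_y^2)
    """
    n = len(dx)
    rmse_x = math.sqrt(sum(v * v for v in dx) / n)
    rmse_y = math.sqrt(sum(v * v for v in dy) / n)
    rmse_r = math.sqrt(rmse_x ** 2 + rmse_y ** 2)
    return 1.7308 * rmse_r
```

With this value in hand, the "Tested __XX.xx__ (meters, feet) horizontal accuracy at 95% confidence level" statement can be filled in directly.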
Evolution of Standards
- National Map Accuracy Standards 1947:
- 90% of well-defined points within tolerance
- (but no idea how far off the outliers might be)
- ACIC 1962: The Positional Accuracy of Maps
- converts NMAS into a statistical statement; 90% = 1.66 standard
deviations
- (CMAS: Circular Map Accuracy Standard, from the missile targeting literature)
- American Society of Photogrammetry (Committee for Specifications
and Standards, Professional Practice Division). 1985: Accuracy
specification for large-scale line maps. Photogrammetric Engineering
and Remote Sensing, 51(2), 195-199.
- Draft version had bias and precision (mean of error and standard
deviation, not RMS); still had thresholds (based on CMAS version
of NMAS)
- An accuracy standard for large-scale (1:20 000 or more
detailed [to avoid USGS and the 1:24000 series!]) maps. Accuracy
is expressed in terms of the standard error, maximum error, circular
map accuracy and vertical map accuracy. Accuracy testing is designed
to indicate nominal accuracy equal to or better than the allowable
error. Statistical tests are designed to assess both bias and
precision. Map accuracy information is to be provided on the
map sheet.
A source for all of these: they are assembled in Chapter 2 of
the Corps of Engineers manual for topographic mapping.
Terrain nominal:
A reminder from Lecture 3: a map is NOT a mirror of the world,
but a purpose-built abstraction that simplifies and symbolizes
for a purpose...
Multi-layer testing:
- Conflation: how closely do things that should be the same
match?
- Registration: is everything shifted in some uniform direction?
(datum?)
Ill-defined points, linework, etc.
Much GIS data consists of lines or polygons rather than
well-defined points, because the points of curvature are more
arbitrary.
A test point is difficult to compare to linework: is the
comparison made at the nearest point on the line?
Fuzziness: a whole set of concepts about deliberate imprecision...
Techniques:
Visual inspection of two sources (how to report differences?)
Overlay of two sources: report area of slivers, derive average
distance
Point-matching to derive distances (a research problem; the
French use the Hausdorff distance - the distance from a point
to the nearest point on the line may be the best one can do...)
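The point-to-nearest-line idea can be sketched as follows: for each test point take the distance to the nearest point on the reference polyline, then report the worst case over all test points (a one-sided, Hausdorff-style summary). This is an illustrative sketch, not the French procedure itself; function names are mine.

```python
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the nearest point on segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    vx, vy = bx - ax, by - ay
    seg_len2 = vx * vx + vy * vy
    if seg_len2 == 0:                      # degenerate segment: a == b
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to the endpoints.
    t = max(0.0, min(1.0, ((px - ax) * vx + (py - ay) * vy) / seg_len2))
    return math.hypot(px - (ax + t * vx), py - (ay + t * vy))

def max_point_to_line_distance(points, polyline):
    """Worst-case distance from a set of test points to a polyline
    (a directed, Hausdorff-style summary of linework displacement)."""
    segments = list(zip(polyline, polyline[1:]))
    return max(min(point_segment_distance(p, a, b) for a, b in segments)
               for p in points)
```

Note this measure is one-sided: a polyline with extra spurious detail far from every test point would not be penalized, which is one reason line accuracy remains a research problem.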
Version of 6 February 2004