
Automated Change Detection in Remote Sensing Imagery

Recently, we adapted a machine learning framework to the problem of anomalous change detection [1,2]. Given two images of the same scene, taken at different times and inevitably under different conditions, the goal is to find the interesting changes that have occurred in the scene. Informally, the idea is to "learn" (by looking at the pervasive changes that occur throughout the images) the transformation that takes one image to the other, and then to identify those pixels that are most inconsistent with that transformation. Our approach avoids explicitly identifying this transformation; instead, it recasts the problem as one of binary classification: a "normal" class is given by the image data itself, and a background class is obtained from a particular resampling of that data. Changes that fail to be classified as normal are considered anomalous.
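
To make the recasting concrete, here is a minimal sketch of such a pipeline, assuming co-registered multiband images held as NumPy arrays. The function name, the use of a random forest, and the scikit-learn dependency are illustrative choices, not details taken from [1,2].

 # A minimal sketch (not the authors' code) of the resampling idea described
 # above: actual pixel pairs form the "normal" class, pairs with the pairing
 # broken by a random permutation form the background class, and a binary
 # classifier separates the two.
 import numpy as np
 from sklearn.ensemble import RandomForestClassifier

 def anomalous_change_scores(img_t1, img_t2, n_trees=100, seed=0):
     """Score each pixel by how inconsistent its (t1, t2) spectra are with
     the pervasive transformation implied by the rest of the image pair.

     img_t1, img_t2 : co-registered arrays of shape (rows, cols, bands).
     Returns an array of shape (rows, cols); larger values = more anomalous.
     """
     rows, cols, _ = img_t1.shape
     x = img_t1.reshape(rows * cols, -1)   # per-pixel features at time 1
     y = img_t2.reshape(rows * cols, -1)   # per-pixel features at time 2

     # "Normal" class: the actual pixel pairs, which embody the pervasive
     # (uninteresting) changes between the two acquisitions.
     normal = np.hstack([x, y])

     # Background class: resample one image so the pairing is broken; the
     # classifier then learns what distinguishes genuinely paired pixels
     # from independently drawn ones.
     rng = np.random.default_rng(seed)
     background = np.hstack([x, y[rng.permutation(rows * cols)]])

     data = np.vstack([normal, background])
     labels = np.concatenate([np.ones(len(normal)), np.zeros(len(background))])

     clf = RandomForestClassifier(n_estimators=n_trees, oob_score=True,
                                  random_state=seed)
     clf.fit(data, labels)

     # Score the actual pairs with out-of-bag probabilities, so each pair is
     # judged by trees that did not see it during training. Pairs that look
     # more like the background class are flagged as anomalous changes.
     p_normal = clf.oob_decision_function_[:len(normal), 1]
     p_normal = np.nan_to_num(p_normal, nan=0.5)
     return (1.0 - p_normal).reshape(rows, cols)

In this sketch the anomalousness map can simply be thresholded to produce a change mask; the particular classifier and resampling scheme are the places where the framework of [1,2] makes more careful choices than this illustration does.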

In general, finding what is "interesting" in an image demands, or at least seems to demand, the ability to do some kind of high-level perception. But finding interesting changes in pairs of images is potentially a far more tractable problem, one that can be usefully addressed with low-level analysis. You might say that this provides a bridge over the semantic gap. Or you might say that the change detection problem is so far upstream of the real image understanding problem that the gap has narrowed to something you can hop over without even getting your feet wet.