Details
Original language | English |
---|---|
Title of host publication | Multi-Image Analysis - 10th International Workshop on Theoretical Foundations of Computer Vision, Revised Papers |
Editors | Reinhard Klette, Georgy Gimel’farb, Thomas Huang |
Pages | 190-200 |
Number of pages | 11 |
Publication status | Published - 2 May 2001 |
Event | 10th International Workshop on Theoretical Foundations of Computer Vision, 2000 - Dagstuhl Castle, Germany. Duration: 12 Mar 2000 → 17 Mar 2000 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 2032 |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Abstract
The increasing amount of remotely sensed imagery from multiple platforms requires efficient analysis techniques. The leading idea of the presented work is to automate the interpretation of multisensor and multitemporal remote sensing images by the use of common prior knowledge about landscape scenes. In addition, the system can use specific map knowledge from a GIS, information about sensor projections, and temporal changes of scene objects. Prior expert knowledge about the scene content is represented explicitly by a semantic net. A common concept has been developed to distinguish between the semantics of objects and their visual appearance in the different sensors, considering the physical principle of the sensor and the material and surface properties of the objects. A flexible control system is used for the automated analysis, which employs mixtures of bottom-up and top-down strategies for image analysis depending on the respective state of interpretation. The control strategy employs rule-based systems and is independent of the application. The system permits the fusion of several sensors, such as optical, infrared, and SAR images, laser scans, etc., and it can be used for the fusion of images taken at different points in time. Sensor fusion can be achieved on the pixel level, which requires prior rectification of the images; on the feature level, which means that the same object may show up differently in different sensors; and on the object level, which means that different parts of an object can be recognized more accurately in different sensors. Results are shown for the extraction of roads from multisensor images. The approach to multitemporal image analysis is illustrated by the recognition and extraction of an industrial fairground from an industrial area in an urban scene.
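The abstract describes an architecture built from three ingredients: a semantic net that separates the semantics of a scene object from its sensor-specific appearance, a control strategy that mixes top-down hypothesis generation with bottom-up verification, and fusion of evidence from several sensors. The Python sketch below only illustrates that general idea under assumptions of ours; it is not the authors' system, and all names (`Concept`, `interpret`, the example primitives) are hypothetical.

```python
# Minimal sketch: a semantic net whose nodes separate object semantics
# (part-of structure) from sensor-specific appearance, interpreted by a
# top-down expansion / bottom-up verification loop. Hypothetical names;
# not the implementation described in the paper.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A node of the semantic net."""
    name: str
    parts: list[Concept] = field(default_factory=list)        # part-of links
    appearance: dict[str, str] = field(default_factory=dict)  # sensor -> expected primitive

def interpret(concept: Concept, detections: dict[str, set[str]]) -> bool:
    """Top-down: expand a composite concept into part hypotheses.
    Bottom-up: verify a leaf hypothesis against primitives actually
    detected in any available sensor (object-level fusion: each part
    may be confirmed by a different sensor)."""
    if concept.parts:
        return all(interpret(part, detections) for part in concept.parts)
    return any(primitive in detections.get(sensor, set())
               for sensor, primitive in concept.appearance.items())

# Example: a road modelled as a dark ribbon in optical imagery and a
# bright line in SAR; detecting either primitive confirms the hypothesis.
road = Concept("road", appearance={"optical": "dark_ribbon", "sar": "bright_line"})
scene = Concept("road_network", parts=[road])
print(interpret(scene, {"sar": {"bright_line"}}))  # -> True
```

A real system of this kind would additionally encode GIS map knowledge, temporal change models, and rules that choose the analysis strategy according to the current state of interpretation, as the abstract notes.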
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- General Computer Science
Cite this
Liedtke, C. E., & Growe, S. (2001). Knowledge-based concepts for the fusion of multisensor and multitemporal aerial images. In R. Klette, G. Gimel’farb, & T. Huang (Eds.), Multi-Image Analysis - 10th International Workshop on Theoretical Foundations of Computer Vision, Revised Papers (pp. 190-200). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 2032). https://doi.org/10.1007/3-540-45134-x_14
Research output: Chapter in book/report/conference proceeding › Conference contribution › Research › peer review
TY - GEN
T1 - Knowledge-based concepts for the fusion of multisensor and multitemporal aerial images
AU - Liedtke, Claus Eberhard
AU - Growe, Stefan
N1 - Publisher Copyright: © Springer-Verlag Berlin Heidelberg 2001.
PY - 2001/5/2
Y1 - 2001/5/2
AB - The increasing amount of remotely sensed imagery from multiple platforms requires efficient analysis techniques. The leading idea of the presented work is to automate the interpretation of multisensor and multitemporal remote sensing images by the use of common prior knowledge about landscape scenes. In addition, the system can use specific map knowledge from a GIS, information about sensor projections, and temporal changes of scene objects. Prior expert knowledge about the scene content is represented explicitly by a semantic net. A common concept has been developed to distinguish between the semantics of objects and their visual appearance in the different sensors, considering the physical principle of the sensor and the material and surface properties of the objects. A flexible control system is used for the automated analysis, which employs mixtures of bottom-up and top-down strategies for image analysis depending on the respective state of interpretation. The control strategy employs rule-based systems and is independent of the application. The system permits the fusion of several sensors, such as optical, infrared, and SAR images, laser scans, etc., and it can be used for the fusion of images taken at different points in time. Sensor fusion can be achieved on the pixel level, which requires prior rectification of the images; on the feature level, which means that the same object may show up differently in different sensors; and on the object level, which means that different parts of an object can be recognized more accurately in different sensors. Results are shown for the extraction of roads from multisensor images. The approach to multitemporal image analysis is illustrated by the recognition and extraction of an industrial fairground from an industrial area in an urban scene.
UR - http://www.scopus.com/inward/record.url?scp=84957831511&partnerID=8YFLogxK
U2 - 10.1007/3-540-45134-x_14
DO - 10.1007/3-540-45134-x_14
M3 - Conference contribution
AN - SCOPUS:84957831511
SN - 354042122X
SN - 9783540421221
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 190
EP - 200
BT - Multi-Image Analysis - 10th International Workshop on Theoretical Foundations of Computer Vision, Revised Papers
A2 - Klette, Reinhard
A2 - Gimel’farb, Georgy
A2 - Huang, Thomas
T2 - 10th International Workshop on Theoretical Foundations of Computer Vision, 2000
Y2 - 12 March 2000 through 17 March 2000
ER -