Details
Original language | English |
---|---|
Title of host publication | Computer Vision |
Subtitle | ECCV 2018 Workshops, Proceedings |
Editors | Laura Leal-Taixé, Stefan Roth |
Place of publication | Cham |
Publisher | Springer Verlag |
Pages | 181-196 |
Number of pages | 16 |
Edition | 1st |
ISBN (electronic) | 9783030110093 |
ISBN (print) | 9783030110086 |
Publication status | Published - 23 Jan 2019 |
Event | 15th European Conference on Computer Vision, ECCV 2018 - Munich, Germany; Duration: 8 Sep 2018 → 14 Sep 2018 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 11129 LNCS |
ISSN (print) | 0302-9743 |
ISSN (electronic) | 1611-3349 |
Abstract
Semantic segmentation of fisheye images (e.g., from action cameras or smartphones) requires different training approaches and data than those used for rectilinear images obtained using central projection. The shape of objects is distorted depending on the distance between the principal point and the object position in the image. Therefore, classical semantic segmentation approaches fall short in terms of performance compared to rectilinear data. A potential solution to this problem is the recording and annotation of a new dataset; however, this is expensive and tedious. In this study, an alternative approach that modifies the augmentation stage of deep learning training to re-use rectilinear training data is presented. In this way we obtain a considerably higher semantic segmentation performance on the fisheye images: +18.3% intersection over union (IoU) for action-camera test images, +8.3% IoU for artificially generated fisheye data, and +18.0% IoU for challenging security scenes acquired in bird’s eye view.
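The augmentation idea described in the abstract — warping rectilinear training images (and their label maps) with a fisheye distortion so existing annotations can be reused — can be sketched roughly as follows. This is an illustrative reimplementation, not the authors' code: the equidistant projection model, the function name `fisheye_warp`, and the `focal` parameter are assumptions made for the example.

```python
import numpy as np

def fisheye_warp(image, focal=0.5):
    """Warp a rectilinear image into an approximate equidistant fisheye view.

    Illustrative sketch only (not the paper's implementation): for each
    target pixel, the fisheye radius r_fish is converted to a viewing angle
    theta = r_fish / focal (equidistant model) and mapped back through the
    pinhole model r_rect = focal * tan(theta). Nearest-neighbour sampling is
    used so the identical warp can be applied to integer label maps.
    """
    h, w = image.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.mgrid[0:h, 0:w]
    # normalised coordinates relative to the principal point
    u = (xs - cx) / w
    v = (ys - cy) / h
    r_fish = np.sqrt(u ** 2 + v ** 2)
    theta = np.clip(r_fish / focal, 0.0, np.pi / 2 - 1e-3)
    with np.errstate(invalid="ignore", divide="ignore"):
        # radial scale mapping fisheye radius to rectilinear radius;
        # at the principal point (r_fish == 0) the scale is 1 by definition
        scale = np.where(r_fish > 1e-8, focal * np.tan(theta) / r_fish, 1.0)
    src_x = np.clip(u * scale * w + cx, 0, w - 1).astype(int)
    src_y = np.clip(v * scale * h + cy, 0, h - 1).astype(int)
    return image[src_y, src_x]
```

Applied as an augmentation step, the same mapping is run on the input image and on its annotation mask, which is why nearest-neighbour sampling (rather than interpolation) is chosen here — interpolating class labels would produce invalid intermediate values.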
ASJC Scopus subject areas
- Mathematics (all)
- Theoretical Computer Science
- Computer Science (all)
- General Computer Science
Cite
Computer Vision: ECCV 2018 Workshops, Proceedings. Ed. / Laura Leal-Taixé; Stefan Roth. 1st ed. Cham: Springer Verlag, 2019. pp. 181-196 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 11129 LNCS).
Publication: Chapter in book/report/conference proceedings › Conference contribution › Research › Peer-reviewed
TY - GEN
T1 - Semantic Segmentation of Fisheye Images
AU - Blott, Gregor
AU - Takami, Masato
AU - Heipke, Christian
PY - 2019/1/23
Y1 - 2019/1/23
N2 - Semantic segmentation of fisheye images (e.g., from action-cameras or smartphones) requires different training approaches and data than those of rectilinear images obtained using central projection. The shape of objects is distorted depending on the distance between the principal point and the object position in the image. Therefore, classical semantic segmentation approaches fall short in terms of performance compared to rectilinear data. A potential solution to this problem is the recording and annotation of a new dataset, however this is expensive and tedious. In this study, an alternative approach that modifies the augmentation stage of deep learning training to re-use rectilinear training data is presented. In this way we obtain a considerably higher semantic segmentation performance on the fisheye images: +18.3% intersection over union (IoU) for action-camera test images, +8.3% IoU for artificially generated fisheye data, and +18.0% IoU for challenging security scenes acquired in bird’s eye view.
AB - Semantic segmentation of fisheye images (e.g., from action-cameras or smartphones) requires different training approaches and data than those of rectilinear images obtained using central projection. The shape of objects is distorted depending on the distance between the principal point and the object position in the image. Therefore, classical semantic segmentation approaches fall short in terms of performance compared to rectilinear data. A potential solution to this problem is the recording and annotation of a new dataset, however this is expensive and tedious. In this study, an alternative approach that modifies the augmentation stage of deep learning training to re-use rectilinear training data is presented. In this way we obtain a considerably higher semantic segmentation performance on the fisheye images: +18.3% intersection over union (IoU) for action-camera test images, +8.3% IoU for artificially generated fisheye data, and +18.0% IoU for challenging security scenes acquired in bird’s eye view.
KW - Deep learning
KW - Fisheye images
KW - Semantic segmentation
UR - http://www.scopus.com/inward/record.url?scp=85061724543&partnerID=8YFLogxK
U2 - 10.1007/978-3-030-11009-3_10
DO - 10.1007/978-3-030-11009-3_10
M3 - Conference contribution
AN - SCOPUS:85061724543
SN - 9783030110086
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 181
EP - 196
BT - Computer Vision
A2 - Leal-Taixé, Laura
A2 - Roth, Stefan
PB - Springer Verlag
CY - Cham
T2 - 15th European Conference on Computer Vision, ECCV 2018
Y2 - 8 September 2018 through 14 September 2018
ER -