Details
Original language | English |
---|---|
Pages (from–to) | 257-269 |
Number of pages | 13 |
Journal | PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science |
Volume | 88 |
Issue number | 3-4 |
Early online date | 7 July 2020 |
Publication status | Published - Aug. 2020 |
Abstract
We investigate whether conditional generative adversarial networks (C-GANs) are suitable for point cloud rendering. For this purpose, we created a dataset containing approximately 150,000 renderings of point cloud–image pairs. The dataset was recorded using our mobile mapping system, with capture dates spread across one year. Our model learns to predict realistic-looking images from point cloud data alone. We show that this approach can be used to colourize point clouds without using any camera images. Additionally, we show that by parameterizing the recording date, we can even predict realistic-looking views for different seasons from identical input point clouds.
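The date parameterization described above can be sketched as follows. This is a minimal illustration of how a recording date might be fed to a C-GAN generator alongside point cloud renderings; the cyclic day-of-year encoding, the channel layout, and all function names here are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def date_channels(day_of_year, h, w):
    """Encode a recording date as two constant image channels using a
    cyclic sin/cos embedding of the day of year (illustrative assumption)."""
    angle = 2.0 * np.pi * day_of_year / 365.0
    return np.stack([
        np.full((h, w), np.sin(angle)),
        np.full((h, w), np.cos(angle)),
    ])

def conditional_input(point_cloud_rendering, day_of_year):
    """Stack point-cloud rendering channels (e.g. depth + intensity,
    shape (c, h, w)) with date channels to form the conditional
    generator input of shape (c + 2, h, w)."""
    c, h, w = point_cloud_rendering.shape
    return np.concatenate(
        [point_cloud_rendering, date_channels(day_of_year, h, w)], axis=0
    )

# Example: a 2-channel rendering (depth + intensity) conditioned on a summer date.
rendering = np.random.rand(2, 256, 256)
x = conditional_input(rendering, day_of_year=200)
print(x.shape)  # (4, 256, 256)
```

Because the date enters as ordinary input channels, the same trained generator can be queried with different day-of-year values to render the same point cloud under different seasonal appearances.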
ASJC Scopus subject areas
- Social Sciences (all)
- Geography, Planning and Development
- Physics and Astronomy (all)
- Instrumentation
- Earth and Planetary Sciences (all)
- Earth and Planetary Sciences (miscellaneous)
Cite
In: PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science, Vol. 88, No. 3-4, 08.2020, pp. 257-269.
Publication: Contribution to journal › Article › Research › Peer review
TY - JOUR
T1 - Conditional Adversarial Networks for Multimodal Photo-Realistic Point Cloud Rendering
AU - Peters, Torben
AU - Brenner, Claus
N1 - Funding information: Open Access funding provided by Projekt DEAL. This work was funded by the German Research Foundation (DFG) as a part of the Research Training Group GRK2159, ‘Integrity and collaboration in dynamic sensor networks’ (i.c.sens).
PY - 2020/8
Y1 - 2020/8
AB - We investigate whether conditional generative adversarial networks (C-GANs) are suitable for point cloud rendering. For this purpose, we created a dataset containing approximately 150,000 renderings of point cloud–image pairs. The dataset was recorded using our mobile mapping system, with capture dates spread across one year. Our model learns to predict realistic-looking images from point cloud data alone. We show that this approach can be used to colourize point clouds without using any camera images. Additionally, we show that by parameterizing the recording date, we can even predict realistic-looking views for different seasons from identical input point clouds.
KW - Deep learning
KW - GAN
KW - Point cloud
UR - http://www.scopus.com/inward/record.url?scp=85087646238&partnerID=8YFLogxK
U2 - 10.1007/s41064-020-00114-z
DO - 10.1007/s41064-020-00114-z
M3 - Article
AN - SCOPUS:85087646238
VL - 88
SP - 257
EP - 269
JO - PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science
JF - PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science
SN - 2512-2789
IS - 3-4
ER -