Details
| Original language | English |
|---|---|
| Pages (from-to) | 257-269 |
| Number of pages | 13 |
| Journal | PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science |
| Volume | 88 |
| Issue number | 3-4 |
| Early online date | 7 Jul 2020 |
| Publication status | Published - Aug 2020 |
Abstract
We investigate whether conditional generative adversarial networks (C-GANs) are suitable for point cloud rendering. For this purpose, we created a dataset containing approximately 150,000 renderings of point cloud–image pairs. The dataset was recorded using our mobile mapping system, with capture dates spanning one year. Our model learns to predict realistic-looking images from point cloud data alone. We show that this approach can be used to colourize point clouds without using any camera images. Additionally, we show that by parameterizing the recording date, we are even able to predict realistic-looking views for different seasons from identical input point clouds.
Keywords
- Deep learning, GAN, Point cloud
ASJC Scopus subject areas
- Social Sciences (all)
- Geography, Planning and Development
- Physics and Astronomy (all)
- Instrumentation
- Earth and Planetary Sciences (all)
- Earth and Planetary Sciences (miscellaneous)
Cite this
In: PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science, Vol. 88, No. 3-4, 08.2020, p. 257-269.
Research output: Contribution to journal › Article › Research › peer review
TY - JOUR
T1 - Conditional Adversarial Networks for Multimodal Photo-Realistic Point Cloud Rendering
AU - Peters, Torben
AU - Brenner, Claus
N1 - Funding information: Open Access funding provided by Projekt DEAL. This work was funded by the German Research Foundation (DFG) as a part of the Research Training Group GRK2159, ‘Integrity and collaboration in dynamic sensor networks’ (i.c.sens).
PY - 2020/8
Y1 - 2020/8
N2 - We investigate whether conditional generative adversarial networks (C-GANs) are suitable for point cloud rendering. For this purpose, we created a dataset containing approximately 150,000 renderings of point cloud–image pairs. The dataset was recorded using our mobile mapping system, with capture dates spanning one year. Our model learns to predict realistic-looking images from point cloud data alone. We show that this approach can be used to colourize point clouds without using any camera images. Additionally, we show that by parameterizing the recording date, we are even able to predict realistic-looking views for different seasons from identical input point clouds.
AB - We investigate whether conditional generative adversarial networks (C-GANs) are suitable for point cloud rendering. For this purpose, we created a dataset containing approximately 150,000 renderings of point cloud–image pairs. The dataset was recorded using our mobile mapping system, with capture dates spanning one year. Our model learns to predict realistic-looking images from point cloud data alone. We show that this approach can be used to colourize point clouds without using any camera images. Additionally, we show that by parameterizing the recording date, we are even able to predict realistic-looking views for different seasons from identical input point clouds.
KW - Deep learning
KW - GAN
KW - Point cloud
UR - http://www.scopus.com/inward/record.url?scp=85087646238&partnerID=8YFLogxK
U2 - 10.1007/s41064-020-00114-z
DO - 10.1007/s41064-020-00114-z
M3 - Article
AN - SCOPUS:85087646238
VL - 88
SP - 257
EP - 269
JO - PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science
JF - PFG - Journal of Photogrammetry, Remote Sensing and Geoinformation Science
SN - 2512-2789
IS - 3-4
ER -