<!doctype html>
<html lang="en">
<head>
<!-- Required meta tags -->
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1, shrink-to-fit=no">
<!-- Bootstrap CSS -->
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/css/bootstrap.min.css"
integrity="sha384-Gn5384xqQ1aoWXA+058RXPxPg6fy4IWvTNh0E263XmFcJlSAwiGgFAW/dAiS6JXm" crossorigin="anonymous">
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.2.0/css/all.min.css" integrity="sha512-xh6O/CkQoPOWDdYTDqeRdPCVd1SpvCA9XXcUnZS2FmJNp1coAFzvtCN9BmamE+4aHK8yyUHUSCcJHgXloTyT2A==" crossorigin="anonymous" referrerpolicy="no-referrer" />
<title>Michael Niemeyer</title>
<link rel="icon" type="image/x-icon" href="assets/favicon.ico">
</head>
<body>
<div class="container">
<div class="row">
<div class="col-md-1"></div>
<div class="col-md-10">
<div class="row" style="margin-top: 3em;">
<div class="col-sm-12" style="margin-bottom: 1em;">
<h3 class="display-4" style="text-align: center;"><span style="font-weight: bold;">Michael</span> Niemeyer</h3>
</div>
<br>
<div class="col-md-10" style="">
<p>
I am a senior research scientist at Google working on 3D computer vision and generative modeling.
Prior to joining Google, I was a PhD student at the <a href="https://uni-tuebingen.de/en/fakultaeten/mathematisch-naturwissenschaftliche-fakultaet/fachbereiche/informatik/lehrstuehle/autonomous-vision/home/" target="_blank">Max Planck Institute for Intelligent Systems</a> supervised by <a href="https://www.cvlibs.net/" target="_blank">Andreas Geiger</a>.
As an undergraduate, I studied mathematics at the <a href="http://www.mi.uni-koeln.de/main/index.en.php" target="_blank">University of Cologne (Germany)</a>, and I received my Master's degree in computer science from the
<a href="https://www.st-andrews.ac.uk/computer-science/" target="_blank">University of St Andrews (UK)</a>.
</p>
<p>For any inquiries, feel free to reach out via email!</p>
<p>
<a href="https://m-niemeyer.github.io/assets/other/bio.txt" target="_blank" style="margin-right: 5px"><i class="fa-solid fa-graduation-cap"></i> Bio</a>
<a href="https://m-niemeyer.github.io/assets/pdf/CV_Niemeyer_Michael.pdf" target="_blank" style="margin-right: 5px"><i class="fa fa-address-card fa-lg"></i> CV</a>
<a href="mailto:[email protected]" style="margin-right: 5px"><i class="far fa-envelope-open fa-lg"></i> Mail</a>
<a href="https://twitter.com/Mi_Niemeyer" target="_blank" style="margin-right: 5px"><i class="fab fa-twitter fa-lg"></i> Twitter</a>
<a href="https://scholar.google.com/citations?user=v1O7i_0AAAAJ&hl=en" target="_blank" style="margin-right: 5px"><i class="fa-solid fa-book"></i> Scholar</a>
<a href="https://github.com/m-niemeyer" target="_blank" style="margin-right: 5px"><i class="fab fa-github fa-lg"></i> Github</a>
<a href="https://www.linkedin.com/in/michael-niemeyer" target="_blank" style="margin-right: 5px"><i class="fab fa-linkedin fa-lg"></i> LinkedIn</a>
<button class="btn btn-link" type="button" data-toggle="collapse" data-target="#demo" style="margin-left: -6px; margin-top: -2px;"><i class="fa-solid fa-trophy"></i> Awards</button>
<div id="demo" class="collapse">
<span style="font-weight: bold;">Awards:</span>
In 2011, I graduated top of my year from secondary school, received <a href="https://www.e-fellows.net/" target="_blank">the e-fellows scholarship</a>, and was admitted to <a href="https://www.mathematik.de/" target="_blank">the German Mathematical Society</a> and <a href="https://www.dpg-physik.de/" target="_blank">the German Physical Society</a>. In 2017, I received the Dean's List Award for Academic Excellence for my Master's degree.
During my PhD studies, I was a scholar of <a href="https://imprs.is.mpg.de/" target="_blank">the International Max Planck Research School for Intelligent Systems (IMPRS-IS)</a>.
Our research projects Occupancy Networks, DVR, and ConvONet were selected among the 15 most influential <a href="https://www.paperdigest.org/2021/03/most-influential-cvpr-papers-2021-03/" target="_blank">CVPR</a> / <a href="https://www.paperdigest.org/2023/09/most-influential-eccv-papers-2023-09/" target="_blank">ECCV</a> papers of 2019 and 2020.
In 2021, we received the CS teaching award for our <a href="https://uni-tuebingen.de/fakultaeten/mathematisch-naturwissenschaftliche-fakultaet/fachbereiche/informatik/lehrstuehle/autonomous-vision/lectures/computer-vision/" target="_blank">computer vision lecture</a> as well as <a href="https://cyber-valley.de/en/news/meet-the-ai-gamedev-winners" target="_blank">the AIGameDev scientific award</a> for our GRAF project and <a href="https://cvpr2021.thecvf.com/node/329" target="_blank">the CVPR Best Paper Award</a> for GIRAFFE (<a href="https://cyber-valley.de/en/news/best-paper-cvpr-2021" target="_blank">news coverage</a>).
</div>
</p>
</div>
<div class="col-md-2" style="">
<img src="assets/img/profile.jpg" class="img-thumbnail" width="280px" alt="Profile picture">
</div>
</div>
<div class="row" style="margin-top: 1em;">
<div class="col-sm-12" style="">
<h4>Publications</h4>
<div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/radsplat.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://m-niemeyer.github.io/radsplat/" target="_blank">RadSplat: Radiance Field-Informed Gaussian Splatting for Robust Real-Time Rendering with 900+ FPS</a> <br><span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://campar.in.tum.de/Main/FabianManhardt" target="_blank">Fabian Manhardt</a>, <a href="http://www.lix.polytechnique.fr/Labo/Marie-Julie.RAKOTOSAONA/" target="_blank">Marie-Julie Rakotosaona</a>, <a href="https://moechsle.github.io/" target="_blank">Michael Oechsle</a>, <a href="https://arvr.google.com/" target="_blank">Rama Gosula</a>, <a href="https://campar.in.tum.de/Main/KeisukeTateno" target="_blank">Keisuke Tateno</a>, <a href="https://arvr.google.com/" target="_blank">John Bates</a>, <a href="https://scholar.google.com/citations?user=DQ4838YAAAAJ&hl=en" target="_blank">Dominik Kaeser</a>, <a href="https://www.cs.cit.tum.de/camp/members/senior-research-scientists/federico-tombari/" target="_blank">Federico Tombari</a> <br><span style="font-style: italic;">Proc. of the International Conf. on 3D Vision (3DV)</span>, 2025 <br><a href="https://m-niemeyer.github.io/radsplat/" target="_blank">Project Page</a> / <a href="https://m-niemeyer.github.io/radsplat/static/pdf/niemeyer2024radsplat.pdf" target="_blank">Paper</a> / <button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseniemeyer2024arxiv" aria-expanded="false" aria-controls="collapseniemeyer2024arxiv" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseniemeyer2024arxiv"><div class="card card-body"><pre><code>@InProceedings{niemeyer2024arxiv,
author = {Michael Niemeyer and Fabian Manhardt and Marie-Julie Rakotosaona and Michael Oechsle and Rama Gosula and Keisuke Tateno and John Bates and Dominik Kaeser and Federico Tombari},
title = {RadSplat: Radiance Field-Informed Gaussian Splatting for Robust Real-Time Rendering with 900+ FPS},
booktitle = {Proc. of the International Conf. on 3D Vision (3DV)},
year = {2025},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/splatslam.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://github.com/google-research/Splat-SLAM" target="_blank">Splat-SLAM: Globally Optimized RGB-only SLAM with 3D Gaussians</a> <br><a href="https://scholar.google.com/citations?user=phiETm4AAAAJ&hl=en" target="_blank">Erik Sandström</a>, <a href="https://campar.in.tum.de/Main/KeisukeTateno" target="_blank">Keisuke Tateno</a>, <a href="https://moechsle.github.io/" target="_blank">Michael Oechsle</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://insait.ai/prof-luc-van-gool/" target="_blank">Luc Van Gool</a>, <a href="https://oswaldm.github.io/" target="_blank">Martin Oswald</a>, <a href="https://www.cs.cit.tum.de/camp/members/senior-research-scientists/federico-tombari/" target="_blank">Federico Tombari</a> <br><span style="font-style: italic;">arXiv.org</span>, 2024 <br><a href="https://github.com/google-research/Splat-SLAM" target="_blank">Project Page</a> / <a href="https://fnzhan.com/Evolutive-Rendering-Models/data/ERM.pdf" target="_blank">Paper</a> / <button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapsesandstroem2024arxiv" aria-expanded="false" aria-controls="collapsesandstroem2024arxiv" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapsesandstroem2024arxiv"><div class="card card-body"><pre><code>@InProceedings{sandstroem2024arxiv,
author = {Erik Sandström and Keisuke Tateno and Michael Oechsle and Michael Niemeyer and Luc Van Gool and Martin Oswald and Federico Tombari},
title = {Splat-SLAM: Globally Optimized RGB-only SLAM with 3D Gaussians},
booktitle = {arXiv.org},
year = {2024},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/evolutive.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://fnzhan.com/Evolutive-Rendering-Models/" target="_blank">Evolutive Rendering Models</a> <br><a href="https://fnzhan.com/" target="_blank">Fangneng Zhan</a>, <a href="https://scholar.google.com/citations?user=XcxDA14AAAAJ&hl=en" target="_blank">Hanxue Liang</a>, <a href="https://yifita.netlify.app/" target="_blank">Yifan Wang</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://moechsle.github.io/" target="_blank">Michael Oechsle</a>, <a href="https://genintel.mpi-inf.mpg.de/" target="_blank">Adam Kortylewski</a>, <a href="https://www.cl.cam.ac.uk/~aco41/" target="_blank">Cengiz Oztireli</a>, <a href="https://stanford.edu/~gordonwz/" target="_blank">Gordon Wetzstein</a>, <a href="https://people.mpi-inf.mpg.de/~theobalt/" target="_blank">Christian Theobalt</a> <br><span style="font-style: italic;">arXiv.org</span>, 2024 <br><a href="https://fnzhan.com/Evolutive-Rendering-Models/" target="_blank">Project Page</a> / <a href="https://fnzhan.com/Evolutive-Rendering-Models/data/ERM.pdf" target="_blank">Paper</a> / <button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapsezhan2024arxiv" aria-expanded="false" aria-controls="collapsezhan2024arxiv" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapsezhan2024arxiv"><div class="card card-body"><pre><code>@InProceedings{zhan2024arxiv,
author = {Fangneng Zhan and Hanxue Liang and Yifan Wang and Michael Niemeyer and Michael Oechsle and Adam Kortylewski and Cengiz Oztireli and Gordon Wetzstein and Christian Theobalt},
title = {Evolutive Rendering Models},
booktitle = {arXiv.org},
year = {2024},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/inserf.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://mohamad-shahbazi.github.io/inserf/" target="_blank">InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes</a> <br><a href="https://mohamad-shahbazi.github.io/" target="_blank">Mohamad Shahbazi</a>, <a href="https://asl.ethz.ch/the-lab/people/person-detail.MjY5NDUz.TGlzdC8xNTg0LDEyMDExMzk5Mjg=.html" target="_blank">Liesbeth Claessens</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://www.linkedin.com/in/edo-collins/?originalSubdomain=ch" target="_blank">Edo Collins</a>, <a href="https://alessiotonioni.github.io/" target="_blank">Alessio Tonioni</a>, <a href="https://ee.ethz.ch/the-department/faculty/professors/person-detail.OTAyMzM=.TGlzdC80MTEsMTA1ODA0MjU5.html" target="_blank">Luc Van Gool</a>, <a href="https://www.cs.cit.tum.de/camp/members/senior-research-scientists/federico-tombari/" target="_blank">Federico Tombari</a> <br><span style="font-style: italic;">arXiv.org</span>, 2024 <br><a href="https://mohamad-shahbazi.github.io/inserf/" target="_blank">Project Page</a> / <a href="https://arxiv.org/pdf/2401.05335.pdf" target="_blank">Paper</a> / <button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseshahbazi2024inserf" aria-expanded="false" aria-controls="collapseshahbazi2024inserf" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseshahbazi2024inserf"><div class="card card-body"><pre><code>@InProceedings{shahbazi2024inserf,
author = {Mohamad Shahbazi and Liesbeth Claessens and Michael Niemeyer and Edo Collins and Alessio Tonioni and Luc Van Gool and Federico Tombari},
title = {InseRF: Text-Driven Generative Object Insertion in Neural 3D Scenes},
booktitle = {arXiv.org},
year = {2024},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/unisdf.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://fangjinhuawang.github.io/UniSDF/" target="_blank">UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections</a> <br><a href="https://fangjinhuawang.github.io/" target="_blank">Fangjinhua Wang</a>, <a href="http://www.lix.polytechnique.fr/Labo/Marie-Julie.RAKOTOSAONA/" target="_blank">Marie-Julie Rakotosaona</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://szeliski.org/" target="_blank">Richard Szeliski</a>, <a href="https://people.inf.ethz.ch/pomarc/" target="_blank">Marc Pollefeys</a>, <a href="https://www.cs.cit.tum.de/camp/members/senior-research-scientists/federico-tombari/" target="_blank">Federico Tombari</a> <br><span style="font-style: italic;">Advances in Neural Information Processing Systems (NeurIPS)</span>, 2024 <br><a href="https://fangjinhuawang.github.io/UniSDF/" target="_blank">Project Page</a> / <a href="https://fangjinhuawang.github.io/UniSDF/gfx/unisdf_arxiv.pdf" target="_blank">Paper</a> / <button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapsewang2023unisdf" aria-expanded="false" aria-controls="collapsewang2023unisdf" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapsewang2023unisdf"><div class="card card-body"><pre><code>@InProceedings{wang2023unisdf,
author = {Fangjinhua Wang and Marie-Julie Rakotosaona and Michael Niemeyer and Richard Szeliski and Marc Pollefeys and Federico Tombari},
title = {UniSDF: Unifying Neural Representations for High-Fidelity 3D Reconstruction of Complex Scenes with Reflections},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2024},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/dnsslam.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://arxiv.org/abs/2312.00204" target="_blank">DNS SLAM: Dense Neural Semantic-Informed SLAM</a> <br><a href="https://campus.tum.de/tumonline/ee/ui/ca2/app/desktop/#/pl/ui/$ctx/visitenkarte.show_vcard?$ctx=design=ca2;header=max;lang=de&pPersonenGruppe=3&pPersonenId=6EC78DAA25310FF2" target="_blank">Kunyi Li</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://www.professoren.tum.de/en/navab-nassir" target="_blank">Nassir Navab</a>, <a href="https://www.cs.cit.tum.de/camp/members/senior-research-scientists/federico-tombari/" target="_blank">Federico Tombari</a> <br><span style="font-style: italic;">Proc. of the International Conf. on Intelligent Robots and Systems (IROS)</span>, 2024 <br><a href="https://arxiv.org/abs/2312.00204" target="_blank">Project Page</a> / <a href="https://arxiv.org/pdf/2312.00204.pdf" target="_blank">Paper</a> / <button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseLi2023ARXIV" aria-expanded="false" aria-controls="collapseLi2023ARXIV" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseLi2023ARXIV"><div class="card card-body"><pre><code>@InProceedings{Li2023ARXIV,
author = {Kunyi Li and Michael Niemeyer and Nassir Navab and Federico Tombari},
title = {DNS SLAM: Dense Neural Semantic-Informed SLAM},
booktitle = {Proc. of the International Conf. on Intelligent Robots and Systems (IROS)},
year = {2024},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/opennerf.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://francisengelmann.github.io/OpenSet3DSegmentation.pdf" target="_blank">Open-Set 3D Scene Segmentation with Rendered Novel Views</a> <br><a href="https://francisengelmann.github.io/" target="_blank">Francis Engelmann</a>, <a href="https://campar.in.tum.de/Main/FabianManhardt" target="_blank">Fabian Manhardt</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://campar.in.tum.de/Main/KeisukeTateno" target="_blank">Keisuke Tateno</a>, <a href="https://people.inf.ethz.ch/pomarc/" target="_blank">Marc Pollefeys</a>, <a href="https://www.cs.cit.tum.de/camp/members/senior-research-scientists/federico-tombari/" target="_blank">Federico Tombari</a> <br><span style="font-style: italic;">Proc. of the International Conf. on Learning Representations (ICLR)</span>, 2024 <br><a href="https://francisengelmann.github.io/OpenSet3DSegmentation.pdf" target="_blank">Project Page</a> / <a href="https://francisengelmann.github.io/OpenSet3DSegmentation.pdf" target="_blank">Paper</a> / <button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseEngelmann2024ICLR" aria-expanded="false" aria-controls="collapseEngelmann2024ICLR" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseEngelmann2024ICLR"><div class="card card-body"><pre><code>@InProceedings{Engelmann2024ICLR,
author = {Francis Engelmann and Fabian Manhardt and Michael Niemeyer and Keisuke Tateno and Marc Pollefeys and Federico Tombari},
title = {Open-Set 3D Scene Segmentation with Rendered Novel Views},
booktitle = {Proc. of the International Conf. on Learning Representations (ICLR)},
year = {2024},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/textmesh.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://fabi92.github.io/textmesh/" target="_blank">TextMesh: Generation of Realistic 3D Meshes From Text Prompts</a> <br><a href="https://scholar.google.ch/citations?user=7D10QQkAAAAJ&hl=en" target="_blank">Christina Tsalicoglou</a>, <a href="https://campar.in.tum.de/Main/FabianManhardt" target="_blank">Fabian Manhardt</a>, <a href="https://alessiotonioni.github.io/" target="_blank">Alessio Tonioni</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://www.cs.cit.tum.de/camp/members/senior-research-scientists/federico-tombari/" target="_blank">Federico Tombari</a> <br><span style="font-style: italic;">Proc. of the International Conf. on 3D Vision (3DV)</span>, 2024 <br><a href="https://fabi92.github.io/textmesh/" target="_blank">Project Page</a> / <a href="https://arxiv.org/pdf/2304.12439.pdf" target="_blank">Paper</a> / <button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseTsalicoglou2023THREEDV" aria-expanded="false" aria-controls="collapseTsalicoglou2023THREEDV" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseTsalicoglou2023THREEDV"><div class="card card-body"><pre><code>@InProceedings{Tsalicoglou2023THREEDV,
author = {Christina Tsalicoglou and Fabian Manhardt and Alessio Tonioni and Michael Niemeyer and Federico Tombari},
title = {TextMesh: Generation of Realistic 3D Meshes From Text Prompts},
booktitle = {Proc. of the International Conf. on 3D Vision (3DV)},
year = {2024},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/nerfmeshing.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://m-niemeyer.github.io/nerfmeshing/" target="_blank">NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes</a> <br><a href="http://www.lix.polytechnique.fr/Labo/Marie-Julie.RAKOTOSAONA/" target="_blank">Marie-Julie Rakotosaona</a>, <a href="https://campar.in.tum.de/Main/FabianManhardt" target="_blank">Fabian Manhardt</a>, <a href="https://martinarroyo.net/" target="_blank">Diego Martin Arroyo</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://abhijitkundu.info/" target="_blank">Abhijit Kundu</a>, <a href="https://www.cs.cit.tum.de/camp/members/senior-research-scientists/federico-tombari/" target="_blank">Federico Tombari</a> <br><span style="font-style: italic;">Proc. of the International Conf. on 3D Vision (3DV)</span>, 2024 <br><a href="https://m-niemeyer.github.io/nerfmeshing/" target="_blank">Project Page</a> / <a href="https://arxiv.org/pdf/2303.09431.pdf" target="_blank">Paper</a> / <button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseRakotosaona2023THREEDV" aria-expanded="false" aria-controls="collapseRakotosaona2023THREEDV" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseRakotosaona2023THREEDV"><div class="card card-body"><pre><code>@InProceedings{Rakotosaona2023THREEDV,
author = {Marie-Julie Rakotosaona and Fabian Manhardt and Diego Martin Arroyo and Michael Niemeyer and Abhijit Kundu and Federico Tombari},
title = {NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes},
booktitle = {Proc. of the International Conf. on 3D Vision (3DV)},
year = {2024},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/dreambooth3d.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://dreambooth3d.github.io" target="_blank">DreamBooth3D: Subject-Driven Text-to-3D Generation</a> <br><a href="https://amitraj93.github.io/" target="_blank">Amit Raj</a>, <a href="https://www.linkedin.com/in/srinivas-kaza-64223b74" target="_blank">Srinivas Kaza</a>, <a href="https://poolio.github.io/" target="_blank">Ben Poole</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://natanielruiz.github.io/" target="_blank">Nataniel Ruiz</a>, <a href="https://bmild.github.io/" target="_blank">Ben Mildenhall</a>, <a href="https://scholar.google.com/citations?user=I2qheksAAAAJ" target="_blank">Shiran Zada</a>, <a href="https://kfiraberman.github.io/" target="_blank">Kfir Aberman</a>, <a href="http://people.csail.mit.edu/mrub/" target="_blank">Michael Rubinstein</a>, <a href="https://jonbarron.info/" target="_blank">Jonathan Barron</a>, <a href="http://people.csail.mit.edu/yzli/" target="_blank">Yuanzhen Li</a>, <a href="https://varunjampani.github.io/" target="_blank">Varun Jampani</a> <br><span style="font-style: italic;">Proc. of the IEEE International Conf. on Computer Vision (ICCV)</span>, 2023 <br><a href="https://dreambooth3d.github.io" target="_blank">Project Page</a> / <a href="https://arxiv.org/pdf/2303.13508.pdf" target="_blank">Paper</a> / <button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseRaj2023ICCV" aria-expanded="false" aria-controls="collapseRaj2023ICCV" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseRaj2023ICCV"><div class="card card-body"><pre><code>@InProceedings{Raj2023ICCV,
author = {Amit Raj and Srinivas Kaza and Ben Poole and Michael Niemeyer and Nataniel Ruiz and Ben Mildenhall and Shiran Zada and Kfir Aberman and Michael Rubinstein and Jonathan Barron and Yuanzhen Li and Varun Jampani},
title = {DreamBooth3D: Subject-Driven Text-to-3D Generation},
booktitle = {Proc. of the IEEE International Conf. on Computer Vision (ICCV)},
year = {2023},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/newton.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://arxiv.org/abs/2303.13654" target="_blank">NEWTON: Neural View-Centric Mapping for On-the-Fly Large-Scale SLAM</a> <br><a href="https://dblp.org/pid/225/4487.html" target="_blank">Hidenobu Matsuki</a>, <a href="https://campar.in.tum.de/Main/KeisukeTateno" target="_blank">Keisuke Tateno</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://www.cs.cit.tum.de/camp/members/senior-research-scientists/federico-tombari/" target="_blank">Federico Tombari</a> <br><span style="font-style: italic;">IEEE Robotics and Automation Letters (RA-L)</span>, 2023 <br><a href="https://arxiv.org/abs/2303.13654" target="_blank">Project Page</a> / <a href="https://arxiv.org/pdf/2303.13654.pdf" target="_blank">Paper</a> / <button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseMatsuki2023ARXIV" aria-expanded="false" aria-controls="collapseMatsuki2023ARXIV" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseMatsuki2023ARXIV"><div class="card card-body"><pre><code>@InProceedings{Matsuki2023ARXIV,
author = {Hidenobu Matsuki and Keisuke Tateno and Michael Niemeyer and Federico Tombari},
title = {NEWTON: Neural View-Centric Mapping for On-the-Fly Large-Scale SLAM},
booktitle = {IEEE Robotics and Automation Letters (RA-L)},
year = {2023},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/sdfstudio.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://autonomousvision.github.io/sdfstudio/" target="_blank">SDFStudio: A Unified Framework for Surface Reconstruction</a> <br><a href="https://niujinshuchong.github.io/" target="_blank">Zehao Yu</a>, <a href="https://apchenstu.github.io/" target="_blank">Anpei Chen</a>, <a href="https://bozidarantic.com/" target="_blank">Bozidar Antic</a>, <a href="https://pengsongyou.github.io/" target="_blank">Songyou Peng</a>, <a href="https://apratimbhattacharyya18.github.io/" target="_blank">Apratim Bhattacharyya</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://inf.ethz.ch/people/person-detail.MjYyNzgw.TGlzdC8zMDQsLTg3NDc3NjI0MQ==.html" target="_blank">Siyu Tang</a>, <a href="https://tsattler.github.io/" target="_blank">Torsten Sattler</a>, <a href="https://www.cvlibs.net/" target="_blank">Andreas Geiger</a> <br><span style="font-style: italic;">Open Source Project</span>, 2022 <br><a href="https://autonomousvision.github.io/sdfstudio/" target="_blank">Project Page</a> / <a href="https://github.com/autonomousvision/sdfstudio" target="_blank">Code</a> / <button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseYu2022SDFStudio" aria-expanded="false" aria-controls="collapseYu2022SDFStudio" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseYu2022SDFStudio"><div class="card card-body"><pre><code>@InProceedings{Yu2022SDFStudio,
author = {Zehao Yu and Anpei Chen and Bozidar Antic and Songyou Peng and Apratim Bhattacharyya and Michael Niemeyer and Siyu Tang and Torsten Sattler and Andreas Geiger},
title = {SDFStudio: A Unified Framework for Surface Reconstruction},
booktitle = {Open Source Project},
year = {2022},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/monosdf.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://niujinshuchong.github.io/monosdf/" target="_blank">MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction</a> <br><a href="https://niujinshuchong.github.io/" target="_blank">Zehao Yu</a>, <a href="https://pengsongyou.github.io/" target="_blank">Songyou Peng</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://tsattler.github.io/" target="_blank">Torsten Sattler</a>, <a href="https://www.cvlibs.net/" target="_blank">Andreas Geiger</a> <br><span style="font-style: italic;">Advances in Neural Information Processing Systems (NeurIPS)</span>, 2022 <br><a href="https://niujinshuchong.github.io/monosdf/" target="_blank">Project Page</a> / <a href="https://arxiv.org/pdf/2206.00665.pdf" target="_blank">Paper</a> / <a href="https://arxiv.org/pdf/2206.00665.pdf" target="_blank">Supplemental</a> / <a href="https://github.com/autonomousvision/monosdf" target="_blank">Code</a> / <button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseYu2022NEURIPS" aria-expanded="false" aria-controls="collapseYu2022NEURIPS" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseYu2022NEURIPS"><div class="card card-body"><pre><code>@InProceedings{Yu2022NEURIPS,
author = {Zehao Yu and Songyou Peng and Michael Niemeyer and Torsten Sattler and Andreas Geiger},
title = {MonoSDF: Exploring Monocular Geometric Cues for Neural Implicit Surface Reconstruction},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2022},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/voxgraf.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://katjaschwarz.github.io/voxgraf/" target="_blank">VoxGRAF: Fast 3D-Aware Image Synthesis with Sparse Voxel Grids</a> <br><a href="https://katjaschwarz.github.io/" target="_blank">Katja Schwarz</a>, <a href="https://axelsauer.com/" target="_blank">Axel Sauer</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://yiyiliao.github.io/" target="_blank">Yiyi Liao</a>, <a href="https://www.cvlibs.net/" target="_blank">Andreas Geiger</a> <br><span style="font-style: italic;">Advances in Neural Information Processing Systems (NeurIPS)</span>, 2022 <br><a href="https://katjaschwarz.github.io/voxgraf/" target="_blank">Project Page</a> / <a href="https://arxiv.org/pdf/2206.07695.pdf" target="_blank">Paper</a> / <a href="https://arxiv.org/pdf/2206.07695.pdf" target="_blank">Supplemental</a> / <a href="https://github.com/autonomousvision/voxgraf" target="_blank">Code</a> / <button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseSchwarz2022NEURIPS" aria-expanded="false" aria-controls="collapseSchwarz2022NEURIPS" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseSchwarz2022NEURIPS"><div class="card card-body"><pre><code>@InProceedings{Schwarz2022NEURIPS,
author = {Katja Schwarz and Axel Sauer and Michael Niemeyer and Yiyi Liao and Andreas Geiger},
title = {VoxGRAF: Fast 3D-Aware Image Synthesis with Sparse Voxel Grids},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2022},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/regnerf.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://m-niemeyer.github.io/regnerf" target="_blank">RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs</a> <span style="color: red;">(Oral Presentation)</span><br><span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://jonbarron.info/" target="_blank">Jonathan Barron</a>, <a href="https://bmild.github.io/" target="_blank">Ben Mildenhall</a>, <a href="https://msajjadi.com/" target="_blank">Mehdi Sajjadi</a>, <a href="https://www.cvlibs.net/" target="_blank">Andreas Geiger</a>, <a href="http://www2.informatik.uni-freiburg.de/~radwann/" target="_blank">Noha Radwan</a> <br><span style="font-style: italic;">Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)</span>, 2022 <br><a href="https://m-niemeyer.github.io/regnerf" target="_blank">Project Page</a> / <a href="https://drive.google.com/file/d/1S_NnmhypZjyMfwqcHg-YbWSSYNWdqqlo/view?usp=sharing" target="_blank">Paper</a> / <a href="https://drive.google.com/file/d/15ip8Fvfxp6rNRfBnbJEnFCjIJeFMH4CE/view?usp=sharing" target="_blank">Supplemental</a> / <a href="https://youtu.be/QyyyvA4-Kwc" target="_blank">Video</a> / <a href="https://drive.google.com/file/d/1kYknB2Ap3I3avstmPxAa9IiW8m85AZEF/view?usp=sharing" target="_blank">Poster</a> / <a href="https://github.com/google-research/google-research/tree/master/regnerf" target="_blank">Code</a> /<button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseNiemeyer2022CVPR" aria-expanded="false" aria-controls="collapseExample" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseNiemeyer2022CVPR"><div class="card card-body"><pre><code>@InProceedings{Niemeyer2022CVPR,
author = {Michael Niemeyer and Jonathan Barron and Ben Mildenhall and Mehdi Sajjadi and Andreas Geiger and Noha Radwan},
title = {RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs},
booktitle = {Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/sap.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://pengsongyou.github.io/sap" target="_blank">Shape As Points: A Differentiable Poisson Solver</a> <span style="color: red;">(Oral Presentation)</span><br><a href="https://pengsongyou.github.io/" target="_blank">Songyou Peng</a>, <a href="https://www.maxjiang.ml/" target="_blank">Chiyu Jiang</a>, <a href="https://yiyiliao.github.io/" target="_blank">Yiyi Liao</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://people.inf.ethz.ch/pomarc/" target="_blank">Marc Pollefeys</a>, <a href="https://www.cvlibs.net/" target="_blank">Andreas Geiger</a> <br><span style="font-style: italic;">Advances in Neural Information Processing Systems (NeurIPS)</span>, 2021 <br><a href="https://pengsongyou.github.io/sap" target="_blank">Project Page</a> / <a href="https://arxiv.org/abs/2106.03452" target="_blank">Paper</a> / <a href="https://youtu.be/FL8LMk_qWb4" target="_blank">Video</a> / <a href="https://pengsongyou.github.io/media/sap/sap_poster.pdf" target="_blank">Poster</a> / <a href="https://github.com/autonomousvision/shape_as_points" target="_blank">Code</a> /<button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapsePeng2021NEURIPS" aria-expanded="false" aria-controls="collapseExample" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapsePeng2021NEURIPS"><div class="card card-body"><pre><code>@InProceedings{Peng2021NEURIPS,
author = {Songyou Peng and Chiyu Jiang and Yiyi Liao and Michael Niemeyer and Marc Pollefeys and Andreas Geiger},
title = {Shape As Points: A Differentiable Poisson Solver},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2021},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/graf.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://ps.is.mpg.de/publications/schwarz2020neurips" target="_blank">GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis</a> <br><a href="https://katjaschwarz.github.io/" target="_blank">Katja Schwarz</a>, <a href="https://yiyiliao.github.io/" target="_blank">Yiyi Liao</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://www.cvlibs.net/" target="_blank">Andreas Geiger</a> <br><span style="font-style: italic;">Advances in Neural Information Processing Systems (NeurIPS)</span>, 2020 <br><a href="https://ps.is.mpg.de/publications/schwarz2020neurips" target="_blank">Project Page</a> / <a href="http://www.cvlibs.net/publications/Schwarz2020NEURIPS.pdf" target="_blank">Paper</a> / <a href="http://www.cvlibs.net/publications/Schwarz2020NEURIPS_supplementary.pdf" target="_blank">Supplemental</a> / <a href="http://www.youtube.com/watch?v=akQf7WaCOHo&vq=hd1080&autoplay=1" target="_blank">Video</a> / <a href="https://github.com/autonomousvision/graf" target="_blank">Code</a> /<button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseSchwarz2020NEURIPS" aria-expanded="false" aria-controls="collapseExample" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseSchwarz2020NEURIPS"><div class="card card-body"><pre><code>@InProceedings{Schwarz2020NEURIPS,
author = {Katja Schwarz and Yiyi Liao and Michael Niemeyer and Andreas Geiger},
title = {GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis},
booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
year = {2020},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/giraffe.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://m-niemeyer.github.io/project-pages/giraffe/index.html" target="_blank">GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields</a> <span style="color: red;">(Oral Presentation, Best Paper Award)</span><br><span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://www.cvlibs.net/" target="_blank">Andreas Geiger</a> <br><span style="font-style: italic;">Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)</span>, 2021 <br><a href="https://m-niemeyer.github.io/project-pages/giraffe/index.html" target="_blank">Project Page</a> / <a href="http://www.cvlibs.net/publications/Niemeyer2021CVPR.pdf" target="_blank">Paper</a> / <a href="http://www.cvlibs.net/publications/Niemeyer2021CVPR_supplementary.pdf" target="_blank">Supplemental</a> / <a href="http://www.youtube.com/watch?v=fIaDXC-qRSg&vq=hd1080&autoplay=1" target="_blank">Video</a> / <a href="http://www.cvlibs.net/publications/Niemeyer2021CVPR_poster.pdf" target="_blank">Poster</a> / <a href="https://github.com/autonomousvision/giraffe" target="_blank">Code</a> /<button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseNiemeyer2021CVPR" aria-expanded="false" aria-controls="collapseExample" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseNiemeyer2021CVPR"><div class="card card-body"><pre><code>@InProceedings{Niemeyer2021CVPR,
author = {Michael Niemeyer and Andreas Geiger},
title = {GIRAFFE: Representing Scenes as Compositional Generative Neural Feature Fields},
booktitle = {Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
year = {2021},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/campari.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://github.com/autonomousvision/campari" target="_blank">CAMPARI: Camera-Aware Decomposed Generative Neural Radiance Fields</a> <br><span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://www.cvlibs.net/" target="_blank">Andreas Geiger</a> <br><span style="font-style: italic;">Proc. of the International Conf. on 3D Vision (3DV)</span>, 2021 <br><a href="https://github.com/autonomousvision/campari" target="_blank">Project Page</a> / <a href="https://arxiv.org/pdf/2103.17269.pdf" target="_blank">Paper</a> / <a href="https://www.cvlibs.net/publications/Niemeyer2021THREEDV_supplementary.pdf" target="_blank">Supplemental</a> / <a href="http://www.youtube.com/watch?v=rrIIEc2qYjM" target="_blank">Video</a> / <a href="https://www.cvlibs.net/publications/Niemeyer2021THREEDV_poster.pdf" target="_blank">Poster</a> / <a href="https://github.com/autonomousvision/campari" target="_blank">Code</a> /<button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseNiemeyer2021THREEDV" aria-expanded="false" aria-controls="collapseExample" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseNiemeyer2021THREEDV"><div class="card card-body"><pre><code>@InProceedings{Niemeyer2021THREEDV,
author = {Michael Niemeyer and Andreas Geiger},
title = {CAMPARI: Camera-Aware Decomposed Generative Neural Radiance Fields},
booktitle = {Proc. of the International Conf. on 3D Vision (3DV)},
year = {2021},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/cslf.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://arxiv.org/abs/2003.12406" target="_blank">Learning Implicit Surface Light Fields</a> <br><a href="https://moechsle.github.io/" target="_blank">Michael Oechsle</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://creiser.github.io/" target="_blank">Christian Reiser</a>, <a href="https://scholar.google.de/citations?user=h2k1gL4AAAAJ&hl=de" target="_blank">Lars Mescheder</a>, <a href="https://scholar.google.com/citations?user=VlymtLQAAAAJ&hl=en" target="_blank">Thilo Strauss</a>, <a href="https://www.cvlibs.net/" target="_blank">Andreas Geiger</a> <br><span style="font-style: italic;">Proc. of the International Conf. on 3D Vision (3DV)</span>, 2020 <br><a href="https://arxiv.org/abs/2003.12406" target="_blank">Project Page</a> / <a href="http://www.cvlibs.net/publications/Oechsle2020THREEDV.pdf" target="_blank">Paper</a> / <a href="http://www.cvlibs.net/publications/Oechsle2020THREEDV_supplementary.pdf" target="_blank">Supplemental</a> / <a href="http://www.cvlibs.net/publications/Oechsle2020THREEDV_poster.pdf" target="_blank">Poster</a> / <a href="https://github.com/autonomousvision/cslf" target="_blank">Code</a> /<button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseOechsle2020THREEDV" aria-expanded="false" aria-controls="collapseExample" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseOechsle2020THREEDV"><div class="card card-body"><pre><code>@InProceedings{Oechsle2020THREEDV,
author = {Michael Oechsle and Michael Niemeyer and Christian Reiser and Lars Mescheder and Thilo Strauss and Andreas Geiger},
title = {Learning Implicit Surface Light Fields},
booktitle = {Proc. of the International Conf. on 3D Vision (3DV)},
year = {2020},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/conv_onet.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://pengsongyou.github.io/conv_onet" target="_blank">Convolutional Occupancy Networks</a> <span style="color: red;">(Spotlight Presentation)</span><br><a href="https://pengsongyou.github.io/" target="_blank">Songyou Peng</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://scholar.google.de/citations?user=h2k1gL4AAAAJ&hl=de" target="_blank">Lars Mescheder</a>, <a href="https://people.inf.ethz.ch/pomarc/" target="_blank">Marc Pollefeys</a>, <a href="https://www.cvlibs.net/" target="_blank">Andreas Geiger</a> <br><span style="font-style: italic;">Proc. of the European Conf. on Computer Vision (ECCV)</span>, 2020 <br><a href="https://pengsongyou.github.io/conv_onet" target="_blank">Project Page</a> / <a href="http://www.cvlibs.net/publications/Peng2020ECCV.pdf" target="_blank">Paper</a> / <a href="http://www.cvlibs.net/publications/Peng2020ECCV_supplementary.pdf" target="_blank">Supplemental</a> / <a href="http://www.youtube.com/watch?v=EmauovgrDSM&vq=hd1080&autoplay=1" target="_blank">Video</a> / <a href="https://github.com/autonomousvision/convolutional_occupancy_networks" target="_blank">Code</a> /<button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapsePeng2020ECCV" aria-expanded="false" aria-controls="collapseExample" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapsePeng2020ECCV"><div class="card card-body"><pre><code>@InProceedings{Peng2020ECCV,
author = {Songyou Peng and Michael Niemeyer and Lars Mescheder and Marc Pollefeys and Andreas Geiger},
title = {Convolutional Occupancy Networks},
booktitle = {Proc. of the European Conf. on Computer Vision (ECCV)},
year = {2020},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/dvr.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://avg.is.mpg.de/publications/niemeyer2020cvpr" target="_blank">Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision</a> <br><span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://scholar.google.de/citations?user=h2k1gL4AAAAJ&hl=de" target="_blank">Lars Mescheder</a>, <a href="https://moechsle.github.io/" target="_blank">Michael Oechsle</a>, <a href="https://www.cvlibs.net/" target="_blank">Andreas Geiger</a> <br><span style="font-style: italic;">Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)</span>, 2020 <br><a href="https://avg.is.mpg.de/publications/niemeyer2020cvpr" target="_blank">Project Page</a> / <a href="http://www.cvlibs.net/publications/Niemeyer2020CVPR.pdf" target="_blank">Paper</a> / <a href="http://www.cvlibs.net/publications/Niemeyer2020CVPR_supplementary.pdf" target="_blank">Supplemental</a> / <a href="https://www.youtube.com/watch?v=U_jIN3qWVEw" target="_blank">Video</a> / <a href="http://www.cvlibs.net/publications/Niemeyer2020CVPR_poster.pdf" target="_blank">Poster</a> / <a href="https://github.com/autonomousvision/differentiable_volumetric_rendering" target="_blank">Code</a> /<button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseNiemeyer2020CVPR" aria-expanded="false" aria-controls="collapseExample" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseNiemeyer2020CVPR"><div class="card card-body"><pre><code>@InProceedings{Niemeyer2020CVPR,
author = {Michael Niemeyer and Lars Mescheder and Michael Oechsle and Andreas Geiger},
title = {Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision},
booktitle = {Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
year = {2020},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/oflow.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://avg.is.mpg.de/publications/niemeyer2019iccv" target="_blank">Occupancy Flow: 4D Reconstruction by Learning Particle Dynamics</a> <br><span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://scholar.google.de/citations?user=h2k1gL4AAAAJ&hl=de" target="_blank">Lars Mescheder</a>, <a href="https://moechsle.github.io/" target="_blank">Michael Oechsle</a>, <a href="https://www.cvlibs.net/" target="_blank">Andreas Geiger</a> <br><span style="font-style: italic;">Proc. of the IEEE International Conf. on Computer Vision (ICCV)</span>, 2019 <br><a href="https://avg.is.mpg.de/publications/niemeyer2019iccv" target="_blank">Project Page</a> / <a href="http://www.cvlibs.net/publications/Niemeyer2019ICCV.pdf" target="_blank">Paper</a> / <a href="http://www.cvlibs.net/publications/Niemeyer2019ICCV_supplementary.pdf" target="_blank">Supplemental</a> / <a href="http://www.youtube.com/watch?v=c0yOugTgrWc&vq=hd1080&autoplay=1" target="_blank">Video</a> / <a href="http://www.cvlibs.net/publications/Niemeyer2019ICCV_poster.pdf" target="_blank">Poster</a> / <a href="https://github.com/autonomousvision/occupancy_flow" target="_blank">Code</a> /<button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseNiemeyer2019ICCV" aria-expanded="false" aria-controls="collapseExample" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseNiemeyer2019ICCV"><div class="card card-body"><pre><code>@InProceedings{Niemeyer2019ICCV,
author = {Michael Niemeyer and Lars Mescheder and Michael Oechsle and Andreas Geiger},
title = {Occupancy Flow: 4D Reconstruction by Learning Particle Dynamics},
booktitle = {Proc. of the IEEE International Conf. on Computer Vision (ICCV)},
year = {2019},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/tfield.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://avg.is.mpg.de/publications/oechsle2019iccv" target="_blank">Texture Fields: Learning Texture Representations in Function Space</a> <span style="color: red;">(Oral Presentation)</span><br><a href="https://moechsle.github.io/" target="_blank">Michael Oechsle</a>, <a href="https://scholar.google.de/citations?user=h2k1gL4AAAAJ&hl=de" target="_blank">Lars Mescheder</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="https://scholar.google.com/citations?user=VlymtLQAAAAJ&hl=en" target="_blank">Thilo Strauss</a>, <a href="https://www.cvlibs.net/" target="_blank">Andreas Geiger</a> <br><span style="font-style: italic;">Proc. of the IEEE International Conf. on Computer Vision (ICCV)</span>, 2019 <br><a href="https://avg.is.mpg.de/publications/oechsle2019iccv" target="_blank">Project Page</a> / <a href="http://www.cvlibs.net/publications/Oechsle2019ICCV.pdf" target="_blank">Paper</a> / <a href="http://www.cvlibs.net/publications/Oechsle2019ICCV_supplementary.pdf" target="_blank">Supplemental</a> / <a href="http://www.youtube.com/watch?v=y8XHkl3vtpI&vq=hd1080&autoplay=1" target="_blank">Video</a> / <a href="http://www.cvlibs.net/publications/Oechsle2019ICCV_poster.pdf" target="_blank">Poster</a> / <a href="https://github.com/autonomousvision/texture_fields" target="_blank">Code</a> /<button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseOechsle2019ICCV" aria-expanded="false" aria-controls="collapseExample" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseOechsle2019ICCV"><div class="card card-body"><pre><code>@InProceedings{Oechsle2019ICCV,
author = {Michael Oechsle and Lars Mescheder and Michael Niemeyer and Thilo Strauss and Andreas Geiger},
title = {Texture Fields: Learning Texture Representations in Function Space},
booktitle = {Proc. of the IEEE International Conf. on Computer Vision (ICCV)},
year = {2019},
}</code></pre></div></div> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/publications/onet.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9"><a href="https://avg.is.mpg.de/publications/occupancy-networks" target="_blank">Occupancy Networks: Learning 3D Reconstruction in Function Space</a> <span style="color: red;">(Oral Presentation, Best Paper Finalist)</span><br><a href="https://scholar.google.de/citations?user=h2k1gL4AAAAJ&hl=de" target="_blank">Lars Mescheder</a>, <a href="https://moechsle.github.io/" target="_blank">Michael Oechsle</a>, <span style="font-weight: bold;">Michael Niemeyer</span>, <a href="http://www.nowozin.net/sebastian/" target="_blank">Sebastian Nowozin</a>, <a href="https://www.cvlibs.net/" target="_blank">Andreas Geiger</a> <br><span style="font-style: italic;">Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)</span>, 2019 <br><a href="https://avg.is.mpg.de/publications/occupancy-networks" target="_blank">Project Page</a> / <a href="http://www.cvlibs.net/publications/Mescheder2019CVPR.pdf" target="_blank">Paper</a> / <a href="http://www.cvlibs.net/publications/Mescheder2019CVPR_supplementary.pdf" target="_blank">Supplemental</a> / <a href="http://www.youtube.com/watch?v=w1Qo3bOiPaE&t=6s&vq=hd1080&autoplay=1" target="_blank">Video</a> / <a href="http://www.cvlibs.net/publications/Mescheder2019CVPR_poster.pdf" target="_blank">Poster</a> / <a href="https://github.com/LMescheder/Occupancy-Networks" target="_blank">Code</a> /<button class="btn btn-link" type="button" data-toggle="collapse" data-target="#collapseMescheder2019CVPR" aria-expanded="false" aria-controls="collapseExample" style="margin-left: -6px; margin-top: -2px;">Expand bibtex</button><div class="collapse" id="collapseMescheder2019CVPR"><div class="card card-body"><pre><code>@InProceedings{Mescheder2019CVPR,
author = {Lars Mescheder and Michael Oechsle and Michael Niemeyer and Sebastian Nowozin and Andreas Geiger},
title = {Occupancy Networks: Learning 3D Reconstruction in Function Space},
booktitle = {Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
year = {2019},
}</code></pre></div></div> </div> </div> </div>
</div>
</div>
<div class="row" style="margin-top: 3em;">
<div class="col-sm-12" style="">
<h4>Talks</h4>
<div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/talks/white.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9">Neural Representations for Real-time View Synthesis, 3D Asset Generation, and Beyond<br><span style="font-style: italic;">NITRE CVPR Workshop</span>, 2024 <br> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/talks/white.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9">RadSplat: Radiance Field-Informed Gaussian Splatting for Robust Real-Time Rendering with 900+ FPS<br><span style="font-style: italic;">Google Cloud AI Seminar</span>, 2024 <br> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/talks/white.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9">RadSplat: Radiance Field-Informed Gaussian Splatting for Robust Real-Time Rendering with 900+ FPS<br><span style="font-style: italic;">ETH ASL Group Visit</span>, 2024 <br> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/talks/white.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9">RadSplat: Radiance Field-Informed Gaussian Splatting for Robust Real-Time Rendering with 900+ FPS<br><span style="font-style: italic;">ETH CVG Group Visit</span>, 2024 <br> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/talks/white.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9">Neural Representations for 3D Asset Reconstruction, Generation, and Beyond<br><span style="font-style: italic;">Electronic Arts Research</span>, 2024 <br> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img 
src="assets/img/talks/white.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9">Neural Representations for 3D Asset Reconstruction, Generation, and Beyond<br><span style="font-style: italic;">University of Massachusetts Amherst</span>, 2024 <br> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/talks/diffrend.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9">Neural Scene Representations and Differentiable Rendering<br><span style="font-style: italic;">Delft University of Technology</span>, 2022 <br><a href="https://m-niemeyer.github.io/assets/pdf/diffrend-slides.pdf" target="_blank">Slides</a> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/talks/games-talk.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9">Implicit Neural Scene Representations and 3D-Aware Generative Modelling<br><span style="font-style: italic;">GAMES Webinar Series</span>, 2022 <br><a href="https://m-niemeyer.github.io/assets/pdf/games-slides.pdf" target="_blank">Slides</a> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/talks/adobe-pres.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9">Generative Neural Scene Representations<br><span style="font-style: italic;">Adobe Research</span>, 2021 <br><a href="https://m-niemeyer.github.io/assets/pdf/gnsr-slides.pdf" target="_blank">Slides</a> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/talks/tum-lecture-img.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9">Implicit Scene Representations and Neural Rendering<br><span style="font-style: italic;">Technical University Munic - AI Lecture Series</span>, 2021 <br><a 
href="https://m-niemeyer.github.io/assets/pdf/isr_nr.pdf" target="_blank">Slides</a> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/talks/amazon.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9">Generative Neural Scene Representations for 3D-Aware Image Synthesis<br><span style="font-style: italic;">AIT (ETH)</span>, 2021 <br><a href="https://m-niemeyer.github.io/assets/pdf/gnsr_slides.pdf" target="_blank">Slides</a> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/talks/amazon.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9">Generative Neural Scene Representations for 3D-Aware Image Synthesis<br><span style="font-style: italic;">Amazon Research</span>, 2021 <br><a href="https://m-niemeyer.github.io/assets/pdf/gnsr_slides.pdf" target="_blank">Slides</a> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/talks/mit.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9">Generative Neural Scene Representations for 3D-Aware Image Synthesis<br><span style="font-style: italic;">Massachusetts Institute of Technology</span>, 2021 <br><a href="https://yenchenlin.me/3D-representation-reading/assets/Michael.pdf" target="_blank">Slides</a> / <a href="https://www.youtube.com/watch?v=scnXyCSMJF4" target="_blank">Recording</a> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/talks/fraunhofer.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9">KI Forschung und 3D Deep Learning<br><span style="font-style: italic;">Frauenhofer IAO event 100 KI Talents</span>, 2020 <br><a href="https://tiny.cc/100-ki-talente" target="_blank">Slides</a> / <a href="https://www.youtube.com/watch?v=lpX85uNFZ0s" 
target="_blank">Recording</a> </div> </div> </div><div style="margin-bottom: 3em;"> <div class="row"><div class="col-sm-3"><img src="assets/img/talks/gtc.jpg" class="img-fluid img-thumbnail" alt="Project image"></div><div class="col-sm-9">3D Deep Learning in Function Space<br><span style="font-style: italic;">NVIDIA. NVIDIA GPU Technology Conference (GTC)</span>, 2020 <br><a href="https://m-niemeyer.github.io/slides/gtc/" target="_blank">Slides</a> / <a href="https://www.youtube.com/watch?v=U_jIN3qWVEw" target="_blank">Recording</a> </div> </div> </div>
</div>
</div>
<div class="row" style="margin-top: 3em; margin-bottom: 1em;">
<div class="col-sm-12" style="">
<h4>Homepage Template</h4>
<p>
          Feel free to use this website as a template! It is fully responsive and easy to use and maintain: a Python script crawls your bib files and automatically adds your papers and talks. If you find it helpful, please add a link to my website, and I will happily add a link to yours (if you want). <a href="https://github.com/m-niemeyer/m-niemeyer.github.io" target="_blank">Check out the GitHub repository for instructions on how to use it</a>. <br>
<a href="https://kashyap7x.github.io/" target="_blank">⚛</a>
<a href="https://kait0.github.io/" target="_blank">⚛</a>
</p>
</div>
</div>
</div>
<div class="col-md-1"></div>
    </div>
</div>
<!-- Optional JavaScript -->
<!-- jQuery first, then Popper.js, then Bootstrap JS -->
<script src="https://code.jquery.com/jquery-3.2.1.slim.min.js"
integrity="sha384-KJ3o2DKtIkvYIK3UENzmM7KCkRr/rE9/Qpg6aAZGJwFDMVNA/GpGFF93hXpG5KkN"
crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.12.9/umd/popper.min.js"
integrity="sha384-ApNbgh9B+Y1QKtv3Rn7W3mgPxhU9K/ScQsAP7hUibX39j7fakFPskvXusvfa0b4Q"
crossorigin="anonymous"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0/js/bootstrap.min.js"
integrity="sha384-JZR6Spejh4U02d8jOt6vLEHfe/JQGiRRSQQxSfFWpi1MquVdAyjUar5+76PVCmYl"
crossorigin="anonymous"></script>
</body>
</html>