Dima Kotovenko

I am a Senior Research Scientist at the Huawei Research Center, Zurich (since 11/2024), working on video quality on mobile devices, video enhancement, and 3D video/photo techniques for stabilization and novel view synthesis. Previously, I was a Research Scientist Intern at Meta (09/2023–01/2024) focusing on representation learning and 3D geometry editing, including sparse‑view novel view synthesis with 3D Gaussian splatting, plenoxels, and NeRFs.

I completed my Ph.D. with Prof. Björn Ommer, jointly at LMU Munich and Heidelberg University. My work spans 3D Gaussians, style transfer, depth estimation, and representation/metric learning. I have supervised Bachelor and Master theses and served as a TA for courses in computer vision, deep learning, and generative AI.

Email  /  Google Scholar  /  LinkedIn  /  GitHub

Selected Publications

I am interested in 3D/4D vision, Gaussian splatting, neural rendering, generative modeling, and style transfer. Please see the full list of publications on Google Scholar.
EDGS: Eliminating Densification for Efficient Convergence of 3DGS
D. Kotovenko, O. Grebenkova, B. Ommer
CVPR, 2026
project page / HF demo / code

Efficient 3D Gaussian Splatting that converges without a densification stage.

WaSt‑3D: Wasserstein‑2 Distance for Scene‑to‑Scene Stylization on 3D Gaussians
D. Kotovenko, O. Grebenkova, N. Sarafianos, A. Paliwal, P. Ma, O. Poursaeed, S. Mohan, Y. Fan, Y. Li, R. Ranjan, B. Ommer
ECCV, 2024
project page

Scene-to-scene stylization of 3D Gaussians via optimal transport with the Wasserstein-2 distance.

CoherentGS: Sparse Novel View Synthesis with Coherent 3D Gaussians
A. Paliwal, W. Ye, J. Xiong, D. Kotovenko, R. Ranjan, V. Chandra, N.K. Kalantari
ECCV, 2024
project page

Sparse-view novel view synthesis with coherence constraints on 3D Gaussian splats.

DepthFM: Fast Generative Monocular Depth Estimation with Flow Matching
M. Gui, J. Fischer, U. Prestel, P. Ma, D. Kotovenko, O. Grebenkova, S. Baumann, V.T. Hu, B. Ommer
AAAI, 2025 (Oral)
project page / code

Fast flow-matching-based monocular depth estimation with realistic geometry.

ProSty: 3D Prototype‑based Style Transfer
D. Kotovenko, O. Grebenkova, J.S. Fischer, M. Gui, N. Sarafianos, A. Paliwal, O. Poursaeed, Y. Li, R. Ranjan, B. Ommer
TPAMI (in review)
paper / video

Prototype‑based style transfer for 3D scenes.

Rethinking Style Transfer: From Pixels to Parameterized Brushstrokes
D. Kotovenko, M. Wright, A. Heimbrecht, B. Ommer
CVPR, 2021
project page / code

Style transfer that renders images with explicitly parameterized brushstrokes.

Content and Style Disentanglement for Artistic Style Transfer
D. Kotovenko, A. Sanakoyeu, S. Lang, B. Ommer
ICCV, 2019
paper / video

Disentangling content and style representations for artistic style transfer.

A Style‑Aware Content Loss for Real‑time HD Style Transfer
D. Kotovenko, A. Sanakoyeu, S. Lang, B. Ommer
ECCV, 2018 (Oral)
project page / code

A style-aware content loss enabling real-time HD stylization.

Teaching & Service
  • Invited talk at Google Munich on EDGS: Eliminating Densification for Efficient Convergence of 3DGS.
  • Reviewer: CVPR (2021, 2024–2026), NeurIPS (2020, 2024–2025), ECCV (2020), ICCV (2025), CGF (2025), 3DV (2026), IEEE TMM.
  • TA for Computer Vision, Deep Learning, and Generative AI courses (LMU Munich / Heidelberg).
  • Supervised 1 Bachelor and 4 Master theses; guest lecturer at Yessenov Data Lab.
Education
  • Ph.D., Computer Vision & Deep Learning, LMU Munich — 2019–2024
  • M.Sc., Scientific Computing, Heidelberg University — 2017–2019
  • B.Sc., Mathematics, Heidelberg University — 2012–2016

Thanks to Jon Barron and Haofei Xu for the website's source code.