[04.2025] Our new papers on Iterative Implicit Neural Representations (♻️ I-INR) and Motion Deblurring Dynamic 3DGS (🕵🏻‍♂️ MoBGS) have been released on arXiv.
[03.2025] Our new paper on Multi-object Video Editing (🎯 PRIMEdit) has been released on arXiv. Check out our project page.
[02.2025] Three papers (MoDec🎄-GS, ⏱️ BiM-VFI, ∿ SplineGS) have been accepted to CVPR2025.
[02.2025] Sukhun, Weeyoung, and Gyuwon joined our lab in the spring of 2025 as MS students. Welcome!
[01.07.2025] Our new paper on Compact Dynamic 3D Gaussian Splatting (MoDec🎄-GS) has been released on arXiv. Check out our project page.
[12.16.2024] Our new paper on Motion Field-Guided Video Frame Interpolation (⏱️ BiM-VFI) has been released on arXiv. Check out our project page.
[12.13.2024] Our new paper on Real-Time Dynamic 3D Gaussians (∿ SplineGS) has been released on arXiv. Check out our project page.
[11.2024] Prof. Oh joined inshorts as CAIO to research and integrate Low-Level Vision techniques into real-world products.
[09.2024] Prof. Oh was invited to talk on "Handling Real-World Videos in Low-Level Vision" at LG Electronics.
[09.2024] Juhyun joined our lab in the fall of 2024 as an MS student. Welcome!
[08.2024] Prof. Oh will serve on the Program Committee for AAAI2025.
[06.27.2024] Prof. Oh gave a presentation on "Handling Real-World Videos in Computer Vision Tasks" at the KIBME2024 Young Researcher session.
[06.17.2024] Prof. Oh was invited to give a talk on 🍎 FMA-Net and XVFI at the AIS Workshop (CVPR2024).
[05.29.2024] Prof. Oh has been selected as a CVPR2024 Outstanding Reviewer (top 2%).
[04.05.2024] Our 🍎 FMA-Net (final review scores: 5/5/4) has been selected as an 🏅 Oral paper (0.78%, ≈ 90/11532). It will be a huge honor to present it in front of an audience of thousands.