Welcome to Creative Vision and Multimedia Lab (CMLab) @ Chung-Ang University (CAU)!
CMLab has been directed by Assistant Prof. Jihyong Oh since Fall 2023.
Our mission is to conduct cutting-edge research in AI-based Computer Vision and Multimedia, with a focus on machine & deep learning techniques. We are dedicated to exploring innovative solutions that can be applied to real-world problems.
We are looking for self-motivated graduate & undergraduate students. If you're interested, send an email to [jihyongoh@cau.ac.kr] with the following information:
(1) a full Curriculum Vitae (CV), (2) a transcript, (3) your future plan & purpose, and (4) your available participation time.
If you are interested in joining our CMLab and would like to prepare the prerequisites, please refer to the [Contact] page for more information.
Research Area
Low-Level Vision
--> But not limited to!
News [mm.dd.yyyy]
[01.07.2025] Our new paper on Compact Dynamic 3D Gaussian Splatting (MoDec🎄-GS) has been released on arXiv. Check out our project page.
[12.17.2024] Our new paper on Multi-Instance Video Editing (⇶ MIVE) has been released on arXiv. Check out our project page.
[12.16.2024] Our new paper on Motion Field-Guided Video Frame Interpolation (⏱️ BiM-VFI) has been released on arXiv. Check out our project page.
[12.13.2024] Our new paper on Real-Time Dynamic 3D Gaussians (∿ SplineGS) has been released on arXiv. Check out our project page.
[09.2024] Prof. Oh was invited to talk on "Handling Real-World Videos in Low-Level Vision" at LG Electronics.
[09.2024] Juhyun Park will be joining our lab in the fall of 2024 as an MS student. Welcome!
[08.2024] Prof. Oh will serve on the Program Committee for AAAI2025.
[06.27.2024] Prof. Oh gave a presentation on "Handling Real-World Videos in Computer Vision Tasks" at the KIBME2024 Young Researcher session.
[06.17.2024] Prof. Oh was invited to talk on 🍎 FMA-Net and XVFI at the AIS (CVPR2024 Workshop).
[05.29.2024] Prof. Oh has been selected as a CVPR2024 Outstanding Reviewer (top 2%).
[04.05.2024] Our 🍎 FMA-Net (Final Review Scores: 5/5/4) has been selected as an 🏅Oral Paper (0.78% ≈ 90/11532). It is a huge honor to present it in front of an audience of thousands.
[02.27.2024] One paper (🍎 FMA-Net) on Joint Video Super-Resolution and Deblurring has been accepted to CVPR2024.
[02.01.2024] Prof. Oh gave a presentation on "Handling Real-World Videos in Computer Vision Tasks" at the IPIU2024 Young Researcher session.
[12.21.2023] Our new paper on Dynamic Deblurring NeRF (DyBluRF🫨) has been released on arXiv. Check out our project page.
[11.02.2023] Dahyeon Kye, Giyeol Kim, Changhyun Roh, Jeahun Sung and Hyunseo Lee will be joining our lab in the spring of 2024 as MS students. Welcome!
[09.01.2023] CMLab was newly opened by Assistant Prof. Jihyong Oh.
Recent Research
[(new!) arXiv2024 (under review)] "MoDec🎄-GS: Global-to-Local Motion Decomposition and Temporal Interval Adjustment for Compact Dynamic 3D Gaussian Splatting",
Sangwoon Kwak, Joonsoo Kim, Jun Young Jeong, Won-Sik Cheong, Jihyong Oh☨ and Munchurl Kim☨ (☨co-corresponding authors)
[(new!) arXiv2024 (under review)] "∿ SplineGS: Robust Motion-Adaptive Spline for Real-Time Dynamic 3D Gaussians from Monocular Video",
Minh-Quan Viet Bui*, Jongmin Park*, Juan Luis Gonzalez Bello, Jaeho Moon, Jihyong Oh☨ and Munchurl Kim☨ (*equal contribution, ☨co-corresponding authors)
[(new!) arXiv2024 (under review)] "⇶ MIVE: New Design and Benchmark for Multi-Instance Video Editing",
Samuel Teodoro*, Agus Gunawan*, Soo Ye Kim, Jihyong Oh☨ and Munchurl Kim☨ (*equal contribution, ☨co-corresponding authors)
[(new!) arXiv2024 (under review)] "⏱️ BiM-VFI: Bidirectional Motion Field-Guided Frame Interpolation for Video with Non-uniform Motions",
Wonyong Seo, Jihyong Oh☨ and Munchurl Kim☨ (☨co-corresponding authors)
[arXiv2024 (under review)] "DyBluRF 🫨: Dynamic Deblurring Neural Radiance Fields for Blurry Monocular Video",
Minh-Quan Viet Bui*, Jongmin Park*, Jihyong Oh☨ and Munchurl Kim☨ (*equal contribution, ☨co-corresponding authors)
[arXiv2024] [Project Page] [GitHub (will be updated)] [Demo] [Video7mins]
[CVPR2024, 🏅Oral 0.78%] "🍎 FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring",
Geunhyuk Youk, Jihyong Oh☨ and Munchurl Kim☨ (☨co-corresponding authors)
[arXiv2024] [Project Page] [GitHub] [Demo] [Video6mins]
[ICCV2021, 🏅Oral 3%] "XVFI: eXtreme Video Frame Interpolation",
Hyeonjun Sim*, Jihyong Oh* and Munchurl Kim (*equal contribution).
[arXiv] [ICCV2021] [Supp] [GitHub] [X4K1000FPS] [Demo] [Oral12mins] [Flowframes(GUI)] [Poster]
[AAAI2020] "FISR: Deep Joint Frame Interpolation and Super-Resolution with a Multi-Scale Temporal Loss",
Soo Ye Kim*, Jihyong Oh* and Munchurl Kim (*equal contribution).
[Paper] [GitHub] [Poster] [Spotlight-PPT]
[AAAI2020] "JSI-GAN: GAN-Based Joint Super-Resolution and Inverse Tone-Mapping with Pixel-Wise Task-Specific Filters for UHD HDR Video",
Soo Ye Kim*, Jihyong Oh* and Munchurl Kim (*equal contribution).
[Paper] [GitHub] [Poster] [Spotlight-PPT]