Researchers from NVIDIA have developed a system that uses artificial intelligence (AI) to transform standard video into smoother, more fluid slow motion.
According to a report by NVIDIA Developer, the deep learning-based system can produce high-quality slow-motion video from a 30-frame-per-second source.
The developers stated in their research paper, which they will present at the annual Computer Vision and Pattern Recognition (CVPR) conference: “There are many memorable moments in your life that you might want to record with a camera in slow-motion because they are hard to see clearly with your eyes: the first time a baby walks, a difficult skateboard trick, a dog catching a ball.”
“While it is possible to take 240-frame-per-second videos with a cell phone, recording everything at high frame rates is impractical, as it requires large memories and is power-intensive for mobile devices,” they explained.
According to researchers Huaizu Jiang, Deqing Sun, Varun Jampani, Ming-Hsuan Yang, Erik Learned-Miller, and Jan Kautz, the method can generate multiple intermediate frames that are spatially and temporally coherent, outperforming state-of-the-art single-frame methods.
The team trained their system on over 11,000 videos shot at 240 frames per second, using NVIDIA Tesla V100 GPUs and the cuDNN-accelerated PyTorch deep learning framework; once trained, the convolutional neural network predicted the extra frames.
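To give a sense of what “predicting extra frames” means, here is a minimal toy sketch of intermediate-frame generation using a simple linear cross-fade between two frames. This is only an illustration of the interpolation idea, not the NVIDIA method: their trained network estimates motion between frames so that moving objects stay sharp, whereas a plain blend like this produces ghosting.

```python
import numpy as np

def interpolate_frames(frame_a, frame_b, n_intermediate):
    """Create intermediate frames by linearly blending two frames.

    Toy illustration only: the NVIDIA system instead uses a trained
    CNN that accounts for motion, so objects do not ghost.
    """
    frames = []
    for i in range(1, n_intermediate + 1):
        t = i / (n_intermediate + 1)  # temporal position in (0, 1)
        frames.append((1 - t) * frame_a + t * frame_b)
    return frames

# Turning 30 fps footage into 240 fps playback requires 7 new
# frames between each pair of originals (8x more frames in total).
a = np.zeros((2, 2), dtype=np.float32)        # toy 2x2 "frame"
b = np.full((2, 2), 8.0, dtype=np.float32)
mid = interpolate_frames(a, b, 7)
print(len(mid))      # 7
print(mid[3][0, 0])  # 4.0, halfway between the two frames
```

The 8x frame-count math is why the slowdown from 30 fps to an effective 240 fps is the example quoted by the researchers.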
Here is a video of their demonstration with clips from Gavin Free and Daniel Gruchy’s ‘The Slow Mo Guys’:
(Photo source: YouTube – NVIDIA)