"Spotting fake videos has proved especially tricky in social media, where they are generally uploaded as compressed, low-resolution images," says Prof. Matthias Niessner. "The same methods used to manipulate video content are also capable of detecting fake content with a high degree of accuracy – even when the image resolution is poor."
For artificial intelligence to decide whether a video has been manipulated, it must be able to recognize the patterns that recur in faked content. To learn these recurring elements, neural networks must be fed enormous volumes of fake videos. In the past, researchers had to manipulate video material manually using image or video editing software, so the required volumes of training data were never available. Using new deep learning and graphics methods, Prof. Niessner has now succeeded for the first time in building an extensive data pool largely through automated means, among other tools with his own Face2Face software, which transfers facial expressions from one person to another in real time. With this new data pool, he was able to train his FaceForensics++ algorithm on more than half a million frames from over a thousand faked videos.
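To make the training setup concrete, the following is a minimal sketch of a frame-level real-vs-fake classifier in PyTorch. The `frames/` directory layout, the ResNet-18 backbone, and all hyperparameters are illustrative assumptions, not the published pipeline; the actual FaceForensics++ work crops face regions and uses an XceptionNet backbone, among other refinements.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Frames extracted from real and manipulated videos, arranged as
# frames/real/*.png and frames/fake/*.png (illustrative layout).
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("frames", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

# A standard CNN backbone with a two-class head (real vs. fake).
# ResNet-18 stands in here purely for brevity; the published work
# uses a different, larger backbone.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Standard supervised training loop over labeled frames.
for epoch in range(5):
    for frames, labels in loader:
        frames, labels = frames.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(frames), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```

The key point the sketch illustrates is scale: because the fake frames can be generated automatically (for instance with Face2Face), the `frames/fake` folder can be populated with hundreds of thousands of examples without manual editing work.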