Spotting Fake Videos with Artificial Intelligence – Prof. Matthias Niessner Develops a New Algorithm
Today it is often no longer possible to distinguish between real and faked video content just by looking. To help users know for sure whether politicians have actually said what they appear to say in online video clips, Prof. Matthias Niessner of the Technical University of Munich (TUM) has developed a new algorithm.
"Spotting fake videos has proved especially tricky in social media, where they are generally uploaded as compressed, low-resolution files," says Prof. Matthias Niessner. "Yet the same methods used to manipulate video content are also capable of detecting fake content with a high degree of accuracy – even when the image resolution is poor."
For artificial intelligence to decide whether a video has been manipulated, it must be able to recognize the patterns that occur in faked content. To learn the recurring elements of such content, neural networks need to be fed enormous volumes of fake videos. In the past, researchers had to manipulate video material manually using image or video editing software, so they lacked the required volumes of training data. Using new deep learning methods and computer graphics techniques, Prof. Niessner has succeeded for the first time in building an extensive data pool largely with automated methods, among them his own Face2Face software, which transfers facial expressions from one person to another in real time. With this new data pool, he was then able to train his FaceForensics++ algorithm on more than half a million frames from over a thousand faked videos.
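The core idea above – learn from labeled real and fake frames, then classify new frames – can be illustrated with a deliberately tiny sketch. The actual system trains a deep convolutional network on face crops; here, as a stand-in assumption, each frame is reduced to a two-number feature vector (e.g. a hypothetical "blending artifact" score) and a simple logistic-regression learner is fitted with plain Python. All names and data below are illustrative, not part of the published method.

```python
import math
import random

def sigmoid(z):
    """Squash a score into a probability between 0 and 1."""
    return 1.0 / (1.0 + math.exp(-z))

def train(frames, labels, epochs=200, lr=0.5):
    """Fit logistic-regression weights to (feature vector, label) pairs.

    Label 0 = real frame, label 1 = manipulated frame.
    """
    w = [0.0] * len(frames[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(frames, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y  # gradient of the log-loss for this sample
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that a frame with features x is manipulated."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy training data: pretend manipulated frames show stronger
# artifact features than real ones (purely synthetic numbers).
random.seed(0)
real_frames = [[random.gauss(0.2, 0.05), random.gauss(0.1, 0.05)] for _ in range(50)]
fake_frames = [[random.gauss(0.8, 0.05), random.gauss(0.7, 0.05)] for _ in range(50)]
w, b = train(real_frames + fake_frames, [0] * 50 + [1] * 50)
```

After training, `predict(w, b, frame_features)` returns a probability that the frame was manipulated; a real pipeline would average such per-frame scores over the whole video before reporting a verdict.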