Recent years have seen great advances in computer vision and machine learning. But with these advances comes an ethical dilemma: as our methods get better, so do the tools for malicious image manipulation. While these malicious uses were once only the domain of well-resourced dictators, spy agencies, and unscrupulous photojournalists, recent advances have made it possible to create fake images with only basic computer skills, and social networks have made it easier than ever to disseminate them.
We propose to detect fake images by developing algorithms that exploit the limited representational power of deep convolutional neural networks. These generative methods can produce only a subset of the possible images that could appear in the world. We hypothesize that, as a consequence of this limitation, generated images differ from real images in subtle but detectable ways. We plan to investigate this idea in two directions: 1) analyzing the limitations of the representational space of these networks; and 2) using the limitations we discover to create methods that detect images generated by neural networks.
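As a toy illustration of the second direction, the sketch below is not the proposed method but a deliberately simplified stand-in: it fabricates synthetic "real" images as white noise and synthetic "generated" images by box-blurring that noise (a crude proxy for a generator's limited high-frequency content), then trains a tiny logistic regression on a single spectral feature to tell them apart. All data, the blur "generator", and the `highfreq_energy` feature are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def highfreq_energy(img):
    # Mean energy of the image's Laplacian residual. If generated images
    # under-represent high spatial frequencies, this energy will be lower.
    lap = (img[1:-1, 1:-1] * 4
           - img[:-2, 1:-1] - img[2:, 1:-1]
           - img[1:-1, :-2] - img[1:-1, 2:])
    return np.mean(lap ** 2)

def make_image(fake, size=32):
    # "Real" images: white noise. "Fake" images: the same noise box-blurred,
    # a crude stand-in for a generator's limited output spectrum.
    img = rng.normal(size=(size, size))
    if fake:
        out = np.zeros_like(img)
        for i in range(1, size - 1):
            for j in range(1, size - 1):
                out[i, j] = img[i-1:i+2, j-1:j+2].mean()
        img = out
    return img

# Tiny labelled dataset: label 1 = real, label 0 = fake.
X, y = [], []
for label in (0, 1):
    for _ in range(100):
        X.append(highfreq_energy(make_image(fake=(label == 0))))
        y.append(label)
X = np.array(X)
y = np.array(y)

# Logistic regression by gradient descent on the single spectral feature.
w, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X * w + b)))
    w -= 0.5 * np.mean((p - y) * X)
    b -= 0.5 * np.mean(p - y)

pred = (X * w + b) > 0.0
accuracy = np.mean(pred == y)
print(f"train accuracy: {accuracy:.2f}")
```

Even this one-feature detector separates the two synthetic classes almost perfectly, which is the point of the illustration: if a generator systematically misses part of the image distribution, a simple statistic of that gap can become a detection cue. Real generators leave far subtler traces, which is why the project proposes to characterize their representational limits first.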
Research Findings and Presentations
- "CNN-generated images are surprisingly easy to spot… for now" (2020). Published in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8695–8704).