Video tampering localisation using features learned from authentic content.
Johnston, Pamela; Elyan, Eyad; Jayne, Chrisina
Video tampering detection remains an open problem in the field of digital media forensics. As video manipulation techniques advance, it becomes easier for tamperers to create convincing forgeries that can fool human eyes. Deep learning methods have already shown great promise in discovering effective features from data, particularly in the image domain; however, they are exceptionally data-hungry. Labelled datasets of varied, state-of-the-art tampered video that are large enough to facilitate machine learning do not exist and, moreover, may never exist while the field of digital video manipulation is advancing at such an unprecedented pace. It is therefore vital to develop techniques which can be trained on authentic or synthesised video but used to localise the patterns of manipulation within tampered videos. In this paper, we develop a framework for tampering detection which derives features from authentic content and utilises them to localise key frames and tampered regions in three publicly available tampered video datasets. We use Convolutional Neural Networks (CNNs) to estimate the quantisation parameter, deblock setting and intra/inter mode of pixel patches from an H.264/AVC sequence. Extensive evaluation suggests that these features can aid localisation of tampered regions within video.
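The core idea described in the abstract can be illustrated with a small sketch: split each frame into patches, estimate a compression property (here, the H.264 quantisation parameter, QP) per patch, and flag patches whose estimate deviates from the frame-wide consensus, since spliced content typically carries a different compression history. The `predict_qp` function below is a hypothetical stand-in for the paper's trained CNN, using inverse local variance as a crude proxy; all function names and thresholds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def extract_patches(frame, patch=16):
    """Split a greyscale frame (H x W) into non-overlapping patch x patch blocks."""
    h, w = frame.shape
    rows, cols = h // patch, w // patch
    return (frame[:rows * patch, :cols * patch]
            .reshape(rows, patch, cols, patch)
            .swapaxes(1, 2))  # shape: (rows, cols, patch, patch)

def predict_qp(block):
    """Stand-in for a trained CNN estimator (assumption, not the paper's model):
    smoother blocks are treated as more heavily quantised, mapping local
    variance onto the H.264 QP range [0, 51]."""
    return 51.0 / (1.0 + block.var())

def tamper_heatmap(frame, patch=16):
    """Per-patch deviation of estimated QP from the frame's median QP."""
    blocks = extract_patches(frame, patch)
    qps = np.array([[predict_qp(blocks[r, c])
                     for c in range(blocks.shape[1])]
                    for r in range(blocks.shape[0])])
    # Patches with a different compression history stand out from the median.
    return np.abs(qps - np.median(qps))

rng = np.random.default_rng(0)
frame = rng.normal(0.0, 1.0, (64, 64))  # "authentic" textured background
frame[16:32, 16:32] = 0.0               # smooth "spliced" region
heat = tamper_heatmap(frame, patch=16)  # 4 x 4 grid; the spliced patch scores highest
```

A real system would replace `predict_qp` with the CNN forward pass and aggregate per-frame heatmaps over time to localise key frames as well as regions.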
Journal Article Type: Article
Journal: Neural Computing and Applications
Publisher: Springer (part of Springer Nature)
Peer Reviewed: Yes
Institution Citation: JOHNSTON, P., ELYAN, E. and JAYNE, C. 2019. Video tampering localisation using features learned from authentic content. Neural computing and applications [online], Latest Articles. Available from: https://doi.org/10.1007/s00521-019-04272-z
Keywords: CNN; compression; video tampering detection; deep learning