
ISICV / Mantranet

ManTra-Net: Manipulation Tracing Network For Detection And Localization of Image Forgeries With Anomalous Features

Projects that are alternatives to or similar to Mantranet

Developerworks
Stars: ✭ 112 (-1.75%)
Mutual labels:  jupyter-notebook
Differentiable sorting
Differentiable bitonic sorting
Stars: ✭ 113 (-0.88%)
Mutual labels:  jupyter-notebook
Introspective
Repo for the ML_Insights python package
Stars: ✭ 113 (-0.88%)
Mutual labels:  jupyter-notebook
Pytorch Generative
Easy generative modeling in PyTorch.
Stars: ✭ 112 (-1.75%)
Mutual labels:  jupyter-notebook
Programer log
Latest updates are here [My Programmer's Log]
Stars: ✭ 112 (-1.75%)
Mutual labels:  jupyter-notebook
Loandefault Prediction
Lending Club Loan data analysis
Stars: ✭ 113 (-0.88%)
Mutual labels:  jupyter-notebook
Numerical Python Book Code
Stars: ✭ 112 (-1.75%)
Mutual labels:  jupyter-notebook
Tensorflow Nlp
NLP and Text Generation Experiments in TensorFlow 2.x / 1.x
Stars: ✭ 1,487 (+1204.39%)
Mutual labels:  jupyter-notebook
Mmaml Classification
An official PyTorch implementation of “Multimodal Model-Agnostic Meta-Learning via Task-Aware Modulation” (NeurIPS 2019) by Risto Vuorio*, Shao-Hua Sun*, Hexiang Hu, and Joseph J. Lim
Stars: ✭ 113 (-0.88%)
Mutual labels:  jupyter-notebook
Pythondata
repo for code published on pythondata.com
Stars: ✭ 113 (-0.88%)
Mutual labels:  jupyter-notebook
Machine learning
References the Watermelon Book (Zhou Zhihua's Machine Learning), the sklearn source code, Li Hang's Statistical Learning Methods, Machine Learning in Action, and Mathematics for Machine Learning
Stars: ✭ 112 (-1.75%)
Mutual labels:  jupyter-notebook
V2ray Deep Packet Inspection
Notebook demo of V2Ray traffic classification by deep packet inspection
Stars: ✭ 113 (-0.88%)
Mutual labels:  jupyter-notebook
Pedestrian Cam
Monitoring Foot Traffic over IP Webcams with ML
Stars: ✭ 113 (-0.88%)
Mutual labels:  jupyter-notebook
Algocode
Welcome everyone!🌟 Here you can solve problems, build scrapers and much more💻
Stars: ✭ 113 (-0.88%)
Mutual labels:  jupyter-notebook
Tfbook
Stars: ✭ 113 (-0.88%)
Mutual labels:  jupyter-notebook
Everware
Everware is about re-usable science; it allows people to jump right into your research code.
Stars: ✭ 112 (-1.75%)
Mutual labels:  jupyter-notebook
Kaggle Houseprices
Kaggle Kernel for House Prices competition https://www.kaggle.com/massquantity/all-you-need-is-pca-lb-0-11421-top-4
Stars: ✭ 113 (-0.88%)
Mutual labels:  jupyter-notebook
Kerasobjectdetector
Keras Object Detection API with YOLK project 🍳
Stars: ✭ 113 (-0.88%)
Mutual labels:  jupyter-notebook
Course Content
NMA Computational Neuroscience course
Stars: ✭ 2,082 (+1726.32%)
Mutual labels:  jupyter-notebook
Deep Nlp Seminars
Materials for deep NLP course
Stars: ✭ 113 (-0.88%)
Mutual labels:  jupyter-notebook

ManTraNet: Manipulation Tracing Network For Detection And Localization of Image Forgeries With Anomalous Features


This is the official repo for ManTraNet (CVPR 2019). For method details, please refer to:

  @inproceedings{Wu2019ManTraNet,
      title={ManTra-Net: Manipulation Tracing Network For Detection And Localization of Image Forgeries With Anomalous Features},
      author={Yue Wu and Wael AbdAlmageed and Premkumar Natarajan},
      booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      year={2019}
  }

Overview

ManTraNet is an end-to-end image forgery detection and localization solution: it takes a test image as input and predicts a pixel-level forgery likelihood map as output. Compared to existing methods, ManTraNet has the following advantages:

  1. Simplicity: ManTraNet needs no extra pre- or post-processing.
  2. Speed: ManTraNet puts all computations in a single network and accepts an image of arbitrary size (see the sketch after this list).
  3. Robustness: ManTraNet relies on no working assumption other than the local manipulation assumption, i.e. some region in a test image is modified differently from the rest.
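
A quick way to see the "arbitrary size in, pixel-level likelihood map out" contract in code: the toy Keras model below is only a minimal sketch under that assumption, not the ManTraNet architecture, and the layer choices are purely illustrative.

  # Minimal sketch (NOT the ManTraNet architecture): a fully convolutional
  # model maps any H x W x 3 image to an H x W x 1 forgery-likelihood map.
  import numpy as np
  from keras.models import Model
  from keras.layers import Input, Conv2D

  def build_toy_localizer():
      # None spatial dimensions let the network accept images of any size.
      inp = Input(shape=(None, None, 3))
      x = Conv2D(16, (3, 3), padding='same', activation='relu')(inp)
      x = Conv2D(16, (3, 3), padding='same', activation='relu')(x)
      out = Conv2D(1, (1, 1), activation='sigmoid')(x)  # per-pixel likelihood
      return Model(inp, out)

  model = build_toy_localizer()
  img = np.random.rand(1, 480, 640, 3).astype('float32')  # any H x W works
  heatmap = model.predict(img)                             # shape (1, 480, 640, 1)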

Technically speaking, ManTraNet is composed of two sub-networks, as shown below:

  1. Image Manipulation Trace Feature Extractor: a feature extraction network trained on the image manipulation classification task; it is sensitive to different manipulation types and encodes the manipulation traces in a patch into a fixed-dimension feature vector.
  2. Local Anomaly Detection Network: an anomaly detection network that compares each local feature against the dominant feature averaged over its local region; its activation depends on how far the local feature deviates from this reference feature rather than on the feature's absolute value (see the sketch below the architecture figure).

[Figure: ManTraNet architecture]
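
The snippet below sketches the deviation idea at a single window size: each local feature is compared against the mean feature of its neighborhood and normalized by the local spread, so the response reflects how anomalous the feature is rather than its absolute value. The real network aggregates several window sizes and learns the comparison end to end; the fixed window, the use of scipy's uniform_filter, and the final channel averaging here are simplifying assumptions.

  # Simplified, single-window sketch of the Local Anomaly Detection idea.
  import numpy as np
  from scipy.ndimage import uniform_filter

  def local_zscore(feat, win=15, eps=1e-6):
      """feat: (H, W, C) manipulation-trace features.
      Returns the per-pixel, per-channel deviation from the local average."""
      mean = uniform_filter(feat, size=(win, win, 1))             # local "dominant" feature
      var = uniform_filter(feat ** 2, size=(win, win, 1)) - mean ** 2
      return (feat - mean) / np.sqrt(np.maximum(var, 0.0) + eps)  # deviation, not absolute value

  feat = np.random.rand(256, 256, 64).astype('float32')
  dev = local_zscore(feat)                  # large |dev| => feature deviates from its neighborhood
  anomaly_map = np.abs(dev).mean(axis=-1)   # crude per-pixel anomaly score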

Extension

ManTraNet is pretrained entirely on synthetic data. To prevent overfitting, we

  1. Pretrain on the Image Manipulation Classification (IMC, 385 classes) task to obtain the Image Manipulation Trace Feature Extractor, and
  2. Train ManTraNet on four types of synthetic data, i.e. copy-move, splicing, removal, and enhancement.

To extend the provided ManTraNet, one may introduce a new manipulation type to the IMC pretraining task, to the end-to-end ManTraNet task, or to both; a toy data-generation sketch follows below. It is also worth noting that the IMC task can be treated as a self-supervised task.
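
As a concrete example of that self-supervised flavor, the sketch below builds labeled IMC-style training patches by applying a chosen manipulation to pristine patches, so the label is known by construction. This is not the authors' data pipeline; the manipulation set, patch size, and OpenCV calls are assumptions, and a new manipulation class is added by registering one more function.

  # Toy self-supervised patch generator: the label is the manipulation we applied.
  import numpy as np
  import cv2

  MANIPULATIONS = {
      0: lambda p: p.copy(),                                  # pristine
      1: lambda p: cv2.GaussianBlur(p, (5, 5), 1.5),          # Gaussian blurring
      2: lambda p: cv2.medianBlur(p, 5),                      # median filtering
      3: lambda p: cv2.imdecode(                              # JPEG re-compression
          cv2.imencode('.jpg', p, [cv2.IMWRITE_JPEG_QUALITY, 60])[1], 1),
  }

  def make_patch(image, patch_size=64):
      """image: uint8 BGR array larger than patch_size in both dimensions."""
      h, w = image.shape[:2]
      y = np.random.randint(0, h - patch_size)
      x = np.random.randint(0, w - patch_size)
      patch = image[y:y + patch_size, x:x + patch_size]
      label = np.random.randint(len(MANIPULATIONS))
      return MANIPULATIONS[label](patch), label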

Dependency

ManTraNet is written in Keras with the TensorFlow backend.

  • Keras: 2.2.0
  • TensorFlow: 1.8.0

Other versions may also work, but they have not been tested.
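
A quick sanity check that your environment matches the versions above (the pip line in the comment is just one way to pin them):

  # Check the installed versions against the ones the repo was tested with.
  # One way to pin them: pip install keras==2.2.0 tensorflow==1.8.0
  import keras
  import tensorflow as tf

  print('Keras     :', keras.__version__)   # tested with 2.2.0
  print('TensorFlow:', tf.__version__)      # tested with 1.8.0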

Demo

One may simply download the repo and play with the provided Jupyter notebook.

Alternatively, one may play with the inference code using this Google Colab link.
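
If you prefer a plain script over the notebook, the outline looks roughly like the sketch below. The module path, the load_pretrain_model_by_index helper, its arguments, and the input normalization are assumptions based on the demo notebook; check the notebook for the exact calls and preprocessing.

  # Rough inference outline; names and preprocessing are assumptions, see the notebook.
  import numpy as np
  import cv2
  from src import modelCore  # assumption about the repo layout

  # Assumption: helper and arguments as used in the demo notebook.
  manTraNet = modelCore.load_pretrain_model_by_index(4, 'pretrained_weights')

  rgb = cv2.cvtColor(cv2.imread('suspicious.jpg'), cv2.COLOR_BGR2RGB)
  x = np.expand_dims(rgb.astype('float32') / 255.0, axis=0)  # normalization is an assumption
  mask = manTraNet.predict(x)[0, ..., 0]                     # per-pixel forgery likelihood
  cv2.imwrite('forgery_map.png', (mask * 255).astype('uint8'))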

Contact

For any paper-related questions, please contact rex.yue.wu(AT)gmail.com

License

The Software is made available for academic or non-commercial purposes only. The license is for a copy of the program for an unlimited term. Individuals requesting a license for commercial use must pay for a commercial license.

USC Stevens Institute for Innovation 
University of Southern California 
1150 S. Olive Street, Suite 2300 
Los Angeles, CA 90015, USA 
ATTN: Accounting 

DISCLAIMER. USC MAKES NO EXPRESS OR IMPLIED WARRANTIES, EITHER IN FACT OR BY OPERATION OF LAW, BY STATUTE OR OTHERWISE, AND USC SPECIFICALLY AND EXPRESSLY DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, VALIDITY OF THE SOFTWARE OR ANY OTHER INTELLECTUAL PROPERTY RIGHTS OR NON-INFRINGEMENT OF THE INTELLECTUAL PROPERTY OR OTHER RIGHTS OF ANY THIRD PARTY. SOFTWARE IS MADE AVAILABLE AS-IS. LIMITATION OF LIABILITY. TO THE MAXIMUM EXTENT PERMITTED BY LAW, IN NO EVENT WILL USC BE LIABLE TO ANY USER OF THIS CODE FOR ANY INCIDENTAL, CONSEQUENTIAL, EXEMPLARY OR PUNITIVE DAMAGES OF ANY KIND, LOST GOODWILL, LOST PROFITS, LOST BUSINESS AND/OR ANY INDIRECT ECONOMIC DAMAGES WHATSOEVER, REGARDLESS OF WHETHER SUCH DAMAGES ARISE FROM CLAIMS BASED UPON CONTRACT, NEGLIGENCE, TORT (INCLUDING STRICT LIABILITY OR OTHER LEGAL THEORY), A BREACH OF ANY WARRANTY OR TERM OF THIS AGREEMENT, AND REGARDLESS OF WHETHER USC WAS ADVISED OR HAD REASON TO KNOW OF THE POSSIBILITY OF INCURRING SUCH DAMAGES IN ADVANCE.

For commercial license pricing and annual commercial update and support pricing, please contact:

Rakesh Pandit USC Stevens Institute for Innovation 
University of Southern California 
1150 S. Olive Street, Suite 2300
Los Angeles, CA 90015, USA 

Tel: +1 213-821-3552
Fax: +1 213-821-5001 
Email: [email protected] and cc to: [email protected]

IMPORTANT NOTICE

First, I want to thank you all for using this repo. I receive several emails every month about different issues. Two important questions are answered below:

  1. Can you release the training code, training dataset, and/or testing code?

No, I can't. For the training code or commercial usage, please contact USC ISI. For the training dataset, it should be straightforward to create your own version. For the testing code, the inference part is already included in this repo; the evaluation part is not included yet, but I may work on it in the future.

  2. Why is the released pretrained model's architecture different from the one described in the paper?

I highly appreciate zhang.y****'s email pointing out that the released pretrained model's first block has 32 filters instead of 16 (i.e. the IMC-VGG-W&D setting described in Table 5 of the paper). I confirmed this is a mistake, possibly because I failed to name models with different architectures differently or simply picked the wrong model. However, I left USC ISI years ago and no longer have the resources to correct this mistake. I deeply apologize for any inconvenience and hope you can understand. This mistake may also explain why some of you who tried to reproduce the evaluation results observed slightly different performance scores than those reported in the paper, but it does not affect any of the main contributions or conclusions of the paper.
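
If you want to check which variant you actually loaded, one assumption-laden way is to list the filter counts of the first convolutional layers of the loaded Keras model (16 in the first block for the Table 5 setting, 32 in the released weights):

  # List the filter counts of the first conv layers of a loaded model.
  # `manTraNet` is assumed to be the model loaded as in the demo notebook.
  def conv_filter_counts(model, n=5):
      counts = []
      for layer in model.layers:
          if hasattr(layer, 'layers'):         # descend into nested sub-models
              counts += conv_filter_counts(layer, n)
          elif hasattr(layer, 'filters'):      # Conv2D and repo-specific subclasses
              counts.append((layer.name, layer.filters))
      return counts[:n]

  # print(conv_filter_counts(manTraNet))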
