
otenim / Xception-with-Your-Own-Dataset

Licence: MIT license
Easy-to-use scripts for training and inferencing with Xception on your own dataset


Training Xception with your own dataset

Description

This repository contains scripts for training and running inference with Xception, the architecture introduced by François Chollet, the creator of Keras.

Environments

We tested our scripts in the following environment.

  • GTX 1070 (8GB) — a mid-range or more powerful GPU is required.
  • python 3.6.5
  • numpy 1.17.4
  • scipy 1.3.3
  • h5py 2.10.0
  • Keras 2.3.1
  • tensorflow-gpu 1.15.0

Demo

Here, we'll show how to train Xception on the Caltech101 dataset (9145 images, 102 classes) as an example.

1. Prepare dataset

Download and extract the dataset with the following commands.

$ sh download_dataset.sh
$ tar zxvf 101_ObjectCategories.tar.gz

2. Make classes.txt

You must create a text file where all the class names are listed, one per line.
This can be done with the following command.

$ ls 101_ObjectCategories > classes.txt

3. Train the model

$ python fine_tune.py 101_ObjectCategories/ classes.txt result/

In fine_tune.py...

  • Xception's weights are initialized with the ones pre-trained on the ImageNet dataset (officially provided by the Keras team).
  • In the first training stage, only the top classifier of the model is trained for 5 epochs.
  • In the second training stage, the whole model is trained for 50 epochs with a lower learning rate.
  • All the result data (serialized model files and figures) are saved under result/.
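The two-stage schedule above follows the standard Keras fine-tuning pattern, which can be sketched as follows. This is an illustrative outline under that assumption, not the repository's actual fine_tune.py; the function names are hypothetical and the hyperparameters mirror the defaults described in this README.

```python
# Illustrative sketch of the two-stage fine-tuning schedule described above
# (not the repository's actual fine_tune.py; function names are hypothetical).
from keras.applications import Xception
from keras.layers import Dense
from keras.models import Model
from keras.optimizers import Adam

def build_model(num_classes, weights="imagenet"):
    """Xception base (ImageNet weights by default) plus a fresh softmax head."""
    base = Xception(weights=weights, include_top=False,
                    input_shape=(299, 299, 3), pooling="avg")
    outputs = Dense(num_classes, activation="softmax")(base.output)
    return Model(base.input, outputs), base

def compile_for_stage(model, base, stage):
    """Stage 'pre': frozen base, lr 1e-3. Stage 'fine': all layers trainable, lr 1e-4."""
    for layer in base.layers:
        layer.trainable = (stage == "fine")
    lr = 1e-3 if stage == "pre" else 1e-4
    model.compile(optimizer=Adam(learning_rate=lr),
                  loss="categorical_crossentropy", metrics=["accuracy"])

# Typical flow (training calls omitted):
#   model, base = build_model(num_classes=102)   # e.g. Caltech101
#   compile_for_stage(model, base, "pre");  model.fit(...)  # 5 epochs by default
#   compile_for_stage(model, base, "fine"); model.fit(...)  # 50 epochs by default
```

Recompiling between stages is required for the changed `trainable` flags to take effect.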

4. Inference

$ python inference.py result/model_fine_final.h5 classes.txt images/airplane.jpg

[Input Image]: images/airplane.jpg

[Output Result]: the top predicted classes with their probabilities

How to train with your own dataset?

What do you have to prepare?

1. A dataset you want to use

You have to prepare a directory with the same structure as the Caltech101 dataset, as shown below:
(example image: a dataset root containing one subdirectory per class)

The above example dataset has 3 classes and 5 images in total. Each class name must be unique, but the image file names can be anything.
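In such a tree, the class label of every image is simply the name of its parent directory. A minimal stdlib-only sketch of that mapping (paths here are hypothetical):

```python
# Sketch: how a Caltech101-style directory tree maps images to class labels.
# Each immediate subdirectory of the dataset root is one class; the image
# file names inside are arbitrary. All paths are hypothetical.
from pathlib import Path

def list_samples(dataset_root):
    """Return (image_path, class_name) pairs from a root/<class>/<image> tree."""
    root = Path(dataset_root)
    samples = []
    for class_dir in sorted(p for p in root.iterdir() if p.is_dir()):
        for image_path in sorted(class_dir.iterdir()):
            samples.append((image_path, class_dir.name))
    return samples
```

This is also why each class name must be unique: duplicate directory names cannot coexist, so the labels are unambiguous.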

2. classes.txt

You have to create a text file where all the class names are listed line by line. This can be done with the following command.

$ ls root/ > classes.txt

The file name does not need to be classes.txt; you can name it whatever you like.

Let's train the model on your own dataset!

$ python fine_tune.py root/ classes.txt <result_root> [epochs_pre] [epochs_fine] [batch_size_pre] [batch_size_fine] [lr_pre] [lr_fine] [snapshot_period_pre] [snapshot_period_fine]

NOTE: [] indicates an optional argument. <> indicates a required argument.

  • <result_root>: Path to the directory where all the result data will be saved.
  • [epochs_pre]: The number of epochs during the first training stage (default: 5).
  • [epochs_fine]: The number of epochs during the second training stage (default: 50).
  • [batch_size_pre]: Batch size during the first training stage (default: 32).
  • [batch_size_fine]: Batch size during the second training stage (default: 16).
  • [lr_pre]: Learning rate during the first training stage (default: 1e-3).
  • [lr_fine]: Learning rate during the second training stage (default: 1e-4).
  • [snapshot_period_pre]: Snapshot period during the first training stage (default: 1). Every specified number of epochs, a serialized model file is saved under <result_root>.
  • [snapshot_period_fine]: Snapshot period during the second training stage (default: 1).
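An argument scheme like the one above (three required positionals, the rest optional flags with these defaults) can be expressed with argparse. This is a hypothetical sketch, not the repository's actual parser:

```python
# Hypothetical sketch of a CLI matching the arguments described above
# (not the repository's actual fine_tune.py parser).
import argparse

def build_parser():
    parser = argparse.ArgumentParser(
        description="Fine-tune Xception on a custom dataset")
    # Required positional arguments.
    parser.add_argument("dataset_root", help="root directory of the dataset")
    parser.add_argument("classes", help="txt file listing class names, one per line")
    parser.add_argument("result_root", help="directory where results are saved")
    # Optional arguments with the defaults listed above.
    parser.add_argument("--epochs_pre", type=int, default=5)
    parser.add_argument("--epochs_fine", type=int, default=50)
    parser.add_argument("--batch_size_pre", type=int, default=32)
    parser.add_argument("--batch_size_fine", type=int, default=16)
    parser.add_argument("--lr_pre", type=float, default=1e-3)
    parser.add_argument("--lr_fine", type=float, default=1e-4)
    parser.add_argument("--snapshot_period_pre", type=int, default=1)
    parser.add_argument("--snapshot_period_fine", type=int, default=1)
    return parser
```

Any flag left off the command line falls back to the default shown in the list above.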

For example, if you'd like to pre-train a model for 2 epochs with learning rate 5e-3 and fine-tune it for 10 epochs with learning rate 5e-4, run the following command.

$ python fine_tune.py root/ classes.txt result/ --epochs_pre 2 --epochs_fine 10 --lr_pre 5e-3 --lr_fine 5e-4

How to run inference with your trained model?

$ python inference.py <model> <classes> <image> [top_n]

NOTE: [] indicates an optional argument. <> indicates a required argument.

  • <model>: Path to a serialized model file.
  • <classes>: Path to a txt file where all the class names are listed line by line.
  • <image>: Path to an image file that you would like to classify.
  • [top_n]: Show top n results (default: 10).
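Once the model returns a probability vector, showing the top n results amounts to sorting the classes by probability. A minimal sketch of that post-processing step (names and numbers here are made up for illustration):

```python
# Sketch: selecting the top-n predictions from a probability vector.
# In practice, class_names would come from classes.txt and the
# probabilities from the model's softmax output.
def top_n_predictions(probabilities, class_names, n=10):
    """Return the n (class_name, probability) pairs with the highest probability."""
    ranked = sorted(zip(class_names, probabilities),
                    key=lambda pair: pair[1], reverse=True)
    return ranked[:n]

# Example with made-up numbers:
classes = ["airplane", "cup", "dolphin"]
probs = [0.70, 0.05, 0.25]
print(top_n_predictions(probs, classes, n=2))
# -> [('airplane', 0.7), ('dolphin', 0.25)]
```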