zzw922cn / TensorFlow-Input-Pipeline

Licence: other
TensorFlow Input Pipeline Examples based on multi-thread and FIFOQueue

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives to, or similar to, TensorFlow-Input-Pipeline

cucumber-performance
A performance testing framework for cucumber
Stars: ✭ 28 (-48.15%)
Mutual labels:  multi-threading
KeywordSpotting
Train a 4-layer Convolutional Neural Network to detect trigger word
Stars: ✭ 49 (-9.26%)
Mutual labels:  tfrecords
MemoryAllocator.KanameShiki
Fast multi-threaded memory allocator
Stars: ✭ 73 (+35.19%)
Mutual labels:  multi-threading
Fibrous
Concurrency library for .Net
Stars: ✭ 47 (-12.96%)
Mutual labels:  multi-threading
hatrack
Fast, multi-reader, multi-writer, lockless data structures for parallel programming
Stars: ✭ 55 (+1.85%)
Mutual labels:  fifo-queue
jsonpyes
The tool which imports raw JSON to ElasticSearch in one line of commands
Stars: ✭ 67 (+24.07%)
Mutual labels:  multi-threading
async fifo
A dual clock asynchronous FIFO written in verilog, tested with Icarus Verilog
Stars: ✭ 117 (+116.67%)
Mutual labels:  fifo-queue
safe
C++11 header only RAII guards for mutexes and locks.
Stars: ✭ 119 (+120.37%)
Mutual labels:  multi-threading
TAOMP
Example code from The Art of Multiprocessor Programming, with comments and unit tests
Stars: ✭ 39 (-27.78%)
Mutual labels:  multi-threading
Discovery
Mining Discourse Markers for Unsupervised Sentence Representation Learning
Stars: ✭ 48 (-11.11%)
Mutual labels:  large-dataset
synapse
Non-intrusive C++ signal programming library
Stars: ✭ 48 (-11.11%)
Mutual labels:  multi-threading
keras tfrecord
Extending Keras to support tfrecord dataset
Stars: ✭ 61 (+12.96%)
Mutual labels:  tfrecords
request store rails
📦 Per-request global storage for Rails prepared for multi-threaded apps
Stars: ✭ 78 (+44.44%)
Mutual labels:  multi-threading
superfast
⚡ SuperFast codecs for fre:ac
Stars: ✭ 59 (+9.26%)
Mutual labels:  multi-threading
fork-helper
A PHP helper to fork processes and allow multi-threading
Stars: ✭ 69 (+27.78%)
Mutual labels:  multi-threading
table2pojo
Generate POJOs for database table/columns
Stars: ✭ 16 (-70.37%)
Mutual labels:  multi-threading
EasySparse
Sparse learning in TensorFlow using data acquired from Spark.
Stars: ✭ 21 (-61.11%)
Mutual labels:  tfrecords
Paraphrase
Multi-core suitable Forth-like language
Stars: ✭ 27 (-50%)
Mutual labels:  multi-threading
XNet
CNN implementation for medical X-Ray image segmentation
Stars: ✭ 71 (+31.48%)
Mutual labels:  small-dataset
glfwm
GLFW Manager - C++ wrapper with multi-threading
Stars: ✭ 60 (+11.11%)
Mutual labels:  multi-threading

TensorFlow-Input-Pipeline

Input pipeline examples based on multiple threads and FIFOQueue in TensorFlow, including mini-batch training.

Graphs

Graph for small dataset example code

(image: computation graph for the small-dataset example)

Graph for big dataset example code

(image: computation graph for the big-dataset example)

Usage

If your dataset is too large to load into memory at once, first convert it to TFRecords files. The example code shows how to write data into TFRecords and how to read it back correctly. Run the example with python big_input.py:

usage: big_input.py [-h] [--scale SCALE] [--logdir LOGDIR]
                    [--samples_num SAMPLES_NUM] [--time_length TIME_LENGTH]
                    [--feature_size FEATURE_SIZE] [--num_epochs NUM_EPOCHS]
                    [--batch_size BATCH_SIZE] [--num_classes NUM_CLASSES]

optional arguments:
  -h, --help            show this help message and exit
  --scale SCALE         specify your dataset scale
  --logdir LOGDIR       specify the location to store log or model
  --samples_num SAMPLES_NUM
                        specify your total number of samples
  --time_length TIME_LENGTH
                        specify max time length of sample
  --feature_size FEATURE_SIZE
                        specify feature size of sample
  --num_epochs NUM_EPOCHS
                        specify number of training epochs
  --batch_size BATCH_SIZE
                        specify batch size when training
  --num_classes NUM_CLASSES
                        specify number of output classes
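
The TFRecord framing itself is handled by TensorFlow, but the underlying idea the big-dataset script relies on, serializing variable-length samples into a flat record file and streaming them back without loading everything into memory, can be sketched with the standard library alone. The helper names and file layout below are illustrative, not the actual TFRecord binary format:

```python
import struct
import tempfile

def write_records(path, samples):
    """Write each serialized sample as a length-prefixed binary record."""
    with open(path, "wb") as f:
        for sample in samples:
            f.write(struct.pack("<Q", len(sample)))  # 8-byte little-endian length
            f.write(sample)

def read_records(path):
    """Stream records back one at a time without loading the whole file."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if not header:
                break
            (length,) = struct.unpack("<Q", header)
            yield f.read(length)

# Round-trip a few variable-length samples.
samples = [b"sample-0", b"a much longer sample-1", b"s2"]
with tempfile.NamedTemporaryFile(suffix=".rec", delete=False) as tmp:
    path = tmp.name
write_records(path, samples)
assert list(read_records(path)) == samples
```

Because the reader is a generator, a training loop can consume one record at a time, which is the same property that makes TFRecords suitable for datasets larger than memory.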

If your dataset is small enough to fit in memory, you can load it all at once and use a FIFOQueue to speed up training. Run the example with python small_input.py:

usage: small_input.py [-h] [--scale SCALE] [--logdir LOGDIR]
                      [--samples_num SAMPLES_NUM] [--time_length TIME_LENGTH]
                      [--feature_size FEATURE_SIZE] [--num_epochs NUM_EPOCHS]
                      [--batch_size BATCH_SIZE] [--num_classes NUM_CLASSES]

optional arguments:
  -h, --help            show this help message and exit
  --scale SCALE         specify your dataset scale
  --logdir LOGDIR       specify the location to store log or model
  --samples_num SAMPLES_NUM
                        specify your total number of samples
  --time_length TIME_LENGTH
                        specify max time length of sample
  --feature_size FEATURE_SIZE
                        specify feature size of sample
  --num_epochs NUM_EPOCHS
                        specify number of training epochs
  --batch_size BATCH_SIZE
                        specify batch size when training
  --num_classes NUM_CLASSES
                        specify number of output classes
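
The pattern both scripts build on, producer threads enqueueing samples while the training loop dequeues mini-batches, can be sketched with Python's standard library; here queue.Queue stands in for TensorFlow's FIFOQueue, and all names are illustrative rather than taken from the repository:

```python
import queue
import threading

BATCH_SIZE = 4
NUM_SAMPLES = 20

def producer(q, samples):
    """Enqueue samples one by one, then signal completion with a sentinel."""
    for s in samples:
        q.put(s)           # blocks when the queue is full, like a FIFOQueue
    q.put(None)            # sentinel: no more data

def dequeue_batches(q, batch_size):
    """Collect mini-batches from the queue until the sentinel arrives."""
    batch, batches = [], []
    while True:
        item = q.get()
        if item is None:
            if batch:      # flush the final partial batch, if any
                batches.append(batch)
            return batches
        batch.append(item)
        if len(batch) == batch_size:
            batches.append(batch)
            batch = []

q = queue.Queue(maxsize=8)  # bounded buffer, like a FIFOQueue with capacity 8
samples = list(range(NUM_SAMPLES))
t = threading.Thread(target=producer, args=(q, samples))
t.start()
batches = dequeue_batches(q, BATCH_SIZE)
t.join()
assert [x for b in batches for x in b] == samples
```

The bounded queue is what decouples I/O from training: the producer keeps the buffer full while the consumer trains, so neither side waits for the other unless the buffer is empty or full.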
