deepakkumar1984 / Mxnet.sharp

License: Apache-2.0
.NET Standard bindings for Apache MXNet with Imperative, Symbolic, and Gluon interfaces for developing, training, and deploying machine learning models in C#. https://mxnet.tech-quantum.com/

Projects that are alternatives to or similar to Mxnet.sharp

Gluon Cv
Gluon CV Toolkit
Stars: ✭ 5,001 (+3632.09%)
Mutual labels:  object-detection, image-classification, mxnet, gluon
Autogluon
AutoGluon: AutoML for Text, Image, and Tabular Data
Stars: ✭ 3,920 (+2825.37%)
Mutual labels:  object-detection, image-classification, mxnet, gluon
Cv Pretrained Model
A collection of computer vision pre-trained models.
Stars: ✭ 995 (+642.54%)
Mutual labels:  object-detection, image-classification, mxnet
Imgclsmob
Sandbox for training deep learning networks
Stars: ✭ 2,405 (+1694.78%)
Mutual labels:  image-classification, mxnet, gluon
Quantization.mxnet
Simulate quantization and quantization aware training for MXNet-Gluon models.
Stars: ✭ 42 (-68.66%)
Mutual labels:  mxnet, gluon
Gluonrank
Ranking made easy
Stars: ✭ 39 (-70.9%)
Mutual labels:  mxnet, gluon
Computervision Recipes
Best Practices, code samples, and documentation for Computer Vision.
Stars: ✭ 8,214 (+6029.85%)
Mutual labels:  object-detection, image-classification
Ko en neural machine translation
Korean-English NMT (Neural Machine Translation) with Gluon
Stars: ✭ 55 (-58.96%)
Mutual labels:  mxnet, gluon
Efficientnet
Gluon implementation of EfficientNet and EfficientNet-lite
Stars: ✭ 30 (-77.61%)
Mutual labels:  mxnet, gluon
Aws Machine Learning University Accelerated Cv
Machine Learning University: Accelerated Computer Vision Class
Stars: ✭ 1,068 (+697.01%)
Mutual labels:  mxnet, gluon
Ya mxdet
Yet Another MXnet DETection
Stars: ✭ 61 (-54.48%)
Mutual labels:  object-detection, mxnet
Sockeye
Sequence-to-sequence framework with a focus on Neural Machine Translation based on Apache MXNet
Stars: ✭ 990 (+638.81%)
Mutual labels:  mxnet, gluon
Channel Pruning
Channel Pruning for Accelerating Very Deep Neural Networks (ICCV'17)
Stars: ✭ 979 (+630.6%)
Mutual labels:  object-detection, image-classification
Albumentations
Fast image augmentation library and an easy-to-use wrapper around other libraries. Documentation: https://albumentations.ai/docs/ Paper about the library: https://www.mdpi.com/2078-2489/11/2/125
Stars: ✭ 9,353 (+6879.85%)
Mutual labels:  object-detection, image-classification
Mxnet Im2rec tutorial
A simple tutorial introducing how to use im2rec with mx.image.ImageIter and ImageDetIter, and how to use im2rec with the COCO dataset
Stars: ✭ 97 (-27.61%)
Mutual labels:  mxnet, gluon
Mish
Official Repository for "Mish: A Self Regularized Non-Monotonic Neural Activation Function" [BMVC 2020]
Stars: ✭ 1,072 (+700%)
Mutual labels:  object-detection, image-classification
Mxnet Gluon Syncbn
MXNet Gluon Synchronized Batch Normalization Preview
Stars: ✭ 78 (-41.79%)
Mutual labels:  mxnet, gluon
Tensorflow2.0 Examples
🙄 Difficult algorithm, Simple code.
Stars: ✭ 1,397 (+942.54%)
Mutual labels:  object-detection, image-classification
Aws Machine Learning University Accelerated Nlp
Machine Learning University: Accelerated Natural Language Processing Class
Stars: ✭ 1,695 (+1164.93%)
Mutual labels:  mxnet, gluon
Pytorch Toolbelt
PyTorch extensions for fast R&D prototyping and Kaggle farming
Stars: ✭ 942 (+602.99%)
Mutual labels:  object-detection, image-classification

Work in progress: version 2.0. There will be breaking changes as per the RFC: https://github.com/apache/incubator-mxnet/issues/16167

Apache MXNet (incubating) for Deep Learning

Apache MXNet (incubating) is a deep learning framework designed for both efficiency and flexibility. It allows you to mix symbolic and imperative programming to maximize efficiency and productivity. At its core, MXNet contains a dynamic dependency scheduler that automatically parallelizes both symbolic and imperative operations on the fly. A graph optimization layer on top of that makes symbolic execution fast and memory efficient. MXNet is portable and lightweight, scaling effectively to multiple GPUs and multiple machines.

MxNet.Sharp

MxNet.Sharp is a C# binding covering the Imperative, Symbolic, and Gluon APIs with an easy-to-use interface. The Gluon library in Apache MXNet provides a clear, concise, and simple API for deep learning. It makes it easy to prototype, build, and train deep learning models without sacrificing training speed.
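
As a quick taste of the Gluon-style API, here is a minimal sketch that defines a small network, initializes it, and runs a single forward pass. The types are the same ones used in the MNIST example further down; the nd.Zeros helper used to create the dummy input batch is an assumption about the NDArray API, so adjust it to whichever array constructor your version exposes.

// Define a small feed-forward network with the Gluon Sequential container.
var net = new Sequential();
net.Add(new Dense(64, ActivationType.Relu));
net.Add(new Dense(10));

// Initialize the parameters on the CPU (Initialize takes an array of contexts,
// as in the MNIST example below).
var ctx = new[] { Context.Cpu(0) };
net.Initialize(new Xavier(magnitude: 2.24f), ctx);

// Dummy batch of 4 flattened 28x28 images (nd.Zeros is assumed here).
var x = nd.Zeros(new Shape(4, 784));

// Forward pass; the output has one row per sample and 10 class scores.
var y = net.Call(x);

The full training loop, including data iterators, autograd, and the Trainer, is shown in the Gluon MNIST Example below.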

High Level Architecture

(High-level architecture diagram)

NuGet

Install the package: Install-Package MxNet.Sharp

https://www.nuget.org/packages/MxNet.Sharp

Then add the appropriate MxNet runtime redistributable package from the tables below.

Important: Make sure your installed CUDA version matches the CUDA version of the NuGet package you install.

Check your CUDA version with the following command:

nvcc --version

You can either upgrade your CUDA install or install the MXNet package that supports your CUDA version.
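
For example, on a Windows x64 machine with CUDA 10.1 installed, a typical combination is the managed binding plus the matching runtime package (package IDs taken from the Win-x64 table below):

Install-Package MxNet.Sharp
Install-Package MxNet-CU101.Runtime.Redist

For a CPU-only setup, replace the second command with Install-Package MxNet.Runtime.Redist.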

Built against MxNet release 1.5.0: https://github.com/apache/incubator-mxnet/releases/tag/1.5.0

Win-x64 Packages

Type | Name | NuGet
MxNet-CPU | MxNet CPU Version | Install-Package MxNet.Runtime.Redist
MxNet-MKL | MxNet CPU with MKL | Install-Package MxNet-MKL.Runtime.Redist
MxNet-CU101 | MxNet for Cuda 10.1 and CuDnn 7 | Install-Package MxNet-CU101.Runtime.Redist
MxNet-CU101MKL | MxNet with MKL for Cuda 10.1 and CuDnn 7 | Install-Package MxNet-CU101MKL.Runtime.Redist
MxNet-CU100 | MxNet for Cuda 10 and CuDnn 7 | Install-Package MxNet-CU100.Runtime.Redist
MxNet-CU100MKL | MxNet with MKL for Cuda 10 and CuDnn 7 | Install-Package MxNet-CU100MKL.Runtime.Redist
MxNet-CU92 | MxNet for Cuda 9.2 and CuDnn 7 | Install-Package MxNet-CU92.Runtime.Redist
MxNet-CU92MKL | MxNet with MKL for Cuda 9.2 and CuDnn 7 | Install-Package MxNet-CU92MKL.Runtime.Redist
MxNet-CU80 | MxNet for Cuda 8.0 and CuDnn 7 | Install-Package MxNet-CU80.Runtime.Redist
MxNet-CU80MKL | MxNet with MKL for Cuda 8.0 and CuDnn 7 | Install-Package MxNet-CU80MKL.Runtime.Redist

Linux-x64 Packages

Type | Name | NuGet
MxNet-CPU | MxNet CPU Version | Install-Package MxNet.Linux.Runtime.Redist
MxNet-MKL | MxNet CPU with MKL | Install-Package MxNet-MKL.Linux.Runtime.Redist
MxNet-CU101 | MxNet for Cuda 10.1 and CuDnn 7 | Yet to publish
MxNet-CU101MKL | MxNet with MKL for Cuda 10.1 and CuDnn 7 | Yet to publish
MxNet-CU100 | MxNet for Cuda 10 and CuDnn 7 | Yet to publish
MxNet-CU100MKL | MxNet with MKL for Cuda 10 and CuDnn 7 | Yet to publish
MxNet-CU92 | MxNet for Cuda 9.2 and CuDnn 7 | Yet to publish
MxNet-CU92MKL | MxNet with MKL for Cuda 9.2 and CuDnn 7 | Yet to publish
MxNet-CU80 | MxNet for Cuda 8.0 and CuDnn 7 | Yet to publish
MxNet-CU80MKL | MxNet with MKL for Cuda 8.0 and CuDnn 7 | Yet to publish

OSX-x64 Packages

Type | Name | NuGet
MxNet-CPU | MxNet CPU Version | Yet to publish
MxNet-MKL | MxNet CPU with MKL | Yet to publish
MxNet-CU101 | MxNet for Cuda 10.1 and CuDnn 7 | Yet to publish
MxNet-CU101MKL | MxNet with MKL for Cuda 10.1 and CuDnn 7 | Yet to publish
MxNet-CU100 | MxNet for Cuda 10 and CuDnn 7 | Yet to publish
MxNet-CU100MKL | MxNet with MKL for Cuda 10 and CuDnn 7 | Yet to publish
MxNet-CU92 | MxNet for Cuda 9.2 and CuDnn 7 | Yet to publish
MxNet-CU92MKL | MxNet with MKL for Cuda 9.2 and CuDnn 7 | Yet to publish
MxNet-CU80 | MxNet for Cuda 8.0 and CuDnn 7 | Yet to publish
MxNet-CU80MKL | MxNet with MKL for Cuda 8.0 and CuDnn 7 | Yet to publish

Gluon MNIST Example

Based on the Python tutorial: https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/image/mnist.html

var mnist = TestUtils.GetMNIST(); //Get the MNIST dataset, it will download if not found
var batch_size = 200; //Set training batch size
var train_data = new NDArrayIter(mnist["train_data"], mnist["train_label"], batch_size, true);
var val_data = new NDArrayIter(mnist["test_data"], mnist["test_label"], batch_size);

// Define simple network with dense layers
var net = new Sequential();
net.Add(new Dense(128, ActivationType.Relu));
net.Add(new Dense(64, ActivationType.Relu));
net.Add(new Dense(10));

//Set context, multi-gpu supported
var gpus = TestUtils.ListGpus();
var ctx = gpus.Count > 0 ? gpus.Select(x => Context.Gpu(x)).ToArray() : new[] {Context.Cpu(0)};

//Initialize the weights
net.Initialize(new Xavier(magnitude: 2.24f), ctx);

//Create the trainer with all the network parameters and set the optimizer
var trainer = new Trainer(net.CollectParams(), new Adam());

var epoch = 10;
var metric = new Accuracy(); //Use Accuracy as the evaluation metric.
var softmax_cross_entropy_loss = new SoftmaxCELoss();
float lossVal = 0; //For loss calculation
for (var iter = 0; iter < epoch; iter++)
{
    var tic = DateTime.Now;
    // Reset the train data iterator.
    train_data.Reset();
    lossVal = 0;

    // Loop over the train data iterator.
    while (!train_data.End())
    {
        var batch = train_data.Next();

        // Splits train data into multiple slices along batch_axis
        // and copy each slice into a context.
        var data = Utils.SplitAndLoad(batch.Data[0], ctx, batch_axis: 0);

        // Splits train labels into multiple slices along batch_axis
        // and copy each slice into a context.
        var label = Utils.SplitAndLoad(batch.Label[0], ctx, batch_axis: 0);

        var outputs = new NDArrayList();

        // Inside training scope
        using (var ag = Autograd.Record())
        {
            outputs = Enumerable.Zip(data, label, (x, y) =>
            {
                var z = net.Call(x);

                // Computes softmax cross entropy loss.
                NDArray loss = softmax_cross_entropy_loss.Call(z, y);

                // Backpropagate the error for one iteration.
                loss.Backward();
                lossVal += loss.Mean();
                return z;
            }).ToList();
        }

        // Updates internal evaluation
        metric.Update(label, outputs.ToArray());

        // Make one step of parameter update. Trainer needs to know the
        // batch size of data to normalize the gradient by 1/batch_size.
        trainer.Step(batch.Data[0].Shape[0]);
    }

    var toc = DateTime.Now;

    // Gets the evaluation result.
    var (name, acc) = metric.Get();

    // Reset evaluation result to initial state.
    metric.Reset();
    Console.Write($"Loss: {lossVal} ");
    Console.WriteLine($"Training acc at epoch {iter}: {name}={(acc * 100).ToString("0.##")}%, Duration: {(toc - tic).TotalSeconds.ToString("0.#")}s");
}

The model reaches an accuracy of 98% by the 6th epoch.
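
The val_data iterator created at the top of the example is not consumed by the training loop. A validation pass after training could look like the sketch below, reusing the same net, ctx, and Utils.SplitAndLoad helpers from the example above; it assumes the result of net.Call converts to NDArray the same way it does in the loss computation, so treat it as an illustrative sketch rather than tested code.

// Evaluate the trained network on the held-out MNIST test set.
var valMetric = new Accuracy();
val_data.Reset();
while (!val_data.End())
{
    var batch = val_data.Next();

    // Slice the batch across devices, exactly as in the training loop.
    var data = Utils.SplitAndLoad(batch.Data[0], ctx, batch_axis: 0);
    var label = Utils.SplitAndLoad(batch.Label[0], ctx, batch_axis: 0);

    // Forward pass only; no Autograd.Record, so no gradients are tracked.
    var outputs = new NDArrayList();
    foreach (var x in data)
        outputs.Add(net.Call(x));

    valMetric.Update(label, outputs.ToArray());
}

var (valName, valAcc) = valMetric.Get();
Console.WriteLine($"Validation {valName}={(valAcc * 100).ToString("0.##")}%");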


Documentation (In Progress)

https://mxnet.tech-quantum.com/
