
JetBrains / Kotlindl

License: Apache-2.0
High-level Deep Learning Framework written in Kotlin and inspired by Keras

Programming Languages

Kotlin (9241 projects)

Projects that are alternatives to, or similar to, KotlinDL

Clearml Agent
ClearML Agent - ML-Ops made easy. ML-Ops scheduler & orchestration solution
Stars: ✭ 117 (-66.95%)
Mutual labels:  gpu, deeplearning
Gbrain
GPU Javascript Library for Machine Learning
Stars: ✭ 48 (-86.44%)
Mutual labels:  gpu, deeplearning
Bmw Yolov4 Inference Api Gpu
This is a repository for a no-code object detection inference API using the YOLOv3 and YOLOv4 Darknet frameworks.
Stars: ✭ 237 (-33.05%)
Mutual labels:  gpu, deeplearning
Keras Multiple Process Prediction
A simple example showing how to use a Keras model in multiple processes for prediction
Stars: ✭ 107 (-69.77%)
Mutual labels:  gpu, deeplearning
Curl
CURL: Contrastive Unsupervised Representation Learning for Sample-Efficient Reinforcement Learning
Stars: ✭ 346 (-2.26%)
Mutual labels:  gpu, deeplearning
Deeplearning4j
Suite of tools for deploying and training deep learning models using the JVM. Highlights include model import for keras, tensorflow, and onnx/pytorch, a modular and tiny c++ library for running math code and a java based math library on top of the core c++ library. Also includes samediff: a pytorch/tensorflow like library for running deep learni…
Stars: ✭ 12,277 (+3368.08%)
Mutual labels:  gpu, deeplearning
Deep Learning In Cloud
List of Deep Learning Cloud Providers
Stars: ✭ 298 (-15.82%)
Mutual labels:  gpu, deeplearning
Mbpmid2010 gpufix
MBPMid2010_GPUFix is a utility program that fixes the MacBook Pro (15-inch, Mid 2010) intermittent black screen or loss of video. The algorithm is based on a solution provided by user fabioroberto on the MacRumors forums.
Stars: ✭ 334 (-5.65%)
Mutual labels:  gpu
Pytorchzerotoall
Simple PyTorch Tutorials Zero to ALL!
Stars: ✭ 3,586 (+912.99%)
Mutual labels:  deeplearning
Arrayfire
ArrayFire: a general purpose GPU library.
Stars: ✭ 3,693 (+943.22%)
Mutual labels:  gpu
Pixellib
Visit PixelLib's official documentation https://pixellib.readthedocs.io/en/latest/
Stars: ✭ 327 (-7.63%)
Mutual labels:  deeplearning
Nvptx
How to: Run Rust code on your NVIDIA GPU
Stars: ✭ 335 (-5.37%)
Mutual labels:  gpu
T81 558 deep learning
Washington University (in St. Louis) Course T81-558: Applications of Deep Neural Networks
Stars: ✭ 4,152 (+1072.88%)
Mutual labels:  deeplearning
Caffe64
No dependency caffe replacement
Stars: ✭ 335 (-5.37%)
Mutual labels:  deeplearning
Magnet
Deep Learning Projects that Build Themselves
Stars: ✭ 351 (-0.85%)
Mutual labels:  deeplearning
Agi
Android GPU Inspector
Stars: ✭ 327 (-7.63%)
Mutual labels:  gpu
Qpulib
Language and compiler for the Raspberry Pi GPU
Stars: ✭ 357 (+0.85%)
Mutual labels:  gpu
Action Recognition Visual Attention
Action recognition using soft attention based deep recurrent neural networks
Stars: ✭ 350 (-1.13%)
Mutual labels:  deeplearning
Rendu
A simple realtime graphics playground for experimentations.
Stars: ✭ 343 (-3.11%)
Mutual labels:  gpu
Gpu Physics Unity
Through this configuration, no per voxel data is transferred between the GPU and the CPU at runtime.
Stars: ✭ 342 (-3.39%)
Mutual labels:  gpu

KotlinDL: High-level Deep Learning API in Kotlin (official JetBrains project)


KotlinDL is a high-level Deep Learning API written in Kotlin and inspired by Keras. Under the hood, it uses the TensorFlow Java API. KotlinDL offers simple APIs for training deep learning models from scratch, importing existing Keras models for inference, and applying transfer learning to adapt existing pre-trained models to your tasks.

This project aims to make Deep Learning easier for JVM developers, and to simplify deploying deep learning models in JVM production environments.

Here's what the classic convolutional neural network LeNet-5 looks like in KotlinDL:

// NUMBER_OF_CLASSES, the *_ARCHIVE dataset paths, and the extractImages/extractLabels
// helpers used below are defined elsewhere in the full example.
private const val EPOCHS = 3
private const val TRAINING_BATCH_SIZE = 1000
private const val NUM_CHANNELS = 1L
private const val IMAGE_SIZE = 28L
private const val SEED = 12L
private const val TEST_BATCH_SIZE = 1000

private val lenet5Classic = Sequential.of(
    Input(
        IMAGE_SIZE,
        IMAGE_SIZE,
        NUM_CHANNELS
    ),
    Conv2D(
        filters = 6,
        kernelSize = longArrayOf(5, 5),
        strides = longArrayOf(1, 1, 1, 1),
        activation = Activations.Tanh,
        kernelInitializer = GlorotNormal(SEED),
        biasInitializer = Zeros(),
        padding = ConvPadding.SAME
    ),
    AvgPool2D(
        poolSize = intArrayOf(1, 2, 2, 1),
        strides = intArrayOf(1, 2, 2, 1),
        padding = ConvPadding.VALID
    ),
    Conv2D(
        filters = 16,
        kernelSize = longArrayOf(5, 5),
        strides = longArrayOf(1, 1, 1, 1),
        activation = Activations.Tanh,
        kernelInitializer = GlorotNormal(SEED),
        biasInitializer = Zeros(),
        padding = ConvPadding.SAME
    ),
    AvgPool2D(
        poolSize = intArrayOf(1, 2, 2, 1),
        strides = intArrayOf(1, 2, 2, 1),
        padding = ConvPadding.VALID
    ),
    Flatten(), // 3136
    Dense(
        outputSize = 120,
        activation = Activations.Tanh,
        kernelInitializer = GlorotNormal(SEED),
        biasInitializer = Constant(0.1f)
    ),
    Dense(
        outputSize = 84,
        activation = Activations.Tanh,
        kernelInitializer = GlorotNormal(SEED),
        biasInitializer = Constant(0.1f)
    ),
    Dense(
        outputSize = NUMBER_OF_CLASSES,
        activation = Activations.Linear,
        kernelInitializer = GlorotNormal(SEED),
        biasInitializer = Constant(0.1f)
    )
)

fun main() {
    val (train, test) = Dataset.createTrainAndTestDatasets(
        TRAIN_IMAGES_ARCHIVE,
        TRAIN_LABELS_ARCHIVE,
        TEST_IMAGES_ARCHIVE,
        TEST_LABELS_ARCHIVE,
        NUMBER_OF_CLASSES,
        ::extractImages,
        ::extractLabels
    )

    lenet5Classic.use {
        it.compile(
            optimizer = Adam(clipGradient = ClipGradientByValue(0.1f)),
            loss = Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS,
            metric = Metrics.ACCURACY
        )

        it.summary()

        it.fit(dataset = train, epochs = EPOCHS, batchSize = TRAINING_BATCH_SIZE)

        val accuracy = it.evaluate(dataset = test, batchSize = TEST_BATCH_SIZE).metrics[Metrics.ACCURACY]

        println("Accuracy: $accuracy")
    }
}
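The loss used above, `Losses.SOFT_MAX_CROSS_ENTROPY_WITH_LOGITS`, combines a softmax over the raw output logits with a cross-entropy against the true label. As a rough plain-Kotlin sketch of the arithmetic (not KotlinDL's actual implementation, which runs inside TensorFlow):

```kotlin
import kotlin.math.exp
import kotlin.math.ln

// Softmax over raw logits; the max is subtracted first for numerical stability.
fun softmax(logits: DoubleArray): DoubleArray {
    val max = logits.maxOrNull() ?: 0.0
    val exps = logits.map { exp(it - max) }
    val sum = exps.sum()
    return exps.map { it / sum }.toDoubleArray()
}

// Cross-entropy of the softmax distribution against the true class index.
fun softmaxCrossEntropyWithLogits(logits: DoubleArray, trueClass: Int): Double =
    -ln(softmax(logits)[trueClass])
```

For uniform logits over n classes this loss is ln(n), which is also roughly the starting loss you should expect from an untrained classifier.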

TensorFlow Engine

KotlinDL is built on top of the TensorFlow 1.15 Java API. The Java API for TensorFlow 2.x recently had its first public release, and this project will switch to it in the near future. This, however, does not affect the high-level API.

Limitations

Currently, only a limited set of deep learning architectures is supported. Here's the list of available layers:

  • Input()
  • Flatten()
  • Dense()
  • Dropout()
  • Conv2D()
  • MaxPool2D()
  • AvgPool2D()

KotlinDL supports model inference in JVM backend applications; Android support is coming in later releases.

How to configure KotlinDL in your project

To use KotlinDL in your project, you need to add the following dependency to your build.gradle file:

   repositories {
       jcenter()
       maven {
           url "https://kotlin.bintray.com/kotlin-datascience"
       }
   }

   dependencies {
       implementation 'org.jetbrains.kotlin-deeplearning:api:[KOTLIN-DL-VERSION]'
   }
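For Kotlin DSL builds, a build.gradle.kts sketch equivalent to the snippet above might look like this (same version placeholder; see the Quick Start Guide for the canonical form):

```kotlin
repositories {
    jcenter()
    maven(url = "https://kotlin.bintray.com/kotlin-datascience")
}

dependencies {
    implementation("org.jetbrains.kotlin-deeplearning:api:[KOTLIN-DL-VERSION]")
}
```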

The latest KotlinDL version is 0.1.1. For more details, as well as for pom.xml and build.gradle.kts examples, please refer to the Quick Start Guide.

Working with KotlinDL in Jupyter Notebook

You can work with KotlinDL interactively in Jupyter Notebook with Kotlin kernel. To do so, add the following dependency in your notebook:

   @file:Repository("https://kotlin.bintray.com/kotlin-datascience")
   @file:DependsOn("org.jetbrains.kotlin-deeplearning:api:[KOTLIN-DL-VERSION]")

For more details on how to install Jupyter Notebook and add Kotlin kernel, check out the Quick Start Guide.

Examples and tutorials

You do not need any prior deep learning experience to start using KotlinDL. We are working on extensive documentation to help you get started. In the meantime, feel free to check out the following tutorials:

For more inspiration, take a look at the code examples in this repo.

Running KotlinDL on GPU

To run training and inference on a GPU, please read the TensorFlow GPU Support page and install the CUDA framework.

Note that only NVIDIA devices are supported.

You will also need to add the following dependencies to your project if you wish to leverage the GPU:

  compile 'org.tensorflow:libtensorflow:1.15.0'
  compile 'org.tensorflow:libtensorflow_jni_gpu:1.15.0'

On Windows, the following distributions are required:

Logging

By default, the API module uses the kotlin-logging library to keep the logging process separate from any specific logger implementation.

You can use any widely known JVM logging library with a Simple Logging Facade for Java (SLF4J) implementation, such as Logback or Log4j/Log4j2.

You will also need to add the following dependencies, and the configuration file log4j2.xml to the src/main/resources folder in your project, if you wish to use Log4j2:

  compile 'org.apache.logging.log4j:log4j-api:2.14.0'
  compile 'org.apache.logging.log4j:log4j-core:2.14.0'
  compile 'org.apache.logging.log4j:log4j-slf4j-impl:2.14.0'
<Configuration status="WARN">
    <Appenders>
        <Console name="STDOUT" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="debug">
            <AppenderRef ref="STDOUT" level="DEBUG"/>
        </Root>
    </Loggers>
</Configuration>

or the following dependency, and the configuration file logback.xml to the src/main/resources folder in your project, if you wish to use Logback:

  compile 'ch.qos.logback:logback-classic:1.2.3'
<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="info">
        <appender-ref ref="STDOUT"/>
    </root>
</configuration>

These configuration files can be found in the Examples module.

Fat Jar issue

There is a known Stack Overflow question and TensorFlow issue concerning Fat Jar creation and execution (on Amazon EC2 instances, for example).

java.lang.UnsatisfiedLinkError: /tmp/tensorflow_native_libraries-1562914806051-0/libtensorflow_jni.so: libtensorflow_framework.so.1: cannot open shared object file: No such file or directory

Despite the fact that the bug describing this problem was closed in the TensorFlow 1.14 release, it was not fully fixed and requires an additional line in the build script.

One simple solution is to add a TensorFlow version specification to the Jar's manifest. Below is an example of a Gradle build task for Fat Jar creation.

task fatJar(type: Jar) {
    manifest {
        attributes 'Implementation-Version': '1.15'
    }
    classifier = 'all'
    from { configurations.runtimeClasspath.collect { it.isDirectory() ? it : zipTree(it) } }
    with jar
}
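For builds using the Gradle Kotlin DSL, the same workaround can be sketched as follows (assuming a recent Gradle, where `archiveClassifier` replaces the deprecated `classifier` property):

```kotlin
// Hypothetical build.gradle.kts counterpart of the Groovy task above.
tasks.register<Jar>("fatJar") {
    manifest {
        attributes("Implementation-Version" to "1.15")
    }
    archiveClassifier.set("all")
    // Unpack every runtime dependency into the jar alongside the compiled classes.
    from(configurations.runtimeClasspath.get().map { if (it.isDirectory) it else zipTree(it) })
    with(tasks.jar.get())
}
```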

Reporting issues/Support

Please use GitHub issues for filing feature requests and bug reports. You are also welcome to join the #deeplearning channel in the Kotlin Slack.

Code of Conduct

This project and the corresponding community are governed by the JetBrains Open Source and Community Code of Conduct. Please make sure you read it.

License

KotlinDL is licensed under the Apache 2.0 License.
