tariqdaouda / Mariana

Licence: apache-2.0
The Cutest Deep Learning Framework which is also a wonderful Declarative Language

Programming Languages

javascript
184084 projects - #8 most used programming language
python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Mariana

Free Ai Resources
🚀 FREE AI Resources - 🎓 Courses, 👷 Jobs, 📝 Blogs, 🔬 AI Research, and many more - for everyone!
Stars: ✭ 192 (+27.15%)
Mutual labels:  artificial-intelligence, data-science, deep-neural-networks, machine-learning-algorithms, machinelearning, artificial-neural-networks
Letslearnai.github.io
Lets Learn AI
Stars: ✭ 33 (-78.15%)
Mutual labels:  artificial-intelligence, deep-neural-networks, machine-learning-algorithms, machinelearning, deep-learning-algorithms
Deeplearning.ai
deeplearning.ai , By Andrew Ng, All video link
Stars: ✭ 625 (+313.91%)
Mutual labels:  artificial-intelligence, deep-neural-networks, deeplearning, artificial-neural-networks, deep-learning-algorithms
Java Deep Learning Cookbook
Code for Java Deep Learning Cookbook
Stars: ✭ 156 (+3.31%)
Mutual labels:  artificial-intelligence, deeplearning, machinelearning, artificial-neural-networks
Awesome Deep Learning And Machine Learning Questions
[Irregularly updated] A collection of valuable questions related to deep learning, machine learning, reinforcement learning, and data science, gathered from sites such as Zhihu, Quora, Reddit, and Stack Exchange.
Stars: ✭ 203 (+34.44%)
Mutual labels:  deeplearning, machine-learning-algorithms, machinelearning, deep-learning-algorithms
Best ai paper 2020
A curated list of the latest breakthroughs in AI by release date with a clear video explanation, link to a more in-depth article, and code
Stars: ✭ 2,140 (+1317.22%)
Mutual labels:  artificial-intelligence, deep-neural-networks, deeplearning, machinelearning
Real Time Ml Project
A curated list of applied machine learning and data science notebooks and libraries across different industries.
Stars: ✭ 143 (-5.3%)
Mutual labels:  deeplearning, machine-learning-algorithms, machinelearning, theano
Echotorch
A Python toolkit for Reservoir Computing and Echo State Network experimentation based on PyTorch. EchoTorch is the only Python module available to easily create Deep Reservoir Computing models.
Stars: ✭ 231 (+52.98%)
Mutual labels:  artificial-intelligence, machine-learning-algorithms, machinelearning, artificial-neural-networks
datascience-mashup
In this repo I will try to gather all of the projects related to data science with clean datasets and high accuracy models to solve real world problems.
Stars: ✭ 36 (-76.16%)
Mutual labels:  machine-learning-algorithms, deep-learning-algorithms, machinelearning, deeplearning
Learn Data Science For Free
This repository is a combination of different resources lying scattered all over the internet. The reason for making such a repository is to combine all the valuable resources in a sequential manner, so that it helps every beginner who is in search of a free and structured learning resource for Data Science. For Constant Updates Follow me in …
Stars: ✭ 4,757 (+3050.33%)
Mutual labels:  artificial-intelligence, data-science, deeplearning, machine-learning-algorithms
Igel
a delightful machine learning tool that allows you to train, test, and use models without writing code
Stars: ✭ 2,956 (+1857.62%)
Mutual labels:  artificial-intelligence, data-science, machine-learning-algorithms, machinelearning
Echo
Python package containing all custom layers used in Neural Networks (Compatible with PyTorch, TensorFlow and MegEngine)
Stars: ✭ 126 (-16.56%)
Mutual labels:  deep-neural-networks, deeplearning, machine-learning-algorithms, deep-learning-algorithms
Har Keras Cnn
Human Activity Recognition (HAR) with 1D Convolutional Neural Network in Python and Keras
Stars: ✭ 97 (-35.76%)
Mutual labels:  artificial-intelligence, data-science, deep-neural-networks, deeplearning
25daysinmachinelearning
I will update this repository to learn Machine learning with python with statistics content and materials
Stars: ✭ 53 (-64.9%)
Mutual labels:  data-science, machine-learning-algorithms, machinelearning
Php Ml
PHP-ML - Machine Learning library for PHP
Stars: ✭ 7,900 (+5131.79%)
Mutual labels:  artificial-intelligence, data-science, machine-learning-algorithms
Pycm
Multi-class confusion matrix library in Python
Stars: ✭ 1,076 (+612.58%)
Mutual labels:  artificial-intelligence, data-science, deeplearning
Mit Deep Learning
Tutorials, assignments, and competitions for MIT Deep Learning related courses.
Stars: ✭ 8,912 (+5801.99%)
Mutual labels:  artificial-intelligence, data-science, deeplearning
Ludwig
Data-centric declarative deep learning framework
Stars: ✭ 8,018 (+5209.93%)
Mutual labels:  deep-neural-networks, deeplearning, machinelearning
My Journey In The Data Science World
📢 Ready to learn or review your knowledge!
Stars: ✭ 1,175 (+678.15%)
Mutual labels:  data-science, deep-neural-networks, deep-learning-algorithms
Awesome Quantum Machine Learning
Here you can get all the Quantum Machine Learning basics, algorithms, study materials, projects, and the descriptions of the projects around the web
Stars: ✭ 1,940 (+1184.77%)
Mutual labels:  artificial-intelligence, machine-learning-algorithms, artificial-neural-networks

What will happen now that Theano is no longer developed?

Mariana works! I still use it almost every day.

I am still taking care of maintenance and may still add some minor features. For the future, the most straightforward path would be a complete port to TensorFlow or PyTorch. Let me know if you'd like to help!

T.

.. image:: https://github.com/tariqdaouda/Mariana/blob/master/MarianaLogo.png

Logo by `Sawssan Kaddoura`_.

.. _Sawssan Kaddoura: http://sawssankaddoura.com

Mariana V1 is here_

.. _here: https://github.com/tariqdaouda/Mariana/tree/master

MARIANA: The Cutest Deep Learning Framework

.. image:: https://img.shields.io/badge/python-2.7-blue.svg

MARIANA V2.

Mariana is meant to be an efficient language through which complex deep neural networks can be easily expressed and easily manipulated. It is simple enough for beginners, intuitive and user-friendly, yet flexible enough for research. It is here to empower researchers, teachers and students alike, while greatly facilitating the transfer of AI knowledge into other domains.

Mariana is also compatible with Google's GPUs and Colab. For a basic running example, have a look at this notebook on colab_.

.. _colab: https://colab.research.google.com/github/tariqdaouda/Mariana/blob/V2-dev/Mariana_basics_example.ipynb

.. code:: python

import Mariana.layers as ML
import Mariana.scenari as MS
import Mariana.costs as MC
import Mariana.activations as MA
import Mariana.regularizations as MR

import Mariana.settings as MSET

ls = MS.GradientDescent(lr = 0.01, momentum=0.9)
cost = MC.NegativeLogLikelihood()

inp = ML.Input(28*28, name = "InputLayer")
h1 = ML.Hidden(300, activation = MA.ReLU(), name = "Hidden1", regularizations = [ MR.L1(0.0001) ])
h2 = ML.Hidden(300, activation = MA.ReLU(), name = "Hidden2", regularizations = [ MR.L1(0.0001) ])
o = ML.SoftmaxClassifier(10, learningScenari = [ls], cost = cost, name = "Probabilities")

#Connecting layers
inp > h1 > h2
concat = ML.C([inp, h2])

MLP_skip = concat > o
MLP_skip.init()

#Visualizing
MLP_skip.saveHTML("mySkipMLP")

# Training; train_set/test_set are assumed to be (inputs, targets) pairs, e.g. MNIST
for i in xrange(1000) :
    MLP_skip["Probabilities"].train({"InputLayer.inputs": train_set[0], "Probabilities.targets": train_set[1]})

# Testing
print MLP_skip["Probabilities"].test({"InputLayer.inputs": test_set[0], "Probabilities.targets": test_set[1]})

V2's most exciting stuff

V2 is almost a complete rewrite of V1, and it is much better.

What's done

  • New built-in visualization: An interactive visualization that shows the architecture along with parameters, hyper-parameters, and user-defined notes. A great tool for collaboration.
  • Function mixins (my favorite): Mariana functions can now be added together! The result is a function that performs the actions of all its components at once. Let's say we have two output layers and want a single function that optimises the losses of both outputs. Creating it is as simple as: f = out1.train + out2.train, and then calling f. Mariana derives f from both functions, adding the costs and computing gradients and updates seamlessly in the background (see the sketch after this list).
  • Easy access to gradients and updates: Just call .getGradients() or .getUpdates() on any function to get a view of either the gradients or the updates for all parameters (also shown in the sketch after this list).
  • User friendly error messages: Functions will tell you what arguments they expect.
  • Very, very clean code with streams!: You have probably heard of batchnorm and how it behaves differently in training and in testing. That simple fact can be the cause of some very messy DL code. With streams, all of this is over. Streams are parallel universes of execution for functions. You can define your own streams and have as many as you want. For batchnorm it means that the behaviour depends on the stream (test or train) you call your function in, even though you only changed one word.
  • Chainable optimization rules: As in the previous version, layers inherit their learning scenari from outputs but can redefine them. This is still true, but rules can now be chained. Here's how to define a layer with a fixed bias: l = Dense(learningScenari=[GradientDescent(lr = 0.1), Fixed('b')])
  • Just in time function compilation: All functions (including mixins) are only compiled if needed.
  • Lasagne compatible: Every Lasagne layer can be seamlessly imported and used in Mariana
  • Convolutions, deconvolutions (transposed convolutions), all sorts of convolutions...
  • Much easier to extend: The (almost) complete rewrite made for much cleaner code that is much easier to extend. It is now much simpler to create your own layers, decorators, etc. Functions that you need to implement end with _abs, and Mariana has a whole new bunch of custom types that support streams.
  • New merge layer: Need a layer that is a linear combination of other layers? The new merge layer is perfect for that: newLayer = M(layer1 + (layer2 * layer3) + 4)
  • New concatenation layer: newLayer = C([layer1, layer2])
  • Unlimited number of inputs per layer: Each layer used to be limited to a single input. Now the number is unlimited.
  • Abstractions are now divided into trainable (layers, decorators, activations) and untrainable (scenari, costs, initializations): All trainable abstractions can hold parameters and have untrainable abstractions applied to them. PReLU will finally join ReLU as an activation!
  • Fancy ways to go downhill: Adam, Adagrad, ...
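
To make the mixin and gradient bullets above concrete, here is a minimal sketch. It is not verified V2 API: it reuses only calls shown elsewhere in this README (the layer constructors, > connections, .train, .getGradients(), .getUpdates()); the two-output wiring and the argument names passed to the mixed function are assumptions that follow the "Layer.slot" convention of the main example.

.. code:: python

import Mariana.layers as ML
import Mariana.scenari as MS
import Mariana.costs as MC
import Mariana.activations as MA

ls = MS.GradientDescent(lr = 0.01)
cost = MC.NegativeLogLikelihood()

# One shared hidden layer feeding two output layers (the sizes are arbitrary here)
inp = ML.Input(28*28, name = "InputLayer")
h1 = ML.Hidden(300, activation = MA.ReLU(), name = "Hidden1")
out1 = ML.SoftmaxClassifier(10, learningScenari = [ls], cost = cost, name = "Out1")
out2 = ML.SoftmaxClassifier(10, learningScenari = [ls], cost = cost, name = "Out2")

# Assumed wiring for a two-output graph, mirroring the skip-MLP example above
inp > h1 > out1
model = h1 > out2
model.init()

# Mixin: a single function that optimises the losses of both outputs at once
f = out1.train + out2.train

# The argument names are an assumption, mirroring the "Layer.slot" convention above
f({"InputLayer.inputs": train_set[0], "Out1.targets": train_set[1], "Out2.targets": train_set[1]})

# Views of the gradients and updates for all parameters
print f.getGradients()
print f.getUpdates()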

What's almost done

  • Inclusion of popular recurrences (LSTMs, recurrent layers, ...)

What's next

  • Complete refactoring of training encapsulation. Training encapsulation was the least popular aspect of Mariana so far. I will completely rewrite it to give it the same level of intuitiveness as the rest of the framework. The next iteration will be a huge improvement.
  • Arbitrary recurrences in the graph