Barqawiz / Shakkala

Deep learning for Arabic text vocalization (التشكيل الالي للنصوص العربية)


Shakkala Project V2.0 (مشروع شكّالة)


Introduction

The Shakkala project uses a recurrent neural network for Arabic text vocalization, automatically adding diacritics to Arabic characters (تشكيل الحروف). The model can be used in many applications, such as enhancing text-to-speech systems or improving search results.

Requirements

Execute the following commands:

cd requirements
pip install -r requirements.txt

Code Examples (How to)

Check the full example in the demo.py file.

  1. Create a Shakkala object:
sh = Shakkala(folder_location, version={version_num})
  2. Prepare the input:
input_int = sh.prepare_input(input_text)
  3. Call the neural network:
model, graph = sh.get_model()
with graph.as_default():
  logits = model.predict(input_int)[0]
  4. Predict the output:
predicted_harakat = sh.logits_to_text(logits)
final_output = sh.get_final_text(input_text, predicted_harakat)
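
Conceptually, steps 3 and 4 pick the most likely diacritic class for each input character and interleave the result with the original text. The following is a simplified, self-contained sketch of that idea; the names, class list, and logit values are illustrative only and are not Shakkala's internals:

```python
# Map each character's predicted class to a diacritic, then merge
# the diacritics back into the input text. Illustrative names only.

HARAKAT = ["", "\u064E", "\u064F", "\u0650", "\u0652"]  # none, fatha, damma, kasra, sukun

def logits_to_harakat(logits):
    """Pick the most likely diacritic class for each character."""
    return [HARAKAT[max(range(len(row)), key=row.__getitem__)] for row in logits]

def merge_text(text, harakat):
    """Interleave each input character with its predicted diacritic."""
    return "".join(ch + h for ch, h in zip(text, harakat))

# Fake "logits": one probability row per input character.
text = "كتب"
logits = [
    [0.1, 0.7, 0.1, 0.05, 0.05],  # fatha most likely
    [0.1, 0.6, 0.1, 0.1, 0.1],    # fatha
    [0.1, 0.7, 0.1, 0.05, 0.05],  # fatha
]
print(merge_text(text, logits_to_harakat(logits)))  # كَتَبَ
```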

Available models:

  • version_num=1: First test of the solution.
  • version_num=2: Main release version.
  • version_num=3: Some enhancements over version 2.

It is worth trying both version_num=2 and version_num=3.

Performance Tips

Shakkala is built in an object-oriented way so the model can be loaded into memory once for faster prediction. To make sure you don't load it multiple times in your service or application, follow these steps:

  • Load the model in global variable:
sh = Shakkala(folder_location, version={version_num})
model, graph = sh.get_model()
  • Then inside your request function or loop add:
input_int = sh.prepare_input(input_text)
with graph.as_default():
  logits = model.predict(input_int)[0]
predicted_harakat = sh.logits_to_text(logits)
final_output = sh.get_final_text(input_text, predicted_harakat)
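
The same load-once idea can be expressed with a cached loader, so any code path that asks for the model gets the already-loaded instance. A minimal sketch using the standard library; `load_model` here is a hypothetical stand-in for Shakkala's get_model(), not part of the library:

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def load_model(version):
    """Load the model on the first call; reuse the cached instance after."""
    print(f"loading model v{version}")   # runs only once per version
    return {"version": version}          # placeholder for the real model object

m1 = load_model(2)
m2 = load_model(2)
assert m1 is m2  # second call reused the cached instance
```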

Accuracy

In this beta version 2, accuracy reached up to 95%, and on some data it reaches more, depending on complexity and data distribution. This beta version was trained on more than a million sentences, the majority historical Arabic data from books, plus some of the vocalized modern data available on the internet.

(Training history plot)

Prediction Example

For a live demo based on the Shakkala library, click the link.

Real output | Predicted output
فَإِنْ لَمْ يَكُونَا كَذَلِكَ أَتَى بِمَا يَقْتَضِيهِ الْحَالُ وَهَذَا أَوْلَى | فَإِنْ لَمْ يَكُونَا كَذَلِكَ أَتَى بِمَا يَقْتَضِيهِ الْحَالُ وَهَذَا أَوْلَى
قَالَ الْإِسْنَوِيُّ وَسَوَاءٌ فِيمَا قَالُوهُ مَاتَ فِي حَيَاةِ أَبَوَيْهِ أَمْ لَا | قَالَ الْإِسْنَوِيُّ وَسَوَاءٌ فِيمَا قَالُوهُ مَاتَ فِي حَيَاةِ أَبَوَيْهِ أَمْ لَا
طَابِعَةٌ ثُلَاثِيَّةُ الْأَبْعَاد | طَابِعَةٌ ثَلَاثِيَّةُ الْأَبْعَادِ
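
Accuracy figures like the one above can be computed by comparing, letter by letter, the diacritics in the reference text against the predicted text. A hedged sketch of one such measurement using only Python's standard unicodedata module; this is not Shakkala's evaluation code:

```python
# Compare only the combining marks (harakat) attached to each base letter.
import unicodedata

def harakat_per_letter(text):
    """Group combining marks under their preceding base letter."""
    groups = []
    for ch in text:
        if unicodedata.combining(ch):
            if groups:
                groups[-1] += ch
        else:
            groups.append("")
    return groups

def diacritic_accuracy(reference, predicted):
    """Fraction of letters whose diacritics match exactly."""
    ref, pred = harakat_per_letter(reference), harakat_per_letter(predicted)
    matches = sum(r == p for r, p in zip(ref, pred))
    return matches / max(len(ref), 1)

print(diacritic_accuracy("كَتَبَ", "كَتَبُ"))  # 2 of 3 letters match
```

Note this simple version assumes the two strings have the same base letters in the same order; a real evaluation would also need to handle alignment and multi-mark cases (e.g. shadda plus a vowel).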

Accuracy Enhancements

The model can be enhanced to reach more than 95% accuracy with the following:

  • Availability of more vocalized modern data to train the network (the current version was trained mostly on available historical Arabic data, with some modern data).
  • Stacking different models.

Model Design

(Model architecture diagram)

References

Citation

For academic work, use:

Shakkala, Arabic text vocalization, Barqawi & Zerrouki

Or in BibTeX format:

@misc{
  title={Shakkala, Arabic text vocalization},
  author={Barqawi, Zerrouki},
  url={https://github.com/Barqawiz/Shakkala},
  year={2017}
}

Contribution

Core Team

  1. Ahmad Barqawi: Neural network developer.
  2. Taha Zerrouki: Mentor; data and results.

Contributors

  1. Zaid Farekh & propellerinc.me: Provided infrastructure and consultation support.
  2. Mohammad Issam Aklik: Artist.
  3. Brahim Sidi: Formed new sentences.
  4. Fadi Bakoura: Aggregated online content.
  5. Ola Ghanem: Testing.
  6. Ali Hamdi Ali Fadel: Contributed code.

License

Free to use and distribute; just mention the original project name Shakkala as the base model.

The MIT License (MIT)

Copyright (c) 2017 Shakkala Project

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
