
keon / Codegan

[Deprecated] Source Code Generation using Sequence Generative Adversarial Networks

Programming Languages

python
139335 projects - #7 most used programming language

Projects that are alternatives of or similar to Codegan

sgrnn
Tensorflow implementation of Synthetic Gradient for RNN (LSTM)
Stars: ✭ 40 (-45.21%)
Mutual labels:  recurrent-neural-networks, rnn
Text summurization abstractive methods
Multiple implementations of abstractive text summarization, using Google Colab
Stars: ✭ 359 (+391.78%)
Mutual labels:  rnn, policy-gradient
EdgarAllanPoetry
Computer-generated poetry
Stars: ✭ 22 (-69.86%)
Mutual labels:  recurrent-neural-networks, rnn
python-machine-learning-book-2nd-edition
Code repository for <Machine Learning Textbook with Python, scikit-learn & TensorFlow> (the Korean edition of Python Machine Learning, 2nd Edition)
Stars: ✭ 60 (-17.81%)
Mutual labels:  recurrent-neural-networks, rnn
Theano Kaldi Rnn
THEANO-KALDI-RNNs is a project implementing various Recurrent Neural Networks (RNNs) for RNN-HMM speech recognition. The Theano Code is coupled with the Kaldi decoder.
Stars: ✭ 31 (-57.53%)
Mutual labels:  rnn, recurrent-neural-networks
tiny-rnn
Lightweight C++11 library for building deep recurrent neural networks
Stars: ✭ 41 (-43.84%)
Mutual labels:  recurrent-neural-networks, rnn
Rnnsharp
RNNSharp is a toolkit of deep recurrent neural networks widely used for many different kinds of tasks, such as sequence labeling and sequence-to-sequence modeling. It's written in C# and based on .NET Framework 4.6 or above. RNNSharp supports many different network types, such as forward and bi-directional networks and sequence-to-sequence networks, and different types of layers, such as LSTM, Softmax, sampled Softmax, and others.
Stars: ✭ 277 (+279.45%)
Mutual labels:  rnn, recurrent-neural-networks
modules
The official repository for our paper "Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks". We develop a method for analyzing emerging functional modularity in neural networks based on differentiable weight masks and use it to point out important issues in current-day neural networks.
Stars: ✭ 25 (-65.75%)
Mutual labels:  paper, rnn
Rgan
Recurrent (conditional) generative adversarial networks for generating real-valued time series data.
Stars: ✭ 480 (+557.53%)
Mutual labels:  paper, rnn
Tensorflow Char Rnn
Char-RNN implemented using TensorFlow.
Stars: ✭ 429 (+487.67%)
Mutual labels:  rnn, recurrent-neural-networks
automatic-personality-prediction
[AAAI 2020] Modeling Personality with Attentive Networks and Contextual Embeddings
Stars: ✭ 43 (-41.1%)
Mutual labels:  recurrent-neural-networks, rnn
Deepseqslam
The Official Deep Learning Framework for Route-based Place Recognition
Stars: ✭ 49 (-32.88%)
Mutual labels:  rnn, recurrent-neural-networks
sequence-rnn-py
Sequence analyzing using Recurrent Neural Networks (RNN) based on Keras
Stars: ✭ 28 (-61.64%)
Mutual labels:  recurrent-neural-networks, rnn
rindow-neuralnetworks
Neural networks library for machine learning on PHP
Stars: ✭ 37 (-49.32%)
Mutual labels:  recurrent-neural-networks, rnn
SpeakerDiarization RNN CNN LSTM
Speaker diarization is the problem of separating speakers in an audio recording. There could be any number of speakers, and the final result should state when each speaker starts and ends. In this project, we analyze a given audio file with 2 channels and 2 speakers (on separate channels).
Stars: ✭ 56 (-23.29%)
Mutual labels:  recurrent-neural-networks, rnn
Lstm Human Activity Recognition
Human Activity Recognition example using TensorFlow on smartphone sensors dataset and an LSTM RNN. Classifying the type of movement amongst six activity categories - Guillaume Chevalier
Stars: ✭ 2,943 (+3931.51%)
Mutual labels:  rnn, recurrent-neural-networks
Human-Activity-Recognition
Human activity recognition using TensorFlow on smartphone sensors dataset and an LSTM RNN. Classifying the type of movement amongst six categories (WALKING, WALKING_UPSTAIRS, WALKING_DOWNSTAIRS, SITTING, STANDING, LAYING).
Stars: ✭ 16 (-78.08%)
Mutual labels:  recurrent-neural-networks, rnn
VariationalNeuralAnnealing
A variational implementation of classical and quantum annealing using recurrent neural networks for the purpose of solving optimization problems.
Stars: ✭ 21 (-71.23%)
Mutual labels:  recurrent-neural-networks, rnn
Rmdl
RMDL: Random Multimodel Deep Learning for Classification
Stars: ✭ 375 (+413.7%)
Mutual labels:  rnn, recurrent-neural-networks
Predicting Myers Briggs Type Indicator With Recurrent Neural Networks
Stars: ✭ 43 (-41.1%)
Mutual labels:  rnn, recurrent-neural-networks

CodeGAN

Source Code Generation with Generative Adversarial Networks (SeqGAN)

Requirements:

  • Tensorflow r0.11
  • Cuda 7.5 or higher (for GPU)
  • nltk python package
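Since the pinned versions above are old (TensorFlow r0.11 predates the 1.0 API), it can help to verify the environment before running anything. A minimal sketch of such a check, assuming only that the packages are importable under the names `tensorflow` and `nltk`:

```python
import importlib

def check_requirement(module_name):
    """Return True if the named module can be imported, else False."""
    try:
        importlib.import_module(module_name)
        return True
    except ImportError:
        return False

# The repo's requirements; package names assumed from the list above.
for name in ["tensorflow", "nltk"]:
    status = "found" if check_requirement(name) else "MISSING"
    print(f"{name}: {status}")
```

This only confirms importability, not the exact r0.11 version, which would need a separate version-string check.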

Comparison with other Models & Experiments

Code Written by: Character Recurrent Neural Network

def media ( self  ) :
    choices = s
def cgets_to_reating_request ( _default  ) :
    charset = _errors
def field_with__ in get_language (  ) :
    if func . __iter__

import unicode_litible
self . encode ( self  ) :
    if isinstance ( value , items  ) :
    if not os . path . lower for dotage_nurn :
        pass
    except XTERTAD_MI_NUN_FITCL

@ DEFILL
self . _funacod_location . copy ( i  )
    return s

It writes text that looks like a program, but Char-RNN is clearly a bad programmer: its context only stretches far enough to produce function-shaped fragments.
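Char-RNN produces output like the sample above one character at a time, sampling each next character from a distribution conditioned on what came before. A toy sketch of that sampling loop, with a hypothetical lookup-table "model" standing in for the actual trained RNN:

```python
import random

def sample_code(model, seed="d", length=40):
    """Generate text one character at a time, Char-RNN style.
    `model` maps a character to a list of plausible next characters
    (a stand-in for the RNN's learned next-char distribution)."""
    random.seed(0)  # deterministic for demonstration
    out = [seed]
    for _ in range(length):
        nxt = model.get(out[-1])
        if not nxt:
            break  # no continuation learned for this character
        out.append(random.choice(nxt))
    return "".join(out)

# Toy "model" sketched from the string "def f():"
toy = {"d": ["e"], "e": ["f"], "f": [" ", "("], " ": ["f"], "(": [")"], ")": [":"]}
print(sample_code(toy))
```

Because each step conditions only on recent characters, long-range structure (matching parentheses, consistent identifiers) degrades quickly, which is exactly what the sample above shows.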

Code Written by: Word Recurrent Neural Network

class number_format ( _html_parser . signals . alias  ) :
    def __init__ ( self , commit = False  ) :
        widgets = value
        content = [  ]
        s = key
        self . kwargs = current_app
        self . _clean_form (  )

    def required ( self , name  ) :
        value = self . add_prefix ( name  )

    def nud ( self , filter_expr , subdir  ) :
        value = force_text ( value  )
        else :
            return self . _headers . contents tempdir . set_app ( cls  )
        if timezone . to_locale ( value  ) :
            return formats . zone msgs

As you can see, Word-RNN holds on to context longer than Char-RNN and therefore writes longer programs: it can manage class-level structure rather than just function fragments.
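Word-RNN operates over a vocabulary of word-level tokens rather than characters, which is why the samples above read as space-separated identifiers and punctuation. A rough sketch of the kind of tokenization involved (the regex and function name are illustrative, not the repository's actual preprocessing):

```python
import re

def tokenize(source):
    """Split source code into word-level tokens: identifiers,
    numbers, and single punctuation marks, roughly matching the
    space-separated style of the generated samples."""
    return re.findall(r"[A-Za-z_]\w*|\d+|[^\s\w]", source)

print(tokenize("def required(self, name):"))
# prints ['def', 'required', '(', 'self', ',', 'name', ')', ':']
```

Each token is one prediction step, so a single step covers a whole identifier instead of one character, letting the same context window span much more program structure.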

Code Written By: CodeGAN - Reinforce

Debugging...

Code Written By: CodeGAN - Policy Gradient

' , 5 : _ ( : ] for key , default = '/dev/null ' , help = 'nominates a
, 'sender ' , 'reply-to ' , 'to ' , 400 , '-a ' , action = 'store ' ,
' , ord ( ext , true ) : 
 	 	 	 	 	 	 declared_fields . choice_cache =
' ) , 'max_decimal_places ' : ungettext_lazy ( 'ensure that there are no more than % ( max ) s
) : 
 	 def __init__ ( self , * args , ** kwargs ) : 
 	 	 	
' , action = 'store_false ' , dest = 'load_initial_data ' , default = true , help = 'tells django
, ( k , { ) ) , 'max_whole_digits ' : ungettext_lazy ( 'ensure that there are no more than
, 'migrate_failure ' : { 'fg ' : 'red ' , 'opts ' : ( 'bold ' , ) }
, exclude = use_natural_foreign_keys == '' and not self . port is none : 
 	 	 	 	 	
are no more than % ( max ) s is not . ' ' . ' ) 
 collect .
' , action = 'store_false ' , dest = 'load_initial_data ' , default = false , help = 'tells =
) : 
 	 	 def __init__ ( self , * args , ** kwargs ) : 
 	 	
, 'get_language_bidi ' , 'hiddeninput ' , 'multiplehiddeninput ' , 'clearablefileinput ' , 'fileinput ' , 'dateinput ' , 'datetimeinput
' ) 
 parser . add_argument ( ' -- database ' -- 'mar ' : ( ) -- 'bpython '
, { 'fg ' : 'red ' , 'opts ' : ( 'bold ' , ' ) , 'sender '
) 
 parser . add_argument ( ' -- no-initial-data ' , action = 'store_false ' , dest = 'load_initial_data '
) : 
 return mark_safe ( '\n ' . join ( model ) s . ' , } 
 	
= ' ) , 'max_decimal_places ' : ungettext_lazy ( 'ensure that there are no more than % ( max )
) , 'max_decimal_places ' : ungettext_lazy ( 'ensure that there are no more than % ( max ) s {
' , help = 'tells django not not pk . using argument class natural-foreign appcommand ( ) : 
 from
* args , s = ' , `` '' , 1 : 
 	 	 	 	 	 return ``
) : 
 	 	 def __init__ ( self , * args , ** kwargs ) : 
 	 	
' , action = 'store_false ' , dest = 'load_initial_data ' , default = true , help = 'tells django
' ) 
 parser . add_argument ( ' -- all ' , '-a ' , action = 'store_true ' ,
) } 
 	 def __init__ ( self , * args , ** kwargs ) : 
 	 	 	
, action = 'store_false ' , dest = 'load_initial_data ' , default = true , help = minute == default_db_alias
, 'sender ' , 'reply-to ' , 'to ' , 'cc ' , 'bcc ' , 'resent-from ' , keyerror
	 	 def __init__ ( self , * args , ** kwargs ) : 
 	 	 	 self .
, 'migrate_failure ' : { 'fg ' : 'red ' , 'opts ' : ( 'bold ' , ) }
, 'http_bad_request ' : { 'fg ' : 'red ' , 'opts ' : ( 'bold ' , ) }
' , 'httpresponseservererror ' , 'http404 ' , 'badheadererror ' , 'fix_location_header ' , 'jsonresponse ' , 'conditional_content_removal ' ,
' ) 

SeqGAN quickly loses the context in a long sequence. I will keep improving this in the future.

Model

The model used for the code generation is called Sequence Generative Adversarial Nets (with Policy Gradient).

The illustration of SeqGAN. Left: D is trained over the real data and the generated data by G. Right: G is trained by policy gradient where the final reward signal is provided by D and is passed back to the intermediate action value via Monte Carlo search.
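The Monte Carlo step described above can be sketched as follows: the generator G acts as a policy, and the action value of a partial sequence is estimated by completing it with rollouts and averaging the discriminator D's scores. This is a schematic pure-Python sketch with stand-in policy and discriminator functions, not the repository's TensorFlow code:

```python
import random

def mc_reward(prefix, rollout_policy, discriminator, seq_len, n_rollouts=16):
    """Estimate the action value of a partial token sequence by
    completing it with Monte Carlo rollouts and averaging the
    discriminator's scores, as in the SeqGAN training loop."""
    total = 0.0
    for _ in range(n_rollouts):
        seq = list(prefix)
        while len(seq) < seq_len:
            seq.append(rollout_policy(seq))  # sample next token from G
        total += discriminator(seq)          # D's probability of "real"
    return total / n_rollouts

# Toy stand-ins: the policy samples from a tiny vocabulary, and D
# rewards sequences containing the token "def" (purely illustrative).
random.seed(0)
policy = lambda seq: random.choice(["def", "(", ")", ":", "self"])
disc = lambda seq: 1.0 if "def" in seq else 0.0
print(mc_reward(["def"], policy, disc, seq_len=8))  # prints 1.0
```

The averaged reward is then fed back to G through the policy-gradient (REINFORCE) update, which the actual implementation performs on the RNN generator's parameters.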

The research paper SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient has been accepted at the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17). The final version of the paper will be updated soon.

Run

Move to the codegan-pg folder and run

python pretrain_experiment.py

which will start maximum likelihood training with default parameters. In the same folder, running

python sequence_gan.py

will start SeqGAN training.

Acknowledgements

This is one of many exciting projects going on in the DeepCoding Project. Stay tuned for more awesome stuff.

Note: I built it on top of the original implementation of SeqGAN which is based on the previous work by ofirnachum. Many thanks to ofirnachum and LantaoYu.

After running the experiments, the learning curve should be like this:

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].