
gaoisbest / Nlp Projects

word2vec, sentence2vec, machine reading comprehension, dialog system, text classification, pretrained language model (i.e., XLNet, BERT, ELMo, GPT), sequence labeling, information retrieval, information extraction (i.e., entity, relation and event extraction), knowledge graph, text generation, network embedding

Projects that are alternatives to or similar to Nlp Projects

Chatbot cn
A chatbot for the finance and legal domains (with some chit-chat ability). Its main modules include information extraction, NLU, NLG, and a knowledge graph; a Django front end is integrated, and RESTful interfaces for the NLP and KG modules are already provided.
Stars: ✭ 791 (+119.72%)
Mutual labels:  knowledge-graph, text-classification, dialogue-systems
Dan Jurafsky Chris Manning Nlp
My solution to the Natural Language Processing course taught by Dan Jurafsky and Chris Manning in Winter 2012.
Stars: ✭ 124 (-65.56%)
Mutual labels:  information-retrieval, text-classification, information-extraction
Deep-NLP-Resources
Curated list of all NLP Resources
Stars: ✭ 65 (-81.94%)
Mutual labels:  text-classification, text-generation, information-extraction
Knowledge Graphs
A collection of research on knowledge graphs
Stars: ✭ 845 (+134.72%)
Mutual labels:  knowledge-graph, information-retrieval, dialogue-systems
alter-nlu
Natural language understanding library for chatbots with intent recognition and entity extraction.
Stars: ✭ 45 (-87.5%)
Mutual labels:  text-classification, information-extraction
text-classification-cn
Chinese text classification practice on the Sogou news corpus, using traditional machine learning methods as well as pre-trained models.
Stars: ✭ 81 (-77.5%)
Mutual labels:  text-classification, word2vec
Snips Nlu
Snips Python library to extract meaning from text
Stars: ✭ 3,583 (+895.28%)
Mutual labels:  text-classification, information-extraction
Product-Categorization-NLP
Multi-Class Text Classification for products based on their description with Machine Learning algorithms and Neural Networks (MLP, CNN, Distilbert).
Stars: ✭ 30 (-91.67%)
Mutual labels:  text-classification, word2vec
news-graph
Key information extraction from text and graph visualization
Stars: ✭ 83 (-76.94%)
Mutual labels:  information-extraction, knowledge-graph
Pytorch-NLU
Pytorch-NLU, a Chinese text classification and sequence labeling toolkit. It supports multi-class and multi-label classification of long and short Chinese text, and sequence labeling tasks such as Chinese named entity recognition, part-of-speech tagging, and word segmentation.
Stars: ✭ 151 (-58.06%)
Mutual labels:  text-classification, sequence-labeling
SWDM
SIGIR 2017: Embedding-based query expansion for weighted sequential dependence retrieval model
Stars: ✭ 35 (-90.28%)
Mutual labels:  information-retrieval, word2vec
Casrel
A Novel Cascade Binary Tagging Framework for Relational Triple Extraction. Accepted by ACL 2020.
Stars: ✭ 329 (-8.61%)
Mutual labels:  knowledge-graph, information-extraction
sarcasm-detection-for-sentiment-analysis
Sarcasm Detection for Sentiment Analysis
Stars: ✭ 21 (-94.17%)
Mutual labels:  text-classification, word2vec
knowledge-graph-nlp-in-action
From model training to deployment: hands-on knowledge graph and NLP. Uses TensorFlow, BERT + Bi-LSTM + CRF, Neo4j, etc., covering tasks such as named entity recognition, text classification, information extraction, and relation extraction.
Stars: ✭ 58 (-83.89%)
Mutual labels:  information-extraction, knowledge-graph
Knowledge Graph Wander
A collection of papers, codes, projects, tutorials ... for Knowledge Graph and other NLP methods
Stars: ✭ 26 (-92.78%)
Mutual labels:  information-extraction, knowledge-graph
evildork
Evildork targeting your fiancee👁️
Stars: ✭ 46 (-87.22%)
Mutual labels:  information-retrieval, information-extraction
ebe-dataset
Evidence-based Explanation Dataset (AACL-IJCNLP 2020)
Stars: ✭ 16 (-95.56%)
Mutual labels:  text-classification, text-generation
Kenlg Reading
Reading list for knowledge-enhanced text generation, with a survey
Stars: ✭ 257 (-28.61%)
Mutual labels:  knowledge-graph, text-generation
Bert For Sequence Labeling And Text Classification
This is the template code to use BERT for sequence labeling and text classification, in order to facilitate BERT for more tasks. Currently, the template code has included conll-2003 named entity identification, Snips Slot Filling and Intent Prediction.
Stars: ✭ 293 (-18.61%)
Mutual labels:  text-classification, sequence-labeling
Vaaku2Vec
Language Modeling and Text Classification in Malayalam Language using ULMFiT
Stars: ✭ 68 (-81.11%)
Mutual labels:  text-classification, word2vec

NLP-Projects

Natural Language Processing projects, including concepts and scripts about:

Concepts

1. Attention

  • Attention == weighted averages
  • Attention review 1 and review 2 summarize attention mechanisms into several types:
    • Additive vs Multiplicative attention
    • Self attention
    • Soft vs Hard attention
    • Global vs Local attention
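The "weighted averages" view above can be sketched as scaled dot-product attention, one common multiplicative variant (a minimal NumPy sketch; shapes and names are illustrative, not from this repo):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Each output row is a weighted average of the rows of V,
    with weights softmax(Q K^T / sqrt(d_k))."""
    d_k = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k), axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 8))   # 5 queries
K = rng.normal(size=(7, 8))   # 7 keys
V = rng.normal(size=(7, 8))   # 7 values
out, w = scaled_dot_product_attention(Q, K, V)  # out: (5, 8); each row of w sums to 1
```

Every row of `w` sums to 1, which is exactly the "attention == weighted averages" statement: each output is a convex combination of the value vectors.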

2. CNNs, RNNs and Transformer

  • Parallelization [1]

    • RNNs
      • Why not good?
      • The previous step's output is the input of the current step, so time steps cannot run in parallel
    • Solutions
      • Simple Recurrent Units (SRU)
        • The heavy matrix multiplications depend only on the inputs, so each hidden-state dimension can be computed in parallel across time steps
      • Sliced RNNs
        • Split the sequence into windows, run an RNN within each window, then run another RNN over the window outputs
        • Structurally similar to CNNs
    • CNNs
      • Why good?
      • Parallel across the different windows of one filter
      • Parallel across different filters
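The SRU idea above can be sketched in a few lines (a simplified SRU-style recurrence for illustration, not the full published cell):

```python
import numpy as np

def sru_layer(X, Wf, Wz):
    """Simplified SRU-style layer. The matrix multiplications below depend
    only on the inputs X, so they parallelize across all time steps; only a
    cheap element-wise recurrence remains sequential."""
    F = 1 / (1 + np.exp(-(X @ Wf)))   # forget gates, computed for all t at once
    Z = X @ Wz                        # candidate states, computed for all t at once
    C = np.zeros_like(Z)
    c = np.zeros(Z.shape[1])
    for t in range(len(X)):           # element-wise only: no matmul inside the loop
        c = F[t] * c + (1 - F[t]) * Z[t]
        C[t] = c
    return C

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))           # 6 time steps, 4 input dims
H = sru_layer(X, rng.normal(size=(4, 3)), rng.normal(size=(4, 3)))  # (6, 3)
```

Contrast with a vanilla RNN, where `h_t = tanh(W x_t + U h_{t-1})` puts a matrix multiplication inside the sequential loop.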
  • Long-range dependency [1]

    • CNNs
      • Why not good?
      • A single convolution can only capture dependencies within its window
    • Solutions
      • Dilated CNNs
      • Deep CNNs
        • N * [Convolution + skip-connection]
        • For example, with window size 3 and stride 1, the second convolution covers 5 words (i.e., 1-2-3, 2-3-4, 3-4-5)
    • Transformer > RNNs > CNNs
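The window-size arithmetic above generalizes to any stack of stride-1 convolutions, dilated or not; a small helper (illustrative) makes the growth explicit:

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of stacked stride-1 convolutions: each layer with
    dilation d widens coverage by (kernel_size - 1) * d positions."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Deep CNN, window size 3: the second convolution covers 5 words
assert receptive_field(3, [1, 1]) == 5
# Dilated CNN (dilations 1, 2, 4): three layers already cover 15 words
assert receptive_field(3, [1, 2, 4]) == 15
```

This is why dilated CNNs reach long-range dependencies with far fewer layers: doubling the dilation per layer grows the receptive field exponentially, while plain deep CNNs grow it only linearly.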
  • Position [1]

    • CNNs

      • Why not good?
      • Convolution preserves relative-order information, but max-pooling discards it
    • Solutions

      • Discard max-pooling, use deep CNNs with skip-connections instead
      • Add position embedding, just like in ConvS2S
    • Transformer

      • Why not good?
      • In self-attention, one word attends to the other words and generates a summary vector without relative-position information
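Position embeddings fix this by adding a position-dependent vector to each word embedding. ConvS2S learns its position embeddings; the sketch below uses the fixed sinusoidal variant (Transformer-style) purely as an illustration, and assumes an even d_model:

```python
import numpy as np

def sinusoidal_positions(seq_len, d_model):
    """Fixed position encodings:
    PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))"""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(0, d_model, 2)[None, :]
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_positions(10, 16)
# word_embeddings + pe gives each position a distinct, order-aware representation
```

Position 0 encodes as all sin(0)=0 / cos(0)=1 entries, and each later position gets a unique pattern, so the model can recover relative order that self-attention alone discards.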
  • Semantic features extraction [2]

    • Transformer > CNNs == RNNs

3. Pattern of DL in NLP models [3]

  • Data

    • Preprocess
    • Pre-training (e.g., ELMo, BERT)
    • Multi-task learning
    • Transfer learning, ref_1, ref_2
      • Use a source task/domain S to improve a target task/domain T
    • If T has zero/one/few labeled instances, we call it zero-shot, one-shot, or few-shot learning, respectively
  • Model

    • Encoder
      • CNNs, RNNs, Transformer
    • Structure
      • Sequential, Tree, Graph
  • Learning (change loss definition)

    • Adversarial learning
    • Reinforcement learning
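The pre-training / transfer pattern above boils down to: freeze (or gently fine-tune) a pretrained encoder, then train a small task-specific head on the target data. A toy NumPy sketch, where a fixed random projection stands in for the pretrained encoder (everything here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained encoder (e.g., frozen BERT/ELMo layers):
# its weights W_enc are NOT updated during fine-tuning.
W_enc = rng.normal(size=(10, 4))
def encode(x):
    return np.tanh(x @ W_enc)

# Small labeled dataset for the target task T, plus a task-specific head.
X = rng.normal(size=(64, 10))
y = (X[:, 0] > 0).astype(float)       # toy labels
H = encode(X)                          # encoder frozen: features computed once
w, b = np.zeros(4), 0.0                # only the head is trained

def forward(H, w, b):
    return 1 / (1 + np.exp(-(H @ w + b)))

for _ in range(500):                   # logistic-regression head, plain gradient descent
    p = forward(H, w, b)
    grad = p - y
    w -= 0.1 * H.T @ grad / len(X)
    b -= 0.1 * grad.mean()

p = forward(H, w, b)
loss = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))  # below log(2), the untrained loss
```

The same skeleton covers the zero/one/few-shot cases: as the target set shrinks, the frozen pretrained features carry more of the burden and the trainable head stays small.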

References

Awesome public APIs

Awesome packages

Chinese

English

Future directions

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].