thunlp / NLP-THU

NLP Course Material & QA

This repository provides the reading materials recommended by the NLP-THU course.

1. Introduction

Introduction

  1. Foundations of statistical natural language processing. Christopher D. Manning and Hinrich Schütze. MIT Press 2001. [link]
  2. Introduction to information retrieval. Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze. Cambridge University Press 2008. [link]
  3. Semantic Relations Between Nominals. Vivi Nastase, Preslav Nakov, Diarmuid Ó Séaghdha and Stan Szpakowicz. Morgan & Claypool Publishers 2013. [link]

2. Word Representation and Neural Networks

a. Word Representation

  1. Linguistic Regularities in Continuous Space Word Representations. Tomas Mikolov, Wen-tau Yih and Geoffrey Zweig. NAACL 2013. [link]
  2. Glove: Global Vectors for Word Representation. Jeffrey Pennington, Richard Socher and Christopher D. Manning. EMNLP 2014. [link]
  3. Deep Contextualized Word Representations. Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee and Luke Zettlemoyer. NAACL 2018. [link]
  4. Parallel Distributed Processing. Jerome A. Feldman, Patrick J. Hayes and David E. Rumelhart. 1986.
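
The linguistic regularities described by Mikolov et al. (item 1) reduce to vector arithmetic: vec("king") − vec("man") + vec("woman") lands near vec("queen"). A minimal sketch with hand-crafted toy vectors (real embeddings are learned from corpora and have hundreds of dimensions):

```python
import math

# Toy 3-dimensional "embeddings", hand-crafted for illustration only.
emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def analogy(a, b, c):
    """Return the word whose vector is closest to vec(b) - vec(a) + vec(c),
    excluding the three query words, as in the word2vec analogy task."""
    target = [bb - aa + cc for bb, aa, cc in zip(emb[b], emb[a], emb[c])]
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(emb[w], target))

print(analogy("man", "king", "woman"))  # expected: queen
```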

b. RNN & CNN

  1. ImageNet Classification with Deep Convolutional Neural Networks. NIPS 2012 [link]
  2. Convolutional Neural Networks for Sentence Classification. EMNLP 2014 [link]
  3. Long short-term memory. Neural Computation (MIT Press) 1997 [link]
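
The LSTM of item 3 controls its memory cell with learned gates. A scalar-state sketch of one cell step (real cells use weight matrices over vector states; the weights below are hypothetical):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM cell step for scalar input/state. W maps each gate name
    to hypothetical weights (w_x, w_h, bias)."""
    def gate(name, act):
        wx, wh, b = W[name]
        return act(wx * x + wh * h + b)
    i = gate("i", sigmoid)        # input gate: how much new content to write
    f = gate("f", sigmoid)        # forget gate: how much old memory to keep
    o = gate("o", sigmoid)        # output gate: how much memory to expose
    g = gate("g", math.tanh)      # candidate cell value
    c_new = f * c + i * g         # gated memory update
    h_new = o * math.tanh(c_new)  # hidden state
    return h_new, c_new

# Toy weights; the forget gate's positive bias keeps memory flowing.
W = {"i": (1.0, 0.0, 0.0), "f": (0.0, 0.0, 1.0),
     "o": (1.0, 0.0, 0.0), "g": (1.0, 0.0, 0.0)}
h, c = 0.0, 0.0
for x in [1.0, 0.0, 0.0]:  # the cell retains a trace of the first input
    h, c = lstm_step(x, h, c, W)
```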

3. Seq2Seq Modeling

a. Machine Translation

Must-read Papers

  1. The Mathematics of Statistical Machine Translation: Parameter Estimation. Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. Computational Linguistics 1993 [link]
  2. (Seq2seq) Sequence to Sequence Learning with Neural Networks. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. NIPS 2014 [link]
  3. (BLEU) BLEU: a Method for Automatic Evaluation of Machine Translation. Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. ACL 2002 [link]
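
BLEU (item 3) scores a candidate translation by clipped n-gram precision against a reference, times a brevity penalty. A sentence-level sketch (real BLEU is corpus-level, uses up to 4-grams, and typically applies smoothing):

```python
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=2):
    """Sentence-level BLEU sketch: geometric mean of modified n-gram
    precisions (counts clipped by the reference) times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        clipped = sum(min(cnt, ref[g]) for g, cnt in cand.items())
        precisions.append(clipped / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    bp = 1.0 if len(candidate) > len(reference) \
        else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat is on the mat".split()
print(bleu(ref, ref))  # expected: 1.0 (perfect match)
```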

Further Reading

  1. Statistical Phrase-Based Translation. Philipp Koehn, Franz J. Och, and Daniel Marcu. NAACL 2003 [link]
  2. Hierarchical Phrase-Based Translation. David Chiang. Computational Linguistics 2007 [link]
  3. (Beam Search) Beam Search Strategies for Neural Machine Translation. Markus Freitag and Yaser Al-Onaizan. 2017 [link]
  4. MT paper list. [link]
  5. THUMT toolkit. [link]
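
Beam search (item 3) decodes by keeping only the top-k highest-scoring partial hypotheses at each step. A sketch with a hypothetical toy "model" that assigns fixed next-token log-probabilities regardless of prefix:

```python
import math

# Hypothetical fixed next-token distribution; a real NMT model would
# condition these log-probs on the source sentence and the prefix.
VOCAB_LOGPROBS = {"a": math.log(0.5), "b": math.log(0.3), "</s>": math.log(0.2)}

def beam_search(beam_size=2, max_len=3):
    """Keep the beam_size best partial hypotheses by cumulative log-prob."""
    beams = [([], 0.0)]  # (token sequence, cumulative log-prob)
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            if seq and seq[-1] == "</s>":     # finished hypotheses pass through
                candidates.append((seq, score))
                continue
            for tok, lp in VOCAB_LOGPROBS.items():
                candidates.append((seq + [tok], score + lp))
        beams = sorted(candidates, key=lambda x: x[1], reverse=True)[:beam_size]
    return beams

best_seq, best_score = beam_search()[0]
print(best_seq)  # expected: ['a', 'a', 'a'], the greedy-optimal sequence here
```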

b. Attention

  1. Introduction to attention. [link]
  2. Neural Machine Translation by Jointly Learning to Align and Translate. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. ICLR 2015 [link]
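
The attention mechanism of Bahdanau et al. scores each encoder state against the decoder state, normalizes the scores with a softmax, and mixes the states by those weights. A dot-product sketch with toy vectors standing in for learned representations:

```python
import math

def softmax(xs):
    m = max(xs)                            # subtract max for stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query, keys, values):
    """Dot-product attention sketch: score keys against the query,
    softmax-normalize, and return (weights, weighted sum of values)."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    context = [sum(w * v[d] for w, v in zip(weights, values)) for d in range(dim)]
    return weights, context

query = [1.0, 0.0]
keys = [[1.0, 0.0], [0.0, 1.0]]            # key 0 aligns with the query
values = [[10.0, 0.0], [0.0, 10.0]]
weights, context = attend(query, keys, values)
```

The aligned key receives the larger weight, so the context vector is pulled toward its value.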

c. Transformer

Must-read Papers

  1. (Transformer) Attention is All You Need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. NIPS 2017 [link]
  2. (BPE) Neural Machine Translation of Rare Words with Subword Units. Rico Sennrich, Barry Haddow, and Alexandra Birch. ACL 2016 [link]
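
BPE (item 2) learns a subword vocabulary by repeatedly merging the most frequent adjacent symbol pair. A sketch after Sennrich et al., on toy word counts (the original implementation uses boundary-aware regex replacement; plain string replace is good enough for single-character toy symbols):

```python
from collections import Counter

def get_pair_counts(vocab):
    """Count adjacent symbol pairs over a vocab of space-separated symbols."""
    pairs = Counter()
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[(symbols[i], symbols[i + 1])] += freq
    return pairs

def merge_pair(pair, vocab):
    a, b = pair
    return {word.replace(f"{a} {b}", f"{a}{b}"): freq
            for word, freq in vocab.items()}

def learn_bpe(vocab, num_merges):
    """Greedily merge the most frequent pair num_merges times."""
    merges = []
    for _ in range(num_merges):
        pairs = get_pair_counts(vocab)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        vocab = merge_pair(best, vocab)
        merges.append(best)
    return merges, vocab

# Toy corpus counts; '</w>' marks word ends, as in the original paper.
vocab = {"l o w </w>": 5, "l o w e r </w>": 2, "n e w e s t </w>": 6}
merges, vocab = learn_bpe(vocab, 3)
print(merges[0])  # expected: ('w', 'e'), the most frequent pair (freq 8)
```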

Further Reading

  1. Illustrated Transformer. [link]
  2. Layer normalization. Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. 2016 [link]
  3. Deep residual learning for image recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. CVPR 2016 [link]
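
Layer normalization (item 2), used throughout the Transformer, rescales the features of each example to zero mean and unit variance before applying a learned gain and bias. A sketch with scalar gain/bias (in practice these are learned per-feature vectors):

```python
import math

def layer_norm(x, gain=1.0, bias=0.0, eps=1e-5):
    """Normalize one example's features to zero mean / unit variance,
    then rescale (Ba et al. 2016). eps avoids division by zero."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [gain * (v - mean) / math.sqrt(var + eps) + bias for v in x]

h = [1.0, 2.0, 3.0, 4.0]
normed = layer_norm(h)  # mean ~0, variance ~1 regardless of input scale
```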

4. Pre-Trained Language Models

Must-read papers

  1. Semi-supervised Sequence Learning. [link]
  2. (ELMo) Deep contextualized word representations. [link]
  3. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. [link]

Further Reading

  1. Introduction of Pre-trained LM. [link]
  2. Transformer code repo. [link]
  3. Transfer Learning in Natural Language Processing. Sebastian Ruder, Matthew E. Peters, Swabha Swayamdipta, Thomas Wolf. NAACL 2019 [link]
  4. PLM paper list. [link]

5. Knowledge Graph

a. Introduction to KG

  1. Towards a Definition of Knowledge Graphs. Lisa Ehrlinger, Wolfram Wöß [link]
  2. KG Definition & History Wiki [link]
  3. Semantic Network [link]

b. Knowledge Representation Learning

Must-read papers

  1. KRL paper list [link]
  2. Knowledge Representation Learning: A Review. (In Chinese) Zhiyuan Liu, Maosong Sun, Yankai Lin, Ruobing Xie. 计算机研究与发展 (Journal of Computer Research and Development) 2016. [link]
  3. A Review of Relational Machine Learning for Knowledge Graphs. Maximilian Nickel, Kevin Murphy, Volker Tresp, Evgeniy Gabrilovich. 2016. [link]
  4. Knowledge Graph Embedding: A Survey of Approaches and Applications. Quan Wang, Zhendong Mao, Bin Wang, Li Guo. TKDE 2017. [link]

Further reading

  1. OpenKE [link]
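
Translation-based models like TransE, one family covered by the surveys above, embed entities and relations so that a valid triple (h, r, t) satisfies h + r ≈ t; plausibility is the (negated) distance. A sketch with hypothetical 2-d embeddings:

```python
import math

def transe_score(h, r, t):
    """TransE-style score: L2 distance ||h + r - t||. Smaller = more
    plausible. h, r, t are toy embedding vectors of equal length."""
    return math.sqrt(sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)))

# Hypothetical embeddings where "capital_of" translates Paris onto France.
paris, france, berlin = [1.0, 0.0], [1.0, 1.0], [0.0, 0.0]
capital_of = [0.0, 1.0]

good = transe_score(paris, capital_of, france)   # consistent triple
bad = transe_score(berlin, capital_of, france)   # inconsistent triple
print(good < bad)  # expected: True
```

Training (e.g. with OpenKE) learns embeddings by pushing `good`-style distances below `bad`-style ones with a margin loss.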

c. Reasoning

  1. KG Reasoning paper list [link] & PPT [link]

6. Information Extraction - 1

a. Part-of-Speech Tagging (POS Tagging)

  1. Introduction from Wikipedia [link]
  2. Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss. Plank2016 [link]
  3. Blog: NLP Guide: Identifying Part of Speech Tags using Conditional Random Fields [link]

b. Sequence Labelling

  1. Hierarchically-Refined Label Attention Network for Sequence Labeling. Leyang Cui and Yue Zhang. EMNLP-IJCNLP 2019 [link]
  2. End-to-end Sequence Labeling via Bi-directional LSTM-CNNs-CRF. Xuezhe Ma and Eduard Hovy. ACL 2016 [link]
  3. Comparisons of sequence labeling algorithms and extensions. ICML 2007 [link]

c. Named Entity Recognition

  1. Blog: Named Entity Recognition Tagging, CS230 [link]
  2. A survey of named entity recognition and classification. David Nadeau, Satoshi Sekine. 2007 [link]
  3. Neural Architectures for Named Entity Recognition [link]
  4. Named entity recognition with bidirectional LSTM-CNNs [link]
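
The taggers above emit BIO labels; turning those into entity spans is the standard decoding step. A simplified sketch (it tolerates an `I-` tag without a preceding `B-`, but does not check that `I-` types match the open span):

```python
def bio_to_spans(tokens, tags):
    """Decode BIO tags into (entity_type, start, end) spans, end exclusive."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):       # sentinel flushes last span
        if tag.startswith("B-") or tag == "O":
            if start is not None:                # close the open span
                spans.append((etype, start, i))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is None:
            start, etype = i, tag[2:]            # tolerate stray I- tags
    return spans

tokens = ["Barack", "Obama", "visited", "Paris"]
tags = ["B-PER", "I-PER", "O", "B-LOC"]
spans = bio_to_spans(tokens, tags)
print(spans)  # expected: [('PER', 0, 2), ('LOC', 3, 4)]
```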

7. Information Extraction - 2

a. Relation Extraction

Must-read papers

  1. Relation Classification via Convolutional Deep Neural Network. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, Jun Zhao. COLING 2014. [link]
  2. Distant Supervision for Relation Extraction without Labeled Data. Mike Mintz, Steven Bills, Rion Snow, Dan Jurafsky. ACL-IJCNLP 2009. [link]
  3. Neural Relation Extraction with Selective Attention over Instances. Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, Maosong Sun. ACL 2016. [link]

Further reading

  1. RE paper list [link]

b. Advanced Topics

- Event Extraction

  1. Joint Event Extraction via Structured Prediction with Global Features. Qi Li, Heng Ji and Liang Huang. ACL 2013. [link]
  2. Event Extraction via Dynamic Multi-Pooling Convolutional Neural Networks. Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng and Jun Zhao. ACL 2015. [link]
  3. Adversarial Training for Weakly Supervised Event Detection. Xiaozhi Wang, Xu Han, Zhiyuan Liu, Maosong Sun and Peng Li. NAACL 2019. [link]

- OpenRE

  1. Unsupervised open relation extraction. Hady Elsahar, Elena Demidova, Simon Gottschalk, Christophe Gravier, and Frederique Laforest. In Proceedings of European Semantic Web Conference 2017 [link]
  2. Open Relation Extraction: Relational Knowledge Transfer from Supervised Data to Unsupervised Data. Ruidong Wu, Yuan Yao, Xu Han, Ruobing Xie, Zhiyuan Liu, Fen Lin, Leyu Lin, Maosong Sun. EMNLP 2019 [link]
  3. Discrete-state variational autoencoders for joint discovery and factorization of relations. Diego Marcheggiani and Ivan Titov. TACL 2016 [link]

- Document-Level RE

  1. DocRED: A Large-Scale Document-Level Relation Extraction Dataset. Yuan Yao, Deming Ye, Peng Li, Xu Han, Yankai Lin, Zhenghao Liu, Zhiyuan Liu, Lixin Huang, Jie Zhou, Maosong Sun. ACL 2019 [link]
  2. A Walk-based Model on Entity Graphs for Relation Extraction. Fenia Christopoulou, Makoto Miwa, Sophia Ananiadou. ACL 2018 [link]
  3. Graph Neural Networks with Generated Parameters for Relation Extraction. Hao Zhu, Yankai Lin, Zhiyuan Liu, Jie Fu, Tat-Seng Chua, Maosong Sun. ACL 2019 [link]

- Few-shot RE

  1. FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation. Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, Maosong Sun. ACL 2019 [link]
  2. Matching Networks for One Shot Learning. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, Daan Wierstra [link]
  3. Prototypical Networks for Few-shot Learning. Jake Snell, Kevin Swersky, Richard S. Zemel [link]
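
Prototypical networks (item 3) classify a query by distance to class prototypes, each the mean of that class's support embeddings. A sketch with hypothetical 2-d "embeddings" for a 2-way 2-shot relation classification episode:

```python
import math

def mean_vec(vectors):
    dim = len(vectors[0])
    return [sum(v[d] for v in vectors) / len(vectors) for d in range(dim)]

def proto_classify(query, support):
    """Prototype = mean of support embeddings per class; the query takes
    the label of the Euclidean-nearest prototype."""
    protos = {label: mean_vec(vs) for label, vs in support.items()}
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(protos, key=lambda label: dist(query, protos[label]))

# Toy episode: in a real system these vectors come from a sentence encoder.
support = {
    "born_in":    [[0.9, 0.1], [1.1, 0.0]],
    "capital_of": [[0.0, 1.0], [0.1, 0.9]],
}
label = proto_classify([1.0, 0.2], support)
print(label)  # expected: born_in (nearest prototype)
```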

8. Knowledge-Guided NLP

Must-read papers

  1. ERNIE: Enhanced Language Representation with Informative Entities. Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, Qun Liu. ACL 2019 [link]
  2. Neural natural language inference models enhanced with external knowledge. Qian Chen, Xiaodan Zhu, Zhen-Hua Ling, Diana Inkpen, and Si Wei. ACL 2018 [link]
  3. Neural knowledge acquisition via mutual attention between knowledge graph and text. Xu Han, Zhiyuan Liu, and Maosong Sun. AAAI 2018 [link]

Further reading

  1. Language Models as Knowledge Bases? [link]
  2. Knowledge enhanced contextual word representations. Matthew E. Peters, Mark Neumann, Robert Logan, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. EMNLP 2019 [link]
  3. Barack’s wife hillary: Using knowledge graphs for fact-aware language modeling. Robert Logan, Nelson F. Liu, Matthew E. Peters, Matt Gardner, and Sameer Singh. ACL 2019 [link]
  4. Knowledgeable Reader: Enhancing Cloze-style Reading Comprehension with External Commonsense Knowledge. Todor Mihaylov and Anette Frank. ACL 2018 [link]
  5. Improving question answering by commonsense-based pre-training. Wanjun Zhong, Duyu Tang, Nan Duan, Ming Zhou, Jiahai Wang, and Jian Yin. 2018 [link]
  6. Adaptive knowledge sharing in multi-task learning: Improving low-resource neural machine translation. Poorya Zaremoodi, Wray Buntine, and Gholamreza Haffari. ACL 2018 [link]

9. Advanced Learning Methods

a. Adversarial Training

Must-read papers

  1. Explaining and Harnessing Adversarial Examples. Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. ICLR 2015 [link]
  2. Generative Adversarial Nets. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. NIPS 2014 [link]
  3. Wasserstein GAN. Martín Arjovsky, Soumith Chintala, and Léon Bottou. ICML 2017 [link]
Further reading

  1. Adversarial Examples for Evaluating Reading Comprehension Systems. Robin Jia, Percy Liang. EMNLP 2017 [link]
  2. Certified Defenses Against Adversarial Examples. Aditi Raghunathan, Jacob Steinhardt, and Percy Liang. ICLR 2018 [link]
  3. Robust Neural Machine Translation with Doubly Adversarial Inputs. Yong Cheng, Lu Jiang, and Wolfgang Macherey. ACL 2019 [link]
  4. Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks. Alec Radford, Luke Metz, and Soumith Chintala. ICLR 2016 [link]
  5. Improved Training of Wasserstein GANs. Ishaan Gulrajani, Faruk Ahmed, Martin Arjovsky, Vincent Dumoulin, and Aaron Courville. NIPS 2017 [link]
  6. Are GANs Created Equal? A Large-scale Study. Mario Lucic, Karol Kurach, Marcin Michalski, Sylvain Gelly, and Olivier Bousquet. NIPS 2018 [link]
  7. Unsupervised Machine Translation Using Monolingual Corpora Only. Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. ICLR 2018 [link]
  8. Adversarial Multi-task Learning for Text Classification. Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. ACL 2017 [link]
  9. SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient. Lantao Yu, Weinan Zhang, Jun Wang, and Yong Yu. AAAI 2017 [link]

b. Reinforcement Learning

Must-read papers

  1. Playing atari with deep reinforcement learning. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, Martin Riedmiller. 2013 [link]
  2. Human-level control through deep reinforcement learning. Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, Demis Hassabis. Nature 2015 [link]
  3. Mastering the game of go with deep neural networks and tree search. David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, Demis Hassabis. Nature 2016 [link]
  4. Reinforcement learning for relation classification from noisy data. Jun Feng, Minlie Huang, Li Zhao, Yang Yang, Xiaoyan Zhu. AAAI 2018 [link]
Further reading

  1. Reinforced co-training. Jiawei Wu, Lei Li, William Yang Wang. NAACL 2018 [link]
  2. Playing 20 question game with policy-based reinforcement learning. Huang Hu, Xianchao Wu, Bingfeng Luo, Chongyang Tao, Can Xu, Wei Wu, Zhan Chen. EMNLP 2018 [link]
  3. Entity-relation extraction as multi-turn question answering. Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, Jiwei Li. ACL 2019 [link]
  4. Language understanding for text-based games using deep reinforcement learning. Karthik Narasimhan, Tejas D Kulkarni, Regina Barzilay. EMNLP 2015 [link]
  5. Deep reinforcement learning with a natural language action space. Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, Mari Ostendorf. ACL 2016 [link]

c. Few-Shot Learning

Must-read papers

  1. FewRel: A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation. Xu Han, Hao Zhu, Pengfei Yu, Ziyun Wang, Yuan Yao, Zhiyuan Liu, Maosong Sun. ACL 2019 [link]
  2. Matching Networks for One Shot Learning. Oriol Vinyals, Charles Blundell, Timothy Lillicrap, Koray Kavukcuoglu, Daan Wierstra [link]
  3. Prototypical Networks for Few-shot Learning. Jake Snell, Kevin Swersky, Richard S. Zemel [link]
Further reading

  1. FewRel 2.0: Towards More Challenging Few-Shot Relation Classification. Tianyu Gao, Xu Han, Hao Zhu, Zhiyuan Liu, Peng Li, Maosong Sun, Jie Zhou. EMNLP 2019 [link]
  2. Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks. Chelsea Finn, Pieter Abbeel, Sergey Levine [link]
  3. Matching the Blanks: Distributional Similarity for Relation Learning. Livio Baldini Soares, Nicholas FitzGerald, Jeffrey Ling, Tom Kwiatkowski. ACL 2019 [link]

10. Information Retrieval

Must-read papers

  1. PACRR: A Position-Aware Neural IR Model for Relevance Matching. EMNLP 2017 [link]
  2. Entity-Duet Neural Ranking: Understanding the Role of Knowledge Graph Semantics in Neural Information Retrieval. ACL 2018 [link]
  3. A Deep Look into Neural Ranking Models for Information Retrieval. 2019 [link]
  4. Selective Weak Supervision for Neural Information Retrieval. WWW 2020 [link]

Further reading

  1. Explicit Semantic Ranking for Academic Search via Knowledge Graph Embedding. WWW 2017 [link]
  2. Query suggestion with feedback memory network. WWW 2018 [link]
  3. NPRF: A Neural Pseudo Relevance Feedback Framework for Ad-hoc Information Retrieval. EMNLP 2018 [link]
  4. Towards Better Text Understanding and Retrieval through Kernel Entity Salience Modeling. SIGIR 2018 [link]
  5. Deeper Text Understanding for IR with Contextual Neural Language Modeling. SIGIR 2019 [link]

11. Question Answering

a. Reading Comprehension

  1. SQuAD: 100,000+ Questions for Machine Comprehension of Text. EMNLP 2016 [link]
  2. Bidirectional Attention Flow for Machine Comprehension. ICLR 2017 [link]
  3. Simple and Effective Multi-Paragraph Reading Comprehension. ACL 2018 [link]

b. Open-domain QA

  1. Reading Wikipedia to Answer Open-Domain Questions. ACL 2017 [link]
  2. Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text. EMNLP 2018 [link]

c. KBQA

  1. Question Answering with Subgraph Embedding. EMNLP 2014 [link]
  2. Semantic Parsing via Staged Query Graph Generation: Question Answering with Knowledge Base. ACL 2015 [link]

d. Other Topics

  1. (Multi-hop) Self-Assembling Modular Networks for Interpretable Multi-Hop Reasoning. EMNLP 2019 [link]
  2. (Symbolic) Neural symbolic Reader: scalable integration of distributed and symbolic representations for reading comprehension. ICLR 2020 [link]
  3. (Adversarial) Adversarial Examples for Evaluating Reading Comprehension Systems. EMNLP 2017 [link]
  4. (PIQA) Phrase-indexed question answering: A new challenge for scalable document comprehension. EMNLP 2018 [link]
  5. (Common Sense) Graph-Based Reasoning over Heterogeneous External Knowledge for Commonsense Question Answering. [link]
  6. (CQA) SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering. [link]

12. Text Generation

a. Survey

  1. Tutorial on variational autoencoders [link]
  2. Neural text generation: A practical guide [link]
  3. Survey of the state of the art in natural language generation: Core tasks, applications and evaluation [link]
  4. Neural Text Generation: Past, Present and Beyond [link]

b. Classic

  1. A neural probabilistic language model [link] (NNLM)
  2. Recurrent neural network based language model [link] (RNNLM)
  3. Sequence to sequence learning with neural networks [link] (seq2seq)

c. VAE based

  1. Generating Sentences from a Continuous Space [link]
  2. Long and Diverse Text Generation with Planning-based Hierarchical Variational Model [link]

d. GAN based

  1. Adversarial feature matching for text generation [link] (TextGAN)

e. Knowledge based

  1. Text Generation from Knowledge Graphs with Graph Transformers [link]
  2. Neural Text Generation from Rich Semantic Representations [link]

13. Discourse Analysis

a. Reference in Language & Coreference Resolution

  1. Unsupervised Models for Coreference Resolution. EMNLP 2008. [link]
  2. End-to-end Neural Coreference Resolution. EMNLP 2017. [link]
  3. Coreference Resolution as Query-based Span Prediction. 2019. [link]

b. Coherence & Discourse Relation Classification

  1. Implicit Discourse Relation Classification via Multi-Task Neural Networks. AAAI 2016 [link]
  2. Implicit Discourse Relation Detection via a Deep Architecture with Gated Relevance Network. ACL 2016 [link]
  3. Employing the Correspondence of Relations and Connectives to Identify Implicit Discourse Relations via Label Embeddings. ACL 2019 [link]
  4. Linguistic properties matter for implicit discourse relation recognition: Combining semantic interaction, topic continuity and attribution. AAAI 2018 [link]

c. Context Modeling and Conversation

  1. A Survey on Dialogue Systems: Recent Advances and New Frontiers. Hongshen Chen, Xiaorui Liu, Dawei Yin, Jiliang Tang. 2018 [link]
  2. A Diversity-Promoting Objective Function for Neural Conversation Models. Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, Bill Dolan. NAACL 2016 [link]
  3. A Persona-Based Neural Conversation Model. Jiwei Li, Michel Galley, Chris Brockett, Georgios P. Spithourakis, Jianfeng Gao, Bill Dolan. ACL 2016 [link]

14. Interdiscipline

a. Cognitive Linguistics and NLP

  1. Computational Cognitive Linguistics [link]
  2. Ten Lectures on Cognitive Linguistics by George Lakoff [link (access using Tsinghua Laboratory Account)]

b. Psycholinguistics and NLP

  1. A Computational Psycholinguistic Model of Natural Language Processing [link]
  2. Slides of the Cambridge NLP course [link]
  3. Reading materials of the MIT course Computational Psycholinguistics [link]

c. Sociolinguistics and NLP

  1. Computational Sociolinguistics: A Survey [link]
  2. Research Topic of Computational Sociolinguistics in Frontiers [link]
  3. Introduction to Computational Sociolinguistics [link]