
yassouali / Ml_paper_notes

📖 Notes and summaries of some Machine Learning / Computer Vision / NLP papers.

Projects that are alternatives to or similar to Ml_paper_notes

Notes
The notes for Math, Machine Learning, Deep Learning and Research papers.
Stars: ✭ 53 (-89.31%)
Mutual labels:  summary, natural-language-processing
Paper Survey
📚Survey of previous research and related works on machine learning (especially Deep Learning) in Japanese
Stars: ✭ 140 (-71.77%)
Mutual labels:  summary, natural-language-processing
Arxivnotes
Summaries of NLP (natural language processing) papers I have read, written as GitHub Issues. They are rough. The 🚧 mark means the entry is still being edited (many are effectively abandoned); the 🍡 mark means only a brief overview is written (for a quick read).
Stars: ✭ 190 (-61.69%)
Mutual labels:  summary, natural-language-processing
Jionlp
A preprocessing toolkit for Chinese NLP tasks: accurate, efficient, and with zero barrier to entry.
Stars: ✭ 449 (-9.48%)
Mutual labels:  natural-language-processing
Kaggle Homedepot
3rd Place Solution for HomeDepot Product Search Results Relevance Competition on Kaggle.
Stars: ✭ 452 (-8.87%)
Mutual labels:  natural-language-processing
Android Skill Summary
A summary of Android skills; a collection of materials covering both basic and advanced topics.
Stars: ✭ 470 (-5.24%)
Mutual labels:  summary
Neural Vqa
❔ Visual Question Answering in Torch
Stars: ✭ 487 (-1.81%)
Mutual labels:  natural-language-processing
Practical Pytorch
Go to https://github.com/pytorch/tutorials - this repo is deprecated and no longer maintained
Stars: ✭ 4,329 (+772.78%)
Mutual labels:  natural-language-processing
Learn Data Science For Free
This repository is a combination of different resources scattered all over the internet. The reason for making such a repository is to combine all the valuable resources in a sequential manner, so that it helps every beginner searching for a free and structured learning resource for Data Science. For constant updates, follow me on …
Stars: ✭ 4,757 (+859.07%)
Mutual labels:  natural-language-processing
Word forms
Accurately generate all possible forms of an English word, e.g. "election" --> "elect", "electoral", "electorate", etc.
Stars: ✭ 463 (-6.65%)
Mutual labels:  natural-language-processing
Book Socialmediaminingpython
Companion code for the book "Mastering Social Media Mining with Python"
Stars: ✭ 462 (-6.85%)
Mutual labels:  natural-language-processing
Courses
Quiz & Assignment of Coursera
Stars: ✭ 454 (-8.47%)
Mutual labels:  natural-language-processing
Tokenizers
💥 Fast State-of-the-Art Tokenizers optimized for Research and Production
Stars: ✭ 5,077 (+923.59%)
Mutual labels:  natural-language-processing
Practical Nlp
Official Repository for 'Practical Natural Language Processing' by O'Reilly Media
Stars: ✭ 452 (-8.87%)
Mutual labels:  natural-language-processing
Ml Mipt
Open Machine Learning course at MIPT
Stars: ✭ 480 (-3.23%)
Mutual labels:  natural-language-processing
Spacy
💫 Industrial-strength Natural Language Processing (NLP) in Python
Stars: ✭ 21,978 (+4331.05%)
Mutual labels:  natural-language-processing
Stealth
An open source Ruby framework for text and voice chatbots. 🤖
Stars: ✭ 481 (-3.02%)
Mutual labels:  natural-language-processing
Awesome Persian Nlp Ir
Curated List of Persian Natural Language Processing and Information Retrieval Tools and Resources
Stars: ✭ 460 (-7.26%)
Mutual labels:  natural-language-processing
Ml Visuals
🎨 ML Visuals contains figures and templates which you can reuse and customize to improve your scientific writing.
Stars: ✭ 5,676 (+1044.35%)
Mutual labels:  natural-language-processing
Weixin public corpus
A corpus of WeChat official account articles.
Stars: ✭ 465 (-6.25%)
Mutual labels:  natural-language-processing

ML Papers

This repo contains notes and short summaries of some ML-related papers I come across, organized by subject; the summaries are in the form of PDFs.

Self-Supervised & Contrastive Learning

  • Big Self-Supervised Models are Strong Semi-Supervised Learners (2020): [Paper] [Notes]
  • Debiased Contrastive Learning (2020): [Paper] [Notes]
  • Selfie: Self-supervised Pretraining for Image Embedding (2019): [Paper] [Notes]
  • Self-Supervised Representation Learning by Rotation Feature Decoupling (2019): [Paper] [Notes]
  • Revisiting Self-Supervised Visual Representation Learning (2019): [Paper] [Notes]
  • AET vs. AED: Unsupervised Representation Learning by Auto-Encoding Transformations (2019): [Paper] [Notes]
  • Boosting Self-Supervised Learning via Knowledge Transfer (2018): [Paper] [Notes]
  • Self-Supervised Feature Learning by Learning to Spot Artifacts (2018): [Paper] [Notes]
  • Unsupervised Representation Learning by Predicting Image Rotations (2018): [Paper] [Notes] (see the sketch after this list)
  • Cross Pixel Optical-Flow Similarity for Self-Supervised Learning (2018): [Paper] [Notes]
  • Multi-task Self-Supervised Visual Learning (2017): [Paper] [Notes]
  • Split-Brain Autoencoders: Unsupervised Learning by Cross-Channel Prediction (2017): [Paper] [Notes]
  • Colorization as a Proxy Task for Visual Understanding (2017): [Paper] [Notes]
  • Unsupervised Learning of Visual Representations by Solving Jigsaw Puzzles (2017): [Paper] [Notes]
  • Unsupervised Visual Representation Learning by Context Prediction (2016): [Paper] [Notes]
  • Colorful Image Colorization (2016): [Paper] [Notes]
  • Learning visual groups from co-occurrences in space and time (2015): [Paper] [Notes]
  • Discriminative unsupervised feature learning with exemplar convolutional neural networks (2015): [Paper] [Notes]
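
A concrete example of the pretext tasks listed above is rotation prediction ("Unsupervised Representation Learning by Predicting Image Rotations", 2018): each image is rotated by 0/90/180/270 degrees and the network is trained to classify which rotation was applied. Below is a minimal PyTorch sketch of that idea; `backbone`, `feat_dim` and `RotNet` are illustrative names, not taken from the paper's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rotate_batch(x):
    """Build the 4 rotated copies (0/90/180/270 degrees) of a batch and their rotation labels."""
    rotations = [torch.rot90(x, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4).repeat_interleave(x.size(0))  # [0]*B + [1]*B + [2]*B + [3]*B
    return torch.cat(rotations, dim=0), labels

class RotNet(nn.Module):
    """Any feature extractor plus a 4-way rotation classifier on top."""
    def __init__(self, backbone, feat_dim):
        super().__init__()
        self.backbone = backbone               # assumed to return (B, feat_dim) features
        self.rot_head = nn.Linear(feat_dim, 4)

    def forward(self, x):
        return self.rot_head(self.backbone(x))

def rotation_pretext_loss(model, images):
    """Self-supervised objective: predict which rotation was applied to each image."""
    rotated, labels = rotate_batch(images)
    logits = model(rotated)
    return F.cross_entropy(logits, labels.to(images.device))
```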

Semi-Supervised Learning

  • Negative sampling in semi-supervised learning (2020): [Paper] [Notes]
  • Time-Consistent Self-Supervision for Semi-Supervised Learning (2020): [Paper] [Notes]
  • Dual Student: Breaking the Limits of the Teacher in Semi-supervised Learning (2019): [Paper] [Notes]
  • S4L: Self-Supervised Semi-Supervised Learning (2019): [Paper] [Notes]
  • Semi-Supervised Learning by Augmented Distribution Alignment (2019): [Paper] [Notes]
  • MixMatch: A Holistic Approach to Semi-Supervised Learning (2019): [Paper] [Notes]
  • Unsupervised Data Augmentation (2019): [Paper] [Notes]
  • Interpolation Consistency Training for Semi-Supervised Learning (2019): [Paper] [Notes]
  • Deep Co-Training for Semi-Supervised Image Recognition (2018): [Paper] [Notes]
  • Unifying semi-supervised and robust learning by mixup (2019): [Paper] [Notes]
  • Realistic Evaluation of Deep Semi-Supervised Learning Algorithms (2018): [Paper] [Notes]
  • Semi-Supervised Sequence Modeling with Cross-View Training (2018): [Paper] [Notes]
  • Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning (2017): [Paper] [Notes]
  • Mean teachers are better role models (2017): [Paper] [Notes] (see the sketch after this list)
  • Temporal Ensembling for Semi-Supervised Learning (2017): [Paper] [Notes]
  • Semi-Supervised Learning with Ladder Networks (2015): [Paper] [Notes]
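
Many of the papers above (Mean Teacher, Temporal Ensembling, VAT, UDA, MixMatch) share a consistency-regularization core: an unlabeled example should receive the same prediction under different perturbations. The sketch below illustrates the Mean Teacher variant, where the consistency target comes from an exponential moving average of the student's weights. It is a minimal PyTorch sketch; `weak_aug` and `strong_aug` are placeholder augmentation functions, not names from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, alpha=0.99):
    """Teacher weights are an exponential moving average of the student's weights."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(alpha).add_(s_param, alpha=1 - alpha)

def mean_teacher_loss(student, teacher, x_lab, y_lab, x_unlab,
                      weak_aug, strong_aug, cons_weight=1.0):
    # Supervised term on the labeled batch.
    sup_loss = F.cross_entropy(student(weak_aug(x_lab)), y_lab)
    # Consistency term: the student should match the EMA teacher on unlabeled data.
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(weak_aug(x_unlab)), dim=1)
    student_probs = F.softmax(student(strong_aug(x_unlab)), dim=1)
    cons_loss = F.mse_loss(student_probs, teacher_probs)
    return sup_loss + cons_weight * cons_loss

# Usage sketch: start with teacher = copy.deepcopy(student), call
# ema_update(teacher, student) after every optimizer step, and ramp
# cons_weight up over the first training epochs.
```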

Domain Adaptation, Domain & Out-of-Distribution Generalization

  • Rethinking Distributional Matching Based Domain Adaptation (2020): [Paper] [Notes]
  • Transferability vs. Discriminability: Batch Spectral Penalization for Adversarial Domain Adaptation (2019): [Paper] [Notes]
  • On Learning Invariant Representations for Domain Adaptation (2019): [Paper] [Notes]
  • Universal Domain Adaptation (2019): [Paper] [Notes]
  • Transferable Adversarial Training (2019): [Paper] [Notes]
  • Multi-Adversarial Domain Adaptation (2018): [Paper] [Notes]
  • Conditional Adversarial Domain Adaptation (2018): [Paper] [Notes]
  • Learning Adversarially Fair and Transferable Representations (2018): [Paper] [Notes]
  • What is the Effect of Importance Weighting in Deep Learning? (2018): [Paper] [Notes]
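
A common building block behind the adversarial domain adaptation papers above is a domain classifier trained on shared features through a gradient reversal layer, so that the feature extractor learns to confuse source and target domains. The sketch below shows that generic DANN-style building block in PyTorch; it is an illustration, not the exact code of any paper listed here.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

class DomainClassifier(nn.Module):
    """Predicts source vs. target domain from shared features."""
    def __init__(self, feat_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, features, lambd=1.0):
        # The reversal makes the feature extractor maximize domain confusion
        # while the classifier itself minimizes the domain loss.
        return self.net(grad_reverse(features, lambd))
```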

Generative Modeling

  • Generative Pretraining from Pixels (2020): [Paper] [Notes]
  • Consistency Regularization for Generative Adversarial Networks (2020): [Paper] [Notes]
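
The consistency-regularization idea for GANs is to penalize the discriminator whenever its output changes under semantics-preserving augmentations of real images. Below is a minimal sketch of that extra discriminator term; `augment` is a placeholder for random flips/crops and the loss weight is illustrative.

```python
import torch
import torch.nn.functional as F

def discriminator_consistency_loss(discriminator, real_images, augment, weight=10.0):
    """Penalize changes in the discriminator's output when the real input is augmented."""
    d_real = discriminator(real_images)
    d_aug = discriminator(augment(real_images))
    return weight * F.mse_loss(d_aug, d_real)
```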

Unsupervised Learning

  • Invariant Information Clustering for Unsupervised Image Classification and Segmentation (2019): [Paper] [Notes]
  • Deep Clustering for Unsupervised Learning of Visual Features (2018): [Paper] [Notes]
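
DeepCluster alternates between clustering the current features of the whole dataset with k-means and training the network to predict the resulting cluster assignments as pseudo-labels. Below is a minimal sketch of the clustering step, assuming a data loader that iterates in a fixed order (so pseudo-labels stay aligned with the images) and using scikit-learn's KMeans for illustration.

```python
import numpy as np
import torch
from sklearn.cluster import KMeans

@torch.no_grad()
def assign_pseudo_labels(backbone, loader, num_clusters, device):
    """Cluster current features into pseudo-classes; loader must not shuffle."""
    features = []
    for images, _ in loader:
        features.append(backbone(images.to(device)).cpu().numpy())
    features = np.concatenate(features, axis=0)
    # One pseudo-label per image; a fresh classification head is then trained on these.
    return KMeans(n_clusters=num_clusters, n_init=10).fit_predict(features)
```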

Semantic Segmentation

  • DeepLabv3+: Encoder-Decoder with Atrous Separable Convolution (2018): [Paper] [Notes]
  • Large Kernel Matters: Improve Semantic Segmentation by Global Convolutional Network (2017): [Paper] [Notes]
  • Understanding Convolution for Semantic Segmentation (2018): [Paper] [Notes]
  • Rethinking Atrous Convolution for Semantic Image Segmentation (2017): [Paper] [Notes]
  • RefineNet: Multi-path refinement networks for high-resolution semantic segmentation (2017): [Paper] [Notes]
  • Pyramid Scene Parsing Network (2017): [Paper] [Notes]
  • SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation (2016): [Paper] [Notes]
  • ENet: A Deep Neural Network Architecture for Real-Time Semantic Segmentation (2016): [Paper] [Notes]
  • Attention to Scale: Scale-aware Semantic Image Segmentation (2016): [Paper] [Notes]
  • DeepLab: Semantic Image Segmentation with DCNNs, Atrous Convolution and CRFs (2016): [Paper] [Notes]
  • U-Net: Convolutional Networks for Biomedical Image Segmentation (2015): [Paper] [Notes]
  • Fully Convolutional Networks for Semantic Segmentation (2015): [Paper] [Notes]
  • Hypercolumns for object segmentation and fine-grained localization (2015): [Paper] [Notes]
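
A recurring component in the DeepLab-family papers above is atrous (dilated) convolution, which enlarges the receptive field without reducing resolution, and atrous spatial pyramid pooling (ASPP), which applies it at several rates in parallel. The sketch below is a simplified ASPP-like module in PyTorch; the real DeepLabv3+ module also includes an image-pooling branch and batch normalization.

```python
import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates, plus a 1x1 branch."""
    def __init__(self, in_ch, out_ch, rates=(6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, kernel_size=1)] +
            [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r) for r in rates]
        )
        self.project = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, kernel_size=1)

    def forward(self, x):
        # Each branch sees the same feature map at a different effective receptive field;
        # padding equal to the dilation rate keeps the spatial size unchanged.
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.project(feats)
```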

Weakly- and Semi-Supervised Semantic Segmentation

  • Box-driven Class-wise Region Masking and Filling Rate Guided Loss (2019): [Paper] [Notes]
  • FickleNet: Weakly and Semi-supervised Semantic Segmentation using Stochastic Inference (2019): [Paper] [Notes]
  • Weakly-Supervised Semantic Segmentation Network with Deep Seeded Region Growing (2018): [Paper] [Notes]
  • Learning Pixel-level Semantic Affinity with Image-level Supervision (2018): [Paper] [Notes]
  • Object Region Mining with Adversarial Erasing (2018): [Paper] [Notes]
  • Revisiting Dilated Convolution: A Simple Approach for Weakly- and Semi- Supervised Segmentation (2018): [Paper] [Notes]
  • Tell Me Where to Look: Guided Attention Inference Network (2018): [Paper] [Notes]
  • Semi Supervised Semantic Segmentation Using Generative Adversarial Network (2017): [Paper] [Notes]
  • Decoupled Deep Neural Network for Semi-supervised Semantic Segmentation (2015): [Paper] [Notes]
  • Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image Segmentation (2015): [Paper] [Notes]

Information Retrieval

  • VSE++: Improving Visual-Semantic Embeddings with Hard Negatives (2018): [Paper] [Notes]
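
The key idea in VSE++ is to use only the hardest negative in the mini-batch inside a triplet ranking loss over image and caption embeddings. Below is a minimal sketch of that max-violation loss, assuming L2-normalized image and text embeddings of the same dimension.

```python
import torch

def vsepp_loss(img_emb, txt_emb, margin=0.2):
    """Max-violation triplet loss over a batch of matching (image, caption) pairs."""
    scores = img_emb @ txt_emb.t()                    # (B, B) cosine similarities
    diag = scores.diag().view(-1, 1)                  # similarity of the true pairs
    # Hinge cost of every negative against its matching pair, in both directions.
    cost_txt = (margin + scores - diag).clamp(min=0)      # image -> wrong captions
    cost_img = (margin + scores - diag.t()).clamp(min=0)  # caption -> wrong images
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_txt = cost_txt.masked_fill(mask, 0)
    cost_img = cost_img.masked_fill(mask, 0)
    # VSE++: keep only the hardest negative per positive pair.
    return cost_txt.max(dim=1)[0].mean() + cost_img.max(dim=0)[0].mean()
```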

Visual Explanation & Attention

  • Attention Branch Network: Learning of Attention Mechanism for Visual Explanation (2019): [Paper] [Notes]
  • Attention-based Dropout Layer for Weakly Supervised Object Localization (2019): [Paper] [Notes]
  • Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer (2016): [Paper] [Notes]
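
In attention transfer, the spatial attention map of an activation tensor is taken as the channel-wise mean of squared activations, and the student is trained to match the teacher's normalized maps at chosen layers. Below is a minimal PyTorch sketch, assuming the paired student and teacher feature maps have matching spatial sizes.

```python
import torch
import torch.nn.functional as F

def attention_map(feat):
    """Spatial attention map: mean of squared activations over channels, L2-normalized."""
    att = feat.pow(2).mean(dim=1).flatten(1)   # (B, C, H, W) -> (B, H*W)
    return F.normalize(att, p=2, dim=1)

def attention_transfer_loss(student_feats, teacher_feats):
    """Match attention maps at corresponding layers of student and teacher."""
    return sum(
        (attention_map(s) - attention_map(t)).pow(2).mean()
        for s, t in zip(student_feats, teacher_feats)
    )
```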

Graph Neural Network

  • Pixels to Graphs by Associative Embedding (2017): [Paper] [Notes]
  • Associative Embedding: End-to-End Learning for Joint Detection and Grouping (2017): [Paper] [Notes]
  • Interaction Networks for Learning about Objects, Relations and Physics (2016): [Paper] [Notes]
  • DeepWalk: Online Learning of Social Representations (2014): [Paper] [Notes]
  • The graph neural network model (2009): [Paper] [Notes]
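
The common core of the graph neural network papers above is message passing: each node updates its state from an aggregation of its neighbours' states. The sketch below is one such layer with mean aggregation over a dense adjacency matrix; it is a generic illustration rather than the exact formulation of any single paper listed here.

```python
import torch
import torch.nn as nn

class MeanAggregationLayer(nn.Module):
    """One message-passing step: h_i' = ReLU(W_self h_i + W_neigh * mean over neighbours of h_j)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)
        self.w_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, node_feats, adj):
        # node_feats: (N, in_dim); adj: (N, N) 0/1 adjacency matrix.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)   # avoid division by zero
        neigh_mean = (adj @ node_feats) / deg             # average over neighbours
        return torch.relu(self.w_self(node_feats) + self.w_neigh(neigh_mean))
```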

Regularization

  • Manifold Mixup: Better Representations by Interpolating Hidden States (2018): [Paper] [Notes]
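
Manifold Mixup applies the mixup interpolation not to input images but to hidden representations at a randomly chosen layer, mixing the labels with the same coefficient. Below is a minimal sketch of the mixing step, assuming soft (or one-hot) label vectors; choosing the layer at which to mix is left to the training loop.

```python
import numpy as np
import torch

def manifold_mixup(hidden, targets_onehot, alpha=2.0):
    """Mix hidden states and soft labels within a batch using a Beta-sampled coefficient."""
    lam = np.random.beta(alpha, alpha)
    perm = torch.randperm(hidden.size(0), device=hidden.device)
    mixed_hidden = lam * hidden + (1 - lam) * hidden[perm]
    mixed_targets = lam * targets_onehot + (1 - lam) * targets_onehot[perm]
    return mixed_hidden, mixed_targets
```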

Deep Learning Methods & Models

Document Analysis and Segmentation

  • dhSegment: A generic deep-learning approach for document segmentation (2018): [Paper] [Notes]
  • Learning to extract semantic structure from documents using multimodal fully convolutional neural networks (2017): [Paper] [Notes]
  • Page Segmentation for Historical Handwritten Document Images Using Conditional Random Fields (2016): [Paper] [Notes]
  • ICDAR 2015 competition on text line detection in historical documents (2015): [Paper] [Notes]
  • Handwritten text line segmentation using Fully Convolutional Network (2017): [Paper] [Notes]
  • Deep Neural Networks for Large Vocabulary Handwritten Text Recognition (2015): [Paper] [Notes]
  • Page Segmentation of Historical Document Images with Convolutional Autoencoders (2015): [Paper] [Notes]