mattdeitke / cvpr-buzz

License: MIT license
🐝 Explore Trending Papers at CVPR

Programming Languages

TypeScript, SCSS, Python, JavaScript

Projects that are alternatives to or similar to cvpr-buzz

CoMoGAN
CoMoGAN: continuous model-guided image-to-image translation. CVPR 2021 oral.
Stars: ✭ 139 (+275.68%)
Mutual labels:  cvpr, cvpr2021
Cvpr2021 Papers With Code
A collection of CVPR 2021 papers and open-source projects
Stars: ✭ 7,138 (+19191.89%)
Mutual labels:  cvpr, cvpr2021
AMP-Regularizer
Code for our paper "Regularizing Neural Networks via Adversarial Model Perturbation", CVPR2021
Stars: ✭ 26 (-29.73%)
Mutual labels:  cvpr, cvpr2021
MetaBIN
[CVPR2021] Meta Batch-Instance Normalization for Generalizable Person Re-Identification
Stars: ✭ 58 (+56.76%)
Mutual labels:  cvpr, cvpr2021
cfvqa
[CVPR 2021] Counterfactual VQA: A Cause-Effect Look at Language Bias
Stars: ✭ 96 (+159.46%)
Mutual labels:  cvpr, cvpr2021
AODA
Official implementation of "Adversarial Open Domain Adaptation for Sketch-to-Photo Synthesis" (WACV 2022 / CVPRW 2021)
Stars: ✭ 44 (+18.92%)
Mutual labels:  cvpr, cvpr2021
HistoGAN
Reference code for the paper HistoGAN: Controlling Colors of GAN-Generated and Real Images via Color Histograms (CVPR 2021).
Stars: ✭ 158 (+327.03%)
Mutual labels:  cvpr, cvpr2021
Modaily-Aware-Audio-Visual-Video-Parsing
Code for CVPR 2021 paper Exploring Heterogeneous Clues for Weakly-Supervised Audio-Visual Video Parsing
Stars: ✭ 19 (-48.65%)
Mutual labels:  cvpr, cvpr2021
Scan2Cap
[CVPR 2021] Scan2Cap: Context-aware Dense Captioning in RGB-D Scans
Stars: ✭ 81 (+118.92%)
Mutual labels:  cvpr, cvpr2021
LBYLNet
[CVPR2021] Look before you leap: learning landmark features for one-stage visual grounding.
Stars: ✭ 46 (+24.32%)
Mutual labels:  cvpr, cvpr2021
CVPR2021-Papers-with-Code-Demo
A collection of the latest CVPR results, including papers, code, and demo videos. Recommendations welcome!
Stars: ✭ 752 (+1932.43%)
Mutual labels:  cvpr, cvpr2021
BCNet
Deep Occlusion-Aware Instance Segmentation with Overlapping BiLayers [CVPR 2021]
Stars: ✭ 434 (+1072.97%)
Mutual labels:  cvpr, cvpr2021
Restoring-Extremely-Dark-Images-In-Real-Time
The project is the official implementation of our CVPR 2021 paper, "Restoring Extremely Dark Images in Real Time"
Stars: ✭ 79 (+113.51%)
Mutual labels:  cvpr, cvpr2021
Awesome Frontendmasters
📚 List of awesome frontendmasters course resources
Stars: ✭ 110 (+197.3%)
Mutual labels:  d3, gatsby
SGGpoint
[CVPR 2021] Exploiting Edge-Oriented Reasoning for 3D Point-based Scene Graph Analysis (official pytorch implementation)
Stars: ✭ 41 (+10.81%)
Mutual labels:  cvpr, cvpr2021
single-positive-multi-label
Multi-Label Learning from Single Positive Labels - CVPR 2021
Stars: ✭ 63 (+70.27%)
Mutual labels:  cvpr, cvpr2021
data-driven-range-slider
D3.js based data-driven range slider, date time support
Stars: ✭ 21 (-43.24%)
Mutual labels:  d3
gatsby-theme-help-center
A Gatsby theme for your knowledge base or help center
Stars: ✭ 95 (+156.76%)
Mutual labels:  gatsby
wellioviz
d3.js v5 visualization of well logs
Stars: ✭ 36 (-2.7%)
Mutual labels:  d3
md-editor-v3
Markdown editor for Vue 3, developed in JSX and TypeScript: dark theme, content beautification via Prettier, direct article rendering, pasting or clipping images to upload...
Stars: ✭ 326 (+781.08%)
Mutual labels:  tsx

🐝 CVPR Buzz 🍯

Explore Trending Papers at CVPR 2021


📊 Dataset

The scraping code is in tasks.py. Data is cached on the website, which makes it extremely fast to query with GraphQL and reduces the number of runtime dependencies.

👁️ CVPR Accepted Papers

The accepted papers and their abstracts are extracted from CVPR.
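The extraction logic lives in tasks.py; as a rough sketch (assuming the CVPR open-access listing marks titles with `<dt class="ptitle">` anchors, which matches the site at the time of writing but may change), title extraction could look like:

```python
from html.parser import HTMLParser

class TitleParser(HTMLParser):
    """Collects paper titles from anchors inside <dt class="ptitle"> elements."""

    def __init__(self):
        super().__init__()
        self.titles = []
        self._in_title = False
        self._in_anchor = False

    def handle_starttag(self, tag, attrs):
        if tag == "dt" and ("class", "ptitle") in attrs:
            self._in_title = True
        elif tag == "a" and self._in_title:
            self._in_anchor = True

    def handle_endtag(self, tag):
        if tag == "dt":
            self._in_title = False
        elif tag == "a":
            self._in_anchor = False

    def handle_data(self, data):
        if self._in_anchor and data.strip():
            self.titles.append(data.strip())

def extract_titles(html: str) -> list:
    """Parse a listing page and return the paper titles found in it."""
    parser = TitleParser()
    parser.feed(html)
    return parser.titles
```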

📖 Citation Data

Citation data comes from Semantic Scholar. There is no easy way to go from a paper title to Semantic Scholar's paper ID (the API does not support it), so we search for the paper with Selenium and apply a few checks to verify that it's the same paper.

This may take an hour or so to get through all the papers. There are also occasional rate limits, after which you may have to resume from where the run left off, so the browser window that opens may need monitoring. (It's possible to automate this by checking for the existence of certain elements on the screen, though it hasn't bothered me enough to implement.)
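The "few checks" amount to comparing the searched title against the title of the search result. One way to sketch such a check (the normalization here is illustrative, not the exact logic in tasks.py):

```python
import re

def normalize(title: str) -> str:
    """Lowercase and collapse punctuation/whitespace so that minor formatting
    differences (casing, hyphenation, smart quotes) don't cause mismatches."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def same_paper(query_title: str, result_title: str) -> bool:
    """Accept a Semantic Scholar search result only if its title matches
    the CVPR title after normalization."""
    return normalize(query_title) == normalize(result_title)
```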

With the paper ID, we can then use Semantic Scholar's API to easily fetch the number of citations for the paper.
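As a sketch of that step (endpoint shape per Semantic Scholar's public Graph API; error handling omitted, and the actual code in tasks.py may differ):

```python
import json
from urllib.request import urlopen

API = "https://api.semanticscholar.org/graph/v1/paper/{s2id}?fields=citationCount"

def citation_url(s2id: str) -> str:
    """Build the Graph API request URL for a Semantic Scholar paper ID."""
    return API.format(s2id=s2id)

def parse_citations(payload: str) -> int:
    """Pull citationCount out of the API's JSON response."""
    return json.loads(payload)["citationCount"]

# Example (network call commented out to keep this self-contained):
# with urlopen(citation_url("43497fe8aa7c730e075b08facc2aa560a6d4dd85")) as r:
#     print(parse_citations(r.read().decode()))
```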

🐦 Twitter

To fetch each paper's engagement on Twitter, Twint is used. Currently there are two queries per paper:

  1. Paper title. Searches for the paper's title in quotes (e.g., "VirTex: Learning Visual Representations From Textual Annotations"). Paper titles are unique enough that it is extraordinarily rare for a tweet containing a paper's title not to actually be about that paper. For papers whose title is a common phrase, I have attempted to remove those results.
  2. ArXiv URL. Searches for shares of the arXiv URL in quotes (e.g., "arxiv.org/abs/2006.06666").

Each unique tweet is only counted once.
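Deduplicating across the two queries can be done on tweet IDs. A minimal sketch (the tweet dicts here are simplified stand-ins for Twint's result objects):

```python
def merge_unique(title_hits, url_hits):
    """Merge results from the title query and the arXiv-URL query,
    keeping each tweet (keyed by its ID) exactly once."""
    seen = {}
    for tweet in list(title_hits) + list(url_hits):
        seen.setdefault(tweet["id"], tweet)  # first occurrence wins
    return list(seen.values())
```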

Tweets that aren't caught by the above queries can also be added manually (see Adding Data below).

👊 Adding Data

Please open a PR to add new data!

🐤 Tweets

If you want to index Tweets, open a PR and add them to manual-data.json. All that is needed is the account username and the tweetId.

For instance, if https://twitter.com/quocleix/status/1349443438698143744 is the Tweet, one would format it as ["quocleix", "1349443438698143744"]. The paper IDs (a.k.a. the keys) can be found in paper-data.
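A minimal validity check for an entry in that format (the ["username", "tweetId"] pair described above; this helper is illustrative, not part of the repo):

```python
def valid_entry(entry) -> bool:
    """An entry is a ["username", "tweetId"] pair; the tweet ID must be
    the numeric string from the end of the status URL."""
    return (
        isinstance(entry, list)
        and len(entry) == 2
        and all(isinstance(x, str) for x in entry)
        and entry[1].isdigit()
    )
```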

🏙 Citation Data

Citation data comes from Semantic Scholar. If your paper is on Semantic Scholar, but it is not showing up on the site, please edit the s2id field in paper-data/{paperId}.json. The s2id is found at the tail end of the Semantic Scholar URL.

For instance, if the Semantic Scholar URL is https://www.semanticscholar.org/paper/Meta-Pseudo-Labels-Pham-Xie/43497fe8aa7c730e075b08facc2aa560a6d4dd85, the s2id would be 43497fe8aa7c730e075b08facc2aa560a6d4dd85.
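Extracting the s2id is just a matter of taking the final path segment of the URL:

```python
def s2id_from_url(url: str) -> str:
    """Return the trailing path segment of a Semantic Scholar paper URL,
    which is the paper's s2id."""
    return url.rstrip("/").rsplit("/", 1)[-1]
```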

📄 License

MIT License.
