251. rlmeta: RLMeta is a light-weight, flexible framework for distributed reinforcement learning research.
252. DeepFovea: Neural Reconstruction for Foveated Rendering and Video Compression using Learned Statistics of Natural Videos.
253. dino: PyTorch code for training Vision Transformers with the self-supervised learning method DINO.
254. opendialkg: OpenDialKG: Explainable Conversational Reasoning with Attention-based Walks over Knowledge Graphs.
255. Mephisto: A suite of tools for managing crowdsourcing tasks from inception through to data packaging for research use.
256. alebo: Re-Examining Linear Embeddings for High-dimensional Bayesian Optimization.
258. phosa: Perceiving 3D Human-Object Spatial Arrangements from a Single Image in the Wild.
259. voxelcnn: VoxelCNN: Order-Aware Generative Modeling Using the 3D-Craft Dataset.
262. MetaICL: An original implementation of "MetaICL: Learning to Learn In Context" by Sewon Min, Mike Lewis, Luke Zettlemoyer, and Hannaneh Hajishirzi.
263. moe: Misspelling Oblivious Word Embeddings.
265. SimulEval: SimulEval: A General Evaluation Toolkit for Simultaneous Translation.
268. UNLU: Code for the paper "UnNatural Language Inference", to appear at ACL 2021 (long paper).
269. EasyComDataset: The Easy Communications (EasyCom) dataset is a first-of-its-kind dataset designed to help mitigate the *cocktail party effect* from an augmented-reality (AR)-motivated, multi-sensor egocentric world view.
270. wsd-biencoders: Experiment code for the ACL 2020 paper "Moving Down the Long Tail of Word Sense Disambiguation with Gloss Informed Bi-encoders".
271. TimeSformer: The official PyTorch implementation of our paper "Is Space-Time Attention All You Need for Video Understanding?"
273. asset: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations.
275. loop_tool: A thin, highly portable toolkit for dense loop-based computation.
276. dynalab: A Python library with command-line tools for interacting with Dynabench (https://dynabench.org/), such as uploading models.
277. Private-ID: A collection of algorithms that perform a join between two parties while preserving the privacy of the keys on which the join happens.
278. drqv2: DrQ-v2: Improved Data-Augmented Reinforcement Learning.
279. minihack: MiniHack the Planet: A Sandbox for Open-Ended Reinforcement Learning Research.
281. co3d: Tooling for the Common Objects in 3D dataset.
282. meshtalk: Repository for MeshTalk supplemental material and code, pending the release of the 16 (already approved) GHS captures that our lab will make publicly available.
286. salina: A lightweight library for sequential learning agents, including reinforcement learning.
287. augmentation-corruption: This repository provides code for "On Interaction Between Augmentations and Corruptions in Natural Corruption Robustness".
288. contriever: Contriever: Unsupervised Dense Information Retrieval with Contrastive Learning.
289. xR-EgoPose: A new synthetic dataset for egocentric 3D human pose estimation.
290. fairring: Fairring (FAIR + Herring) is a plug-in for PyTorch that provides a process group for distributed training that outperforms NCCL at large scales.
291. online_dialog_eval: Code for the paper "Learning an Unreferenced Metric for Online Dialogue Evaluation", ACL 2020.
292. dcem: The Differentiable Cross-Entropy Method.
293. DONERF: Code for "DONeRF: Towards Real-Time Rendering of Compact Neural Radiance Fields using Depth Oracle Networks".
294. speech-resynthesis: An official reimplementation of the method described in the INTERSPEECH 2021 paper "Speech Resynthesis from Discrete Disentangled Self-Supervised Representations".
298. fbpcs: FBPCS (Facebook Private Computation Solutions) leverages secure multi-party computation (MPC) to output aggregated data without making unencrypted, readable data available to the other party or any third parties. Facebook provides impression & opportunity data, and the advertiser provides conversion/outcome data. Both parties have dedicated cl…
299. xcit: Official code for the Cross-Covariance Image Transformer (XCiT).
300. GDT: We present a framework for training multi-modal deep learning models on unlabelled video data by forcing the network to learn invariances to transformations applied to both the audio and video streams.