FixMatch

This is my implementation of the experiments in the FixMatch paper. I only implemented the experiments on the CIFAR-10 dataset, without CTAugment.

Environment setup

My platform is:

  • an NVIDIA 2080Ti GPU
  • Ubuntu 16.04
  • Python 3.6.9
  • PyTorch 1.3.1, installed via conda
  • cudatoolkit 10.1.243
  • cudnn 7.6.3 in /usr/lib/x86_64-linux-gnu

Dataset

Download the CIFAR-10 dataset:

    $ mkdir -p dataset && cd dataset
    $ wget -c http://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
    $ tar -xzvf cifar-10-python.tar.gz

Train the model

To train the model with 40 labeled samples, you can run the script:

    $ python train.py --n-labeled 40 

where 40 is the number of labeled samples used during training.

Results

After training the model five times with 40 labeled samples, using the command:

    $ python train.py --n-labeled 40 

I observed the following top-1 accuracies:

    No.    1      2      3      4      5
    acc    91.81  91.29  89.51  91.32  79.42

Note:

  1. There is no need to add interleave, since interleave is only used to avoid bias in the BN statistics. MixMatch uses interleave because it runs three forward passes on three separate data batches; if you instead combine the three batches and run a single forward pass on the combined batch, the results should be the same (see the first sketch after these notes). You may refer to my implementation of MixMatch here, which does not use interleave and still achieves similar results.

  2. There are two ways to handle the buffers in the EMA operation: one is to copy them directly as the EMA buffer states, and the other is to apply EMA to these buffer states as well (a sketch of both options follows these notes). Generally speaking, there should not be a large gap between the two methods: in a ResNet, the buffers are the running_mean/running_var of the nn.BatchNorm layers, and during training these BN buffers are already updated with a moving-average method, which is exactly what the EMA operator does. The EMA operator estimates the expectation of the associated parameters by smoothing a series of values; by averaging the most recent values of the series, we compute a less noisy parameter value (which can simply be treated as the expectation). Directly copying the buffers amounts to first-order smoothing, while applying EMA to them amounts to second-order smoothing. In general, first-order smoothing already gives a good enough estimate of the expected value, though second-order smoothing should be less biased.

  3. The method based on naive random augmentation causes a relatively large variance. If you leave the random seed unfixed and generate the labeled training split randomly each time, you may observe that the validation accuracy fluctuates over a wide range. In the paper, the authors used CTAugment, which introduces feedback into the data augmentation strategy and thereby reduces the variance.
