Oldpan / Pytorch Memory Utils

PyTorch memory tracking code

Programming Languages

python

Projects that are alternatives of or similar to Pytorch Memory Utils

Touch Bar Istats
Show CPU/GPU/MEM temperature on Touch Bar with BetterTouchTool!
Stars: ✭ 141 (-71.4%)
Mutual labels:  gpu, memory
Hardened malloc
Hardened allocator designed for modern systems. It has integration into Android's Bionic libc and can be used externally with musl and glibc as a dynamic library for use on other Linux-based platforms. It will gain more portability / integration over time.
Stars: ✭ 472 (-4.26%)
Mutual labels:  memory
Memory.dll
C# Hacking library for making PC game trainers.
Stars: ✭ 411 (-16.63%)
Mutual labels:  memory
Fastai
The fastai deep learning library
Stars: ✭ 21,718 (+4305.27%)
Mutual labels:  gpu
Digits
Deep Learning GPU Training System
Stars: ✭ 4,056 (+722.72%)
Mutual labels:  gpu
Caer
High-performance Vision library in Python. Scale your research, not boilerplate.
Stars: ✭ 452 (-8.32%)
Mutual labels:  gpu
Lmdb Embeddings
Fast word vectors with little memory usage in Python
Stars: ✭ 404 (-18.05%)
Mutual labels:  memory
Neurokernel
Neurokernel Project
Stars: ✭ 491 (-0.41%)
Mutual labels:  gpu
Bitcracker
BitCracker is the first open source password cracking tool for memory units encrypted with BitLocker
Stars: ✭ 463 (-6.09%)
Mutual labels:  gpu
Open3d
Open3D: A Modern Library for 3D Data Processing
Stars: ✭ 5,860 (+1088.64%)
Mutual labels:  gpu
Python Opengl
An open access book on Python, OpenGL and Scientific Visualization, Nicolas P. Rougier, 2018
Stars: ✭ 441 (-10.55%)
Mutual labels:  gpu
Stablefluids
A straightforward GPU implementation of Jos Stam's "Stable Fluids" on Unity.
Stars: ✭ 430 (-12.78%)
Mutual labels:  gpu
Halide
a language for fast, portable data-parallel computation
Stars: ✭ 4,722 (+857.81%)
Mutual labels:  gpu
H2o4gpu
H2Oai GPU Edition
Stars: ✭ 416 (-15.62%)
Mutual labels:  gpu
Volatility
An advanced memory forensics framework
Stars: ✭ 5,042 (+922.72%)
Mutual labels:  memory
Gpu Rest Engine
A REST API for Caffe using Docker and Go
Stars: ✭ 412 (-16.43%)
Mutual labels:  gpu
Sympact
🔥 Stupid Simple CPU/MEM "Profiler" for your JS code.
Stars: ✭ 439 (-10.95%)
Mutual labels:  memory
Picongpu
Particle-in-Cell Simulations for the Exascale Era ✨
Stars: ✭ 452 (-8.32%)
Mutual labels:  gpu
Deepspeed
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
Stars: ✭ 6,024 (+1121.91%)
Mutual labels:  gpu
Regl Cnn
Digit recognition with Convolutional Neural Networks in WebGL
Stars: ✭ 490 (-0.61%)
Mutual labels:  gpu

Pytorch-Memory-Utils

These scripts help you track GPU memory usage during training with PyTorch.

A blog post that explains this tool in detail: https://oldpan.me/archives/pytorch-gpu-memory-usage-track

Requirements:

pynvml (pip install nvidia-ml-py3)

The following is sample output.

  • Calculate the memory usage of a single model
Model Sequential : params: 0.450304M
Model Sequential : intermediate variables: 336.089600 M (without backward)
Model Sequential : intermediate variables: 672.179200 M (with backward)
  • Track the amount of GPU memory usage
# 12-Sep-18-21:48:45-gpu_mem_track.txt

GPU Memory Track | 12-Sep-18-21:48:45 | Total Used Memory:696.5  Mb

At __main__ <module>: line 13                        Total Used Memory:696.5  Mb

+ | 7 * Size:(512, 512, 3, 3)     | Memory: 66.060 M | <class 'torch.nn.parameter.Parameter'>
+ | 1 * Size:(512, 256, 3, 3)     | Memory: 4.7185 M | <class 'torch.nn.parameter.Parameter'>
+ | 1 * Size:(64, 64, 3, 3)       | Memory: 0.1474 M | <class 'torch.nn.parameter.Parameter'>
+ | 1 * Size:(128, 64, 3, 3)      | Memory: 0.2949 M | <class 'torch.nn.parameter.Parameter'>
+ | 1 * Size:(128, 128, 3, 3)     | Memory: 0.5898 M | <class 'torch.nn.parameter.Parameter'>
+ | 8 * Size:(512,)               | Memory: 0.0163 M | <class 'torch.nn.parameter.Parameter'>
+ | 3 * Size:(256, 256, 3, 3)     | Memory: 7.0778 M | <class 'torch.nn.parameter.Parameter'>
+ | 1 * Size:(256, 128, 3, 3)     | Memory: 1.1796 M | <class 'torch.nn.parameter.Parameter'>
+ | 2 * Size:(64,)                | Memory: 0.0005 M | <class 'torch.nn.parameter.Parameter'>
+ | 4 * Size:(256,)               | Memory: 0.0040 M | <class 'torch.nn.parameter.Parameter'>
+ | 2 * Size:(128,)               | Memory: 0.0010 M | <class 'torch.nn.parameter.Parameter'>
+ | 1 * Size:(64, 3, 3, 3)        | Memory: 0.0069 M | <class 'torch.nn.parameter.Parameter'>

At __main__ <module>: line 15                        Total Used Memory:1142.0 Mb

+ | 1 * Size:(60, 3, 512, 512)    | Memory: 188.74 M | <class 'torch.Tensor'>
+ | 1 * Size:(30, 3, 512, 512)    | Memory: 94.371 M | <class 'torch.Tensor'>
+ | 1 * Size:(40, 3, 512, 512)    | Memory: 125.82 M | <class 'torch.Tensor'>

At __main__ <module>: line 21                        Total Used Memory:1550.9 Mb

+ | 1 * Size:(120, 3, 512, 512)   | Memory: 377.48 M | <class 'torch.Tensor'>
+ | 1 * Size:(80, 3, 512, 512)    | Memory: 251.65 M | <class 'torch.Tensor'>

At __main__ <module>: line 26                        Total Used Memory:2180.1 Mb

- | 1 * Size:(120, 3, 512, 512)   | Memory: 377.48 M | <class 'torch.Tensor'> 
- | 1 * Size:(40, 3, 512, 512)    | Memory: 125.82 M | <class 'torch.Tensor'> 

At __main__ <module>: line 32                        Total Used Memory:1676.8 Mb
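Each Memory entry in the log above is just the tensor's element count multiplied by 4 bytes (float32) and divided by 10^6, and the "with backward" figure for intermediate variables is double the forward-only figure, since gradients of the same size are kept. A quick sanity check of the logged numbers in plain Python (no GPU required; tensor_mb is an illustrative helper, not part of this library):

```python
def tensor_mb(shape, bytes_per_elem=4):
    """Approximate float32 tensor memory in MB (10^6 bytes),
    matching the convention used in the log above."""
    n = 1
    for dim in shape:
        n *= dim
    return n * bytes_per_elem / 1e6

# 7 parameters of shape (512, 512, 3, 3) -> logged as 66.060 M
print(round(7 * tensor_mb((512, 512, 3, 3)), 3))   # 66.06
# one (30, 3, 512, 512) batch -> logged as 94.371 M (log truncates 94.37184)
print(round(tensor_mb((30, 3, 512, 512)), 3))      # 94.372
# intermediate variables double when kept for backward
print(2 * 336.089600)                              # 672.1792
```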

How to use

Track the amount of GPU memory usage

A simple example:

import torch
import inspect

from torchvision import models
from gpu_mem_track import MemTracker

device = torch.device('cuda:0')

frame = inspect.currentframe()          # define a frame to track
gpu_tracker = MemTracker(frame)         # define a GPU tracker

gpu_tracker.track()                     # insert a track() call between lines of code that use the GPU
cnn = models.vgg19(pretrained=True).features.to(device).eval()
gpu_tracker.track()                     # insert a track() call between lines of code that use the GPU

dummy_tensor_1 = torch.randn(30, 3, 512, 512).float().to(device)  # 30*3*512*512*4/1000/1000 = 94.37M
dummy_tensor_2 = torch.randn(40, 3, 512, 512).float().to(device)  # 40*3*512*512*4/1000/1000 = 125.82M
dummy_tensor_3 = torch.randn(60, 3, 512, 512).float().to(device)  # 60*3*512*512*4/1000/1000 = 188.74M

gpu_tracker.track()

dummy_tensor_4 = torch.randn(120, 3, 512, 512).float().to(device)  # 120*3*512*512*4/1000/1000 = 377.48M
dummy_tensor_5 = torch.randn(80, 3, 512, 512).float().to(device)  # 80*3*512*512*4/1000/1000 = 251.65M

gpu_tracker.track()

dummy_tensor_4 = dummy_tensor_4.cpu()
dummy_tensor_2 = dummy_tensor_2.cpu()
torch.cuda.empty_cache()

gpu_tracker.track()

This writes a .txt log file to the current directory; its contents match the sample output shown above.
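The + and - lines in the log suggest the tracker works by diffing the set of live CUDA tensors (discoverable through Python's gc module) between consecutive track() calls. That core diffing idea can be sketched in plain Python; FakeTensor, snapshot, and report_diff below are illustrative names, not this library's API, and a stand-in class replaces torch.Tensor so the sketch runs without a GPU:

```python
import gc
from collections import Counter

class FakeTensor:                       # stand-in for a CUDA torch.Tensor
    def __init__(self, shape):
        self.shape = shape

def snapshot():
    """Count live FakeTensor objects by shape, the way a tracker
    would count live CUDA tensors found via gc.get_objects()."""
    return Counter(t.shape for t in gc.get_objects()
                   if isinstance(t, FakeTensor))

def report_diff(before, after):
    """Emit '+' lines for newly appeared shapes and '-' for freed ones."""
    lines = []
    for shape in set(before) | set(after):
        delta = after[shape] - before[shape]
        if delta > 0:
            lines.append(f"+ | {delta} * Size:{shape}")
        elif delta < 0:
            lines.append(f"- | {-delta} * Size:{shape}")
    return lines

before = snapshot()
a = FakeTensor((30, 3, 512, 512))
b = FakeTensor((40, 3, 512, 512))
after = snapshot()
print(report_diff(before, after))       # two '+' lines, one per new shape
del a
gc.collect()
print(report_diff(after, snapshot()))   # one '-' line for (30, 3, 512, 512)
```

The real tracker additionally records each tensor's type and converts element counts to megabytes, but the snapshot-and-diff structure is the essential mechanism.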

REFERENCE

Part of the code is referenced from:

http://jacobkimmel.github.io/pytorch_estimating_model_size/
https://gist.github.com/MInner/8968b3b120c95d3f50b8a22a74bf66bc

Note that the project description data, including the texts, logos, images, and/or trademarks, for each open source project belongs to its rightful owner. If you wish to add or remove any projects, please contact us at [email protected].