
kornelski / dupe-krill

Licence: other
A fast file deduplicator

Programming Languages

rust
11053 projects

Projects that are alternatives of or similar to dupe-krill

yadf
Yet Another Dupes Finder
Stars: ✭ 32 (-78.23%)
Mutual labels:  dedupe, file-deduplication
otp
One Time Password for 2-Factor-Authentication implemented in Rust
Stars: ✭ 21 (-85.71%)
Mutual labels:  rust-library
webbrowser-rs
Rust library to open URLs in the web browsers available on a platform
Stars: ✭ 150 (+2.04%)
Mutual labels:  rust-library
twang
Library for pure Rust advanced audio synthesis.
Stars: ✭ 83 (-43.54%)
Mutual labels:  rust-library
zingg
Scalable identity resolution, entity resolution, data mastering and deduplication using ML
Stars: ✭ 655 (+345.58%)
Mutual labels:  dedupe
hassle-rs
🦀 This crate provides an FFI layer and idiomatic rust wrappers for the new DirectXShaderCompiler library.
Stars: ✭ 34 (-76.87%)
Mutual labels:  rust-library
rdp
A library providing FFI access to fast Ramer–Douglas–Peucker and Visvalingam-Whyatt line simplification algorithms
Stars: ✭ 20 (-86.39%)
Mutual labels:  rust-library
rust-cross-libs
Cross-compile the Rust standard library for custom targets without a full bootstrap build.
Stars: ✭ 29 (-80.27%)
Mutual labels:  rust-library
lcs-image-diff-rs
🖼 Image diff tool with LCS algorithm
Stars: ✭ 67 (-54.42%)
Mutual labels:  rust-library
InputBot
A Rust library for creating global hotkeys, and emulating inputs.
Stars: ✭ 246 (+67.35%)
Mutual labels:  rust-library
cala
Cross-platform system interface for hardware IO
Stars: ✭ 46 (-68.71%)
Mutual labels:  rust-library
whoami
Rust crate to get the current user and environment.
Stars: ✭ 68 (-53.74%)
Mutual labels:  rust-library
ctrs
Category Theory For Programmers (Bartosz Milewski)
Stars: ✭ 62 (-57.82%)
Mutual labels:  rust-library
cdc
A library for performing Content-Defined Chunking (CDC) on data streams.
Stars: ✭ 18 (-87.76%)
Mutual labels:  rust-library
mpris-rs
Idiomatic MPRIS D-Bus interface library for Rust
Stars: ✭ 37 (-74.83%)
Mutual labels:  rust-library
mailparse
Rust library to parse mail files
Stars: ✭ 148 (+0.68%)
Mutual labels:  rust-library
bitcrust
Bitcoin software suite
Stars: ✭ 61 (-58.5%)
Mutual labels:  rust-library
rspark
▁▂▆▇▁▄█▁ Sparklines for Rust apps
Stars: ✭ 50 (-65.99%)
Mutual labels:  rust-library
waihona
Rust crate for performing cloud storage CRUD actions across major cloud providers, e.g. AWS
Stars: ✭ 46 (-68.71%)
Mutual labels:  rust-library
intersection-wasm
Mesh-Mesh and Triangle-Triangle Intersection tests based on the algorithm by Tomas Akenine-Möller
Stars: ✭ 17 (-88.44%)
Mutual labels:  rust-library

Dupe krill — a fast file deduplicator

Replaces files that have identical content with hardlinks, so that the data of all copies is stored only once, saving disk space. Useful for reducing the size of multiple backups, messy collections of photos and music, countless copies of node_modules, macOS app bundles, and anything else that's usually immutable (since all hardlinked copies of a file change when any one of them is changed).

Features

  • It's very fast and reasonably memory-efficient.
  • Deduplicates incrementally as soon as duplicates are found.
  • Replaces files atomically, so it's safe to interrupt at any time.
  • Proven to be reliable. Used for years without an issue.
  • It's aware of existing hardlinks and supports merging of multiple groups of hardlinks.
  • Gracefully handles symlinks and special files.

Usage

Download binaries from the releases page.

Works on macOS and Linux. Windows is not supported.

If you have Rust 1.42 or later, build the program with cargo install dupe-krill, or clone this repo and run cargo build --release.

dupe-krill -d <files or directories> # find dupes without doing anything
dupe-krill <files or directories> # find and replace with hardlinks

See dupe-krill -h for details.

Output

It prints one duplicate per line, with both paths on the same line and the difference between them highlighted as {first => second}.
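
For example, a duplicate pair might be reported as (hypothetical paths):

backups/{2020-01 => 2020-02}/photos/IMG_1234.jpg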

Progress shows:

<number of unique file bodies>+<number of hardlinks> dupes. <files checked>+<files skipped> files scanned.
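
For example, a hypothetical status of 120+450 dupes. 9000+37 files scanned. means 120 unique file bodies with 450 hardlinks among the dupes, after checking 9000 files and skipping 37.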

Symlinks, special device files, and 0-sized files are always skipped.

Don't try to parse the program's usual output. Add the --json option if you want machine-readable output. You can also use this program as a Rust library for seamless integration.

How does hardlinking work?

Files are deduplicated by making a hardlink; they're not deleted. Instead, literally the same file will exist in two or more directories at once. Unlike symlinks, hardlinks behave like real files. Deleting one of the hardlinks leaves the others unchanged. Editing a hardlinked file edits it in all places at once (except in some applications that delete and create a new file instead of overwriting the existing one). Hardlinking will make all duplicates of a file have the same file permissions.
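
As an illustration, the replace-with-hardlink step can be sketched in Rust roughly like this (a hypothetical function, not the program's actual code; it assumes both paths are on the same filesystem):

use std::fs;
use std::io;
use std::path::Path;

// Hypothetical sketch: replace `dupe` with a hardlink to `original`.
// A real tool would also compare inodes and metadata first, pick a
// temporary name guaranteed not to collide, and handle errors such as
// the two paths living on different filesystems.
fn replace_with_hardlink(original: &Path, dupe: &Path) -> io::Result<()> {
    // Create the link under a temporary name next to the duplicate...
    let tmp = dupe.with_extension("dupe-krill-tmp");
    fs::hard_link(original, &tmp)?;
    // ...then atomically rename it over the duplicate. If the program
    // is interrupted before this line, the duplicate is still intact.
    fs::rename(&tmp, dupe)
}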

This program will only deduplicate files larger than a single disk block (usually 4KB), because on many filesystems hardlinking tiny files may not actually save space. You can add the -s flag to dedupe small files, too.

Nerding out about the fast deduplication algorithm

In short: it uses Rust's standard library BTreeMap for deduplication, but with a twist that lets it compare files lazily, reading only as much of each file as necessary.


Theoretically, you could find all duplicate files by putting them in a giant hash table aggregating file paths and using file content as the key:

HashMap<Vec<u8>, Vec<Path>>

but of course that would use ludicrous amounts of memory. You can fix it by using hashes of the content instead of the content itself.

BTW, I can't stress enough how mind-bogglingly improbable accidental cryptographic hash collisions are. It's not just "you're probably safe if you're lucky". It's "creating this many files would take more energy than our civilisation has ever produced in all of its history".
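
For scale: with a 160-bit hash such as SHA-1 (a [u8; 20] key), the birthday bound puts the expected collision point around 2^80, i.e. on the order of 10^24 files.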

HashMap<[u8; 20], Vec<Path>>

but that's still pretty slow, since you still read the entire content of every file. You can save some work by comparing file sizes first:

HashMap<u64, HashMap<[u8; 20], Vec<Path>>>

but it helps only a little, since files with identical sizes are surprisingly common. You can eliminate a few more near-duplicates by comparing only the beginnings of files first:

HashMap<u64, HashMap<[u8; 20], HashMap<[u8; 20], Vec<Path>>>>

and then maybe compare only the ends, and maybe a few more fragments in the middle, etc.:

HashMap<u64, HashMap<[u8; 20], HashMap<[u8; 20], HashMap<[u8; 20], Vec<Path>>>>>
HashMap<u64, HashMap<[u8; 20], HashMap<[u8; 20], HashMap<[u8; 20], HashMap<[u8; 20], HashMap<[u8; 20], …>>>>>>

These endlessly nested hashmaps can be generalized. BTreeMap doesn't need to see the whole key at once. It only compares keys with each other, and the comparison can be done incrementally — by only reading enough of the file to show that its key is unique, without even knowing the full key.

BTreeMap<LazilyHashing<File>, Vec<Path>>

And that's what this program does (plus a bit of wrangling with inodes).

The whole heavy lifting of deduplication is done by Rust's standard library BTreeMap and overloaded </> operators that incrementally hash the files (yes, operator overloading that does file I/O is a brilliant idea. I couldn't use <<, unfortunately).
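
To make that concrete, here's a hypothetical Rust sketch of such a lazily-compared key (illustrative only, not the program's actual types; the comments note what a real implementation does differently):

use std::cmp::Ordering;
use std::collections::BTreeMap;
use std::fs::File;
use std::io::{Read, Seek, SeekFrom};
use std::path::PathBuf;

// Hypothetical key type. The real program hashes and caches each block
// it reads, so no file is read more than once, and it handles I/O
// errors instead of panicking.
struct LazyFileKey {
    path: PathBuf,
    size: u64,
}

impl LazyFileKey {
    // Read the n-th 4KB block of the file on demand.
    fn block(&self, n: u64) -> Vec<u8> {
        let mut f = File::open(&self.path).expect("open");
        f.seek(SeekFrom::Start(n * 4096)).expect("seek");
        let mut buf = Vec::new();
        f.take(4096).read_to_end(&mut buf).expect("read");
        buf
    }
}

impl Ord for LazyFileKey {
    fn cmp(&self, other: &Self) -> Ordering {
        // Cheapest comparison first: different sizes means different files.
        self.size.cmp(&other.size).then_with(|| {
            // Same size: compare block by block, stopping at the first
            // difference. Only truly identical files are read to the end.
            let blocks = (self.size + 4095) / 4096;
            for n in 0..blocks {
                match self.block(n).cmp(&other.block(n)) {
                    Ordering::Equal => continue,
                    difference => return difference,
                }
            }
            Ordering::Equal
        })
    }
}

impl PartialOrd for LazyFileKey {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> { Some(self.cmp(other)) }
}
impl PartialEq for LazyFileKey {
    fn eq(&self, other: &Self) -> bool { self.cmp(other) == Ordering::Equal }
}
impl Eq for LazyFileKey {}

// The map does the grouping: inserting a path reads only as much of
// each file as the comparisons along the tree path require.
fn group_by_content(files: Vec<LazyFileKey>) -> BTreeMap<LazyFileKey, Vec<PathBuf>> {
    let mut groups: BTreeMap<LazyFileKey, Vec<PathBuf>> = BTreeMap::new();
    for key in files {
        let path = key.path.clone();
        groups.entry(key).or_default().push(path);
    }
    groups
}

Because the BTreeMap only ever calls the comparison, a file that differs from everything else in its first block is never read past 4KB.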
