
Build Status

String Similarity .NET

A .NET port of java-string-similarity: https://github.com/tdebatty/java-string-similarity

A library implementing different string similarity and distance measures. A dozen algorithms (including Levenshtein edit distance and siblings, Jaro-Winkler, Longest Common Subsequence, cosine similarity, etc.) are currently implemented. Check the summary table below for the complete list...

Download

Using NuGet:

Install-Package F23.StringSimilarity

Overview

The main characteristics of each implemented algorithm are presented below. The "cost" column gives an estimation of the computational cost to compute the similarity between two strings of length m and n respectively.

Algorithm                                           Provides              Normalized?  Metric?  Type     Cost
Levenshtein                                         distance              No           Yes               O(m*n) [1]
Normalized Levenshtein                              distance, similarity  Yes          No                O(m*n) [1]
Weighted Levenshtein                                distance              No           No                O(m*n) [1]
Damerau-Levenshtein [3]                             distance              No           Yes               O(m*n) [1]
Optimal String Alignment [3] (not implemented yet)  distance              No           No                O(m*n) [1]
Jaro-Winkler                                        similarity, distance  Yes          No                O(m*n)
Longest Common Subsequence                          distance              No           No                O(m*n) [1,2]
Metric Longest Common Subsequence                   distance              Yes          Yes               O(m*n) [1,2]
N-Gram                                              distance              Yes          No                O(m*n)
Q-Gram                                              distance              No           No       Profile  O(m+n)
Cosine similarity                                   similarity, distance  Yes          No       Profile  O(m+n)
Jaccard index                                       similarity, distance  Yes          Yes      Set      O(m+n)
Sorensen-Dice coefficient                           similarity, distance  Yes          No       Set      O(m+n)

[1] In this library, Levenshtein edit distance, LCS distance and their siblings are computed using the dynamic programming method, which has a cost of O(m.n). For Levenshtein distance, the algorithm is sometimes called the Wagner-Fischer algorithm ("The string-to-string correction problem", 1974). The original algorithm uses a matrix of size m x n to store the Levenshtein distance between string prefixes.

If the alphabet is finite, it is possible to use the Four Russians method (Arlazarov et al., "On economic construction of the transitive closure of a directed graph", 1970) to speed up computation. This was published by Masek in 1980 ("A Faster Algorithm Computing String Edit Distances"). The method splits the matrix into blocks of size t x t. Each possible block is precomputed to produce a lookup table, which can then be used to compute the string similarity (or distance) in O(nm/t). Usually, t is chosen as log(m) if m > n, giving a computation cost of O(mn/log(m)). This method has not been implemented (yet).

[2] In "Length of Maximal Common Subsequences", K.S. Larsen proposed an algorithm that computes the length of LCS in time O(log(m).log(n)). But the algorithm has a memory requirement O(m.n²) and was thus not implemented here.

[3] There are two variants of Damerau-Levenshtein string distance: Damerau-Levenshtein with adjacent transpositions (also sometimes called unrestricted Damerau–Levenshtein distance) and Optimal String Alignment (also sometimes called restricted edit distance). For Optimal String Alignment, no substring can be edited more than once.
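To make the difference concrete, here is an illustrative Python sketch of the Optimal String Alignment variant (this is not the library's code). The classic example is "CA" vs "ABC": OSA needs 3 operations, while unrestricted Damerau-Levenshtein needs only 2 (transpose "CA" to "AC", then insert "B"), because OSA may not edit the transposed substring again.

```python
def osa_distance(s1: str, s2: str) -> int:
    """Optimal String Alignment: Levenshtein plus adjacent transposition,
    with the restriction that no substring is edited more than once."""
    m, n = len(s1), len(s2)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s1[i - 1] == s2[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
            if (i > 1 and j > 1
                    and s1[i - 1] == s2[j - 2]
                    and s1[i - 2] == s2[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # transposition
    return d[m][n]

# OSA cannot edit the transposed pair again, so it needs 3 operations here;
# unrestricted Damerau-Levenshtein would need only 2.
print(osa_distance("CA", "ABC"))  # 3
```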

Normalized, metric, similarity and distance

Although the topic might seem simple, many different algorithms exist to measure text similarity or distance. The library therefore defines some interfaces to categorize them.

(Normalized) similarity and distance

  • StringSimilarity : Implementing algorithms define a similarity between strings (0 means strings are completely different).
  • NormalizedStringSimilarity : Implementing algorithms define a similarity between 0.0 and 1.0, like Jaro-Winkler for example.
  • StringDistance : Implementing algorithms define a distance between strings (0 means strings are identical), like Levenshtein for example. The maximum distance value depends on the algorithm.
  • NormalizedStringDistance : This interface extends StringDistance. For implementing classes, the computed distance value is between 0.0 and 1.0. NormalizedLevenshtein is an example of NormalizedStringDistance.

Generally, algorithms that implement NormalizedStringSimilarity also implement NormalizedStringDistance, and similarity = 1 - distance. But there are a few exceptions, like N-Gram similarity and distance (Kondrak)...

Metric distances

The MetricStringDistance interface: a few of the distances are actually metric distances, which means they satisfy the triangle inequality d(x, y) <= d(x, z) + d(z, y). For example, Levenshtein is a metric distance, but NormalizedLevenshtein is not.

A lot of nearest-neighbor search algorithms and indexing structures rely on the triangle inequality; see "Similarity Search, The Metric Space Approach" by Zezula et al. for a survey. These cannot be used with non-metric similarity measures.

Shingles (n-gram) based similarity and distance

A few algorithms work by converting strings into sets of n-grams (sequences of n characters, also sometimes called k-shingles). The similarity or distance between the strings is then the similarity or distance between the sets.

Some of them, like Jaccard, consider strings as sets of shingles and ignore the number of occurrences of each shingle. Others, like cosine similarity, work using what is sometimes called the profile of the strings, which takes into account the number of occurrences of each shingle.

For these algorithms, another use case is possible when dealing with large datasets:

  1. compute the set or profile representation of all the strings
  2. compute the similarity between sets or profiles

Levenshtein

The Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other.

It is a metric string distance. This implementation uses dynamic programming (Wagner–Fischer algorithm), with only 2 rows of data. The space requirement is thus O(m) and the algorithm runs in O(m.n).

using System;
using F23.StringSimilarity;

public class Program
{    
    public static void Main(string[] args)
    {
        var l = new Levenshtein();

        // 1 substitution
        Console.WriteLine(l.Distance("My string", "My $tring")); // 1
        // 1 deletion
        Console.WriteLine(l.Distance("My string", "M string"));  // 1
        // 1 deletion + 1 substitution
        Console.WriteLine(l.Distance("My string", "M $tring"));  // 2
    }    
}
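The two-row dynamic programming scheme described above can be sketched in Python (illustrative, not the library's code):

```python
def levenshtein(s1: str, s2: str) -> int:
    """Wagner-Fischer with only two rows: O(m*n) time, O(n) space."""
    prev = list(range(len(s2) + 1))  # distances from the empty prefix of s1
    for i, c1 in enumerate(s1, start=1):
        curr = [i]  # distance from s1[:i] to the empty prefix of s2
        for j, c2 in enumerate(s2, start=1):
            cost = 0 if c1 == c2 else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("My string", "My $tring"))  # 1
```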

Normalized Levenshtein

This distance is computed as the Levenshtein distance divided by the length of the longest string. The resulting value is always in the interval [0.0, 1.0], but it is no longer a metric!

The similarity is computed as 1 - normalized distance.

using System;
using F23.StringSimilarity;

public class Program
{    
    public static void Main(string[] args)
    {
        var l = new NormalizedLevenshtein();

        // 1 substitution over 9 characters
        Console.WriteLine(l.Distance("My string", "My $tring"));   // 0.1111...
        // Similarity = 1 - distance
        Console.WriteLine(l.Similarity("My string", "My $tring")); // 0.8888...
    }
}

Weighted Levenshtein

An implementation of Levenshtein that allows you to define different weights for different character substitutions.

This algorithm is usually used for optical character recognition (OCR) applications. For OCR, the cost of substituting P and R is lower than the cost of substituting P and M, because from an OCR point of view P is similar to R.

It can also be used for keyboard typing auto-correction. Here the cost of substituting E and R is lower, for example, because these are located next to each other on an AZERTY or QWERTY keyboard, so the probability that the user mistyped one for the other is higher.

using System;
using F23.StringSimilarity;

public class Program
{    
    public static void Main(string[] args)
    {
        var l = new WeightedLevenshtein(new ExampleCharSub());

        Console.WriteLine(l.Distance("String1", "String1")); // 0.0 (identical)
        Console.WriteLine(l.Distance("String1", "Srring1")); // 0.5 ('t' -> 'r' costs 0.5)
        Console.WriteLine(l.Distance("String1", "Srring2")); // 1.5 (0.5 + 1.0 for '1' -> '2')
    }
}

public class ExampleCharSub : ICharacterSubstitution
{
    public double Cost(char c1, char c2)
    {
        // The cost for substituting 't' and 'r' is considered smaller as these 2 are located next to each other on a keyboard
        if (c1 == 't' && c2 == 'r') return 0.5; 

        // For most cases, the cost of substituting 2 characters is 1.0
        return 1.0;
    }
}

Damerau-Levenshtein

Similar to Levenshtein, Damerau-Levenshtein distance with transposition (also sometimes called unrestricted Damerau-Levenshtein distance) is the minimum number of operations needed to transform one string into the other, where an operation is defined as an insertion, deletion, or substitution of a single character, or a transposition of two adjacent characters.

It respects the triangle inequality, and is thus a metric distance.

This is not to be confused with the optimal string alignment distance, which is a restricted variant where no substring can be edited more than once.

using System;
using F23.StringSimilarity;

public class Program
{
    public static void Main(string[] args)
    {
        var d = new Damerau();
        
        // 1 transposition
        Console.WriteLine(d.Distance("ABCDEF", "ABDCEF"));
        
        // 2 transpositions
        Console.WriteLine(d.Distance("ABCDEF", "BACDFE"));
        
        // 1 deletion
        Console.WriteLine(d.Distance("ABCDEF", "ABCDE"));
        Console.WriteLine(d.Distance("ABCDEF", "BCDEF"));
        
        // 1 insertion
        Console.WriteLine(d.Distance("ABCDEF", "ABCGDEF"));
        
        // All different
        Console.WriteLine(d.Distance("ABCDEF", "POIU"));
    }    
}

Will produce:

1.0
2.0
1.0
1.0
1.0
6.0

Jaro-Winkler

Jaro-Winkler is a string edit distance that was developed in the area of record linkage (duplicate detection) (Winkler, 1990). The Jaro–Winkler distance metric is designed and best suited for short strings such as person names, and to detect typos.

Jaro-Winkler computes the similarity between 2 strings, and the returned value lies in the interval [0.0, 1.0]. It is (roughly) a variation of Damerau-Levenshtein, where the substitution of 2 close characters is considered less important than the substitution of 2 characters that are far from each other.

The distance is computed as 1 - Jaro-Winkler similarity.

using System;
using F23.StringSimilarity;

public class Program
{
    public static void Main(string[] args)
    {
        var jw = new JaroWinkler();
        
        // substitution of s and t
        Console.WriteLine(jw.Similarity("My string", "My tsring"));
        
        // substitution of s and n
        Console.WriteLine(jw.Similarity("My string", "My ntrisg"));
    }
}

will produce:

0.9740740656852722
0.8962963223457336
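The computation behind these numbers can be sketched in Python (illustrative, not the library's code; the 4-character prefix cap and the scaling factor p = 0.1 are the usual conventions):

```python
def jaro(s1: str, s2: str) -> float:
    """Jaro similarity: matching characters within a sliding window,
    penalized by the number of transpositions among the matches."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    window = max(len1, len2) // 2 - 1
    match1 = [False] * len1
    match2 = [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        for j in range(max(0, i - window), min(len2, i + window + 1)):
            if not match2[j] and s2[j] == c:
                match1[i] = match2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count transpositions: matched characters that appear in a different order
    t, k = 0, 0
    for i in range(len1):
        if match1[i]:
            while not match2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / len1 + matches / len2 + (matches - t) / matches) / 3

def jaro_winkler(s1: str, s2: str, p: float = 0.1) -> float:
    """Boost the Jaro similarity for strings sharing a common prefix (capped at 4)."""
    j = jaro(s1, s2)
    l = 0
    for a, b in zip(s1, s2):
        if a != b or l == 4:
            break
        l += 1
    return j + l * p * (1 - j)

# Classic record-linkage example
print(round(jaro_winkler("MARTHA", "MARHTA"), 4))  # 0.9611
```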

Longest Common Subsequence

The longest common subsequence (LCS) problem consists in finding the longest subsequence common to two (or more) sequences. It differs from problems of finding common substrings: unlike substrings, subsequences are not required to occupy consecutive positions within the original sequences.

It is used by the diff utility, by Git for reconciling multiple changes, etc.

The LCS distance between strings X (of length n) and Y (of length m) is n + m - 2 |LCS(X, Y)|, with minimum 0 and maximum n + m.

LCS distance is equivalent to Levenshtein distance when only insertion and deletion are allowed (no substitution), or when the cost of a substitution is double the cost of an insertion or deletion.

This class implements the dynamic programming approach, which has a space requirement O(m.n), and computation cost O(m.n).

In "Length of Maximal Common Subsequences", K.S. Larsen proposed an algorithm that computes the length of LCS in time O(log(m).log(n)). But the algorithm has a memory requirement O(m.n²) and was thus not implemented here.

using System;
using F23.StringSimilarity;

public class Program
{
    public static void Main(string[] args)
    {
        var lcs = new LongestCommonSubsequence();

        // Will produce 4.0
        Console.WriteLine(lcs.Distance("AGCAT", "GAC"));
        
        // Will produce 1.0
        Console.WriteLine(lcs.Distance("AGCAT", "AGCT"));
    }
}
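The same computation can be sketched in Python (illustrative, not the library's code):

```python
def lcs_length(x: str, y: str) -> int:
    """Length of the longest common subsequence, O(m*n) dynamic programming."""
    m, n = len(x), len(y)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                d[i][j] = d[i - 1][j - 1] + 1
            else:
                d[i][j] = max(d[i - 1][j], d[i][j - 1])
    return d[m][n]

def lcs_distance(x: str, y: str) -> int:
    # LCS distance = n + m - 2 * |LCS(X, Y)|
    return len(x) + len(y) - 2 * lcs_length(x, y)

print(lcs_distance("AGCAT", "GAC"))  # 4 (the LCS has length 2)
```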

Metric Longest Common Subsequence

Distance metric based on Longest Common Subsequence, from the notes "An LCS-based string metric" by Daniel Bakkelund. http://heim.ifi.uio.no/~danielry/StringMetric.pdf

The distance is computed as 1 - |LCS(s1, s2)| / max(|s1|, |s2|)

using System;
using F23.StringSimilarity;

public class Program
{
    public static void Main(string[] args)
    {
        var lcs = new MetricLCS();

        string s1 = "ABCDEFG";   
        string s2 = "ABCDEFHJKL";
        // LCS: ABCDEF => length = 6
        // longest = s2 => length = 10
        // => 1 - 6/10 = 0.4
        Console.WriteLine(lcs.Distance(s1, s2));

        // LCS: ABDF => length = 4
        // longest = ABDEF => length = 5
        // => 1 - 4 / 5 = 0.2
        Console.WriteLine(lcs.Distance("ABDEF", "ABDIF"));
    }
}

N-Gram

Normalized N-Gram distance as defined by Kondrak, "N-Gram Similarity and Distance", String Processing and Information Retrieval, Lecture Notes in Computer Science Volume 3772, 2005, pp 115-126.

http://webdocs.cs.ualberta.ca/~kondrak/papers/spire05.pdf

The algorithm uses affixing with the special character '\n' to increase the weight of the first characters. Normalization is achieved by dividing the total similarity score by the original length of the longest word.

In the paper, Kondrak also defines a similarity measure, which is not implemented (yet).

using System;
using F23.StringSimilarity;

public class Program
{
    public static void Main(string[] args)
    {   
        // produces 0.416666
        var twogram = new NGram(2);
        Console.WriteLine(twogram.Distance("ABCD", "ABTUIO"));
        
        // produces 0.97222
        string s1 = "Adobe CreativeSuite 5 Master Collection from cheap 4zp";
        string s2 = "Adobe CreativeSuite 5 Master Collection from cheap d1x";
        var ngram = new NGram(4);
        Console.WriteLine(ngram.Distance(s1, s2));
    }
}

Shingle (n-gram) based algorithms

A few algorithms work by converting strings into sets of n-grams (sequences of n characters, also sometimes called k-shingles). The similarity or distance between the strings is then the similarity or distance between the sets.

The cost of computing these similarities and distances is mainly dominated by k-shingling (converting the strings into sequences of k characters). Therefore, there are typically two use cases for these algorithms:

Directly compute the distance between strings:

using System;
using F23.StringSimilarity;

public class Program
{
    public static void Main(string[] args)
    {
        var dig = new QGram(2);
        
        // AB BC CD CE
        // 1  1  1  0
        // 1  1  0  1
        // Total: 2

        Console.WriteLine(dig.Distance("ABCD", "ABCE"));
    }
}

Or, for large datasets, pre-compute the profile or set representation of all strings. The similarity can then be computed between profiles or sets:

using System;
using F23.StringSimilarity;

public class Program
{
    public static void Main(string[] args)
    {
        string s1 = "My first string";
        string s2 = "My other string...";
        
        // Let's work with sequences of 2 characters...
        var cosine = new Cosine(2);
        
        // For cosine similarity I need the profile of strings
        var profile1 = cosine.GetProfile(s1);
        var profile2 = cosine.GetProfile(s2);
        
        // Prints 0.516185
        Console.WriteLine(cosine.Similarity(profile1, profile2));
    }
}

Note that this only works if the same Cosine object is used to parse all input strings!

Q-Gram

Q-gram distance, as defined by Ukkonen in "Approximate string-matching with q-grams and maximal matches" http://www.sciencedirect.com/science/article/pii/0304397592901434

The distance between two strings is defined as the L1 norm of the difference of their profiles (the number of occurrences of each n-gram): SUM( |V1_i - V2_i| ). Q-gram distance is a lower bound on Levenshtein distance, but can be computed in O(m + n), where Levenshtein requires O(m.n).
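The profile construction and L1 distance can be sketched in Python (illustrative, not the library's API):

```python
from collections import Counter

def qgram_profile(s: str, k: int) -> Counter:
    """Profile: number of occurrences of each k-gram in the string."""
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

def qgram_distance(s1: str, s2: str, k: int = 2) -> int:
    p1, p2 = qgram_profile(s1, k), qgram_profile(s2, k)
    # L1 norm of the difference of the two profiles: SUM( |V1_i - V2_i| )
    return sum(abs(p1[g] - p2[g]) for g in p1.keys() | p2.keys())

print(qgram_distance("ABCD", "ABCE"))  # 2 (CD and CE each occur in only one string)
```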

Cosine similarity

The input strings are first converted into vectors of n-gram occurrences (their profiles). The similarity between the two strings is the cosine of the angle between these vector representations, computed as V1 . V2 / (|V1| * |V2|).

Distance is computed as 1 - cosine similarity.
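The profile-based computation can be sketched in Python (illustrative, not the library's API):

```python
import math
from collections import Counter

def profile(s: str, k: int) -> Counter:
    """Number of occurrences of each k-gram."""
    return Counter(s[i:i + k] for i in range(len(s) - k + 1))

def cosine_similarity(s1: str, s2: str, k: int = 2) -> float:
    p1, p2 = profile(s1, k), profile(s2, k)
    dot = sum(p1[g] * p2[g] for g in p1)                # V1 . V2
    norm1 = math.sqrt(sum(v * v for v in p1.values()))  # |V1|
    norm2 = math.sqrt(sum(v * v for v in p2.values()))  # |V2|
    return dot / (norm1 * norm2)

print(cosine_similarity("ABAB", "BABA"))  # ~0.8
```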

Jaccard index

Like Q-Gram distance, the input strings are first converted into sets of n-grams (sequences of n characters, also called k-shingles), but this time the cardinality of each n-gram is not taken into account. Each input string is simply a set of n-grams. The Jaccard index is then computed as |V1 inter V2| / |V1 union V2|.

Distance is computed as 1 - similarity. Jaccard index is a metric distance.
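A set-based sketch in Python (illustrative, not the library's API):

```python
def shingles(s: str, k: int = 2) -> set:
    """The set of k-grams in the string (occurrence counts are ignored)."""
    return {s[i:i + k] for i in range(len(s) - k + 1)}

def jaccard(s1: str, s2: str, k: int = 2) -> float:
    v1, v2 = shingles(s1, k), shingles(s2, k)
    return len(v1 & v2) / len(v1 | v2)  # |V1 inter V2| / |V1 union V2|

print(jaccard("ABCD", "ABCE"))  # 0.5
```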

Sorensen-Dice coefficient

Similar to Jaccard index, but this time the similarity is computed as 2 * |V1 inter V2| / (|V1| + |V2|).

Distance is computed as 1 - similarity.
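A corresponding sketch in Python (illustrative, not the library's API):

```python
def dice(s1: str, s2: str, k: int = 2) -> float:
    """Sorensen-Dice: 2 * |V1 inter V2| / (|V1| + |V2|) over k-gram sets."""
    v1 = {s1[i:i + k] for i in range(len(s1) - k + 1)}
    v2 = {s2[i:i + k] for i in range(len(s2) - k + 1)}
    return 2 * len(v1 & v2) / (len(v1) + len(v2))

print(dice("ABCD", "ABCE"))  # 2 * 2 / (3 + 3) ~ 0.667
```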

License

This code is licensed under the MIT license.
