String metric
In mathematics and computer science, a string metric (also known as a string similarity metric or string distance function) is a metric that measures distance ("inverse similarity") between two text strings, for use in approximate string matching, string comparison, and fuzzy string searching. A requirement for a string metric (in contrast to, e.g., exact string matching) is fulfillment of the triangle inequality. For example, the strings "Sam" and "Samuel" can be considered close.[1] A string metric provides a number giving an algorithm-specific indication of distance.
The most widely known string metric is a rudimentary one called the Levenshtein distance (also known as edit distance).[2] It operates on two input strings, returning the minimum number of insertions, deletions, and substitutions needed to transform one string into the other. Simple string metrics such as Levenshtein distance have since been extended to include phonetic, token-based, grammatical, and character-based methods of statistical comparison.
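As a concrete illustration, the Levenshtein distance can be computed with the standard dynamic-programming recurrence (a minimal sketch, not a production implementation):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions
    needed to transform string a into string b."""
    # prev[j] holds the distance between a[:i-1] and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[len(b)]

print(levenshtein("kitten", "sitting"))  # 3
print(levenshtein("Sam", "Samuel"))      # 3
```

The row-by-row formulation keeps memory linear in the length of the second string rather than building the full distance matrix.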
String metrics are used heavily in information integration and are currently used in areas including fraud detection, fingerprint analysis, plagiarism detection, ontology merging, DNA analysis, RNA analysis, image analysis, evidence-based machine learning, database data deduplication, data mining, incremental search, data integration, malware detection,[3] and semantic knowledge integration.
List of string metrics
- Levenshtein distance, or its generalization edit distance
- Damerau–Levenshtein distance
- Sørensen–Dice coefficient
- Block distance or L1 distance or City block distance
- Hamming distance
- Simple matching coefficient (SMC)
- Jaccard similarity or Jaccard coefficient or Tanimoto coefficient
- Tversky index
- Overlap coefficient
- Variational distance[4]
- Hellinger distance or Bhattacharyya distance
- Information radius (Jensen–Shannon divergence)
- Skew divergence[4]
- Confusion probability[4]
- Tau metric, an approximation of the Kullback–Leibler divergence
- Fellegi and Sunters metric (SFS)[4]
- Maximal matches[4]
- Grammar-based distance[5]
- TFIDF distance metric[6]
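Several of the coefficients listed above (Jaccard, Sørensen–Dice, overlap) are set similarities; applied to strings they are typically computed over sets of tokens or character n-grams. A minimal Python sketch over character bigrams (the bigram choice is an illustrative assumption, not part of the definitions):

```python
def bigrams(s: str) -> set:
    """Set of overlapping two-character substrings of s."""
    return {s[i:i + 2] for i in range(len(s) - 1)}

def jaccard(a: str, b: str) -> float:
    """Jaccard coefficient: |A ∩ B| / |A ∪ B| over bigram sets."""
    A, B = bigrams(a), bigrams(b)
    if not A and not B:
        return 1.0
    return len(A & B) / len(A | B)

def dice(a: str, b: str) -> float:
    """Sørensen–Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    A, B = bigrams(a), bigrams(b)
    if not A and not B:
        return 1.0
    return 2 * len(A & B) / (len(A) + len(B))

# "night" and "nacht" share only the bigram "ht"
print(jaccard("night", "nacht"))  # 1/7 ≈ 0.1429
print(dice("night", "nacht"))     # 0.25
```

Note these are similarities in [0, 1]; a corresponding dissimilarity such as 1 − Jaccard is what would be compared against the metric axioms.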
There also exist functions that measure dissimilarity between strings but do not necessarily fulfill the triangle inequality, and as such are not metrics in the mathematical sense. An example of such a function is the Jaro–Winkler distance.
Selected string measures examples
| Name | Description | Example |
|---|---|---|
| Hamming distance | Defined only for strings of equal length: the number of positions at which the characters differ. | The distance between "karolin" and "kathrin" is 3. |
| Levenshtein distance and Damerau–Levenshtein distance | Generalization of Hamming distance that allows strings of different lengths, and (with Damerau) counts transpositions as single edits. | "kitten" and "sitting" have a distance of 3. |
| Jaro–Winkler distance | Similarity measure that weights agreement in a common prefix more heavily; not a metric. | JaroWinklerDist("MARTHA", "MARHTA") ≈ 0.961 |
| Most frequent k characters | Similarity based on the k most frequent characters of each string. | MostFreqKeySimilarity('research', 'seeking', 2) = 2 |
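The Jaro–Winkler value in the table can be reproduced from the standard definition. Below is a minimal Python sketch, assuming the usual parameters (scaling factor p = 0.1, common-prefix length capped at 4):

```python
def jaro(a: str, b: str) -> float:
    """Jaro similarity: mean of match ratios and transposition term."""
    if a == b:
        return 1.0
    la, lb = len(a), len(b)
    if la == 0 or lb == 0:
        return 0.0
    window = max(la, lb) // 2 - 1          # matching window half-width
    match_a, match_b = [False] * la, [False] * lb
    m = 0
    for i, ca in enumerate(a):             # greedy left-to-right matching
        for j in range(max(0, i - window), min(lb, i + window + 1)):
            if not match_b[j] and b[j] == ca:
                match_a[i] = match_b[j] = True
                m += 1
                break
    if m == 0:
        return 0.0
    # transpositions: matched characters out of order, counted in pairs
    bj = [j for j in range(lb) if match_b[j]]
    t = sum(1 for k, i in enumerate(i for i in range(la) if match_a[i])
            if a[i] != b[bj[k]]) // 2
    return (m / la + m / lb + (m - t) / m) / 3

def jaro_winkler(a: str, b: str, p: float = 0.1) -> float:
    """Jaro–Winkler: boost Jaro similarity by the shared prefix (max 4)."""
    j = jaro(a, b)
    l = 0
    for ca, cb in zip(a, b):
        if ca != cb or l == 4:
            break
        l += 1
    return j + l * p * (1 - j)

print(round(jaro_winkler("MARTHA", "MARHTA"), 3))  # 0.961
```

For "MARTHA"/"MARHTA", all six characters match with one transposed pair, giving a Jaro similarity of 17/18 ≈ 0.944, boosted by the three-character prefix "MAR" to ≈ 0.961.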
References
External links
- String Similarity Metrics for Information Integration, a fairly complete overview
- Carnegie Mellon University open source library
- StringMetric project, a Scala library of string metrics and phonetic algorithms
- Natural project, a JavaScript natural language processing library that includes implementations of popular string metrics
- ↑ Template:Cite book
- ↑ Template:Cite journal
- ↑ Template:Cite journal
- ↑ 4.0 4.1 4.2 4.3 4.4 Sam's String Metrics - Computational Linguistics and Phonetics
- ↑ Russell, David J., et al. "A grammar-based distance metric enables fast and accurate clustering of large sets of 16S sequences." BMC bioinformatics 11.1 (2010): 1-14.
- ↑ Template:Cite journal