Methods for identifying duplicates
Below are a few resources on duplicate detection in Big Data. There are not many resources presented here, as this reflects only a few hours' worth of research; it is intended to help others get started. I am sure there are many more resources and ideas that I am missing…
This resource is more theoretical: Chapter 3 considers duplicate detection in Big Data, and is available in both slide and document form.
The approach suggested is hashing followed by Jaccard similarity. Essentially, similar items are hashed into the same block, and only items within a shared block are compared using Jaccard similarity. This avoids having to compare every item against every other item.
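The block-then-compare idea can be sketched in a few lines of Python. This is an illustrative assumption, not the book's actual code: the blocking key here is a toy (first character of the record), whereas a real system would use MinHash/LSH or a phonetic key.

```python
# Sketch of blocking + Jaccard similarity (illustrative only; the
# blocking_key below is a toy stand-in for MinHash/LSH blocking).

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: |intersection| / |union|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def blocking_key(record: str) -> str:
    # Toy blocking key: first character, lowercased.
    return record[0].lower() if record else ""

def find_candidates(records, threshold=0.5):
    # Group records into blocks, then compare pairs only within a block.
    blocks = {}
    for r in records:
        blocks.setdefault(blocking_key(r), []).append(r)
    pairs = []
    for block in blocks.values():
        for i in range(len(block)):
            for j in range(i + 1, len(block)):
                a, b = set(block[i].split()), set(block[j].split())
                if jaccard(a, b) >= threshold:
                    pairs.append((block[i], block[j]))
    return pairs

records = ["john smith london", "john smith ldn", "jane doe paris"]
print(find_candidates(records))
```

The pay-off is that comparisons are only made within blocks, so the quadratic pairwise comparison is restricted to small groups of plausibly similar records.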
I have found a Python module (library) which covers duplicate detection. I have not tested this module. It takes a similar approach to the one outlined above: hashing followed by Jaccard similarity.
More advanced tools:
Below is a brief exploration into the use of distributed technologies (Spark) for record matching. I don't think we need Spark for the Sprint, but it is useful to research, as it would be needed for large-scale problems; we may therefore need such tools later in the project.
Using Spark to merge Brazilian Health Databases
This is an example from Brazil concerning the joining of several large health databases. The databases do not share unique identifiers, so similarity matching must be used, at scale.
Hashing, Blocking and Similarity matching using Spark
They use the Spark (Scala) environment. The features are standardised (data wrangling), hashed (bigrams to bit vectors), blocked (by region), and similarity-matched using the Sørensen coefficient.
Sørensen similarity, D(a,b) = 2h / ( |a| + |b| ), compares the positions of 1s in bit vectors a and b: h counts the positions where both vectors contain a 1, and this is divided by the total number of 1s in a and b combined. D = 1 indicates a complete match.
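The bigram-to-bit-vector step and the Sørensen comparison can be sketched as follows. This is an illustrative assumption, not the Brazilian project's actual pipeline; in particular, the 64-bit vector width and the use of md5 for bit positions are choices made here for the example.

```python
# Sketch: hash character bigrams into a fixed-width bit vector, then
# compare two vectors with Sørensen similarity D(a,b) = 2h / (|a| + |b|).
import hashlib

def bigrams(s: str):
    s = s.lower()
    return [s[i:i + 2] for i in range(len(s) - 1)]

def to_bit_vector(s: str, width: int = 64) -> int:
    # The "vector" is an int used as a bitmask. md5 gives each bigram a
    # deterministic bit position in [0, width).
    v = 0
    for g in bigrams(s):
        pos = int(hashlib.md5(g.encode()).hexdigest(), 16) % width
        v |= 1 << pos
    return v

def sorensen(a: int, b: int) -> float:
    # h = positions where both vectors hold a 1; |a|, |b| = counts of 1s.
    h = bin(a & b).count("1")
    total = bin(a).count("1") + bin(b).count("1")
    return 2 * h / total if total else 1.0

print(sorensen(to_bit_vector("Maria Silva"), to_bit_vector("Maria Silvia")))
```

Near-identical strings share most bigrams, so their bit vectors overlap heavily and D approaches 1; unrelated strings share few bigrams and D approaches 0.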
Machine Learning Methods
Machine learning can also be used for duplicate detection. The method suggested below uses supervised machine learning for record linkage.
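As a rough sketch of the supervised idea (an assumption on my part, not the method from the video below): candidate record pairs are turned into similarity feature vectors, and a classifier is trained on labelled pairs to predict "duplicate or not". Here scikit-learn's LogisticRegression stands in for whatever model the Spark ML pipeline would use, and the two features are toy choices.

```python
# Hedged sketch of supervised record linkage: pairs -> similarity
# features -> binary classifier (duplicate vs. distinct).
from sklearn.linear_model import LogisticRegression

def jaccard(a: str, b: str) -> float:
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 1.0

def features(pair):
    a, b = pair
    # Two toy features: token Jaccard and relative length difference.
    return [jaccard(a, b), abs(len(a) - len(b)) / max(len(a), len(b), 1)]

# Tiny labelled training set: 1 = duplicate pair, 0 = distinct pair.
train_pairs = [
    ("john smith", "john smith"),
    ("acme corp ltd", "acme corp"),
    ("jane doe", "peter jones"),
    ("old street london", "main road paris"),
]
labels = [1, 1, 0, 0]

clf = LogisticRegression().fit([features(p) for p in train_pairs], labels)
print(clf.predict([features(("john smyth ltd", "john smyth ltd"))]))
```

In practice the feature set would include several similarity measures per field (name, address, date of birth, etc.), and the labelled pairs would come from manual review or a gold-standard subset.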
Record matching with Spark ML (video)
Slides (for the video above)