WP1 Sprint 2016 07 28-29 Virtual Ideas (background)

Joint analysis of CEDEFOP data? The query system we have access to has limitations, so we really need access to the underlying data. We are following this up, but it may not be available in time.


Ideas from Dan Wu (Statistics Sweden)

In the final report of the project "Real-time labour market information on skill requirements: feasibility study and working prototype", a data model was developed and presented; see the figure below. The figure illustrates a general approach to web scraping for statistics. In Sweden, four data sets of job advertisements from the state employment agency have been exploited, concerning modules 2, 3 and 5.



[Figure: data model from the feasibility study final report]


The data sets are XML files; they are cleaned and transformed into a database, as illustrated by modules 2 to 3 in the figure. Since the data come from a single source, duplicate removal was not considered at first. Based on the data in module 3, we studied variables such as occupation, organisation number and enterprise sector.
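As a rough, hedged illustration of the module 2 to 3 step, the sketch below loads adverts from one XML file into a SQLite table. The tag and column names (advert, occupation, org_number, sector) and the file names are placeholders, not the agency's actual schema.

 # Hypothetical sketch: load job adverts from an XML file into SQLite.
 # Tag and column names are invented for illustration only.
 import sqlite3
 import xml.etree.ElementTree as ET

 conn = sqlite3.connect("adverts.db")
 conn.execute("""CREATE TABLE IF NOT EXISTS adverts
                 (occupation TEXT, org_number TEXT, sector TEXT)""")

 tree = ET.parse("adverts.xml")  # one of the four data sets
 for ad in tree.getroot().iter("advert"):
     conn.execute("INSERT INTO adverts VALUES (?, ?, ?)",
                  (ad.findtext("occupation"),
                   ad.findtext("org_number"),
                   ad.findtext("sector")))
 conn.commit()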

The data sets have good coverage of occupations and sectors. Only a small percentage of companies is covered compared with the business register. However, the data alone cannot tell us how good the coverage is; data from other sources are needed to complement it.

For the virtual sprint, interesting questions include, for example:

  1. How can we map the organisation numbers in the adverts to the legal units in the job vacancy statistics, so that the advertisement data can be compared with the job vacancy statistics?
  2. How can we find duplicate adverts within one source or across multiple sources, and can we combine the use of structured data with text analysis?


Duplication detection methods

Below is a series of resources on duplicate detection in Big Data. Not many resources are presented here, as this reflects just a few hours' worth of research; it is intended to help others get started. I am sure there are many more resources and ideas that I am missing…

http://www.mmds.org/

This resource is more theoretical. Chapter 3 considers duplicate detection in Big Data and is available in both slide and document form.

The suggested approach is hashing followed by Jaccard similarity: similar items are hashed into the same block, and only items within the same block are compared using Jaccard similarity. This avoids having to compare every item to every other item.
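To make this concrete, here is a minimal sketch in plain Python (my own illustration, not code from the book): adverts are shingled into word bigrams, bucketed by a cheap hash of the first word, and Jaccard similarity is computed only within each bucket. Chapter 3's minhash/LSH banding is the proper way to block at scale.

 # Minimal sketch of hash-based blocking plus Jaccard similarity.
 from collections import defaultdict
 from itertools import combinations

 def shingles(text, k=2):
     """Set of k-word shingles for an advert text."""
     words = text.lower().split()
     return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

 def jaccard(a, b):
     """|A intersect B| / |A union B|; 1.0 means identical shingle sets."""
     return len(a & b) / len(a | b)

 adverts = ["nurse wanted in stockholm", "nurse wanted in stockholm now",
            "software developer in malmo"]

 # Cheap blocking: bucket adverts by a hash of their first word.
 buckets = defaultdict(list)
 for i, text in enumerate(adverts):
     buckets[hash(text.split()[0]) % 100].append(i)

 # Compare only pairs that share a bucket.
 for bucket in buckets.values():
     for i, j in combinations(bucket, 2):
         sim = jaccard(shingles(adverts[i]), shingles(adverts[j]))
         if sim > 0.5:
             print(f"adverts {i} and {j} look like duplicates (J={sim:.2f})")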

 

Python library

https://pypi.python.org/pypi/NearDuplicatesDetection/0.2.0

I have found a Python module which covers duplicate detection. I have not tested it. The module takes a similar approach to the one outlined above, that is, hashing followed by Jaccard similarity detection.

 

More advanced tools:

Below is a brief exploration of the use of distributed technologies (Spark) for record matching. I don't think we need Spark for the sprint, but it is useful to research, as it would be needed for large-scale problems; we may therefore need such tools later in the project.

 

Using Spark to merge Brazilian Health Databases

http://ceur-ws.org/Vol-1330/paper-04.pdf

This is an example from Brazil concerning the joining of several large health databases. The databases do not contain shared unique identifiers, hence they need to use similarity matching at scale.

 

Hashing, Blocking and Similarity matching using Spark

They use the Spark (Scala) environment. They first standardise the features (data wrangling); the features are then hashed (bigrams mapped to bit vectors, e.g. [10110000110101]), blocked (by region) and similarity-matched using the Sørensen coefficient.
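A hedged sketch of that pipeline in plain Python (the paper itself works in Scala on Spark): names are reduced to character bigrams, the bigrams are hashed into a fixed-length bit vector, and records are blocked by region so that only records within the same region are later compared.

 # Sketch: character bigrams hashed into fixed-length bit vectors,
 # grouped (blocked) by region before any pairwise comparison.
 from collections import defaultdict

 VECTOR_BITS = 64  # illustrative; the paper's vectors may be longer

 def bit_vector(name, n=VECTOR_BITS):
     """Set one bit per character bigram of the name."""
     vec = [0] * n
     for i in range(len(name) - 1):
         vec[hash(name[i:i + 2]) % n] = 1
     return vec

 # Invented example records: (name, region).
 records = [("maria silva", "bahia"),
            ("maria da silva", "bahia"),
            ("joao santos", "sao paulo")]

 blocks = defaultdict(list)
 for name, region in records:
     blocks[region].append((name, bit_vector(name)))
 # Pairwise Sorensen comparison then runs within each block (see below).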

 

Similarity measure

Sørensen coefficient: D(a,b) = 2h / (|a| + |b|), which compares the positions of 1s in the bit vectors a and b. The number of positions where both vectors have a 1 is counted in h, and twice that count is divided by the total number of 1s in a and b. D ranges from 0 to 1, where 1 indicates a complete match.
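In code, the coefficient is a direct transcription of the formula (my own minimal sketch):

 def sorensen(a, b):
     """D(a,b) = 2h / (|a| + |b|) for equal-length bit vectors a and b."""
     h = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)  # shared 1-positions
     total = sum(a) + sum(b)                                # total number of 1s
     return 2 * h / total if total else 0.0

 print(sorensen([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))  # 2*2 / (3+2) = 0.8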

 

Machine Learning Methods

Machine learning can also be used for duplicate detection. The method suggested below uses supervised machine learning for record linkage.
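As a hedged illustration of the idea (the talk below uses Spark ML; this sketch uses scikit-learn and invented feature values for brevity): each candidate pair of records is described by similarity features, and a classifier is trained on labelled match/non-match pairs.

 # Sketch of supervised record linkage: a classifier learns from labelled
 # pairs described by similarity features. All values here are invented.
 from sklearn.linear_model import LogisticRegression

 # Each row: [name_similarity, address_similarity]; label 1 = same entity.
 X_train = [[0.95, 0.90], [0.90, 0.70], [0.30, 0.20], [0.10, 0.40]]
 y_train = [1, 1, 0, 0]

 model = LogisticRegression().fit(X_train, y_train)
 print(model.predict([[0.85, 0.80]]))  # high similarities -> predicted match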

 

Record matching with Spark ML (video)

https://speakerdeck.com/aseigneurin/record-linkage-a-real-use-case-with-spark-ml

 

Slides (for the video above)

https://speakerd.s3.amazonaws.com/presentations/b91b10b4150746a0848bccc8139ad940/Record_Linkage__a_real_use_case_with_Spark_ML.pdf