Document Similarity in Machine Learning Text Analysis with TF-IDF

Despite the appearance of new word embedding techniques for converting textual data into numbers, TF-IDF can still be found in many articles and blog posts on information retrieval, user modeling, text classification, text analytics (for example, extracting top terms), and other text mining techniques.

In this post we will look at what TF-IDF is, how to calculate it, how to retrieve the calculated values in different formats, and how to compute the similarity between two text documents using the TF-IDF technique.

tf-idf, short for term frequency-inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus. The tf-idf value increases proportionally with the number of times a word appears in the document and is offset by the number of documents in the corpus that contain the word, which helps to adjust for the fact that some words appear more frequently in general. [1]
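Concretely, with scikit-learn's defaults (smooth_idf=True), the score for a term is its frequency in the document multiplied by a smoothed inverse document frequency, and each document vector is then L2-normalized. Here is a minimal sketch of that formula in plain Python (tfidf_score is just an illustrative helper, not part of scikit-learn):

import math

def tfidf_score(term_count, n_docs, doc_freq):
    # tf: raw count of the term in the document
    tf = term_count
    # smoothed idf as used by scikit-learn: idf = ln((1 + n) / (1 + df)) + 1
    idf = math.log((1 + n_docs) / (1 + doc_freq)) + 1
    return tf * idf  # document vectors are then L2-normalized (norm='l2')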

Here we will look at how to convert a corpus of text documents into numbers and how to use this technique to compute document similarity.

We will use sklearn.feature_extraction.text.TfidfVectorizer from the Python scikit-learn library to calculate tf-idf. TfidfVectorizer converts a collection of raw documents into a matrix of TF-IDF features.

We only need to provide the text documents as input; all other parameters are optional and have default values (or default to None). [2]

Here is the parameter list from the documentation:

TfidfVectorizer(input='content', encoding='utf-8', decode_error='strict', strip_accents=None, lowercase=True,
    preprocessor=None, tokenizer=None, analyzer='word', stop_words=None, token_pattern='(?u)\b\w\w+\b',
    ngram_range=(1, 1), max_df=1.0, min_df=1, max_features=None, vocabulary=None, binary=False,
    dtype=numpy.float64, norm='l2', use_idf=True, smooth_idf=True, sublinear_tf=False)
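For example, if we wanted a vectorizer that strips English stop words and caps the vocabulary size, a sketch of overriding a few of these defaults could look like this (the parameter values are just illustrative):

from sklearn.feature_extraction.text import TfidfVectorizer

# illustrative overrides of a few defaults: drop English stop words
# and keep at most the 1000 most frequent terms
vect_custom = TfidfVectorizer(stop_words='english', max_features=1000)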

Each of our text documents will be just one sentence, and all documents will be passed in via the list corpus.
The code below demonstrates how to get the document similarity matrix.

# -*- coding: utf-8 -*-

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import pandas as pd

corpus = ["I'd like an apple juice",
          "An apple a day keeps the doctor away",
          "Eat apple every day",
          "We buy apples every week",
          "We use machine learning for text classification",
          "Text classification is subfield of machine learning"]

# fit the vectorizer on the corpus and build the tf-idf matrix
vect = TfidfVectorizer(min_df=1)
tfidf = vect.fit_transform(corpus)

# print the document similarity matrix (dot products of the tf-idf rows)
print((tfidf * tfidf.T).A)


"""
[[1.         0.2688172  0.16065234 0.         0.         0.        ]
 [0.2688172  1.         0.28397982 0.         0.         0.        ]
 [0.16065234 0.28397982 1.         0.19196066 0.         0.        ]
 [0.         0.         0.19196066 1.         0.13931166 0.        ]
 [0.         0.         0.         0.13931166 1.         0.48695659]
 [0.         0.         0.         0.         0.48695659 1.        ]]
""" 

We can print all our features, or the values of the features for a specific document. In our example a feature is a single word, but it can also be two or more words (see the n-gram sketch after the next snippet):

print(vect.get_feature_names())
# note: in scikit-learn >= 1.0 this method is vect.get_feature_names_out()
#['an', 'apple', 'apples', 'away', 'buy', 'classification', 'day', 'doctor', 'eat', 'every', 'for', 'is', 'juice', 'keeps', 'learning', 'like', 'machine', 'of', 'subfield', 'text', 'the', 'use', 'we', 'week']
print(tfidf.shape)
#(6, 24)
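As a sketch of multi-word features, passing ngram_range=(1, 2) makes the vocabulary include bigrams in addition to single words (vect_bigram is just an illustrative name):

vect_bigram = TfidfVectorizer(ngram_range=(1, 2))
tfidf_bigram = vect_bigram.fit_transform(corpus)
# the feature list now mixes unigrams and bigrams,
# e.g. 'apple', 'apple juice', 'machine', 'machine learning'
print(vect_bigram.get_feature_names())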


print (tfidf[0])
"""
  (0, 15)	0.563282410145744
  (0, 0)	0.46189963418608976
  (0, 1)	0.38996740989416023
  (0, 12)	0.563282410145744
"""  

We can load the features into a pandas DataFrame and print them from the DataFrame in several ways:

df=pd.DataFrame(tfidf.toarray(), columns=vect.get_feature_names())

print (df)

"""
         an     apple    apples    ...          use        we      week
0  0.461900  0.389967  0.000000    ...     0.000000  0.000000  0.000000
1  0.339786  0.286871  0.000000    ...     0.000000  0.000000  0.000000
2  0.000000  0.411964  0.000000    ...     0.000000  0.000000  0.000000
3  0.000000  0.000000  0.479748    ...     0.000000  0.393400  0.479748
4  0.000000  0.000000  0.000000    ...     0.431849  0.354122  0.000000
5  0.000000  0.000000  0.000000    ...     0.000000  0.000000  0.000000
"""

with pd.option_context('display.max_rows', None, 'display.max_columns', None):   
    print(df)

"""
     doctor       eat     every       for        is     juice     keeps  \
0  0.000000  0.000000  0.000000  0.000000  0.000000  0.563282  0.000000   
1  0.414366  0.000000  0.000000  0.000000  0.000000  0.000000  0.414366   
2  0.000000  0.595054  0.487953  0.000000  0.000000  0.000000  0.000000   
3  0.000000  0.000000  0.393400  0.000000  0.000000  0.000000  0.000000   
4  0.000000  0.000000  0.000000  0.431849  0.000000  0.000000  0.000000   
5  0.000000  0.000000  0.000000  0.000000  0.419233  0.000000  0.000000   

   learning      like   machine        of  subfield      text       the  \
0  0.000000  0.563282  0.000000  0.000000  0.000000  0.000000  0.000000   
1  0.000000  0.000000  0.000000  0.000000  0.000000  0.000000  0.414366   
2  0.000000  0.000000  0.000000  0.000000  0.000000  0.000000  0.000000   
3  0.000000  0.000000  0.000000  0.000000  0.000000  0.000000  0.000000   
4  0.354122  0.000000  0.354122  0.000000  0.000000  0.354122  0.000000   
5  0.343777  0.000000  0.343777  0.419233  0.419233  0.343777  0.000000   

        use        we      week  
0  0.000000  0.000000  0.000000  
1  0.000000  0.000000  0.000000  
2  0.000000  0.000000  0.000000  
3  0.000000  0.393400  0.479748  
4  0.431849  0.354122  0.000000  
5  0.000000  0.000000  0.000000  

"""    
# this also prints everything, but not as nicely formatted as above
print(df.to_string())    



print ("Second Column");
print (df.iloc[1])
"""
an                0.339786
apple             0.286871
apples            0.000000
away              0.414366
buy               0.000000
classification    0.000000
day               0.339786
doctor            0.414366
eat               0.000000
every             0.000000
for               0.000000
is                0.000000
juice             0.000000
keeps             0.414366
learning          0.000000
like              0.000000
machine           0.000000
of                0.000000
subfield          0.000000
text              0.000000
the               0.414366
use               0.000000
we                0.000000
week              0.000000
"""
print ("Second Column only values (without keys");
print (df.iloc[1].values)

"""
[0.33978594 0.28687063 0.         0.41436586 0.         0.
 0.33978594 0.41436586 0.         0.         0.         0.
 0.         0.41436586 0.         0.         0.         0.
 0.         0.         0.41436586 0.         0.         0.        ]
""" 

Finally, we can compute the document similarity matrix using cosine_similarity. We get the same matrix that we got at the beginning using just (tfidf * tfidf.T).A.

print(cosine_similarity(df.values, df.values))

"""
[[1.         0.2688172  0.16065234 0.         0.         0.        ]
 [0.2688172  1.         0.28397982 0.         0.         0.        ]
 [0.16065234 0.28397982 1.         0.19196066 0.         0.        ]
 [0.         0.         0.19196066 1.         0.13931166 0.        ]
 [0.         0.         0.         0.13931166 1.         0.48695659]
 [0.         0.         0.         0.         0.48695659 1.        ]]
""" 

print ("Number of docs in corpus")
print (len(corpus))
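The same fitted vectorizer can also score a brand-new query against the corpus, which is the basic information retrieval use case mentioned at the start. A minimal sketch (the query string is just an example):

# transform the query with the already-fitted vectorizer (do not refit),
# then rank the corpus documents by cosine similarity to it
query = vect.transform(["apple juice every day"])
scores = cosine_similarity(query, tfidf).flatten()
print(corpus[scores.argmax()], scores.max())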

So in this post we learned how to use TF-IDF with scikit-learn, get the values in different formats, load them into a DataFrame, and calculate the document similarity matrix using either raw tf-idf values or the cosine_similarity function from sklearn.metrics.pairwise. These techniques can be used in machine learning text analysis, information retrieval, text mining, and many other areas where we need to convert textual data into numeric data (or features).

References
1. tf-idf – Wikipedia
2. TfidfVectorizer – scikit-learn documentation
