Using Pretrained Word Embeddings in Machine Learning

In this post you will learn how to use pretrained word embeddings in machine learning. Google provides a word vector model trained on the Google News corpus (3 billion running words); it contains 3 million English word vectors, each with 300 dimensions.

Download the file from this link word2vec-GoogleNews-vectors and save it in a local folder. Open it with a zip program and extract the .bin file, so instead of GoogleNews-vectors-negative300.bin.gz you have GoogleNews-vectors-negative300.bin.

Now you can use the snippet below to load this file with gensim. Change the file path to the folder where you saved the file in the previous step.

Gensim
Gensim is a Python library for topic modelling, document indexing, and similarity retrieval with large corpora. It is a Python framework for fast vector space modelling.

The Python code snippet below demonstrates how to load the pretrained Google file into a model and then query the model, for example for the similarity between two words.
# -*- coding: utf-8 -*-

import gensim

# Load the pretrained vectors. In gensim 3.x the loader lives on
# KeyedVectors (Word2Vec.load_word2vec_format was removed in gensim 1.0).
model = gensim.models.KeyedVectors.load_word2vec_format(
    'C:\\Users\\GoogleNews-vectors-negative300.bin', binary=True)

# Size of the vocabulary (number of word vectors in the file).
vocab = model.vocab.keys()
wordsInVocab = len(vocab)
print(wordsInVocab)

# Cosine similarity between pairs of words.
print(model.similarity('this', 'is'))
print(model.similarity('post', 'book'))

Output from the above code:
3000000
0.407970363878
0.0572043891977

You can do everything else the same way as you would with your own trained word embeddings. The Google file, however, is big: 1.5 GB compressed and 3.3 GB unzipped. On my 6 GB RAM laptop the code above took a while to run, but it did run; some other commands I was not able to run at all.
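If memory is tight, a minimal sketch like the one below can help: load_word2vec_format accepts a limit argument that reads only the first N vectors (the file is sorted by word frequency), and any other query, such as most_similar, then works the same way as with your own embeddings.

from gensim.models import KeyedVectors

# Keep only the first 500,000 vectors, which fits in far less RAM.
small_model = KeyedVectors.load_word2vec_format(
    'C:\\Users\\GoogleNews-vectors-negative300.bin', binary=True, limit=500000)

# Example query: the three nearest neighbours of a word.
print(small_model.most_similar('computer', topn=3))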

See the post K Means Clustering Example with Word2Vec, which shows how to use embeddings in a machine learning algorithm. There, a Word2Vec model is fed into several k-means clustering algorithms from the NLTK and scikit-learn libraries; a quick taste of the idea is sketched below.
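Here is a minimal sketch (the word list is made up for illustration, not taken from the linked post) that clusters a few word vectors with scikit-learn's KMeans, assuming model is the KeyedVectors object loaded above:

import numpy as np
from sklearn.cluster import KMeans

# Six words we expect to split into a "vehicles" and a "fruit" cluster.
words = ['car', 'truck', 'bicycle', 'apple', 'banana', 'cherry']
X = np.array([model[w] for w in words])  # one 300-dimension vector per word

kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
for word, label in zip(words, kmeans.labels_):
    print(word, label)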

GloVe and fastText Word Embedding in Machine Learning

Word2vec is not the only word embedding available for use. Below are a few links to other word embeddings.
In How to Convert Word to Vector with GloVe and Python you will find how to convert a word to a vector with GloVe (Global Vectors for Word Representation). A detailed example shows how to use a pretrained GloVe data file that can be downloaded.
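As a rough sketch of the loading step: gensim ships a converter that turns a GloVe text file into word2vec format so the same loader can read it. The file name glove.6B.100d.txt is an assumption here; it is one of the standard downloads from the GloVe site.

from gensim.scripts.glove2word2vec import glove2word2vec
from gensim.models import KeyedVectors

# Convert the GloVe file to word2vec format (adds a header line),
# then load it like any other word2vec text file.
glove2word2vec('glove.6B.100d.txt', 'glove.6B.100d.w2v.txt')
glove_model = KeyedVectors.load_word2vec_format('glove.6B.100d.w2v.txt')
print(glove_model.most_similar('king', topn=3))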

One more link is FastText Word Embeddings for Text Classification with MLP and Python. In that post you will discover fastText word embeddings: how to load a pretrained fastText model, get text embeddings, and use them in a document classification example.
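A minimal sketch of loading pretrained fastText vectors: the .vec files distributed by fastText are plain word2vec text format, so gensim's standard loader works. The file name wiki.en.vec is an assumption based on the usual fastText download names.

from gensim.models import KeyedVectors

# The .vec file is word2vec text format, so binary=False (the default).
ft_model = KeyedVectors.load_word2vec_format('wiki.en.vec')
print(ft_model.similarity('machine', 'learning'))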

References
1. Google’s trained Word2Vec model in Python
2. word2vec-GoogleNews-vectors
3. gensim 3.1.0
