Sentiment Analysis of Twitter Data

Sentiment analysis of text (or opinion mining) allows us to extract opinions from user comments on the web. Applications of sentiment analysis include understanding what customers think about a product or its features and discovering user reactions to certain events.

A basic task in sentiment analysis is classifying the polarity of a given text. Polarity can be classified as positive, negative, or neutral.

Advanced, “beyond polarity” sentiment classification looks at emotional states such as “angry”, “sad”, and “happy”. [1]

In this post you will find an example of how to calculate polarity in sentiment analysis for Twitter data using Python. Polarity in this example will have two labels: positive or negative.
At the end of this post you will also find links to several of the most comprehensive posts from other websites on the topic of Twitter sentiment analysis.

Dataset for Sentiment Analysis of Twitter Data

We will use a Twitter dataset that can be downloaded from this link [3] at CrowdFlower [4]. The dataset contains labels for the emotional content (such as happiness, sadness, and anger) of texts: about 40,000 rows of examples across 13 labels. A subset of this data was used in an experiment for Microsoft's Cortana Intelligence Gallery.
The dataset has 4 columns:
tweet_id
sentiment (for example, happiness or sadness)
author
content
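
To get a quick look at the data you can load the CSV with pandas; here is a minimal sketch (assuming the file was saved as text_emotion.csv, the name used later in this post):

import pandas as pd

# load the CrowdFlower emotion dataset and take a first look
df = pd.read_csv("text_emotion.csv")
print(df.columns.tolist())              # ['tweet_id', 'sentiment', 'author', 'content']
print(df['sentiment'].value_counts())   # distribution of the 13 emotion labels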

Preprocessing of Twitter Data

We will remove special characters and links using the function below, adapted from an example found on the Internet.

import re

# the function below is based on an example from
# http://www.geeksforgeeks.org/twitter-sentiment-analysis-using-python/
def clean_tweet(tweet):
    '''
    Utility function to clean tweet text by removing links, special characters
    using simple regex statements.
    '''
    tweet = tweet.lower()
    return ' '.join(re.sub(r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+://\S+)", " ", tweet).split())
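
For example, applying clean_tweet to a made-up sample tweet strips the mention, the punctuation, and the link:

print(clean_tweet("@user I LOVE this product!!! http://example.com"))
# i love this product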

We also remove stop words using the function below:

from many_stop_words import get_stop_words
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from itertools import chain

from nltk.classify import NaiveBayesClassifier, accuracy

stop_words = list(get_stop_words('en'))         # about 900 stop words
nltk_words = list(stopwords.words('english'))   # about 150 stop words
stop_words.extend(nltk_words)

def remove_stopwords(word_list):
    # rebuild the tweet from the words that are not in the combined stop word list
    filtered_tweet = ""
    for word in word_list:
        word = word.lower()
        if word not in stop_words:
            filtered_tweet = filtered_tweet + " " + word
    return filtered_tweet.lstrip()
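
Chaining both helpers on a made-up sample tweet gives cleaned text without stop words:

tweet = clean_tweet("@user I am SO happy about the new update! http://example.com")
print(remove_stopwords(tweet.split()))
# stop words such as "i", "am", "so", "about", "the" are removed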

Approach for Tweet Sentiment Analysis

We will divide the tweet data into training and testing datasets. To train a classifier to detect polarity we will use the training dataset with the content field as input (X) and the sentiment field as the label (Y).

As we already have an emotion column for the tweets, we do not need to do feature selection for classification.

However, we will map the 13 emotion categories to positive, negative, or neutral, and then skip the neutral ones.

Here is how we do the mapping in the script:

polarity = {'empty': 'N',
            'sadness': 'N',
            'enthusiasm': 'P',
            'neutral': 'neutral',
            'worry': 'N',
            'surprise': 'P',
            'love': 'P',
            'fun': 'P',
            'hate': 'N',
            'happiness': 'P',
            'boredom': 'N',
            'relief': 'P',
            'anger': 'N'}
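
With this dictionary, converting an emotion label to a polarity label is a simple lookup, for example:

print(polarity['happiness'])   # P
print(polarity['worry'])       # N
# tweets whose label maps to 'neutral' are skipped before training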

Text Classification – Using NLTK for Sentiment Analysis

There are different classification techniques that can be utilized in sentiment analysis; a detailed survey of methods was published in paper [2]. The paper also includes an accuracy comparison and a description of the sentiment analysis process.

Our task is to train a classifier to detect the polarity (negative or positive) of unseen tweets.
We will use the NLTK NaiveBayesClassifier algorithm.

For NLTK we do not need to convert the text to numeric vectors as we do for scikit-learn. We just need to tokenize the text and then feed it into the machine learning classification algorithm.
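
For example, NLTK's word_tokenize turns a cleaned tweet into the word list the classifier works with (downloading the punkt tokenizer models may be required first):

from nltk.tokenize import word_tokenize
# nltk.download('punkt') may be needed on the first run

print(word_tokenize("happy new update"))
# ['happy', 'new', 'update']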

Our vocabulary consists of the words from the tweets, and each tweet carries a polarity label (P or N). Here is how the training data looks:

[Screenshot: vocabulary for sentiment analysis of Twitter data with NLTK]

From the vocabulary we need to create a feature set for the Naive Bayes classifier we are going to use. In our model each word in the vocabulary is treated as a feature. Each tweet is "projected" onto the vocabulary: each vocabulary word gets the value True if it appears in the given tweet and False if it does not. At the end of each entry we keep the polarity label of the tweet.
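
As a toy illustration (the vocabulary and tweet here are made up), a tweet's feature dictionary looks like this:

toy_vocabulary = {'happy', 'sad', 'update'}
tweet_words = {'happy', 'update'}
features = {word: (word in tweet_words) for word in toy_vocabulary}
print(features)   # {'happy': True, 'sad': False, 'update': True} (key order may vary)
# the pair (features, 'P') would be one entry of the feature set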

Below is a screenshot of the feature set; the polarity label (N or P) is highlighted, and the vocabulary was built from just 10 tweets for this picture.

[Screenshot: sentiment analysis of Twitter data – feature set]
# build the vocabulary: the set of all words across the training tweets
vocabulary = set(chain(*[word_tokenize(i[0].lower()) for i in training_data]))
# one boolean feature per vocabulary word for each tweet, paired with the polarity tag
feature_set = [({i: (i in word_tokenize(sentence.lower())) for i in vocabulary}, tag)
               for sentence, tag in training_data]
# hold out the first 20% of the tweets for testing
size = int(len(feature_set) * 0.2)
train_set, test_set = feature_set[size:], feature_set[:size]

classifier = NaiveBayesClassifier.train(train_set)
print(accuracy(classifier, test_set))
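
Once trained, the classifier can also label a new, unseen tweet; here is a minimal sketch using the vocabulary built above (the sample tweet is made up):

# encode a new tweet with the same boolean word features, then classify it
new_tweet = "happy about the new update"
new_features = {word: (word in word_tokenize(new_tweet.lower())) for word in vocabulary}
print(classifier.classify(new_features))          # prints 'P' or 'N'
classifier.show_most_informative_features(10)     # most telling word features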

Results of Tweet Sentiment Analysis

Here are the results of executing the Python source code described above:
Accuracy: 73%
The run time was as long as 50 minutes even though the data sample was limited to 1000 rows, possibly because the laptop has only 6 GB of memory.

So we learned how to detect negative or positive polarity for sentiment analysis of Twitter data. The results show that some improvements are still needed. For example, we could better preprocess the Twitter data by transforming Twitter slang words and short-form words into regular words.
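
As a sketch of that idea, a small lookup table could expand common slang before cleaning; the dictionary below is a made-up illustration, not an exhaustive resource:

# hypothetical mini-dictionary mapping Twitter slang to regular words
slang = {"u": "you", "r": "are", "gr8": "great", "2day": "today"}

def normalize_slang(tweet):
    return ' '.join(slang.get(word, word) for word in tweet.lower().split())

print(normalize_slang("u r gr8"))   # you are great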

Below you can find the full Python source code.

# sentiment analysis of Twitter text data
import re


# the function below is based on http://www.geeksforgeeks.org/twitter-sentiment-analysis-using-python/
def clean_tweet(tweet):
    '''
    Utility function to clean tweet text by removing links, special characters
    using simple regex statements.
    '''
    tweet = tweet.lower()
    return ' '.join(re.sub(r"(@[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+://\S+)", " ", tweet).split())

# the stop word removal below is based on https://stackoverflow.com/questions/5486337/how-to-remove-stop-words-using-nltk-or-python
from many_stop_words import get_stop_words
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from itertools import chain

from nltk.classify import NaiveBayesClassifier, accuracy

stop_words = list(get_stop_words('en'))         # about 900 stop words
nltk_words = list(stopwords.words('english'))   # about 150 stop words
stop_words.extend(nltk_words)

def remove_stopwords(word_list):
    # rebuild the tweet from the words that are not in the combined stop word list
    filtered_tweet = ""
    for word in word_list:
        word = word.lower()  # in case they aren't all lower cased
        if word not in stop_words:
            filtered_tweet = filtered_tweet + " " + word
    return filtered_tweet.lstrip()

filefolder = "C:\\Users\\Downloads"
filename = filefolder + "\\text_emotion.csv"

polarity = {'empty': 'N',
            'sadness': 'N',
            'enthusiasm': 'P',
            'neutral': 'neutral',
            'worry': 'N',
            'surprise': 'P',
            'love': 'P',
            'fun': 'P',
            'hate': 'N',
            'happiness': 'P',
            'boredom': 'N',
            'relief': 'P',
            'anger': 'N'}

tweets = []
training_data = []
import csv
with open(filename) as csvDataFile:
    csvReader = csv.reader(csvDataFile)
    count = 0
    for row in csvReader:
        # skip the header row and the neutral tweets
        if row[1] == 'neutral' or row[1] == 'sentiment':
            continue
        tweet = clean_tweet(row[3])
        tweet = remove_stopwords(tweet.split())
        tweets.append(tweet)
        training_data.append([tweet, polarity[row[1]]])
        count = count + 1
        # limit the sample to 1000 rows to keep the run time manageable
        if count > 1000:
            break

print(training_data)
# build the vocabulary: the set of all words across the training tweets
vocabulary = set(chain(*[word_tokenize(i[0].lower()) for i in training_data]))

# one boolean feature per vocabulary word for each tweet, paired with the polarity tag
feature_set = [({i: (i in word_tokenize(sentence.lower())) for i in vocabulary}, tag)
               for sentence, tag in training_data]

# hold out the first 20% of the tweets for testing
size = int(len(feature_set) * 0.2)
train_set, test_set = feature_set[size:], feature_set[:size]

classifier = NaiveBayesClassifier.train(train_set)
print(accuracy(classifier, test_set))

External Resources for Twitter Sentiment Analysis Tutorial

Comprehensive Hands on Guide to Twitter Sentiment Analysis with dataset and code
The author of this article shows how to solve the Twitter Sentiment Analysis Practice Problem.

Another Twitter sentiment analysis with Python — Part 1 This is part 1 of a series of 11 posts all about Twitter sentiment analysis with Python and related concepts. The posts cover topics such as word embeddings and neural networks. Below are just 2 posts from this series.

Another Twitter sentiment analysis with Python — Part 10 (Neural Network with Doc2Vec/Word2Vec/GloVe)

Another Twitter sentiment analysis with Python — Part 11 (CNN + Word2Vec)

Yet Another Twitter Sentiment Analysis Part 1 — tackling class imbalance

Basic data analysis on Twitter with Python – Here you will find a simple data analysis program that takes a given number of tweets, analyzes them, and displays the data in a scatter plot. The data represent how Twitter users perceived the bot created by the author, and their sentiment.

References
1. Sentiment Analysis
2. Analysis of Various Sentiment Classification Techniques
3. Emotion Dataset
4. Data for Everyone
