Sentiment Analysis of Social Media Posts using Random Forest in R


Introduction to Sentiment Analysis

Overview

The goal of this project is to classify social media comments into positive, negative, or neutral sentiments by training a Random Forest model. This will help the company gain valuable insights into customer sentiments regarding its products and services.

Setup Instructions

Prerequisites

Ensure that R and RStudio (or any other R IDE) are installed on your machine.

Required Libraries

Install the necessary libraries using the following commands:

install.packages("tidyverse")
install.packages("textdata")
install.packages("tidytext")
install.packages("caret")
install.packages("randomForest")

Data Preparation

Loading the Data

Prepare your dataset of social media comments. It should have at least two columns: comment (the text) and sentiment (the label).

library(tidyverse)
data <- read.csv("social_media_comments.csv")

# Display the first few rows of the dataset
head(data)

Data Cleaning

Clean the text by converting it to lowercase, stripping punctuation and other non-alphanumeric characters, and collapsing repeated whitespace.

clean_text <- function(text) {
  text %>%
    tolower() %>%
    str_replace_all("[^[:alnum:]\\s]", "") %>%
    str_replace_all("\\s+", " ") %>%
    trimws()
}

data$comment <- sapply(data$comment, clean_text)

# Display the cleaned data
head(data)
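
To sanity-check the cleaning function, try it on a single made-up comment (the input string is purely illustrative):

# The function lowercases, strips symbols, and collapses whitespace
clean_text("LOVE this product!!!  #awesome @brand :)")
# [1] "love this product awesome brand"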

Feature Extraction

Tokenization

Convert the text data into a format suitable for machine learning by tokenizing the words.

library(tidytext)

# Give each comment a document id so TF-IDF can be computed per comment
data$doc_id <- seq_len(nrow(data))

tokenized_data <- data %>% 
  unnest_tokens(word, comment)

# Display the first few tokenized rows
head(tokenized_data)

Term Frequency-Inverse Document Frequency (TF-IDF)

Calculate TF-IDF scores per comment. Each comment must be treated as its own document; grouping by sentiment instead would collapse the matrix to one row per class and leave nothing to train on.

tf_idf_data <- tokenized_data %>% 
  count(doc_id, word) %>%
  bind_tf_idf(word, doc_id, n)

tf_idf_matrix <- tf_idf_data %>%
  cast_dtm(doc_id, word, tf_idf)

# Inspect the dimensions of the TF-IDF matrix (documents x terms)
dim(tf_idf_matrix)
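
To get a feel for which words carry the most weight, list the highest-scoring TF-IDF terms; this is a quick inspection, not a required pipeline step:

# Top 10 term-document pairs by TF-IDF score
tf_idf_data %>%
  arrange(desc(tf_idf)) %>%
  head(10)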

Model Training

Splitting the Data

Build a modeling data frame from the TF-IDF matrix, attach the sentiment labels, and split it into training and testing sets. Splitting the raw data frame would leave the model with nothing but the unprocessed text column.

library(caret)
set.seed(123)

# Turn the document-term matrix into a feature data frame
dtm_mat <- as.matrix(tf_idf_matrix)
model_data <- as.data.frame(dtm_mat)
colnames(model_data) <- make.names(colnames(model_data))  # syntactic names

# Attach labels, matching rows by document id (comments that lost all
# tokens during cleaning are absent from the DTM)
model_data$sentiment <- factor(data$sentiment[as.integer(rownames(dtm_mat))])

# Split the data into training (70%) and testing (30%) sets
trainIndex <- createDataPartition(model_data$sentiment, p = 0.7, 
                                  list = FALSE)
trainData <- model_data[trainIndex, ]
testData  <- model_data[-trainIndex, ]
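
createDataPartition samples within each class, so the sentiment proportions should be similar in both sets; a quick check:

# Compare class proportions between the splits
prop.table(table(trainData$sentiment))
prop.table(table(testData$sentiment))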

Training the Random Forest Model

Train a Random Forest model using the training data.

library(randomForest)

# Train the model; mtry defaults to sqrt(number of features),
# a reasonable starting point for high-dimensional text data
rf_model <- randomForest(sentiment ~ ., data = trainData, 
                         ntree = 100)

# Display the model summary
print(rf_model)

Model Evaluation

Predicting on Test Data

Evaluate the model’s performance using the testing data.

# Predict sentiments for the test data
predictions <- predict(rf_model, newdata = testData)

# Display the confusion matrix
confusionMatrix(predictions, testData$sentiment)

By following these steps, you will have a trained Random Forest model that classifies social media comments into positive, negative, or neutral sentiments, and you can evaluate its performance using standard metrics.

Data Collection from Social Media Platforms

Required Libraries

To collect data from social media platforms in R, we use packages that handle HTTP requests and JSON parsing for the platforms' APIs.

library(httr)
library(jsonlite)
library(tidyverse)

Twitter API

Below is an implementation for collecting recent tweets using the Twitter API v2 recent search endpoint, which authenticates with an app Bearer Token. Ensure you have your credentials ready.

# Set API credentials: the v2 recent search endpoint uses a Bearer Token
bearer_token <- "your_bearer_token"

# Create the URL for the GET request
url <- "https://api.twitter.com/2/tweets/search/recent"
query_params <- list(
  query = "your search query",
  tweet.fields = "created_at,lang,author_id",
  max_results = 100
)

# Create the GET request
response <- GET(url, 
                add_headers(Authorization = paste("Bearer", bearer_token)), 
                query = query_params)

# Check if the request was successful
if (status_code(response) == 200) {
  # Parse the response
  data <- content(response, "parsed", simplifyVector = TRUE)
  tweets <- data$data
  
  # Convert to a data frame
  tweets_df <- as_tibble(tweets)
  
  # Print the data frame
  print(tweets_df)
} else {
  print("Failed to fetch tweets")
}
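
The recent search endpoint returns at most 100 tweets per request. If the parsed response carries a meta$next_token, you can pass it back as the next_token query parameter to page through further results; the sketch below assumes the request above succeeded:

# Fetch the next page of results, if a pagination token is present
next_token <- data$meta$next_token
if (!is.null(next_token)) {
  query_params$next_token <- next_token
  response2 <- GET(url, 
                   add_headers(Authorization = paste("Bearer", bearer_token)), 
                   query = query_params)
  more_tweets <- content(response2, "parsed", simplifyVector = TRUE)$data
}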

Facebook API

For Facebook, we can use the httr package to call the Graph API. Ensure you have a valid access token, and adjust the API version in the URL (v11.0 below) to one that is still supported.

# Set Access Token
access_token <- "your_access_token"

# Define the URL for the GET request
url <- "https://graph.facebook.com/v11.0/me/feed"

# Create the GET request
response <- GET(url, query = list(access_token = access_token))

# Check if the request was successful
if (status_code(response) == 200) {
  # Parse the response
  data <- content(response, "parsed", simplifyVector = TRUE)
  posts <- data$data
  
  # Convert to a data frame
  posts_df <- as_tibble(posts)
  
  # Print the data frame
  print(posts_df)
} else {
  print("Failed to fetch Facebook posts")
}
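
The Graph API paginates feed results: the parsed response typically includes a paging object whose next field is a ready-made URL for the following page. A hedged sketch, assuming the request above succeeded:

# Fetch the next page of posts, if the response includes a paging URL
next_url <- data$paging$`next`
if (!is.null(next_url)) {
  response2 <- GET(next_url)
  more_posts <- content(response2, "parsed", simplifyVector = TRUE)$data
}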

General Flow

  1. Set up API credentials – Ensure you have the necessary API credentials.
  2. Create URLs for GET requests – Define the endpoints and parameters.
  3. Make API requests – Use the GET function with appropriate headers or query parameters.
  4. Check the response – Ensure the response status is successful.
  5. Parse and Convert – Parse the response and convert it to a data frame.
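
Both snippets follow this same pattern, so the flow can be wrapped in a small helper shared across platforms. A minimal sketch; the function name and arguments are illustrative, not part of any package:

# Hypothetical helper implementing the five steps above
fetch_json <- function(url, query = list(), headers = character()) {
  response <- GET(url, add_headers(.headers = headers), query = query)
  if (status_code(response) != 200) {
    stop("Request failed with status ", status_code(response))
  }
  content(response, "parsed", simplifyVector = TRUE)
}

# Example usage (Twitter; assumes bearer_token and query_params from above)
# tweets <- fetch_json(url, query_params,
#                      c(Authorization = paste("Bearer", bearer_token)))$data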

Conclusion

This code flow enables effective data collection from Twitter and Facebook for sentiment analysis using a Random Forest model. Ensure you handle rate limiting and API restrictions as per each platform’s guidelines.

Data Cleaning and Preprocessing

In this step, we will prepare our dataset for sentiment classification using a Random Forest model. Assuming we have already collected the social media comments, let’s proceed with the data cleaning and preprocessing.

1. Load the Data

Using R, we’ll load the dataset into a data frame. We’ll assume the dataset is in a CSV file named social_media_comments.csv.

data <- read.csv("social_media_comments.csv", stringsAsFactors = FALSE)

2. Remove Unnecessary Columns

Assuming our dataset has columns like comment_id, user_id, and timestamp that are not needed for sentiment analysis:

data <- data[, !(names(data) %in% c("comment_id", "user_id", "timestamp"))]

3. Handle Missing Values

We will remove rows with missing comments or labels.

data <- na.omit(data)

4. Text Cleaning

Convert to Lowercase

Convert all text data to lowercase to ensure uniformity.

data$comment <- tolower(data$comment)

Remove Punctuation, Numbers, and Special Characters

Base R's default regular expression engine does not reliably interpret \s inside a character class, so the POSIX class [:space:] is used to avoid stripping the spaces between words.

data$comment <- gsub("[^a-z[:space:]]", "", data$comment)

Remove Stop Words

Removing common stop words that are unlikely to contribute to sentiment classification.

library(tm)
stop_words <- stopwords("en")  # renamed to avoid shadowing tm::stopwords()
data$comment <- removeWords(data$comment, stop_words)
data$comment <- stripWhitespace(data$comment)
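
After stop word removal, some comments may be empty; dropping them prevents empty documents from entering the document-term matrix below:

# Drop comments that became empty after cleaning
data <- data[nchar(trimws(data$comment)) > 0, ]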

5. Tokenization

Split comments into individual words for exploration. Keep the tokens in a separate object so the one-comment-per-row structure of data is preserved for the following steps.

library(tidytext)
tokens <- data %>%
  unnest_tokens(word, comment)

6. Remove Sparse Terms

Create a Document-Term Matrix (DTM) with one row per comment, and remove terms that appear in very few documents.

library(tm)
corpus <- Corpus(VectorSource(data$comment))
dtm <- DocumentTermMatrix(corpus)
dtm <- removeSparseTerms(dtm, 0.99)
data_clean <- as.data.frame(as.matrix(dtm))

7. Label Encoding

Convert the sentiment labels (positive, negative, neutral) to a factor, which is what randomForest expects for classification. Keeping the text labels (rather than recoding to 0/1/2) makes the confusion matrix easier to read.

data$sentiment <- factor(data$sentiment, levels = c("negative", "neutral", "positive"))

8. Combine Preprocessed Data

Combine the cleaned data with sentiment labels.

data_final <- cbind(data_clean, sentiment = data$sentiment)

Conclusion

With this, the data cleaning and preprocessing step is complete. The data_final dataset is now ready for training a Random Forest model for sentiment classification.

Text Tokenization and Vectorization

To move forward with the Random Forest model for classifying social media comments as positive, negative, or neutral, you will need to tokenize and vectorize the text data. Here’s how you can achieve that in R:

1. Load Required Libraries

You will need the tm, SnowballC, and caret libraries for text processing and tokenization.

library(tm)
library(SnowballC)
library(caret)

2. Create a Corpus

Assuming you have a data frame df with a column comment containing the text data:

text_corpus <- Corpus(VectorSource(df$comment))

3. Clean and Preprocess the Text

Use transformations like converting to lowercase, removing punctuation, numbers, and stopwords, and stemming the words.

text_corpus <- tm_map(text_corpus, content_transformer(tolower))
text_corpus <- tm_map(text_corpus, removePunctuation)
text_corpus <- tm_map(text_corpus, removeNumbers)
text_corpus <- tm_map(text_corpus, removeWords, stopwords("english"))
text_corpus <- tm_map(text_corpus, stemDocument)
text_corpus <- tm_map(text_corpus, stripWhitespace)
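
You can spot-check a couple of cleaned documents to confirm the transformations behaved as expected:

# Inspect the first two cleaned documents
lapply(text_corpus[1:2], as.character)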

4. Tokenization and Vectorization Using TF-IDF

Create a Term-Document Matrix (TDM) and use TF-IDF to vectorize the text data.

tdm <- TermDocumentMatrix(text_corpus, control = list(weighting = weightTfIdf))
tdm_matrix <- as.matrix(tdm)

5. Prepare Your Data for Modelling

Ensure that your TDM matrix is suitable for your Random Forest model in terms of dimensions and format:

# Transpose the matrix to have comments as rows and terms as columns
tdm_matrix <- t(tdm_matrix)

# Ensure column names are unique to avoid errors during model training
colnames(tdm_matrix) <- make.names(colnames(tdm_matrix))

# Combine the TDM matrix with the labels, dropping the raw comment text
# so it is not treated as a predictor during model training
final_data <- cbind(sentiment = df$sentiment, as.data.frame(tdm_matrix))

6. Splitting the Data

Assuming you have a column sentiment in your data frame that contains labels (positive, negative, neutral), split the data into training and test sets using the caret package:

set.seed(123)  # For reproducibility
trainIndex <- createDataPartition(final_data$sentiment, p = .8, 
                                  list = FALSE, 
                                  times = 1)
TrainData <- final_data[ trainIndex,]
TestData  <- final_data[-trainIndex,]

Now, you are ready to move to the next step, which is training the Random Forest model on the tokenized and vectorized text data.

Make sure to adjust any path or variable names to fit your specific dataset. These steps should directly help you implement text tokenization and vectorization within your project.

Feature Extraction and Initial Model Training

The script below assumes the preprocessed corpus and labels from the previous steps and walks through TF-IDF feature extraction, a train/test split, Random Forest training, and evaluation.

# Load necessary libraries
library(tidyverse)
library(tm)            # TermDocumentMatrix and TF-IDF weighting
library(caret)
library(randomForest)

# Assume 'tokenized_data' is the preprocessed tm corpus from the previous steps
# and 'labels' contains the sentiment labels (Positive, Negative, Neutral).

# Feature Extraction
# We use Term Frequency - Inverse Document Frequency (TF-IDF) weights as features.

# Compute TF-IDF
tdm <- TermDocumentMatrix(tokenized_data)
tfidf <- weightTfIdf(tdm)

# Convert the TDM to a matrix and transpose it so rows are documents
# (comments) and columns are terms; without the transpose, the partition
# below would split terms rather than comments
tfidf_matrix <- t(as.matrix(tfidf))

# Splitting the dataset into training and test sets
set.seed(123)  # For reproducibility
index <- createDataPartition(labels, p=0.8, list=FALSE)
train_data <- tfidf_matrix[index,]
test_data <- tfidf_matrix[-index,]
train_labels <- labels[index]
test_labels <- labels[-index]

# Apply Random Forest Model
rf_model <- randomForest(x=train_data, y=as.factor(train_labels), ntree=100)
print(rf_model)

# Predictions on the test set
predictions <- predict(rf_model, test_data)

# Evaluate the Model
confusion_matrix <- confusionMatrix(predictions, factor(test_labels))
print(confusion_matrix)

Explanation

Libraries Loaded:

  • tidyverse: For data manipulation.
  • caret: For splitting data and evaluating models.
  • randomForest: For building the Random Forest model.

TF-IDF Computation:

  • TermDocumentMatrix: Constructs a term-document matrix.
  • weightTfIdf: Computes TF-IDF weights.

Data Partitioning:

  • createDataPartition: Splits the data into training (80%) and testing (20%).

Model Training:

  • randomForest: Creates a Random Forest model with 100 trees.
  • predict: Generates predictions using the trained model.

Model Evaluation:

  • confusionMatrix: Evaluates accuracy, sensitivity, specificity, etc.

Output:

  • Performance metrics of the model are printed.

By following this approach, you can directly implement and evaluate the feature extraction process, ensuring that sentiment analysis can be applied to social media comments effectively.

Introduction to the Random Forest Algorithm

Overview

Random Forest is an ensemble learning method for classification, regression, and related tasks that constructs many decision trees during training. For classification, it outputs the class that is the mode of the classes predicted by the individual trees. Aggregating many de-correlated trees improves accuracy and reduces overfitting relative to a single tree.
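
The voting behavior is easy to see on a small built-in dataset. The snippet below is a self-contained illustration on iris, separate from the sentiment pipeline:

library(randomForest)

set.seed(1)
toy_model <- randomForest(Species ~ ., data = iris, ntree = 100)

# Each tree votes for a class; the forest reports the fraction of votes
head(predict(toy_model, iris, type = "vote"))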

Implementing Random Forest for Sentiment Classification in R

Load Necessary Libraries

Ensure the required libraries are loaded.

library(randomForest)
library(caret)

Load and Prepare the Data

Assume you have already performed data collection, cleaning, tokenization, vectorization, and feature extraction, and saved the results into a training set train_data and a test set test_data.

# Assuming train_data and test_data are loaded along with their labels (sentiments)
# train_data$features and train_data$labels should be available
# Similarly, test_data$features and test_data$labels should be available

Train the Random Forest Model

# Convert labels to factors for classification
train_data$labels <- as.factor(train_data$labels)
test_data$labels <- as.factor(test_data$labels)

# Fit the Random Forest model
set.seed(42)  # For reproducibility
rf_model <- randomForest(x = train_data$features, y = train_data$labels, ntree = 100)
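
Random Forest also provides an out-of-bag (OOB) error estimate for free, which you can inspect without touching the test set:

# OOB error after the final tree; plot(rf_model) shows error vs. tree count
tail(rf_model$err.rate, 1)
plot(rf_model)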

Evaluate the Model

Evaluate the performance of the model on the test set.

# Predicting the sentiments on the test data
predictions <- predict(rf_model, test_data$features)

# Confusion matrix to evaluate the quality of the classification
conf_matrix <- confusionMatrix(predictions, test_data$labels)

# Print the confusion matrix
print(conf_matrix)

Interpretation

  • Accuracy: An important metric that tells you the percentage of correctly classified instances out of all instances.
  • Confusion Matrix: Provides a detailed breakdown of the correct and incorrect classifications, allowing you to understand where your model might be going wrong.

Random Forest in this context should give you a robust model that can handle the complexities of text data and provide useful insights into customer sentiments. Make sure your train_data and test_data are properly preprocessed and vectorized, as the quality of the input data significantly impacts the model's performance.

This completes the practical implementation of the Random Forest algorithm for classifying social media comments into positive, negative, or neutral sentiments.

Training Random Forest Model in R

Below is an R script that follows the preprocessing steps described above, focusing exclusively on training a Random Forest model for sentiment classification.

# Load necessary libraries
library(randomForest)
library(caret)

# Load the dataset
# Assuming 'data' is a dataframe with one column per preprocessed text
# feature (e.g. TF-IDF weights) plus a sentiment column named 'label'

# If you haven't converted text features to numerical vectors yet, build
# them with the vectorization from the previous step before continuing

# Set seed for reproducibility
set.seed(123)

# Create Training (80%) and Test (20%) sets
trainIndex <- createDataPartition(data$label, p = .8, 
                                  list = FALSE, 
                                  times = 1)
dataTrain <- data[trainIndex,]
dataTest  <- data[-trainIndex,]

# Training features and labels
train_features <- dataTrain[, colnames(dataTrain) != "label"]
train_labels <- dataTrain$label

# Test features and labels
test_features <- dataTest[, colnames(dataTest) != "label"]
test_labels <- dataTest$label

# Train Random Forest model
rf_model <- randomForest(x = train_features, 
                         y = as.factor(train_labels),
                         ntree = 500,    # Number of trees
                         mtry = 10,      # Number of variables sampled per split
                         importance = TRUE)  # Variable importance

# Print model summary
print(rf_model)

# Model evaluation on the test set
test_predictions <- predict(rf_model, test_features)

# Confusion matrix to evaluate performance
conf_matrix <- confusionMatrix(test_predictions, as.factor(test_labels))
print(conf_matrix)

# Variable importance plot
varImpPlot(rf_model)

# Optionally, save the model
save(rf_model, file = "rf_model.RData")

# Optionally, load the model for future use
# load("rf_model.RData")

Explanation

  1. Library Imports: Loads the necessary libraries, randomForest for building the model and caret for data partitioning and evaluation.
  2. Data Loading: Assumes data is a dataframe that includes preprocessed features (features) and sentiment labels (label).
  3. Train-Test Split: Splits the dataset into training and testing sets using an 80-20 ratio.
  4. Model Training: Trains the Random Forest model with specified parameters (ntree and mtry).
  5. Model Summary: Prints the model summary to provide details on the trained model.
  6. Evaluation: Predicts the sentiment for the test set and prints the confusion matrix to evaluate the model performance.
  7. Variable Importance: Plots the importance of each feature in the Random Forest model.
  8. Save and Load Model: Optionally saves the trained model to a file and shows how to load it later.

This script, provided inline with your existing project, should allow a complete implementation of the Random Forest classification for sentiment analysis in R.

Model Evaluation and Fine-Tuning for Sentiment Analysis using Random Forest

Objective

To evaluate the performance of the trained Random Forest model and fine-tune it to improve classification accuracy for social media comments.

1. Evaluate the Model

1.1 Load Necessary Libraries

library(caret)       # For confusion matrix and other model evaluation metrics
library(randomForest)  # Random Forest model

1.2 Use the Test Data

# Assume `test_data` and `test_labels` are already defined
test_predictions <- predict(rf_model, newdata = test_data)

1.3 Confusion Matrix

confusion_matrix <- confusionMatrix(test_predictions, test_labels)
print(confusion_matrix)

1.4 Calculate Other Metrics

# Extract overall metrics
accuracy <- confusion_matrix$overall['Accuracy']
kappa <- confusion_matrix$overall['Kappa']

# Extract class-specific metrics
class_metrics <- confusion_matrix$byClass

2. Fine-Tuning the Model

2.1 Define Grid Search Parameters

tune_grid <- expand.grid(mtry = c(2, 4, 6, 8, 10),
                         splitrule = c("gini", "extratrees"),
                         min.node.size = c(1, 3, 5))

2.2 Perform Cross-Validation with Grid Search

Note that method = "ranger" uses the ranger package (install it with install.packages("ranger")), a fast Random Forest implementation whose tuning parameters match the grid defined above.

control <- trainControl(method = "cv", number = 5)  # 5-fold cross-validation

rf_tuned_model <- train(x = train_data, 
                        y = train_labels, 
                        method = "ranger", 
                        tuneGrid = tune_grid, 
                        trControl = control)
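
caret's plot method for train objects visualizes cross-validated accuracy across the tuning grid, making it easy to see which mtry, splitrule, and min.node.size combinations help:

# Accuracy across the tuning grid
plot(rf_tuned_model)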

2.3 Select the Best Model and Print Results

# The underlying ranger fit (not needed for prediction below)
best_rf_model <- rf_tuned_model$finalModel
print(rf_tuned_model$bestTune)
print(rf_tuned_model$results)

2.4 Evaluate the Tuned Model

# Predict through the caret train object so the tuned hyperparameters are
# applied; the raw $finalModel (a ranger object) uses a different predict
# interface and would not return a plain factor here
tuned_test_predictions <- predict(rf_tuned_model, newdata = test_data)
tuned_confusion_matrix <- confusionMatrix(tuned_test_predictions, test_labels)
print(tuned_confusion_matrix)
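
To confirm that tuning actually helped, compare the overall accuracy of the baseline and tuned models side by side:

# Accuracy before and after tuning
c(baseline = confusion_matrix$overall["Accuracy"],
  tuned = tuned_confusion_matrix$overall["Accuracy"])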

Conclusion

By using the grid search and cross-validation techniques, we can fine-tune the hyperparameters of the Random Forest model to achieve better performance. It’s important to evaluate the tuned model using the same metrics to verify improvements. You can implement this in your environment by adapting the provided R code to your specific dataset and trained model objects.

Visualization of Sentiment Analysis Results

In this section, we’ll visualize the results of our sentiment analysis using various plotting techniques in R. The visualizations will help us understand the distribution and trends of customer sentiments towards the company’s products and services.

Libraries Required

library(ggplot2)
library(dplyr)

Data Preparation

Assume sentiment_results is the output dataframe from the Random Forest model prediction, which includes columns: comment_id, text, predicted_sentiment. The predicted_sentiment column will have values “positive”, “negative”, or “neutral”.

# Summarize Sentiment Counts
sentiment_summary <- sentiment_results %>%
  group_by(predicted_sentiment) %>%
  summarise(count = n())

Visualization

Bar Plot – Sentiment Distribution

# Plot the sentiment distribution using ggplot2
ggplot(sentiment_summary, aes(x = predicted_sentiment, y = count, fill = predicted_sentiment)) +
  geom_bar(stat = "identity") +
  theme_minimal() +
  labs(title = "Distribution of Sentiments", x = "Sentiment", y = "Count") +
  scale_fill_manual(values = c("positive" = "green", "negative" = "red", "neutral" = "blue"))

Pie Chart – Sentiment Proportion

# Pie chart for sentiment proportions
ggplot(sentiment_summary, aes(x = "", y = count, fill = predicted_sentiment)) +
  geom_bar(width = 1, stat = "identity") +
  coord_polar("y") +
  theme_minimal() +
  labs(title = "Sentiment Proportion") +
  scale_fill_manual(values = c("positive" = "green", "negative" = "red", "neutral" = "blue")) +
  theme(axis.title.x=element_blank(), axis.title.y=element_blank(), 
        panel.border=element_blank(), panel.grid=element_blank(), 
        axis.ticks=element_blank(), axis.text.x=element_blank(), 
        plot.title=element_text(size=14, face="bold"))

Timeline Plot – Sentiment Over Time

Assume we also have a timestamp column in sentiment_results which captures the date and time of the comments.

# Prepare data for timeline plot
sentiment_results$timestamp <- as.Date(sentiment_results$timestamp)

sentiment_timeline <- sentiment_results %>%
  group_by(timestamp, predicted_sentiment) %>%
  summarise(count = n()) %>%
  ungroup()

# Plot the sentiment trend over time
ggplot(sentiment_timeline, aes(x = timestamp, y = count, color = predicted_sentiment)) +
  geom_line(size = 1) +
  theme_minimal() +
  labs(title = "Sentiment Trend Over Time", x = "Date", y = "Count") +
  scale_color_manual(values = c("positive" = "green", "negative" = "red", "neutral" = "blue"))

Conclusion

These plots provide a visual representation of the sentiment analysis results, helping the company to easily understand customer sentiment trends and distributions. You can extend these visualizations with more customizations based on requirements.

Insights and Business Applications

Extracting Insights

Once we have classified the social media comments as positive, negative, or neutral using our trained Random Forest model, we can extract meaningful insights. Here are the steps:


  1. Load the Classified Data:
    Load the dataset where each comment has a sentiment label from the Random Forest.


    classified_comments <- read.csv("classified_comments.csv")


  2. Aggregate Sentiment Counts:
    Count the occurrences of each sentiment (a proportion version is sketched after this list).


    sentiment_summary <- table(classified_comments$sentiment)
    print(sentiment_summary)


  3. Time-Series Analysis (if timestamp data is available):
    Analyze the trend of sentiments over time.


    library(dplyr)
    library(ggplot2)

    classified_comments$date <- as.Date(classified_comments$date)

    sentiment_trend <- classified_comments %>%
      group_by(date, sentiment) %>%
      summarise(count = n()) %>%
      ungroup()

    ggplot(sentiment_trend, aes(x = date, y = count, color = sentiment)) +
      geom_line() +
      labs(title = "Sentiment Trend Over Time", x = "Date", y = "Count")
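
Returning to the counts from step 2, proportions are often easier to communicate than raw counts:

# Share of each sentiment across all classified comments
round(prop.table(sentiment_summary), 3)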

Business Applications

Now that we have analyzed the sentiments, we can use these insights in various business contexts.

Customer Feedback Loop

Objective: To improve products/services based on customer feedback.


  1. Identify common themes in negative comments:


    library(tm)
    negative_comments <- classified_comments[classified_comments$sentiment == "negative",]
    negative_corpus <- Corpus(VectorSource(negative_comments$text))

    negative_tdm <- TermDocumentMatrix(negative_corpus)
    negative_matrix <- as.matrix(negative_tdm)
    negative_word_freq <- sort(rowSums(negative_matrix), decreasing=TRUE)

    head(negative_word_freq, 10)

    Extracting the top 10 most common terms in negative comments helps identify recurring issues (a word-cloud sketch follows this list).



  2. Communicate feedback to relevant departments:


    Create a report summarizing the common issues and share it with the product development and customer support teams for actionable insights.
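
As flagged in step 1, a word cloud of the negative-comment term frequencies makes recurring complaints visible at a glance. A minimal sketch, assuming the wordcloud package is installed and negative_word_freq from step 1 is in scope:

library(wordcloud)

# Most frequent terms in negative comments, scaled by frequency
wordcloud(words = names(negative_word_freq),
          freq = negative_word_freq,
          max.words = 50,
          random.order = FALSE)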


Marketing Strategy

Objective: To enhance marketing campaigns based on customer sentiment.


  1. Analyze positive comments to find key phrases:


    positive_comments <- classified_comments[classified_comments$sentiment == "positive",]
    positive_corpus <- Corpus(VectorSource(positive_comments$text))

    positive_tdm <- TermDocumentMatrix(positive_corpus)
    positive_matrix <- as.matrix(positive_tdm)
    positive_word_freq <- sort(rowSums(positive_matrix), decreasing=TRUE)

    head(positive_word_freq, 10)


  2. Highlight these positive phrases in marketing materials:


    Use the identified key phrases from positive comments to create authentic marketing content that resonates with potential customers.


Customer Support Prioritization

Objective: To prioritize customer support based on sentiment analysis.


  1. Flag negative comments for immediate action:


    immediate_action <- classified_comments[classified_comments$sentiment == "negative",]
    write.csv(immediate_action, "flagged_negative_comments.csv")

    Set up a workflow where the customer support team gets notified of negative comments promptly to take rapid action.


Conclusion

By following these practical steps, the company can not only comprehensively understand customer sentiments but also strategically leverage this data to enhance products, refine marketing strategies, and provide better customer support. These actions, informed by data-driven insights, will likely lead to improved customer satisfaction and retention.
