Comparative Analysis of LLMs and Traditional Models for Emotion Classification on the GoEmotions Dataset
This study evaluated DistilGPT-2, GPT-2, BERT, Multinomial Naive Bayes, and Random Forest models on the GoEmotions dataset, focusing on accuracy, recall, and efficiency. Multinomial Naive Bayes achieved moderate performance, using count vectorization with Laplace smoothing for text representation, but class imbalance led to lower recall on the less frequent emotion labels.
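The Naive Bayes setup described above can be sketched with scikit-learn, where `MultinomialNB(alpha=1.0)` applies Laplace (add-one) smoothing over count features. The toy texts and labels below are hypothetical stand-ins, not actual GoEmotions examples:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical stand-ins for GoEmotions (text, emotion-label) pairs.
texts = [
    "i am so happy today",
    "this makes me furious",
    "what a joyful surprise",
    "i am angry about this",
]
labels = ["joy", "anger", "joy", "anger"]

# CountVectorizer builds bag-of-words counts; alpha=1.0 is Laplace
# smoothing, which keeps unseen tokens from zeroing out a class score.
model = make_pipeline(CountVectorizer(), MultinomialNB(alpha=1.0))
model.fit(texts, labels)

print(model.predict(["so happy and joyful"])[0])
```

With heavily imbalanced label counts, this classifier's prior favors frequent classes, which is consistent with the lower recall reported for rare emotion labels.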