Detecting the Political Leaning of News Articles and News Media

Document Type

Dissertation

Abstract

This thesis addresses the challenge of detecting political bias in news articles and media outlets by proposing an automated method for classifying news sources and articles as left-, center-, or right-leaning. With the growth of mass media consumption, identifying bias in news reporting has become increasingly important, given the adverse effects of unaddressed political bias on society. We present a comprehensive approach that employs machine learning techniques to detect political leaning at both the article and the media level. For article-level classification, we evaluate several models, including SVM (baseline), CatBoost, LightGBM, BERT, XLNet, and XGBoost, on a diverse dataset of over 55,000 news articles from AllSides. For media-level classification, we collect a second dataset covering over 900 popular online news platforms, annotated with political bias labels (left, center, or right) from Media Bias/Fact Check; we crawl approximately ten articles from each website, totaling over 8,000 articles. On this dataset we employ BERT (a Transformer-based model) together with LightGBM, XGBoost, CatBoost, CatBoost OF, and SVM, effectively detecting political ideology across diverse media sources. For each model, we also aggregate the article-level predictions of a single medium into a media-level prediction by majority voting. Our results show that CatBoost outperforms the other models for article-level classification, demonstrating robustness and effectiveness in handling diverse data, and that majority voting over a medium's articles improves media-level performance. We further emphasize the importance of addressing class imbalance and using balanced data splits to improve model performance. For article-level classification with CatBoost, we achieve an MAE of 0.270, an F1 score of 0.690, and an accuracy of 0.694. For media-level classification, we obtain a competitive MAE of 0.299 with BERT, and with the majority voting classifier the CatBoost model reaches an F1 score of 0.727 and an accuracy of 0.725.
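
The majority-voting aggregation described above can be illustrated with a minimal sketch, assuming article-level predictions are available as (medium, label) pairs with labels encoded as 0 = left, 1 = center, 2 = right; the function names, data layout, and tie-breaking rule are illustrative assumptions, not the thesis's actual implementation.

from collections import Counter

# Sketch of the media-level aggregation: article-level predictions are grouped
# by outlet and reduced to a single label by majority vote. The label encoding
# (0 = left, 1 = center, 2 = right) and tie-breaking rule are assumptions.

def majority_vote(labels):
    """Return the most frequent label; ties resolve to the smallest label."""
    counts = Counter(labels)
    return max(counts.items(), key=lambda kv: (kv[1], -kv[0]))[0]

def aggregate_by_medium(article_predictions):
    """article_predictions: iterable of (medium, predicted_label) pairs."""
    per_medium = {}
    for medium, label in article_predictions:
        per_medium.setdefault(medium, []).append(label)
    return {medium: majority_vote(labels) for medium, labels in per_medium.items()}

# Toy usage: predictions from an article-level classifier for two outlets.
preds = [("outlet-a", 0), ("outlet-a", 0), ("outlet-a", 1),
         ("outlet-b", 2), ("outlet-b", 2)]
print(aggregate_by_medium(preds))  # {'outlet-a': 0, 'outlet-b': 2}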

First Page

i

Last Page

48

Publication Date

6-2023

Comments

Thesis submitted to the Deanship of Graduate and Postdoctoral Studies

In partial fulfillment of the requirements for the M.Sc. degree in Natural Language Processing

Advisors: Dr. Shady Shehata, Dr. Preslav Nakov

Online access available for MBZUAI patrons
