Facebook Political Ad Classifier

Final Project for CS 184: Bridging Tech & Public Policy in partnership with Facebook. Special thanks to our mentors: Alon Levy, Andreas Paepcke, Jeffrey Ullman, and Janna Huang.

The Problem

A political ad running without a "Paid for by" byline, in violation of Facebook's political ad policy.

Under Facebook’s current ad system, it’s easy for malicious actors to publish political ads as ‘non-political’ and evade important disclaimer requirements -- a weakness that enables the spread of misinformation. We examined the ads in Facebook’s political ad database that were initially published without a disclaimer byline. We sought to understand the political leaning of these 450,000+ ads and the connections between ad sentiment, political leaning, and impression counts. With a more robust understanding of these ads, we developed reasoned recommendations for how to improve the overall system for the sake of American users.

Political Leaning Classifier

To address our research question, I created a classifier that labelled ads as left-leaning, right-leaning, or neutral. This was my part of the project, so I will focus on the process it required. If you're interested in the overall project, the implications of this classifier, and the work of my group mates, feel free to take a look at our final presentation.

For our data, we decided to use ads without a "Paid for by" byline in the Facebook Ad Library API. We used the ad-collector code from NYU's Online Political Transparency Project to extract the ads from the Facebook Ad Library and store them in a PostgreSQL database, then extracted the messages of all ads missing a "Paid for by" byline. Each ad message was analyzed for positive, neutral, and negative sentiment using the VADER NLP sentiment analysis package. After calculating a sentiment value for each word in a given ad's text, the tool averages them into a single sentiment value per ad, which we used as the overall sentiment of the ad.
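The word-level averaging described above can be sketched as follows. This is a minimal illustration with a toy lexicon, not VADER itself; in the actual pipeline, VADER's `SentimentIntensityAnalyzer` handles the lexicon lookup and scoring internally.

```python
# Minimal sketch of per-word sentiment averaging. The lexicon here is a
# toy stand-in; the real pipeline used the VADER sentiment package,
# whose full lexicon scores words on roughly a -4..+4 scale.
TOY_LEXICON = {
    "great": 3.1, "support": 1.7, "win": 2.8,
    "corrupt": -3.2, "attack": -2.1, "crisis": -2.5,
}

def ad_sentiment(message: str) -> float:
    """Average the per-word sentiment scores; words missing from the
    lexicon contribute a neutral score of 0.0."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(TOY_LEXICON.get(w, 0.0) for w in words) / len(words)
```

A positive average marks the ad's overall sentiment as positive, a negative average as negative, and values near zero as neutral.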

To analyze how political leaning affects the number of impressions a misclassified political ad receives, I built a machine learning classifier to determine the political leaning of each ad. Specifically, we used BERT fine-tuning for sentence classification, a neural-network-based natural language processing (NLP) technique for modeling language patterns. The final model has a classification accuracy of 0.58; this mediocre value is likely due to the type of training data available. We used Facebook ads from the Ad Library, but many of them were neutral, contained the same message, or lacked the contextual text needed to strongly signal a left- or right-leaning ad. The training set consisted of 27,078 ads labelled liberal, conservative, or neutral.
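In outline, fine-tuning BERT for three-way sentence classification looks like the sketch below. It uses the Hugging Face `transformers` API; the model name, hyperparameters, and function names are illustrative assumptions, not the exact configuration we ran. The heavy dependencies are imported lazily inside the function so the label mapping stands on its own.

```python
# Label scheme assumed throughout this sketch: 0 = left, 1 = neutral, 2 = right.
LABELS = ["left", "neutral", "right"]
LABEL2ID = {name: i for i, name in enumerate(LABELS)}

def fine_tune(train_texts, train_labels, epochs=2, batch_size=4):
    """Sketch of fine-tuning bert-base-uncased for 3-way ad classification.

    Hyperparameters are illustrative; batch_size=4 mirrors the small
    batches needed to fit in GPU memory. Requires `torch` and
    `transformers` (imported lazily here).
    """
    import torch
    from torch.utils.data import DataLoader
    from transformers import BertTokenizerFast, BertForSequenceClassification

    tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=len(LABELS)
    )
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model.to(device).train()
    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

    # Tokenize once; long ad messages are truncated to BERT's input limit.
    enc = tokenizer(train_texts, truncation=True, padding=True,
                    return_tensors="pt")
    dataset = list(zip(enc["input_ids"], enc["attention_mask"],
                       torch.tensor(train_labels)))
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)

    for _ in range(epochs):
        for input_ids, attention_mask, labels in loader:
            optimizer.zero_grad()
            out = model(input_ids=input_ids.to(device),
                        attention_mask=attention_mask.to(device),
                        labels=labels.to(device))
            out.loss.backward()   # cross-entropy loss over the 3 classes
            optimizer.step()
    return model, tokenizer
```

A caller would map string labels through `LABEL2ID` first, e.g. `fine_tune(texts, [LABEL2ID[y] for y in leanings])`.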

To correct for some of the classifier's inaccuracy, ads published by advertisers with a known political leaning were, after classification, hard-coded as either liberal or conservative.
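This post-classification override amounts to a simple lookup. The sketch below illustrates the idea; the advertiser names and leanings are hypothetical examples, not our actual table.

```python
# Hypothetical lookup table of advertisers with known leanings;
# the real table was compiled from known political advertisers.
KNOWN_LEANINGS = {
    "Example Liberal PAC": "liberal",
    "Example Conservative PAC": "conservative",
}

def final_label(advertiser: str, predicted: str) -> str:
    """Prefer the advertiser's known leaning over the model's
    prediction; fall back to the prediction otherwise."""
    return KNOWN_LEANINGS.get(advertiser, predicted)
```

Because the override only fires for advertisers in the table, the model's prediction is untouched for the vast majority of ads.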

Results of Ad Classification

Challenges & Reflection

Overall, this project was an amazing learning experience for me. I had little to no experience with machine learning, and this was the first time I worked with such a large dataset. I started with a basic SVM classifier, but found that it classified ads based on individual keywords rather than on an understanding of the words in context. I pivoted to BERT fine-tuning for sentence classification in hopes of yielding more accurate ad classification. Building the classifier was definitely a challenge. Many technical issues arose: some came from working with BERT itself, since it was a new model that no one, not even our mentors, had used before. The Google Colab notebook also presented issues with timeouts, dropped connections, and limited memory. My mentor helped set up a server where the classifier could run in a Jupyter notebook; there, the issue was CUDA memory running out before the model could finish training. We found that reducing the batch size to 4 let the model train without exhausting memory.


With the help of my mentor, we were able to get the classifier working and ran the entire set of 476,146 ads through the BERT model, reaching an accuracy of 58% -- likely limited, as noted earlier, by training data that was often neutral, repetitive, or short on context. We tried to obtain the training dataset from the paper our model was based on, but we were unable to reach its owners. With that dataset, according to the paper, we would have had an accuracy of 68%.