SCHOOL OF SCIENCES AND HUMANITIES

PREDICTING ELECTION RESULTS USING ONLINE SENTIMENTS IN RUSSIA AND THE US

ПРОГНОЗИРОВАНИЕ РЕЗУЛЬТАТОВ ВЫБОРОВ НА ОСНОВЕ ОНЛАЙН СЕНТИМЕНТОВ В РОССИИ И США

ОНЛАЙН СЕНТИМЕНТТЕР АРҚЫЛЫ РЕСЕЙ МЕН АҚШ-ТАҒЫ САЙЛАУ НӘТИЖЕЛЕРІН БОЛЖАУ

BY

Aigerim Aibassova NU Student Number: 201325346

APPROVED BY

Dr. Hélène Thibault ON

10 May 2022

Signature of Principal Thesis Adviser

Dr. Gento Kato

In Agreement with Thesis Advisory Committee:

Second Adviser: Dr. Chunho Park

External Reader: Dr. Kokil Jaidka


PREDICTING ELECTION RESULTS USING ONLINE SENTIMENTS IN RUSSIA AND THE US

ПРОГНОЗИРОВАНИЕ РЕЗУЛЬТАТОВ ВЫБОРОВ НА ОСНОВЕ ОНЛАЙН СЕНТИМЕНТОВ В РОССИИ И США

ОНЛАЙН СЕНТИМЕНТТЕР АРҚЫЛЫ РЕСЕЙ МЕН АҚШ-ТАҒЫ САЙЛАУ НӘТИЖЕЛЕРІН БОЛЖАУ

by

Aigerim Aibassova

A thesis submitted in partial fulfilment of the requirements for the degree of

Master of Arts in

Political Science and International Relations

at

NAZARBAYEV UNIVERSITY -

SCHOOL OF SCIENCES AND HUMANITIES

2022


Abstract

Social media is one of the most prominent spaces for communication in the 21st century and, as such, is deeply intertwined with our political life, serving both as its reflection and as an influence on it. Because of this, there is rising scholarly interest in using social media to predict election results, so far with mixed success. This thesis aims to test the connection between political processes, regime types, and the predictive power of social media data by using two countries as case studies: the United States and Russia. Several tentative results are produced. Firstly, the predictive power of online opinions is revealed to be higher for the US as compared to Russia – presumably due to the former’s democratic and the latter’s non-democratic political system. Secondly, filtering on certain sociodemographic groups can affect the accuracy of predictions. For instance, while selecting only large-city urban populations can increase prediction errors for both countries, removing tweets from election candidates can have an asymmetric effect in the two countries: improving the predictions for Russia while decreasing their accuracy for the US. While the results have little claim to generalizability across regime types, they provide groundwork for further research on how different political phenomena and conditions shape the ways in which election predictions can be improved.


Contents

Chapter 1: Introduction ...5

Chapter 2: Literature review ...6

Social media and its political usage across regimes ...6

Predicting elections using social media ...8

Situating the research question ...11

Chapter 3: Hypotheses ...12

Hypothesis 1: On regime influences ...13

Hypothesis 2A: On public figures and organizations ...14

Hypothesis 2B: On urban areas ...15

Hypothesis 2C: On candidate activity ...15

Chapter 4: Methodology ...15

Data collection ...16

Measurements and operationalizations ...17

Data analysis ...18

Chapter 5: Results ...21

Chapter 6: Discussion ...36

Chapter 7: Limitations ...38

Chapter 8: Conclusion ...39

Appendix: Baseline predictions dataset ...40

Bibliography ...43


List of Figures

Figure 1: Total number of tweets by country ...22

Figure 2: Metadata volume per tweet by country ...23

Figure 3: Metadata volume in Russia by year ...24

Figure 4: Metadata volume in the US by year ...24

Figure 5: Predictions of 2012 Russian presidential elections ...25

Figure 6: Predictions of 2018 Russian presidential elections ...25

Figure 7: Predictions of 2012 US presidential elections ...26

Figure 8: Predictions of 2016 US presidential elections ...26

Figure 9: Predictions of 2020 US presidential elections ...27

Figure 10: Comparing prediction errors ...28

Figure 11: Comparing prediction errors by method ...28

Figure 12: Prediction errors without verified accounts ...29

Figure 13: Mean average error (MAE) change ...30

Figure 14: Prediction errors for large cities ...31

Figure 15: Mean average error (MAE) change ...31

Figure 16: Total number of tweets by candidates | RU 2012 ...32

Figure 17: Total number of tweets by candidates | RU 2018 ...33

Figure 18: Total number of tweets by candidates | US 2012 ...33

Figure 19: Total number of tweets by candidates | US 2016 ...34

Figure 20: Total number of tweets by candidates | US 2020 ...34

Figure 21: Prediction errors without candidate tweets ...35

Figure 22: Mean average error (MAE) change ...36


Chapter 1: Introduction

As the number of people with access to the internet and social media grows, opinion-mining has become a prominent technique for analysing community perceptions of political and other events over time. One advantage of such analysis is that it potentially allows for circumventing certain types of government-imposed censorship, e.g., news media censorship, by virtue of the anonymity those platforms sometimes allow for or simply the sheer amount of content, which can be challenging to crack down on. Thus, the relative transparency of social media as compared to traditional media – such as TV or mainstream news outlets – means that the underlying political sentiments that users share can be exposed, as individuals are more likely to voice their true beliefs under the cover of anonymity and in the absence of top-down censorship. As such, I argue that opinion-mining of social media is a technique that can grant unique perspectives on the political sentiments of the population, specifically when contrasting non-democratic and democratic states.

During the last decade, opinion-mining of social media has been extensively used to predict the results of elections across different countries (Jungherr, Jürgens, and Schoen 2011; Soler, Cuartero, and Roblizo 2012; Tumasjan et al. 2010; Tsakalidis et al. 2015). At the same time, the scholarly literature debates the accuracy of election predictions based on social media data (Chung and Mustafaraj 2011; Metaxas and Gayo-Avello 2011): while some predictions are quite consistent with official election results, others are not. Hence, it is unclear whether the usability of this technique is consistent across countries or whether it is better suited for some types of countries or contexts. Thus, in this thesis I focus on whether the predictive power of online sentiments – in this case, from Twitter – regarding election results differs across regime types and, if so, how exactly. I focus specifically on the regime type as it is perhaps one of the broadest political factors that can influence both the process of elections and how people tend to speak about it. To approach this puzzle, I study two countries that fall on different sides of the democracy vs. non-democracy divide: the United States and Russia. My research question is as follows:

“How do the approaches for social media-based predictions of election results differ in performance across different political regimes in the example of the US and Russia?”

I will begin my thesis by presenting the literature review to situate the research question within the larger body of scholarly work on the usage of social media across regimes and on election prediction methods. Here, my research question addresses the link between these two bodies of literature and, specifically, how the former can be relevant in investigating the latter. After that, in Chapter 3, I will go over my hypotheses, which propose several ideas. One, that the predictive power of online sentiments should be higher in the US than in Russia. Two, that there are several moderators based on sociodemographic groups of users that can have an asymmetric impact, improving or worsening the predictions, when comparing the two countries. Public figure and organization opinions are hypothesized to be more predictive in Russia, while urban population opinions are hypothesized to be more predictive in the US. At the same time, I propose that election candidate tweets are less predictive in Russia compared to the US. In Chapter 4, I will present my methodology, which is consistent with other works in the field of election predictions, i.e., web-scraping of Twitter to collect the data and then using volumetric and sentiment analysis-based prediction techniques – which will be discussed in detail – to produce estimates of candidate vote shares. In Chapter 5, descriptive statistics summarizing the collected data will be presented first in order to introduce the results. Results for each hypothesis will be summarized in figures, after which the next chapter will discuss them in relation to the hypotheses. In short, Hypothesis 1, on the differences in the predictive power of online sentiments across the two countries, seems to be supported by the analysis carried out in this thesis.

However, there is no conclusive evidence in favour of the asymmetric effects of public figure and organization opinions or urban population sentiments. On the other hand, there is some tentative support for how candidate tweets can be less predictive of results in Russia as compared to the US. Chapter 7 will outline the main limitations of the methodology, while Chapter 8 concludes the thesis and presents some directions and recommendations for future research.

Chapter 2: Literature review

The research question stated above interlinks two areas of scholarly literature: one on the nexus between politics and social media and the other on social media-based election predictions. As the latter body of literature sheds light on technical aspects of making those predictions, the former is equally – if not more – important as it explains the underlying political processes that enable social media to be predictive of election results in the first place. This observation is especially relevant for the scope of this thesis research since the way political communication and commentary are reflected on social media also depends on the larger political climate of any given country. In other words, the real-world political processes map into social media, and in turn, those political opinions found on online platforms map into election predictions. Hence, in this literature review I will first explore the relation between online platforms and politics. Then, I will review the body of scholarly work on social media-based election predictions. Finally, I will situate my research question in relation to the previous two sections.

1. Social media and its political usage across regimes

The scholarly literature seems to paint a somewhat contrasting picture of how the political purpose of social media varies between democracies and non-democracies. While more democratic states create systems within which – among other uses – election candidates can use social media as a campaigning tool, non-democracies often bear witness to protests and resistance movements that heavily depend on social media for information dissemination and logistics, given the relative unavailability of other civil society structures. This is not to say that democratic countries do not rely on social media for protests and social movements, but rather that civil society in non-democracies is more constrained to that choice due to the relative scarcity of alternatives (including traditional media and on-street protests). The following paragraphs dive deeper into this distinction, while outlining the mechanisms through which social media influences and aids political behaviour.

According to the literature, even though messaging and blogging platforms have existed for the better part of the 21st century, the role of social media as a widely recognized political tool seems to be a relatively recent development. On this subject, scholars point to Barack Obama’s use of social media platforms as one of the first successful cases that cemented social networks’ place in politics (Soler, Cuartero, and Roblizo 2012, 1194). There seems to be little to no controversy among the scholars on whether social media campaigning has an effect on voter recruitment – the general consensus is that it does have a noticeable positive effect (Bright et al. 2019; Dimitrova and Matthes 2018). The literature describes several overarching ways in which these effects take place. Firstly, “the name recognition effect” – which is disputed by Kobayashi and Ichifuji (2015) – stipulates that mere familiarity with any candidate’s name might incline a person to vote for them when all other metrics are equal (Bright et al. 2019; Zajonc 1968). Secondly, communication via social media is much easier, instantaneous, and has a wide reach, which means that candidates can effectively talk about their programs and expeditiously interact with potential voters (Bright et al. 2019; Auter and Fine 2017). Thirdly, “friends and family bias” – by which people are more likely to have a higher opinion of those whom their surroundings support – is also quite pronounced on social media since people can see others’ interactions with politician accounts (Auter and Fine 2017).

While there is no conclusive evidence on whether underlying psychological effects like “the name recognition effect” or “friends and family bias” have any significant bearing on people’s desire to vote for a certain candidate, most researchers seem to largely agree that campaigning over social media is both cheaper and more effective in terms of raw reach (Auter and Fine 2017; Reuter and Szakonyi 2013). After studying US Senate candidates and their patterns of social media campaigning, Auter and Fine (2017, 186) state that this relative cost-efficiency and the ability to directly disseminate information instead of having to go through journalists and traditional media encourage “underdog” candidates to heavily rely on and benefit from platforms like Twitter and Facebook. However, it is still worth noting – as pointed out by Dimitrova and Matthes (2018) – that measuring the exact effect of social media campaigning is enormously complex given that those who use these online platforms are not necessarily the same people who vote.

While the above can apply to both democracies and – to the extent that election campaigning is possible – non-democracies (Tapsell 2020), the latter also heavily depend on social media platforms for civic mobilization and general non-election-centred messaging from anti-establishment figures. Weak “conventional” civil societies push activists to create “virtual” civil societies to challenge the regime, as was the case in Tunisia, Egypt, Ukraine, and Russia (Beissinger 2017, 364). In this way, the importance of online political spaces can be drawn from their real-world implications. Two studies – one by Reuter and Szakonyi (2013) and one by Beissinger (2017) – examine the case of Russia’s 2011-2012 election fraud protests. The articles state that social media platforms and blogging websites were instrumental in effectively spreading information about fraud. This is consistent with the previous section in regard to the internet’s broadcasting capacity and its usefulness for the “underdogs”. However, Beissinger (2017, 367) also argues that these “virtual” civil societies tend to live by a false claim of representing the people, whilst in reality they tend to be disconnected from the majority of the population. This is reinforced by Reuter and Szakonyi’s (2013, 42) conclusion that “Facebook/Twitter users are significantly more likely to believe that there were significant electoral violations in the 2011 elections”. Given that social media users and non-users already have initial demographic differences, the additional differences in information exposure can render any form of activism that emerges from the internet disconnected from the everyday reality of the population at large. In contrast to this, another case study, which explored South Korea’s candlelight protests of 2016-2017 concerning corruption in the government, demonstrates that these protests had tangible effects in the form of a presidential impeachment (Yun and Min 2020). However, while these studies show the same core mechanisms of social media-driven mobilization, they point to differing outcomes, which is perhaps connected to differing socio-economic and political conditions.

Concluding this section, the literature describes a similar core use of social media and the internet in political contexts – spreading a message in a faster and cheaper way – albeit for different purposes. Be it election campaigning and protests in democratic countries or opposition-led unrest in non-democracies, rapid communication and the ease of use of online spaces make the internet a desirable tool. A notable point of interest here, then, is how the differences between “virtual civil societies” and “conventional civil societies” can shape the discrepancies between people’s online voices and their real-world voices in relation to the state regime. Given the relative freedom of speech and explicit political usage of social media by the elite in democratic countries, these differences should not be as noticeable there. In other words, virtual spaces are perhaps more in line with conventional civil societies, which would render online political discourses a reflection of real-world political opinions in democracies. On the other hand, the distinctions between virtual civil societies and conventional civil societies seem to be more prominent in non-democracies. Hence, this links back to the research question: the way people use the internet – and social media specifically – in contrast to traditional ways of displaying political opinions is dependent on the regime. This leads to the question of how big the difference is and whether it is reflected in the process of elections.

2. Predicting elections using social media


Election prediction is an area of research that emerged a mere ten years ago and as such has a loosely structured theoretical backbone. Most research works take the form of case studies wherein authors attempt to predict elections in different countries or in different time frames. It is worth noting right away that a glaring gap in this literature is its limited geographical coverage, as many scholars attempt to study countries like the US (Brito and Adeodato 2020; Oikonomou and Tjortjis 2018) or European countries (Jungherr, Jürgens, and Schoen 2011; Soler, Cuartero, and Roblizo 2012; Tumasjan et al. 2010; Tsakalidis et al. 2015), while developing countries or non-democracies are – with few exceptions (Jaidka et al. 2018) – left largely unstudied. Hence, there might be an inherent sample-driven bias in the literature as a whole.

That being said, most of the studies focus on the actual execution of prediction algorithms which can be subdivided into several topics: techniques, effectiveness, and challenges. The two most widely used techniques are volumetric and sentiment analysis of social media posts. Before addressing those two methods, it is worth pointing out that virtually every study uses internet web-scraping – that is, automated data gathering which relies on code – to collect the necessary data for analysis. This data can be filtered by certain parameters, e.g., one can attempt to only gather tweets that contain a specific candidate’s name but not the other candidates’ names. Regarding the actual methods of making predictions, there is a very clear conceptual difference between volumetric analysis and sentiment analysis. In simple terms, the former uses the sheer number of mentions of a candidate or a party to extrapolate their share of votes (Jaidka et al. 2018; Soler, Cuartero, and Roblizo 2012, and Tumasjan et al. 2011). At the same time, the literature seems to lack a solid theoretical foundation for volumetric analysis as it is not clear why mentions of a person or a party can serve as a proxy for votes. The only potential explanation mentioned in the literature is the “name recognition effect”. As the number of tweets, replies, retweets, and overall engagement with content mentioning a politician rises, the number of people who recognize their name increases as well. In contrast, sentiment analysis uses the same filtered data as volumetric approaches and tries to infer their semantic meanings and evaluates their polarity in terms of “positive”, “negative”, or “neutral” to gauge the opinion of people and infer the vote from there (Tsakalidis et al. 2015). Consequently, this means that sentiment analysis techniques have to rely on other significantly more sophisticated procedures to automatically analyse written text as compared to simply counting the number of mentions.

The scholarly literature seems to mostly concur that sentiment analysis-based predictions generally have better accuracy than volumetrically inferred ones (Metaxas and Gayo-Avello 2011; Jaidka et al. 2018). Theoretically, this difference could be motivated by the fact that the name recognition effect can be confounded by negative emotions, while sentiment analysis accounts for them. However, the overall accuracy of social media-driven predictions is under scrutiny despite there being a number of reportedly successful applications of both techniques, e.g., Tumasjan et al. (2010) and Ceron et al. (2013). In their ground-breaking paper, Tumasjan et al. (2010, 414) used a simple volumetric approach to infer the results of the 2009 German federal elections. They state that “… the mere number of messages reflects the election result and that this rather simple metric, with mean average error (MAE) of 1.65%, even comes close to traditional election polls”. Later, Soler, Cuartero, and Roblizo (2012) test a similar method in the context of Spanish elections – both general and regional – and conclude that there is a correlation between tweet numbers and election results. On the sentiment analysis side, Oikonomou and Tjortjis (2018) correctly predicted the winner of the 2016 US presidential race in three key states: Florida, Ohio, and North Carolina. In a somewhat different lane, Brito and Adeodato (2020) claim to have enhanced the performance of traditional polls by combining them with social media data. Despite these reported successful works, there are a number of criticisms which claim that election prediction methods are at best inconsistent or at worst accidentally accurate. Notably, Metaxas and Gayo-Avello (2011) use a procedure similar to that used by Tumasjan et al. (2010) but in the context of US Congressional elections. Their results produce a MAE of 17.1% as opposed to the 1.65% acquired by Tumasjan et al. (2010). However, when using sentiment analysis, they reduce the MAE to 7.6%, yet state that it is also “unacceptably high” (Metaxas and Gayo-Avello 2011, 168). A number of other studies – such as those by Chung and Mustafaraj (2011) – demonstrate results that are not as positive. All in all, the feasibility of social media as a good basis for predicting elections seems heavily contested. However, while there are both optimistic and pessimistic results, almost none of the studies are able to claim the generalizability of their results, which is also pointed out by Gayo-Avello (2012, 92). That is to say, individual case studies can come up with positive or negative results, yet there are no studies that evaluate the concept of the approach itself, and this seems to be an important gap in the literature.

The last paragraph already demonstrates that there are a number of challenges in the field given the noticeable inconsistency of results. Here, two other challenges will be outlined in brief: the lack of consistent methodological choices and the complexities of demographic biases. Starting with the first – and arguably the biggest – challenge, the field seems to be very loosely structured on a methodological level. The slightest changes in data collection can seemingly have a noticeable effect on the results. For instance, while certain studies suggest that only taking the last 24 hours provides the best results (Dwi Prasetyo and Hauff 2015), other studies use a weighting system wherein posts made a day before the elections have the highest impact on the prediction and the other days weigh less in descending order (Jaidka et al. 2018). On a similar theme, both Dwi Prasetyo and Hauff (2015) and Jaidka et al. (2018) modify their volumetric algorithm to only account for the number of unique users that tweet about a particular politician and count that as a vote instead of the overall number of tweets. In other words, the whole process of election prediction has a somewhat “black box” characteristic at the moment. The second challenge – demographic biases – comes from the fact that, depending on the context, there can be dramatic differences between those who express their opinion on social media and those who end up voting (Metaxas and Gayo-Avello 2011, 167). This challenge exists on a somewhat different plane than the previous one since, even if one could accurately infer the votes of everyone on social media, this might not predict the election results. In general, it seems that not enough attention is paid to this bias, as is stated by Dwi Prasetyo and Hauff (2015, 151). The authors posit that this might be due to de-biasing not providing significant improvements since, in many studied countries, internet penetration levels are already quite high (Dwi Prasetyo and Hauff 2015, 151). This claim is in line with the general trend of mostly studying developed countries among the scholars. However, demographic biases are still argued to be important, as seen when Jaidka et al. (2018) studied India, Pakistan, and Malaysia. While 90% of India’s and Pakistan’s tweets were in English – and hence processed – only 23% of Malaysia’s tweets were (Jaidka et al. 2018, 5). The authors conclude that while the results for India and Pakistan were quite promising, Malaysia’s data “[…] did not offer any advantages over traditional polls” (Jaidka et al. 2018, 17).

In conclusion of this section, there are a variety of tools and methodologies involved in making election predictions based on social media data. Out of all of them – despite having problems of its own – sentiment analysis-based predictions seem to produce more accurate results as compared to simple volumetric predictions. However, at the same time, these positive results are highly disputed and are claimed to have low replicability and dependency on arbitrary choices of researchers. While the literature is quite divided in terms of demonstrating successful or unsuccessful applications of any given algorithm or methodology, there is a very visible gap in why and when these techniques fail or succeed and how these results can be related to overall political trends in given countries. Theoretical explanations of processes involved in making predictions and systematizations of gold-standard procedures are largely unaddressed in the literature in favour of isolated case studies.

3. Situating the research question

There are three points of justification as to why I chose my research question. Firstly, as was demonstrated above, the intersection between social media and politics has a very tangible current-day relevance, as a significant portion of the world population frequently uses social media and consumes a good portion of information from those platforms. Hence, exploring political opinions on social media might shed light on real-life political phenomena. Secondly, I specifically aim to address the theoretical gap pertaining to how predictive algorithms relate to the political context in which these algorithms are deployed. Given that the regime type is arguably one of the country-level characteristics that most closely relates to politics and political phenomena, I theorize that it would have a prominent effect on predictions based on social media. Thirdly, I aim to contribute to the existing literature by expanding the range of countries that are used for election predictions. As mentioned above, the countries used for this type of research quite often tend to be English-speaking democracies. I add the additional case of Russia, which is a non-English-speaking non-democracy.

The case study choice for this thesis has fallen on the US and Russia. The choice is motivated by several factors. Firstly, language accessibility, which is important since portions of the data require sentiment analysis, for which some level of language familiarity is helpful.

Additionally, just navigating the Twitter space and identifying accounts of presidential candidates would benefit from knowing the language. Secondly, both cases present clear-cut opposites of each other in terms of the regime type. Thirdly, while autocracy in Russia is quite well-studied, there is no considerable literature on election predictions in Russia. Fourthly, Russia presents an interesting case specifically for the scope of this research given how it is often characterized as an “informational autocracy” which employs lower levels of violent repression but higher levels of tight media control when it comes to national broadcasting and traditional media (Guriev and Treisman 2019).

Chapter 3: Hypotheses

Non-democracies and democracies have significant differences when it comes to the way in which people’s electoral opinions are formed and the ways in which those opinions are translated into final electoral outcomes. Electoral fraud both directly produces an inaccurate picture of real political sentiments of people and changes their voting patterns incentivizing the population to not vote for candidates that are deemed unlikely to win due to their political profile. Additionally, there are potential tangible costs for voting opposition and potential benefits for supporting the current leader (Miller 2015). Therefore, while election outcomes cannot be counted as a real representation of popular political sentiments in non-democracies, social media can arguably shed light into them to a certain degree. Based on these initial assumptions, the hypotheses below investigate the relationship between political sentiments of people displayed in social media and election outcomes. Hypothesis 1 puts forth a basis that depending on the regime type the degree to which social media sentiments coincide with official election results differs: higher similarity in the more democratic US than Russia which represents a non-democracy. Hypothesis 2 takes a more detailed look at the connection by identifying potential moderators: online opinions of certain sociodemographic groups – such as urban populations – are more or less likely to be predictive of election outcomes based on country regime types. Hence, Hypothesis 2 is reflective of real political processes that take place in the nexus between elections, regime types, and online spaces.

An important concept within the literature on elections in autocracies is “the dictator’s dilemma”, which states that while dictators can lose considerable power with too much electoral freedom, they would never know what people think of them with too much repression (Wintrobe 1998). Making elections fair and transparent would potentially cost a dictator their position in power or at the very least delegitimize their rule if the election outcomes show noticeable opposition support. On the other hand, more transparent elections can be a source of valuable information. Here, the value of such information is that, without knowing their actual standing and popularity in the eyes of the populace, dictators might eventually lose all of their power if people see them unfavourably. Information on who supports them and where would allow the dictator to take relevant measures to stabilize their power, prolong their reign, and prevent any potential moves by opposition forces and their supporters.

Given this context, there are two crucial mechanisms that can influence electoral outcomes in non-democracies: election corruption and the populace’s incentives – or lack thereof – to vote against an establishment candidate irrespective of their true opinions. Starting off with the first mechanism, non-democratic regimes tend to have significantly higher levels of election corruption which skews the results in a way that supports the establishment and simultaneously is not reflective of people’s real opinions. According to Knutsen et al. (2017, 107), “[…] elections in which incumbents perform worse than expected provide informative signals of regime popularity […]”. Therefore, certain autocrats might opt into mass fabrication of election results to avoid potential regime destabilization. Electoral manipulations of this sort directly misrepresent population sentiments – which can be inferred from social media – in non-democracies, while democratic elections serve as a more accurate representation of actual political sentiments of people.

The second mechanism lies in the consideration that there is a psychological effect of expecting repercussions for anti-establishment votes or tangible gains for pro-establishment votes. One of the reasons why authoritarian rulers hold elections is that they are able to gather information about their support base and opponents. With that information available, the ruler can establish more favourable socioeconomic conditions in regions where their support is high, while neglecting the pro-opposition regions (Miller 2015). I stipulate that the expectation of such treatment can have an effect on voters’ willingness to cast a genuine vote, i.e., the one consistent with their political ideological preferences instead of the one induced by fear. Additionally, according to Lust-Okar (2006), individuals are more likely to vote for those officials who – by their estimations – would be able to provide them with certain benefits and who also quite often happen to be pro-regime. These expectations of “the rigged game” in non-democratic countries create a situation wherein – even under more transparent elections – voters may end up casting disingenuous votes. However, these expectations – which seemingly do not take place in democracies – should not have an effect on the kind of opinions people express online.

As a result of the mechanisms outlined above, there is an expected heightened discrepancy between real election results and predictions based on social media in non-democratic states as opposed to democracies. The underlying assumption here is that social media provides a comparatively freer medium for expressing people’s real political opinions.

Evidence suggests that social media is often used by opposition movements and candidates or in general for expressing anti-regime grievances due to it being harder to control and monitor by most states (Reuter and Szakonyi 2013; Beissinger 2017). As stated by Beissinger (2017, 364), “virtual” civil societies are used as a substitute for weakly organized “conventional” civil societies since the latter are often repressed in non-democratic countries. This implies a level of greater freedom that online platforms allow people when compared to, say, traditional media outlets in the form of print newspapers or television in non-democratic regimes. This leads to the following hypothesis.

Hypothesis 1: “The predictive power of online sentiments is lower in Russia as compared to the US.”

Different socioeconomic groups can have somewhat differing general voting tendencies and preferences. This can intuitively be explained by people’s political preferences being shaped by their socioeconomic needs. For instance, lower-income individuals would on average be more likely to vote for redistributive policies, while individuals with higher income would more likely resist those. On the other hand, what kind of opinions get filtered out or repressed – both in elections and in the media – depends on the regime type. As stated above, virtually all non-democracies commit electoral fraud, even if minimal as in the case of competitive authoritarian regimes (Levitsky and Way 2002, 53). Anti-regime opinions – which also often align with general democratic preferences – are stifled during elections and on most traditional media outlets. Combining the two trends above, it can be observed that when socioeconomic voting patterns and regime types overlap, distinct patterns of which opinions are voiced and/or counted appear. Preferences of demographics that tend to vote anti-regime are less likely to be predictive of official election results in non-democracies. At the same time, certain demographics might avoid voicing anti-regime sentiments online, which ends up aligning their voices with election results. Based on these observations, the sub-hypotheses below identify three distinct sociodemographic groups whose social media sentiments tend to be more or less predictive of election results based on the regime type of their state.

As one of the premises of Hypothesis 1 states, genuine political opinions are more likely to be found in online spaces not only due to the lack of top-down filtering but also because individuals are less likely to self-censor thanks to relative anonymity. However, this also suggests that without anonymity and with higher social prominence and visibility, self-censoring could still be a relevant phenomenon on social media. Hence, I stipulate that public figure or organization – including media outlets, private and public enterprises, associations, etc. – accounts are either less likely to express any kind of political opinions online or are more likely to express pro-regime opinions in non-democracies. Hence, their online opinions are more predictive of election results in non-democratic countries. On the other hand, prominent social figures and organizations are more likely to express a more diverse set of opinions on social media in democracies.

Hypothesis 2A: “Public figure and organization opinions are more predictive of election results in Russia than the US.”

Literature on the effects of education on democracy suggests that there is a positive causal link that goes from the former to the latter (Evans and Rose 2012; Glaeser et al. 2007).


Educational institutions are linked to heightened civic participation by means of reducing the costs of socialization and to general anti-dictatorial sentiments. Assuming that urban spaces tend to have both more accessible and higher quality education, urban population is supposedly more likely to be liberal-leaning and anti-dictatorial compared to non-urban population. This trend is also present in non-democratic countries (Testa 2018). Arguably, in non-democracies, this tendency can translate into more prominent pro-opposition support. While the pattern is the same for both democracies and non-democracies, the crucial difference is that the pro-opposition ballot box opinions in non-democracies are more likely to be stifled – either due to them not casting genuine votes or electoral fraud – as compared to the same votes in democracies. This implies that in non-democratic countries, urban population voices should be less predictive of official election results which can be tailored to strengthen the perceived popularity of a current dictator.

Hypothesis 2B: “Urban population opinions are less predictive of election results in Russia than in the US.”

Opposition and/or underdog candidates tend to use social media as a campaigning tool more often and intensively as compared to established candidates since they have limited access to other official channels either due to the lack of sufficient resources or because of artificially created hindrances (Auter and Fine 2017, 186). I argue that both of the factors above can be present in non-democratic countries. Firstly, raising funds that can be spent for media time and campaigning is arguably harder for opposition forces since those parties tend to be noticeably smaller when compared to the large ruling party. Secondly, non-democracies are more likely to pressure and harass opposition forces which can include directly banning them from running for seats in official capacity. Specifically, in the context of Russia this can be exemplified by Alexey Navalny’s case. Meanwhile, ruling parties and/or coalitions in democracies are less known for strongly and directly hindering other parties’ access to traditional media, mostly opting for legal mechanisms such as changing campaign financing laws (Katz and Mair 1995). These factors create a situation wherein non-establishment candidates in non-democracies have more incentives to use social media platforms, while their establishment counterparts have the capacity to be more present in traditional media outlets. As a result, social media is likely to have a higher-than-expected (based on the final distribution of votes) saturation of opposition tweets.

This pattern is likely to be not as visible in democracies since non-establishment candidates are not “cornered” into social media. This means that there is no expected “oversaturation” of opposition or non-mainstream candidates in online spaces in democratic contexts. As an outcome, candidate tweets are less predictive of election results in non-democratic countries due to a more disproportionate distribution of establishment vs non-establishment voices.

Hypothesis 2C: “Twitter activity of candidates is less predictive of election results in Russia than in the US.”


Chapter 4: Methodology

To analyse the ways in which predictive power of social media might differ across regimes, I collect Twitter data concerning electoral sentiments of users in the United States and the Russian Federation. The data is collected automatically for several presidential election cycles. Several variations of filtering are used: politician vs. non-politician tweets; cumulative number of tweets about a politician vs. cumulative number of users who tweet about a politician; etc. These variations are outlined in detail in the following sections. The collected data is analysed to make predictions using two different approaches: volumetric analysis, which accounts for the volume of data on a politician to predict their performance, and sentiment analysis, which assesses sentiment polarities (on a positive to negative spectrum) for each candidate. For Hypothesis 2, the data is filtered to exclude or only include a certain subset of tweets to gauge whether this has an asymmetric effect on predictions when comparing the results for the two countries. The next sections will discuss the main measurements, operationalizations, data collection and analysis methods in more detail.

1. Data collection

1.1 Scope of data collection

I go through several election cycles for both countries to increase the number of predictions, which should improve the overall accuracy of comparisons, as it makes it possible to isolate specific elections that could be considered outliers due to special or unprecedented circumstances. The five elections covered in this research are the Russian presidential elections of 2012 and 2018 and the US presidential elections of 2012, 2016, and 2020. The tweets are collected over a one-day window preceding each election. In other words, if the voting day is November 3, then the data only includes tweets from 0:00 to 24:00 on November 2. This choice is motivated by two factors. One, this approach reportedly grants more accurate results (Dwi Prasetyo and Hauff 2015). Two, it reduces the amount of data that needs processing. The data was collected during February 2022.

For the purposes of this study, no distinction between potential bot accounts and real user accounts is made, for two reasons. One, since most existing methods for flagging bot accounts are not completely reliable in and of themselves, there is a potential for data loss. This is especially pertinent since the two countries in question can have somewhat different online contexts, i.e., bot behaviour might be dissimilar. Two, the userbase density across the two countries – demonstrated below in Figure 2 – is roughly similar, which means that the potential effects bots might have on predictions can be relatively close, which keeps comparisons of the prediction-making techniques themselves feasible.


1.2 Data collection technique

I collect all the data by web-scraping Twitter. The Twitter connection is established using the snscrape module (JustAnotherArchivist 2018), which allows me to deploy a self-written Python script to collect individual tweets and the metadata pertaining to those tweets, filtered by search words and timeframes. The keyword search is an AND-type search as opposed to an OR-type search, i.e., the results must contain all of the indicated words and not just one or some of them. The search words are the names and surnames of the candidates relevant to each election cycle. For example, the list of keywords for the 2012 US presidential elections includes “Barack Obama”, “Mitt Romney”, “Gary Johnson”, “Jill Stein”, and other nominees. The Appendix demonstrates the full list of candidates included in this thesis.

The collected data includes the following fields (a scraping sketch follows the list):

Number: tweet ID

Timestamp: time the tweet was posted

Text: tweet texts

Text: username of the person who posted the tweet

Text: display name of the person who posted the tweet

Text: self-reported user location

Number: number of followers a user has

Binary: whether a user’s account is verified

Text: name of the place where the tweet was posted from

Text: name of the country where the tweet was posted from

Text: country code of the country where the tweet was posted from

Text: URL of the tweet

Number: number of likes

Number: number of times a tweet was retweeted

Number: number of replies

Number: number of times a tweet was quote tweeted

Number: word count of the tweet

Text: tags in the tweet
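To make the collection step concrete, the sketch below shows how a query of this kind could be issued through snscrape’s Python interface and how the fields listed above map onto tweet attributes. It is a minimal illustration rather than the exact script used in the thesis: the attribute names (content, user.verified, likeCount, etc.) follow the snscrape interface as of early 2022 and may differ in other versions, and the output file name is hypothetical.

```python
# Minimal collection sketch (assumed snscrape interface, circa 2022).
# The query ANDs the candidate's name and surname and restricts the window
# to the day before the election; "tweets_obama_2012.csv" is hypothetical.
import csv
import snscrape.modules.twitter as sntwitter

query = "Barack Obama since:2012-11-05 until:2012-11-06"

with open("tweets_obama_2012.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["id", "date", "text", "username", "displayname",
                     "location", "followers", "verified",
                     "likes", "retweets", "replies", "quotes", "url"])
    for tweet in sntwitter.TwitterSearchScraper(query).get_items():
        writer.writerow([tweet.id, tweet.date, tweet.content,
                         tweet.user.username, tweet.user.displayname,
                         tweet.user.location, tweet.user.followersCount,
                         tweet.user.verified, tweet.likeCount,
                         tweet.retweetCount, tweet.replyCount,
                         tweet.quoteCount, tweet.url])
```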

2. Measurements and operationalizations

There are two main concepts that need to be operationalized: predictive power and real election values. Predictive power is measured by the MAE, which stands for mean average error. The MAE is the mean of the absolute differences between the predicted value and the real value for each candidate, as formulated below.

$$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| P_i - R_i \right|$$

where $P_i$ is the predicted vote share of candidate $i$, $R_i$ is that candidate’s real (official) vote share, and $n$ is the number of candidates.

The real value for each candidate is measured not by the end result of the elections but by the popular vote, to account for different electoral systems across countries. Electoral systems mediate the relationship between people’s votes and the official end tally. For example, in the case of the US, this mediating force is the electoral college and the winner-takes-all voting system, which can grant a win to a candidate even if they do not win the popular vote. Russia, on the other hand, employs a two-round majoritarian system. These differences in electoral systems can introduce an additional mediating variable. By only accounting for the popular vote, this variable and its effects are circumvented, as online sentiments are more closely associated with the popular vote. The data is taken from official governmental sources for each country: the Federal Election Commission for the US (https://www.whitehouse.gov/about-the-white-house/our-government/the-legislative-branch/) and the Central Election Commission for Russia (http://duma.gov.ru/en/duma/about/history/information/). The data is explicitly taken from the official sources – which may or may not represent the popular sentiments of the people, due to various reasons such as electoral fraud – since the goal of this thesis is not to gauge people’s actual preferences but to infer the electoral outcomes from the available online social media data and in relation to regime types.
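As a worked illustration of the error metric, the following minimal Python function computes the MAE between predicted and official popular-vote shares; the candidate labels and numbers are invented for the example.

```python
def mean_absolute_error(predicted, official):
    """MAE between predicted and official vote shares (dicts keyed by candidate)."""
    errors = [abs(predicted[c] - official[c]) for c in official]
    return sum(errors) / len(errors)

# Toy example with invented shares (in percent):
predicted = {"Candidate A": 58.0, "Candidate B": 30.0, "Candidate C": 12.0}
official  = {"Candidate A": 63.6, "Candidate B": 24.4, "Candidate C": 12.0}
print(mean_absolute_error(predicted, official))  # (5.6 + 5.6 + 0.0) / 3 ≈ 3.73
```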

3. Data analysis

3.1 Making predictions

I use the collected data to make the relevant election predictions by using volumetric analysis and sentiment analysis. As stated in the literature review, volumetric analysis is based on the volume of information on a candidate to estimate their vote share, while sentiment analysis focuses on the userbase’s opinion of any given candidate. This research employs the dictionary-based sentiment analysis approach, which uses a pre-made dictionary with word encodings (along the positive – negative dimension) and marks the tweets based on that. In other words, a single tweet can be marked on a numeric scale (which can end up negative or positive, e.g., “-5” or “+7”) by combining the sentiment marks of its individual words, based on the marks those words receive in the dictionary if it contains them.
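As a toy illustration of this marking scheme (the lexicon entries below are invented for the example; the actual dictionaries used are named later in this section), a tweet’s score is simply the sum of the marks of those of its words that appear in the lexicon:

```python
# Toy polarity lexicon; words absent from the lexicon contribute nothing.
toy_lexicon = {"great": 2, "hope": 1, "corrupt": -3, "weak": -1}

def tweet_score(tokens):
    """Sum the dictionary marks of the tweet's tokens."""
    return sum(toy_lexicon.get(token, 0) for token in tokens)

print(tweet_score(["great", "debate", "tonight"]))          # 2
print(tweet_score(["corrupt", "and", "weak", "campaign"]))  # -4
```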

In creating the volumetric predictions, there are several variations that I use (a combined sketch of all three variations follows the list):

1. Counting the overall number of tweets about a politician.

This is the simplest way to evaluate the volume of discourse around a given candidate.

Theoretically, this approach can be linked to the name recognition effect whereby – all other factors equal – people tend to vote for candidates that they recognize at least in name (Bright et al. 2019; Zajonc 1968). The logic goes that the more people talk about a certain candidate, the more public exposure the said candidate amasses. Note that there are no controls for tweet density across users, i.e., it might be the case that a sizeable portion of the discourse is produced by a small group of people.

2. Counting the number of retweets and quote tweets in addition to the overall number of tweets about a politician.

Adding these variables captures the level of exposure a politician garners more fully, which can potentially enhance the previous approach. The theoretical backing is the same: more people seeing others talking about a politician facilitates the name recognition effect.

3. Counting the number of users who tweet about a politician.

This approach only counts the raw number of users who tweet about a certain politician to produce their predicted share of votes. This is done in the same vein as Dwi Prasetyo and Hauff (2015) who hypothesize and confirm that tweets themselves should not be considered “votes” but rather the number of people should be considered an indication of the potential number of voters who intend to support a certain candidate and/or party.
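The three counting rules above can be sketched compactly against a pandas table of collected tweets; the column names used here (candidate, username, retweets, quotes) are assumptions made for illustration rather than the thesis’s actual variable names.

```python
import pandas as pd

def volumetric_shares(df, method):
    """Predicted vote shares (percent) under the three volumetric variations.
    Assumes columns 'candidate', 'username', 'retweets', 'quotes'."""
    if method == 1:        # variation 1: raw number of tweets per candidate
        counts = df.groupby("candidate").size()
    elif method == 2:      # variation 2: tweets plus retweets and quote tweets
        counts = (df.groupby("candidate")
                    .apply(lambda g: len(g) + g["retweets"].sum() + g["quotes"].sum()))
    elif method == 3:      # variation 3: unique users tweeting about each candidate
        counts = df.groupby("candidate")["username"].nunique()
    return 100 * counts / counts.sum()
```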

For dictionary analysis, the tweets are tokenized and lemmatized, and punctuation marks are removed, as per standard language processing procedures, in order for tweet contents to better match dictionary words. Tokenization refers to the process of segmenting sentences into individual words, i.e., “tokens”. For instance, the sentence “They voted for the second candidate” would be tokenized into “They”, “voted”, “for”, “the”, “second”, and “candidate”.

Lemmatization reduces those tokens into their base forms by conducting a morphological analysis. For example, “voted” would be turned into “vote”. These procedures are carried out by ready-for-use Python functions and do not need to be done from scratch.
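One plausible preprocessing chain for the English-language tweets is sketched below with NLTK; the thesis only states that ready-made Python functions are used, so the specific library is an assumption, and the Russian data would go through an analogous morphological analyser.

```python
# Assumed toolchain: NLTK tokenization and WordNet lemmatization.
# Requires one-off downloads: nltk.download("punkt"), nltk.download("wordnet").
import string
from nltk.tokenize import word_tokenize
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

def preprocess(text):
    tokens = word_tokenize(text.lower())
    tokens = [t for t in tokens if t not in string.punctuation]  # drop punctuation
    return [lemmatizer.lemmatize(t, pos="v") for t in tokens]    # reduce to base forms

print(preprocess("They voted for the second candidate."))
# ['they', 'vote', 'for', 'the', 'second', 'candidate']
```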

Net sentiments for each candidate are used to predict their share of votes. In other words, negative sentiments are subtracted from positive sentiments to produce net sentiments. I use separate dictionaries for Russian language data and English language data to mark the sentiments. SentiWordNet (Baccianella, Esuli, and Sebastiani 2010) is used for tweets in English and Sentimental (Rumshisky et al. 2017) for tweets in Russian. Admittedly, one might consider using two separate dictionaries a validity threat since no two dictionaries are the same in their quality and concepts they capture. Yet there are several factors motivating this choice as opposed to using one dictionary and translating the tweets or translating the whole dictionary. Firstly, there is a valid concern that machine translation of both the tweets and dictionaries might not be accurate which would harm the quality of predictions for one country while keeping the other intact. This issue is specifically relevant for translating tweets as online slang might add a level of unpredictability and complexity. Secondly, the type of polarity marking needed for this research, i.e., positive to negative, is supposedly simple, field-independent, and non-specific enough that it should be conceptualized in the same way for both languages.
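The aggregation step can then be sketched as follows: for each candidate, positive and negative tweet scores are summed into a net sentiment, and net sentiments are normalized into vote-share-like percentages. Flooring a net-negative total at zero is my own simplifying assumption, since the thesis does not spell out how such a case is handled.

```python
def sentiment_shares(scores_by_candidate):
    """scores_by_candidate: dict mapping candidate -> list of per-tweet polarity scores."""
    net = {}
    for candidate, scores in scores_by_candidate.items():
        positive = sum(s for s in scores if s > 0)
        negative = sum(s for s in scores if s < 0)    # already negative
        net[candidate] = max(positive + negative, 0)  # floor at zero (assumption)
    total = sum(net.values())
    return {c: 100 * v / total for c, v in net.items()} if total else net

print(sentiment_shares({"A": [2, 3, -1], "B": [1, -1, 1]}))
# {'A': 80.0, 'B': 20.0}
```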


3.2 Moderators

To check the effects of the moderator variables, an additional group of predictions is created by filtering the data on those moderators. The following summarizes, for each hypothesis, the filtering procedure, the motivation behind it, and the expected result (a sketch of the filters in code follows).

Hypothesis 2A

Filter: Remove tweets from verified accounts. Verified accounts are profiles marked as verified by Twitter staff; they most often belong to high-profile figures, companies, or organizations and, as an indicator, carry a signature tick mark at the end of the user’s display name.

Motivation: Removing tweets from verified accounts reduces their weight to zero, which makes it possible to isolate the effect that verified account tweets have on the general baseline results. Verified accounts are used as a proxy for public figures and organizations, as two conditions are met: (1) non-anonymity and (2) public visibility, since those accounts tend to have a higher following.

Expected result: On a positive to negative axis, the MAE change is located higher for Russia than for the US. Since public figure and organization tweets are more likely to be in line with official election results, the new Russian dataset is more likely to increase the MAE when compared to the changes in the US dataset.

Hypothesis 2B

Filter: Include only tweets from the five largest cities by population in each country, based on the self-reported locations of users. For the US, those are New York, NY; Los Angeles, CA; Chicago, IL; Houston, TX; and Phoenix, AZ. The list for Russia contains Moscow, Saint Petersburg, Novosibirsk, Yekaterinburg, and Nizhny Novgorod. The dataset is hand-filtered using self-reported user locations.

Motivation: Including only the five largest cities grants absolute weight to those cases, which makes it possible to isolate the difference between the general population results demonstrated in the Hypothesis 1 baseline and – by extrapolation – urban areas.

Expected result: On a positive to negative axis, the MAE change is located higher for Russia than for the US. In other words, the new Russian dataset is more likely to raise the MAE when compared to the changes in the US dataset.

Hypothesis 2C

Filter: Remove tweets posted by the election candidates themselves. That is to say, tweets from the presidential candidates of each respective election cycle are not counted when making predictions.

Motivation: Removing tweets from candidates reduces their weight to zero, which makes it possible to isolate the effect that their tweets have on the general baseline results.

Expected result: On a positive to negative axis, the MAE change is located lower for Russia than for the US. In other words, the new Russian dataset is more likely to reduce the MAE when compared to the changes in the US dataset.
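The three filters can be expressed compactly against the collected tweet table. The sketch below uses pandas with assumed column names (verified, user_location, username) and an invented handle list, and it approximates with a keyword match the location filter that the thesis applies by hand.

```python
import pandas as pd

df = pd.read_csv("tweets.csv")  # hypothetical file with the fields listed in Chapter 4

# Hypothesis 2A: drop tweets from verified (public figure / organization) accounts.
h2a = df[~df["verified"]]

# Hypothesis 2B: keep only tweets whose self-reported location names one of the
# five largest cities (the thesis filters these by hand; keyword matching is
# shown here purely for illustration).
cities = ["New York", "Los Angeles", "Chicago", "Houston", "Phoenix",
          "Moscow", "Saint Petersburg", "Novosibirsk", "Yekaterinburg", "Nizhny Novgorod"]
pattern = "|".join(cities)
h2b = df[df["user_location"].fillna("").str.contains(pattern, case=False)]

# Hypothesis 2C: drop tweets authored by the candidates' own accounts
# (handles below are an invented placeholder list).
candidate_handles = {"CandidateHandle1", "CandidateHandle2"}
h2c = df[~df["username"].isin(candidate_handles)]
```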

Chapter 5: Results

1. Hypothesis 1

1.1 Descriptive statistics

In total, 725,695 tweets were collected across the two countries. The distribution in volume between the two countries is demonstrated in Figure 1 – where the Y axis represents the number of tweets, and the X axis represents the country – and shows an expected major swing towards the US, where Twitter is more widely used for political discussion. There are 722,654 tweets about the US election candidates, compared to 3,041 for Russia. However, it is worth noting that only two Russian elections were analysed compared to three United States elections.

The platform reach of tweets is higher in the US as demonstrated in Figure 2. The X axis represents the type of meta – likes, retweets, replies, quote retweets, and unique users – grouped by country, while the Y axis represents the quantity of those metrics per tweet. For instance, it includes the number of likes per tweet for the US election tweets as compared to the same metric in Russia. Hence, this figure allows to see a comparative level of user engagement between the two countries. There are ~5 times more likes per tweet for the US elections which means that – given how Twitter flags tweets liked by accounts followed by a user on their timeline – potentially 5 times more people can see each tweet. There are roughly 3 times more retweets, ~2 times more replies, and ~5 times more quote retweets per tweet in the case of US elections.

These metrics are perhaps conditioned by the larger userbase following the US elections on Twitter compared to Russia. That being said, the US election tweets are slightly more concentrated across the userbase, with ~0.5 unique users per tweet, while the Russian election tweets average ~0.7 unique users per tweet.
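For reference, the per-tweet engagement figures described here can be reproduced from the combined tweet table with a short pandas aggregation; the column names are assumptions, as above.

```python
import pandas as pd

# Hypothetical combined table with columns:
# 'country', 'likes', 'retweets', 'replies', 'quotes', 'username'.
df = pd.read_csv("tweets.csv")

per_tweet = df.groupby("country")[["likes", "retweets", "replies", "quotes"]].mean()
per_tweet["users_per_tweet"] = (df.groupby("country")["username"].nunique()
                                / df.groupby("country").size())
print(per_tweet)
```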


Figure 3 and Figure 4 represent the total metadata volumes in each country grouped by year. The X axis represents the metadata type grouped by year, while the Y axis represents the quantity of those metrics. Not surprisingly, comparing the two figures, it can be seen that the total engagement metrics – i.e., the number of likes or retweets – are dramatically higher for the US. Interestingly, while the volume of tweets rises with each consecutive election in the US (Figure 4), the same is not true for Russia, where the 2012 elections saw a higher number of tweets (Figure 3). At the same time, the engagement statistics are still higher during the 2018 elections, with the numbers of likes, retweets, replies, and quote tweets all being greater. This could mostly be explained by the high number of Vladimir Putin-related tweets in 2012, as there are 1,180 tweets on him that year compared to 699 in 2018.


1.2 Predictions

Figures 5 to 9 below demonstrate the prediction results for each election, divided by country and year. The four bars for each candidate reflect the four prediction-making methods outlined in the chapter above: volumetric based on the raw number of tweets; volumetric based on the number of tweets, retweets, and quote tweets; volumetric based on the number of unique users; and the dictionary-based sentiment analysis prediction method. Each figure plots those prediction values for the candidates of the respective election cycle on the Y axis, while the star symbol stands for the official share of votes. In other words, the figures allow the predicted value for each candidate to be compared with the official vote share they received.
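To make the volumetric logic concrete, the sketch below shows one plausible way of turning the collected tweets into predicted vote shares for methods 1 to 3, namely as each candidate's share of the chosen volume metric. Column names such as candidate, retweet_count, quote_count, and author_id are assumptions for illustration, and the dictionary-based method 4 is not shown.

import pandas as pd

def volumetric_prediction(tweets: pd.DataFrame, method: int) -> pd.Series:
    # Predicted vote share per candidate as a percentage of the chosen volume.
    grouped = tweets.groupby("candidate")
    if method == 1:
        # Method 1: raw number of tweets mentioning each candidate.
        volume = grouped.size()
    elif method == 2:
        # Method 2: tweets plus their retweets and quote tweets.
        volume = grouped.size() + grouped["retweet_count"].sum() + grouped["quote_count"].sum()
    elif method == 3:
        # Method 3: number of unique users mentioning each candidate.
        volume = grouped["author_id"].nunique()
    else:
        raise ValueError("Only methods 1-3 are sketched here.")
    return 100 * volume / volume.sum()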


1.3 Prediction errors

Figure 10 summarizes the MAE for each election cycle across the four prediction methods. The X axis represents the prediction methods grouped by election cycles, while the Y axis represents the MAE. On average, the MAE is lower for the US elections for every method of prediction except the dictionary-based sentiment analysis. This means that the predictions – except the sentiment-based ones – are on average more accurate for the US than for Russia.

Notably, both countries see an increase in MAE with each consecutive election cycle, i.e., earlier elections are easier to predict in the given sample.
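For reference, the MAE reported throughout is understood here as the average absolute gap between the predicted and the official vote shares over the candidates of an election cycle; a standard formulation (restated rather than quoted from the thesis) is:

\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| \hat{s}_i - s_i \right|

where \hat{s}_i is the predicted vote share for candidate i, s_i is the official vote share, and n is the number of candidates in the cycle.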


Figure 11 summarizes the MAE for all methods without dividing by country. The X axis represents the prediction methods, while the Y axis represents the MAE. As can be seen, there are noticeable differences between the predictions based on the distinct methods. Prediction method 2, based on the number of tweets, retweets, and quote tweets, has the lowest MAE at 4.76%. Method 1, which only utilizes the number of tweets, has an MAE of 6.33%; the unique user count-based method 3 stands at 8.04%; and the sentiment analysis-based method 4 has the highest MAE of 9.58%.


2. Hypothesis 2

2.1 Hypothesis 2A

Figures 12 and 13 summarize the results for Hypothesis 2A. Prediction errors of the new dataset, which does not contain verified accounts, are demonstrated in Figure 12. The figure is structured in the same way as Figure 10, i.e., the prediction methods grouped by election cycles are shown across the X axis, while the Y axis represents the MAE. Comparisons of these results and the baseline results from Hypothesis 1 summarized in Figure 10 are shown in Figure 13, grouped by country. The X axis in Figure 13 represents the prediction methods grouped by countries – as opposed to by election cycles – to provide an easy country-to-country comparison. The Y axis represents the MAE change from the baseline results, which can be either positive or negative. A positive MAE change indicates an increase in error, while a negative change means that the predictions become more accurate.

The changes are mostly small and kept below 1% except for two cases: a drastic 2.5% increase for tweet and retweet/quote tweet-based predictions in the US and a slight MAE improvement for sentiment-based predictions in Russia. On average, the US election predictions see a higher increase in MAE – i.e., the predictions become less accurate – when removing tweets from verified accounts.


2.2 Hypothesis 2B

Figures 14 and 15 summarize the results for Hypothesis 2B. Figure 14 represents the MAE across elections when only accounting for the five largest cities in each country. The figure is organized in the same manner as Figure 10 and Figure 12, except that the predictions are based not on the whole dataset but only on the tweets posted by users in the five largest cities of each country. The X axis indicates the prediction methods grouped by election cycles, while the Y axis shows the MAE. Figure 15 demonstrates the difference between predictions based on the full scope of the data and those made only accounting for large cities. In other words, it compares the results of Figure 10 and Figure 14, averaged by country in the same way as Figure 13.

As can be seen, the positive MAE change following the major-cities selection is higher for the US than for Russia. In the US, the volumetric prediction results see an almost 4% increase, while the increase is about ~7% for the sentiment analysis-based predictions. On the other hand, the MAE increases for volumetric predictions are around ~1% for Russia, where the sentiment analysis-based predictions actually improve, with the MAE slightly dropping. This trend is somewhat similar to what is demonstrated in Figure 13 above.


2.3 Hypothesis 2C

Figures 16 to 20 demonstrate the number of tweets posted by the candidates themselves on the day before the voting day of their respective election cycles. In every figure, the X axes represent the presidential candidates, while the Y axes show the number of tweets by those candidates found in the initial dataset used for the baseline predictions of Hypothesis 1. Only candidates that had personal Twitter accounts during that timeframe are displayed.

In general, more candidate tweets were registered for the US than for Russia. However, when comparing the frequency of candidate tweets among all tweets, the figure for Russia is much higher: ~0.01 against ~0.0002. That being said, this relative activity of Russian election candidates can be mainly attributed to one candidate – Sergey Mironov – who put out 34 tweets during the day before the 2012 presidential elections. Notably, there is no Twitter activity from Vladimir Putin during either election cycle in Russia, while, say, Barack Obama, who was the incumbent at the time, did show Twitter activity during the 2012 elections. In general, the US candidate activity is more diverse, as both major-party and minor-party candidates participated.
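As a quick worked check of the Russian figure, Mironov's 34 tweets alone against the 3,041 collected Russian tweets already give 34 / 3041 ≈ 0.011, which is consistent with the reported ~0.01 and underlines how much of the candidate activity in Russia comes from a single account.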


Figure 21 represents the MAE of predictions based on the datasets with candidate tweets filtered out. The prediction methods grouped by elections are indicated by the X axis. The Y axis, again, represents the MAE for each method. Figure 22 compares the average MAE by country between Figure 21 and Figure 10. As in the previous comparison figures – Figure 13 and Figure 15 – a positive change means that the MAE is increasing, i.e., the predictions are getting less accurate, and vice versa for a negative change.


As demonstrated by Figure 22, the change is not large – mostly below 1% in both the positive and the negative direction – yet it runs in opposite directions for the two countries. While the predictions for Russia improve, the predictions for the US become less accurate. There is an almost imperceptible change in the tweet count-based and unique user count-based predictions for Russia, while the other two methods see more prominent drops in MAE. For the US, the sentiment analysis-based results remain virtually unchanged, whereas the retweet/quote tweet-based predictions see an increase in MAE of more than 1%.


Chapter 6: Discussion

1. Hypothesis 1

The results are generally in line with the hypothesis: MAE is lower for the US for all methods except the dictionary approach. The reasons for this inconsistency could be two-fold.

First, this could be the result of using two separate dictionaries for Russian and English. If one is better suited to capturing the specific language used in election discourse, then differences in predictions are to be expected. However, this does not necessarily explain the heavy swing towards a high MAE for the US case specifically. Another reason could be the disproportionate negative messaging in the US. Especially in the latest two elections, the leading candidates of the Republican and the Democratic parties garner significantly more negative comments than positive ones. When comparing the net sentiments, candidates from smaller parties are more likely to be on the net positive side, while the two leading candidates can fall on the net negative side. This makes the internal logic used in this thesis – that the more people see a candidate in a positive light, the more likely they are to win – inapplicable to the specific framework of US politics, which appears to rely heavily on negative messaging. The same observation does not hold for Russia, where the negative messaging trend is not prominent. This distinction in political contexts makes the two countries somewhat incomparable when it comes to sentiment analysis, so perhaps using different methods of handling sentiment-based predictions would be recommended.
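One plausible way of computing the net sentiment compared in this passage is the count of positive tweets minus the count of negative tweets per candidate. The sketch below is a minimal illustration assuming a sentiment column already produced by the dictionary-based classifier; the labels and column names are assumptions, not the thesis pipeline.

import pandas as pd

def net_sentiment(tweets: pd.DataFrame) -> pd.Series:
    # Net sentiment per candidate: positive tweet count minus negative tweet count.
    # Assumes a sentiment column with values "positive" / "negative" / "neutral".
    pos = tweets[tweets["sentiment"] == "positive"].groupby("candidate").size()
    neg = tweets[tweets["sentiment"] == "negative"].groupby("candidate").size()
    return pos.subtract(neg, fill_value=0).sort_values(ascending=False)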

Other than that, all three methods of volumetric analysis perform better for the US than for Russia. While the differences in mean MAE between the countries are about ~2.5% for the second and third methods, a notable contrast can be seen for the first method – pure tweet number-based predictions. The difference in MAE there is only about ~0.7%, with Russia being the less accurate one. This implies that the retweeting/quote tweeting dynamics, as well as the general density of the userbase – i.e., fewer people producing the same number of tweets in the US when compared to Russia – differ across the two countries.

One final side observation of this thesis is that the second method of volumetric analysis – the number of tweets coupled with the number of RTs/QRTs – yields the most accurate results across both countries. Adding RTs and QRTs improves upon predictions based on the pure number of tweets.
