Of social media and political manipulation

semanticscholar (2013)

Abstract
With the exploding popularity of online social networks and microblogging platforms, social media have become the turf on which battles of opinion are fought. This section discusses a particularly insidious type of abuse of social media, aimed at the manipulation of political discourse online. Grassroots campaigns can be simulated using techniques that have come to be known as astroturf, with the goal of promoting a certain view or candidate, slandering an opponent, or simply inciting or suppressing the vote. Such deception threatens the democratic process. We describe various attacks of this kind and a system designed to detect them.

6.3.1 The Rise of Online Grassroots Political Movements

The 2008 presidential election will go down in history as the first to be dominated by grassroots movements organized and coordinated online. The ultimate success of Senator Obama's campaign was due in no small part to its pioneering use of social media. An approach of direct dialog with his grassroots supporters captivated and connected with untapped layers of society and initiated a new era of political participation in American politics. On the other side of the aisle, the aftermath of the election brought about a reaction culminating in the Tea Party movement [397]. In both cases, it was clear that citizens would no longer be content as passive targets of political messages. They demanded an increased role in defining political discourse.

As individuals gradually turn to the Internet in search of political and economic information, they naturally use existing social networks and platforms to discuss their views and ideals with their peers. Microblogging tools such as Twitter play an important role in this movement by allowing individuals to act as media aggregators and curators who are able to influence their followers and who are, in turn, influenced by the people they elect to follow. Over time, trust develops between followers and followees, making followers more likely to accept content and information provided by the accounts they follow.

Perhaps the most striking demonstration of the relevance of this type of discourse, and of how aligned it is with public opinion at large, can be found in a 2010 paper by Tumasjan et al. [470]. By analyzing over 100,000 tweets containing direct political references to parties or politicians in the run-up to the 2009 German federal election, they found that the fraction of Twitter activity corresponding to each party closely matched the parties' vote shares in the final election results. If this result could be generalized, it would imply that Twitter can serve as a real-time monitor of public sentiment, and on this basis Tumasjan et al. proposed that Twitter be used as a distributed, real-time campaign-monitoring tool.
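The comparison underlying this finding is simple to state: compute each party's share of political tweet mentions and measure how far those shares deviate from the official vote shares. The sketch below illustrates the idea in Python; the party names, tweet texts, vote figures, and substring matching are placeholders for illustration, not the data or exact methodology of Tumasjan et al.

```python
from collections import Counter

def tweet_shares(tweets, parties):
    """Fraction of party mentions attributable to each party."""
    counts = Counter()
    for text in tweets:
        for party in parties:
            if party.lower() in text.lower():
                counts[party] += 1
    total = sum(counts.values()) or 1
    return {party: counts[party] / total for party in parties}

def mean_absolute_error(predicted, actual):
    """Average gap between tweet shares and vote shares."""
    return sum(abs(predicted[p] - actual[p]) for p in actual) / len(actual)

# Placeholder corpus and vote shares, not the study's data.
tweets = ["CDU rally in Berlin", "voting SPD this time",
          "CDU and FDP debate", "SPD on the economy"]
vote_shares = {"CDU": 0.4, "SPD": 0.4, "FDP": 0.2}

shares = tweet_shares(tweets, vote_shares.keys())
print(shares)                                    # {'CDU': 0.4, 'SPD': 0.4, 'FDP': 0.2}
print(mean_absolute_error(shares, vote_shares))  # 0.0 for this toy corpus
```

A faithful replication would need proper entity matching rather than substring search and careful choices about the collection window; the point here is only the shape of the computation.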
If Twitter truly mirrors public perception, then perhaps by manipulating perception within Twitter one can also manipulate it in the real world. This inference has not escaped the attention of institutions and groups interested in promoting specific topics or actions. Mustafaraj and Metaxas [364] studied in detail one such case that occurred during the 2010 special Massachusetts Senate election. They observed how a network of fake, computer-controlled accounts produced almost 1,000 tweets in just over two hours, all containing a specific URL smearing one of the candidates. The goal of the perpetrators was to generate as much traffic as possible, reach a wide audience, and thus influence the outcome of the election. To achieve this, they targeted specific users perceived as influential, in the hope that those users would retweet the URL and thereby bestow upon it an added layer of credibility. Blunt as it was, this attempt was extremely successful, generating so many retweets that the URL briefly appeared on the first page of Google results for the query "Martha Coakley," the name of the smeared candidate.

Coordinated deception of this sort, where a single agent forges the appearance of widespread support for an idea or position, is known as astroturfing, a name that stems from the parallel between fake grassroots movements and the artificial grass commonly used in sports stadiums. In this section, we look in detail at several tactics used to promote misinformation covertly, and at a system that aims to automatically detect and track such attempts.

6.3.2 Spam and Astroturfing

As anyone with an email inbox is well aware, spammers have decades of experience in reaching huge audiences. Their techniques range from simple mass email campaigns to sophisticated schemes that automatically customize each message to evade automated countermeasures. As with other communication media in the past, spammers have descended upon Twitter and adapted their toolbox to this new medium. Many of these techniques and potential countermeasures have been analyzed in detail [113,223,479,499]. Although there appears to be only a limited amount of collusion between spammer accounts [223], in the form of spam campaigns designed to make users click a specific URL, there are specific characteristics that can identify spammer accounts. Defining features include the frequency of tweets, the age of the accounts, and their peripheral position in the social graph [499]. The combination of content and user-behavior attributes makes it possible to train machine learning algorithms to detect spam accounts automatically with high accuracy [113]. This is likely because spam relies on large numbers of accounts controlled by a small number of spammers.

At first glance the goals of spammers and astroturfers might seem similar. Both want to communicate a message to a large audience of users, and both want to effect action (clicks, votes, changes of opinion) in the targeted users. However, there are several fundamental differences between the two types of attacks. Astroturfers, to create the illusion of widespread autonomous support, must retain some degree of credibility and appear independent of commercial or political interests. Likewise, while spammers can use a single account to target many users, astroturfers rely on the fact that users are more receptive to messages they perceive as coming from multiple independent sources. These differences necessitate distinct approaches to the detection problem. Spam detection systems often focus on the content of messages, for instance determining whether a message contains a certain link or set of tags. In detecting astroturf, the focus must instead be on how the message is delivered rather than on its content. The fact that the message is delivered in the guise of legitimate online chatter, rather than as an organized campaign, is more relevant than its veracity. The content may be a legitimate opinion or information resource; the fraud lies not in the content but in the distribution mechanism. Further, many of the users involved in propagating a successful astroturf message may in fact be legitimate users who are unwittingly complicit in the deception, having been deceived themselves. Thus, methods for detecting spam that focus on properties of user accounts, such as the number of URLs in tweets originating from an account or the interval between successive tweets, are likely to fail at detecting astroturf. A normal user may come to believe and disseminate a piece of information that had its origins in a campaign of this type. As more and more normal users join the dissemination of the message, any signal that could be extracted from the properties of the accounts spreading it becomes increasingly muddled.
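To make the delivery-focused view concrete, here is a minimal sketch, under assumed data structures, of how one might inspect the diffusion pattern of a single meme (a URL or hashtag): build its retweet graph and ask how many accounts injected the meme independently versus how much of its spread traces back to a single seed. The function name and edge representation are illustrative assumptions, not the design of the detection system described in this section.

```python
import networkx as nx

def diffusion_stats(retweet_edges, tweeters):
    """Summarize how a single meme spread.

    retweet_edges: (retweeter, original_author) pairs for this meme.
    tweeters: every account that posted the meme at least once.
    Returns the number of independent injection points and the fraction
    of accounts belonging to the largest connected cascade.
    """
    g = nx.DiGraph()
    g.add_nodes_from(tweeters)
    g.add_edges_from(retweet_edges)
    # Accounts that never retweeted anyone injected the meme on their own.
    injectors = [n for n in g if g.out_degree(n) == 0]
    largest = max(nx.weakly_connected_components(g), key=len)
    return len(injectors), len(largest) / g.number_of_nodes()

# Toy cascade: u1 seeds the meme, u2-u5 retweet it; u6 posts it independently.
edges = [("u2", "u1"), ("u3", "u1"), ("u4", "u1"), ("u5", "u2")]
users = {"u1", "u2", "u3", "u4", "u5", "u6"}
print(diffusion_stats(edges, users))  # (2, 0.833...): few injectors, one dominant cascade
```

An organically popular meme tends to show many independent injection points, whereas an astroturfed one is typically seeded by a handful of coordinated accounts whose cascade dominates the graph.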
6.3.3 Deceptive Tactics

Anyone trying to increase their visibility on Twitter has an obvious strategy: create an account, start tweeting, and gradually accumulate followers. However, the egalitarian nature of the platform means that such a user is just one voice in a crowd of millions. When the goal is to be heard no matter the cost, several deceptive tactics can be used to quickly gain a large number of followers and acquire an aura of influence or importance within the community [141].

6.3.3.1 Centrally and computer-controlled accounts

The old adage "nothing attracts a crowd like a crowd" holds true online. Astroturfers take advantage of this fact to catalyze faux grassroots activity by creating the illusion that a large number of people are behind a message or movement. The simplest way to achieve this effect is to create multiple centrally controlled accounts, known as sockpuppets, which simulate several independent actors promoting a coherent message. These accounts can then be used to broadcast a message while seeming independent of one another, or be manipulated to appear socially engaged with one another. The first approach creates the appearance of independent actors responding to an exogenous influence, at the expense of the credibility that comes with a rich social circle. The second approach relies on social expectations to create the appearance of authenticity, at the expense of appearing independent. Common to both approaches is a reliance on a large number of centrally coordinated accounts.

Effective astroturfing at scale requires automation, and Chu et al. [153] studied the behavioral differences between real users and bots on Twitter. They distinguish between two types of bots: "benign" bots, which often self-identify as automated processes and simply relay information from RSS feeds or other automated sources, and "malicious" bots, which spread spam or malicious content while posing as real users. One of the key distinguishing features between humans and bots is that bots tend …
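One behavioral signal frequently used to separate automated accounts from humans is the regularity of their posting times. The sketch below is a minimal illustration of such a timing feature, assuming Unix-timestamp input; it is an assumed example of this class of feature, not necessarily the exact measure used by Chu et al.

```python
import math
from collections import Counter

def interval_entropy(timestamps, bin_seconds=60):
    """Shannon entropy of inter-tweet intervals, binned to the minute.

    Accounts that post on a fixed schedule produce highly regular
    intervals and hence low entropy; human activity tends to be
    bursty and higher-entropy.
    """
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    bins = Counter(int(i // bin_seconds) for i in intervals)
    total = sum(bins.values())
    return -sum((n / total) * math.log2(n / total) for n in bins.values())

bot = [i * 600 for i in range(50)]  # posts exactly every 10 minutes
human = [0, 45, 300, 310, 2000, 2100, 9000, 9050, 20000, 26000]
print(interval_entropy(bot))    # 0.0: perfectly regular, bot-like
print(interval_entropy(human))  # > 0: irregular, human-like
```

In practice a timing feature of this kind would be combined with content and account properties in a classifier, much like the spam-detection approaches cited above.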