An estimated one out of every four tweets comes from a bot, a robot account that uses software to automatically tweet messages at certain users. Bots are often used for advertising, spreading pro-Justin Bieber messages or just for laughs, but they have also ventured into the realm of political communication. As bots become more “social,” or more able to convincingly mimic human conversation, their use by government actors has become increasingly controversial.
Think astroturfing on steroids. Lobbyists, politicians and activists have long attempted to present their campaigns as genuine grassroots efforts, but bots take it to a whole new level. By boosting follower counts or creating trending topics, bots can manufacture a storm of “public support” that can be difficult to ignore. They have the ability to reach thousands of people at a very low cost, and the huge amount of personal data people share on social media sites allows bots to take a very targeted approach when spreading their messages.
Oppressive governments, such as those of Syria, China and Venezuela, use bots to create noise around “dangerous” hashtags that could be used to organize a protest. They simply overwhelm the hashtags with pro-government tweets, making it almost impossible to follow the conversations between actual users. Another method of hijacking social media conversations has been the bot’s low-tech cousins, “50 cent armies,” huge groups of people working for tiny salaries to spread pro-government comments around the internet.
Philip Howard, a professor in communication and international studies at the University of Washington, recently published a column in Slate arguing that it was time for the United States to recognize that bots are the wave of the future in mass communication. He proposed embracing bot technology to spread pro-democracy messages and combat the pro-government messages being spread by anti-democratic regimes’ bots.
He suggests using social media data to find the most receptive targets and share links to stories that promote democratic ideals. Rather than targeting hardcore pro-government supporters, he advises sending “links about life in countries with peace, order, and good governance to moms blogging about their parenting troubles, students getting caught up in the Eurovision contest and government workers reading online news from sources outside their country.” He envisions Twitter bots as a news source that will critique dictators, provide information to citizens abroad and share upbeat stories about life in democracies.
A few days later, Jillian York, director for international freedom of expression at the Electronic Frontier Foundation, posted a critical response to Howard’s column on Slate, noting that removing all of the social elements from a Twitter feed seems to completely defeat the purpose of using “social” media. Especially when dealing with an issue as complex and emotionally charged as democracy, it seems that real interpersonal communication and authentic dialogue, not an automated propaganda machine, would be necessary to truly impact opinions.
However, a recent study by Aiello et al. (2012) has shown that social media bots actually have the potential to be surprisingly popular and influential online. The study measured a bot’s ability to influence social media users based on the three building blocks of opinion formation and information spreading: trust, popularity and influence. High levels of activity, such as sending out a large number of messages, helped the bot achieve popularity, as measured by its number of followers. While one might expect influence to be closely related to trust, the bot was able to achieve high levels of influence, measured by users accepting its suggestions for friends and clicking on its links, even though it lacked any identifying information, such as a profile description, that might help foster some sense of relatability or trust. In fact, even though the bot only used very simple automated activity, it was frequently mistaken for a real person. Similar results have been found by the Web Ecology Project (2011), which demonstrated the relative ease with which bots infiltrate and influence social networks.
With this in mind, it seems possible that pro-democracy Twitter bots could, at the very least, serve as a useful intermediary. While 140 automated characters is certainly not enough space to sell an idea as complicated as democracy, a bot could direct users to valuable resources and increase their exposure to American values. Aiello et al. and the Web Ecology Project’s findings seem to validate Howard’s suggestion that pro-democracy bots could be used as effective and influential news transmitters. While York argues that such services are already provided on Twitter feeds run by the Voice of America and Radio Free Europe/Radio Liberty, it seems that bots could help these articles reach a much wider audience. Thanks to their sophisticated targeting capabilities, bots could help pro-democracy messages find their way to users who haven’t yet discovered accounts like Voice of America, or who are afraid to “follow” such accounts in countries that monitor social media activity.
While there seems to be at least some potential for bots to influence the US government’s target audiences, they are not a risk-free strategy. As mentioned by Aiello et al. (2012), once users discovered that they were interacting with a bot instead of a person, they sometimes reacted with anger, concern about privacy and a fear of being controlled. If pro-democracy Twitter bots were exposed as automated accounts rather than real users, the strategy could backfire and spawn negative feelings, rather than the warm, fuzzy associations with democracy that the bots were designed to inspire.
Even more concerning is the possibility that targeting users for pro-democracy bot messages could draw attention to potential supporters and put them in danger, especially in countries where online dissidents are arrested or subjected to intimidation tactics.
Both Howard and York fail to mention that, at least as far as the US government is concerned, the debate on bots has already been decided. Back in 2011, the Anonymous hacking group released a solicitation of bids by the United States Air Force for “Persona Management Software,” a program that would allow the government to create fake identities on social-networking sites, collect real users’ data and then use that data to gain credibility and circulate promotional messages in Iraq and Afghanistan. Limited information is available on the program’s actual implementation and results, but given the ongoing instability in the region, it appears that while bots may have potential, they still have a lot of work to do.