Tainted Tweets

Social bots manipulate the stock market, help ISIS recruit terrorists — and could play a role in electing the next president

Illustration: Madelin Lum

In 2014, CYNK Technologies traded at a price so low that it didn’t qualify for listing on the New York, Nasdaq or American stock exchanges. Suddenly, the struggling tech company’s shares surged, rising more than 20,000 percent in a matter of weeks to a high of nearly $22 per share.

Suspicious Securities and Exchange Commission officials suspended trading, resulting in huge losses for CYNK investors. Initially, regulators couldn’t figure out what had created the inexplicable bull market for a company with no revenue. In time, they had their answer: social bots.

Social bots, a type of software robot, automatically produce content and interact with humans on social media platforms like Twitter, often with the goal of influencing opinions and behavior. Once easy to spot, bots have become so complex that experts often cannot distinguish between social media accounts run by humans and those governed by computer algorithms.

Although bots can operate as news aggregators or customer service interfaces, or perform other benign tasks, increasingly they are being used for nefarious purposes, including influencing the stock market, recruiting terrorists and manipulating presidential and other elections, said Emilio Ferrara, a computer scientist at the USC Information Sciences Institute and research assistant professor at USC Viterbi.

“These bots, which spread synthetically generated content on social media, have the potential to undermine the very roots of our information society,” said Ferrara, whose bot detection algorithms, developed with the support of a grant from the Office of Naval Research, perform with an accuracy rate above 90 percent.
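Ferrara’s actual detection system isn’t reproduced here, but published bot-detection research generally works by supervised classification over account-level features. A minimal Python sketch of the idea, with invented feature values and labels standing in for a real training set of accounts:

```python
# Minimal sketch of feature-based bot detection (illustrative only,
# not Ferrara's actual system). Feature values and labels are invented
# stand-ins for a real labeled dataset of accounts.
from sklearn.ensemble import RandomForestClassifier

# Each row: [followers-to-friends ratio, account age in days,
#            tweets per day, fraction of posts that are retweets]
accounts = [
    [0.05,   12, 180.0, 0.95],   # bot-like: new, hyperactive, mostly retweets
    [1.80, 2400,   3.5, 0.20],   # human-like: old account, modest activity
    [0.10,   30, 250.0, 0.90],
    [0.90, 1800,   5.0, 0.30],
]
labels = [1, 0, 1, 0]  # 1 = bot, 0 = human

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(accounts, labels)

# Probability that an unseen account is a bot (classes_ = [0, 1])
suspect = [[0.02, 5, 300.0, 0.98]]
print(clf.predict_proba(suspect)[0][1])
```

Real systems of this kind draw on hundreds of features, from posting cadence to network structure to language, and train on thousands of labeled accounts, which is how accuracy rates above 90 percent become plausible.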

In the CYNK example, CEO Phillip Thomas Kueber created an army of bots that tweeted “news” about the imminent release of cutting-edge software. Financial trading bots employed by Wall Street picked up the trending information and began buying company shares “like crazy,” Ferrara said. The scam caused over $6 billion in damage to legitimate investors. Today, the stock trades for about 15 cents.
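Part of what made the scheme work is how mechanically trend-following systems respond to raw mention volume. A toy Python illustration (the mention counts and thresholds below are invented, and real trading systems are far more elaborate) shows how a coordinated bot campaign can trip a naive momentum signal:

```python
# Toy illustration of a naive tweet-volume momentum signal
# (hypothetical data and thresholds; real trading systems are far more
# elaborate, but the weakness is the same: volume is not veracity).
hourly_mentions = [3, 2, 4, 5, 120, 800, 2500]  # bot campaign starts at hour 4

def trend_signal(mentions, window=3, spike_factor=5):
    """Flag a 'buy' when mention volume spikes far above the recent baseline."""
    signals = []
    for i in range(window, len(mentions)):
        baseline = sum(mentions[i - window:i]) / window
        signals.append(mentions[i] > spike_factor * baseline)
    return signals

print(trend_signal(hourly_mentions))  # [False, True, True, True]: the spike triggers buys
```

Even at this scale the exploited weakness is visible: a signal keyed to chatter cannot tell organic buzz from a thousand scripted accounts.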

The Islamic State has recently extended its reach by leveraging bots to spread propaganda and cultivate potential recruits. At the simplest level, automated Twitter accounts tweet and retweet videos of violence and other content, ostensibly to inspire followers. More advanced bots can actually identify people sympathetic to ISIS and engage them in ongoing Twitter and Facebook “conversations.” The goal: attract converts who will support the group financially or commit acts of terrorism in its name.

Politics, already a nasty business, has become even nastier with the emergence of social bots. Experts agree they are a bipartisan affair, with tech-savvy Hillary Clinton and Donald Trump supporters equally likely to unleash them.

“Politicians have strong incentives to engage in bots, although we don’t have evidence of any major candidates currently using them for campaign purposes,” said Pablo Barbera, a USC assistant professor of international relations with expertise in social media. “They’re very cheap compared to TV and radio, and quite effective. You can easily get something trending on social media with them and have lots of people see your message.”

Given social media’s exploding popularity and role as the primary news source for many younger voters, the appearance of bots that aim to shape political discourse through misinformation, character attacks and intimidation is a worrisome trend, said Dora Kingsley Vertenten, a professor at the USC Price School of Public Policy.

How do bots work?

The “puppet masters” behind influence bots, said Ferrara, often create fake Twitter and Facebook profiles. They do so by stealing online pictures, giving them fictitious names and cloning biographical information from existing accounts. These bots have become so sophisticated that they can tweet, retweet, share content, comment on posts, “like” candidates, grow their social influence by following random accounts, and even engage in human-like conversations.
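For all that sophistication, automation still leaves statistical fingerprints. One signal detection researchers have used is the regularity of posting times; a toy Python simulation (all timing parameters are invented for illustration) shows why a scheduled bot and a bursty human can look very different on this axis:

```python
# Toy simulation of one behavioral signal used in bot detection:
# automated accounts often post on far more regular schedules than
# humans, so the entropy of their inter-post intervals is lower.
# All timing parameters here are invented for illustration.
import math
import random

def interval_entropy(intervals, bin_width=120.0, max_interval=3600.0):
    """Shannon entropy of inter-post intervals, bucketed on a fixed
    grid of bin_width-second bins shared across accounts."""
    nbins = int(max_interval / bin_width) + 1  # final bin is a catch-all
    counts = [0] * nbins
    for x in intervals:
        counts[min(int(x / bin_width), nbins - 1)] += 1
    total = len(intervals)
    return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

random.seed(0)
# Bot: posts on a fixed ~10.5-minute schedule with slight jitter
bot_gaps = [630 + random.gauss(0, 10) for _ in range(500)]
# Human: bursty, with highly variable gaps (exponential, same mean)
human_gaps = [random.expovariate(1 / 630) for _ in range(500)]

print(f"bot timing entropy:   {interval_entropy(bot_gaps):.2f} bits")
print(f"human timing entropy: {interval_entropy(human_gaps):.2f} bits")
```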

As negative news spreads faster online than positive news, Ferrara said, there’s a real payoff for bots to go negative. That’s because of the “cascade effect,” whereby misinformation on Twitter quickly spreads to Facebook and other platforms, eventually transcending social media to shape offline political opinions. In recent years, bots have smeared candidates in the United Kingdom, South Korea and Mexico as well as the United States.
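The dynamic is easy to see in a toy branching-process model: a modest edge in the chance that each viewer re-shares compounds across hops into far greater total reach. (The branching factor and probabilities below are illustrative assumptions, not measured values.)

```python
# Toy branching-process model of the cascade effect: everyone who sees
# a post shares it onward with some probability, so a modest edge in
# per-exposure share rate compounds into far greater total reach.
# Branching factor and probabilities are illustrative assumptions.
import random

def cascade_size(share_prob, branching=4, max_steps=12):
    """Total people exposed, starting from one seed post."""
    exposed, frontier = 1, 1
    for _ in range(max_steps):
        reached = frontier * branching
        exposed += reached
        # Each newly exposed person re-shares with probability share_prob
        frontier = sum(random.random() < share_prob for _ in range(reached))
        if frontier == 0:
            break
    return exposed

random.seed(1)
trials = 2000
neutral = sum(cascade_size(0.20) for _ in range(trials)) / trials
negative = sum(cascade_size(0.30) for _ in range(trials)) / trials
print(f"average reach, neutral content:  {neutral:7.0f}")
print(f"average reach, negative content: {negative:7.0f}")
```

In this crude model, lifting the share rate from 20 to 30 percent pushes the cascade from fizzling out to feeding itself, multiplying average reach several times over.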

In the 2010 U.S. Senate race in Massachusetts, a group based in Iowa allegedly set up fake Twitter accounts to attack candidate Martha Coakley, according to a paper by Wellesley College researchers. One message read: “AG [Attorney General] Coakley thinks Catholics shouldn’t be in the ER, take action now!” — a provocative tweet aimed at the heavily Catholic state. Scott Brown ultimately prevailed in the race.

Still, social bots might play less of a role than expected in electing the next U.S. president, said USC’s Christian Grose, an associate professor of political science.

“They might make a difference around the margins, but on balance people vote for the economy,” he said. “If the economy is doing poorly, it doesn’t matter how strong the messaging is or how many bots are employed, the incumbent party will probably lose.”