BALTIMORE (Tribune News Service) — Seen a flood of support for Russia's foreign policy on Twitter? Or a surge in sympathy for Islamic State terrorists?
It could be genuine. But it also might be the work of bots — automated social media accounts programmed to push a message out widely and quickly.
That's why the Pentagon's advanced research arm organized a four-week "Twitter Bot Challenge," inviting researchers to try to find 39 robotic tweeters.
Five teams found all the fakes. But Filippo Menczer, an Indiana University researcher who headed one of them, warns that the technology advances rapidly, and what works today might not work tomorrow.
"It's very much work in progress," Menczer said. "It's a moving target."
In an age when life is increasingly lived online, and the Islamic State in particular has proved adept at using social media to attract support and recruits, intelligence officials across the government are looking for ways to make sense of the deluge of information posted every day.
They are working on ways to channel the flood to help spot terrorists, make better military decisions and identify threats against the president, presidential candidates and other leaders.
The potential is obvious: billions of tweets, Facebook posts and Instagram pictures are posted every day, all of them potentially useful to law enforcement, the military and intelligence agencies.
Isaac Porche, a researcher at the RAND Corporation, said the potential is huge, especially if the information can be combined with other databases.
"It's one thing if someone posts that they like al-Qaida or ISIS," he said. "It's another thing if you have some data that shows this is not just a rant."
While much of the interest is in using the data to spot terrorists and guide military campaigns overseas, law enforcement and domestic security services are also combing social media at home. Following the unrest in Baltimore last spring, officials compiled a spreadsheet of dozens of online postings.
The list of posts was included in documents the city released under a public information act, but it's not clear who created it or how it was used.
But reliably plucking out the threats from among pictures of children, pets and meals, or turning the swirling stream of data into a clear picture of what's happening in the real world, remains a challenge.
In recent papers, Menczer and his colleagues detailed their efforts in the bot challenge and described a publicly available tool that anyone can use to help figure out if a Twitter account is controlled by a human or a few lines of computer code.
Such tools can be made. Getting them to cover the entire social media universe is hard.
"Working at that scale is hard," Menczer said. "Twitter is big."
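The intuition behind such bot detectors can be shown with a toy heuristic. The features and thresholds below are hypothetical, chosen purely for illustration; research tools like Menczer's combine many more signals (posting rhythms, network structure, language patterns) in a trained classifier rather than hand-set rules.

```python
def bot_score(tweets_per_day: float, followers: int,
              following: int, pct_retweets: float) -> float:
    """Return a rough 0-1 score; higher suggests automation.

    Illustrative only: real detectors learn weights from
    labeled accounts instead of using fixed cutoffs.
    """
    score = 0.0
    if tweets_per_day > 100:                 # humans rarely sustain this pace
        score += 0.4
    if following > 0 and followers / following < 0.1:
        score += 0.3                         # follows many, followed by few
    if pct_retweets > 0.9:                   # almost no original content
        score += 0.3
    return round(score, 2)

# A hyperactive, retweet-only account scores high...
print(bot_score(tweets_per_day=250, followers=40,
                following=2000, pct_retweets=0.95))   # 1.0
# ...while a typical human profile scores low.
print(bot_score(tweets_per_day=8, followers=500,
                following=300, pct_retweets=0.2))     # 0.0
```

Scaling this from one account to Twitter's full firehose — the problem Menczer points to — means computing such features continuously across hundreds of millions of accounts.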
Civil libertarians, meanwhile, warn that tracking even public posts runs the risk of undermining Americans' free speech rights.
But despite the challenges, officials say, some work is showing promise. The Department of Defense expects to have a system to suck in and analyze information from social media and elsewhere on the Internet fully operational this year.
The tool, called Information Volume and Velocity, was created to scrape information from all corners of the Internet, including places not normally reached by search engines, analyze it for trends and provide commanders with up-to-the-second information.
The Defense Information Systems Agency, headquartered at Fort Meade, said in December it would be looking for a contractor to pull data from "news sites, social media sites, micro-blogs, aggregation sites for news, blogs and forums, pictures, video, and images" into the system.
Charlie Fields, an official at DISA, said the tool has already been used in several different kinds of military operations, including humanitarian work and disaster relief.
"The program is designed to enhance commanders' situational awareness of what is happening in the social media realm and can help improve decision making," Fields said in a written response to questions.
It's also cheap: The system costs less than $4 million per year to run — an almost insignificant sum in the Defense Department's massive budget.
Other efforts have fared less well. The government's efforts to battle the Islamic State online have faltered, and after sending top officials to Silicon Valley to meet with social media company bosses, the Obama administration announced that it was changing the bureaucracy that was established to lead the fight.
On Friday, Twitter announced that it has suspended more than 125,000 accounts since the middle of 2015 for connections to terrorism — primarily the Islamic State.
"We condemn the use of Twitter to promote terrorism and the Twitter Rules make it clear that this type of behavior, or any violent threat, is not permitted on our service," the company said in a blog post.
A proposed effort by the Defense Advanced Research Projects Agency to track the terror group's movements through the darker corners of the Internet is being reconsidered, a spokesman for the agency said. Another push to bring social media to bear on the military's counterterrorism mission failed to meet its goals last year, a Pentagon spokeswoman said.
The Department of Homeland Security faced criticism after the December shooting attack in San Bernardino, Calif., for not scanning the social media accounts of visa applicants.
Homeland Security spokeswoman Marsha Catron said the department does have three pilot programs looking at how to use social media data as part of the application process.
"We are actively considering additional ways to incorporate the use of social media review in various vetting programs," she said.
Rand Waltzman, a former researcher at DARPA, wrote a scathing critique of the government's efforts in Time magazine. He argued that U.S. officials are holding themselves back in the name of protecting people's privacy — and falling behind authoritarian states and terror groups that have no such qualms.
Current interpretations of laws written well before the Internet age, he wrote, "have led to overly cautious and non-uniform policies and prohibitions resulting in massive confusion and paralysis."
Waltzman declined to be interviewed.
The tool being developed at Fort Meade filters information that would identify individuals, Fields said. A long-running program at the Department of Homeland Security follows a similar policy, collecting personal information only in limited circumstances.
The Secret Service sought extra funding in its budget request for 2016 to step up its monitoring of social media during the presidential election campaign. Agents want to use the information to find groups who "may oppose a candidate's viewpoint" and who are using social media to organize protests.
The Secret Service declined to comment, citing the sensitivity of its operations.
Aaron Mackey, a lawyer at the Electronic Frontier Foundation, said citizens might be reluctant to share their opinions online if they know that their tweets or Facebook posts are being collected by the government.
"If the user knows they're going to be put on a watch list or the FBI's going to knock on their door and ask if everyone's OK, they're not going to say anything remotely controversial," he said.
Mackey said it's not clear that agents are easily able to discern between statements protected by free speech guarantees and real threats.
"The government does a really poor job trying to draw these lines," he said.
One way around the potential legal issues is to have the social media companies spot threats and other problems themselves. The White House sent top officials to meet with bosses at Silicon Valley firms last month to find ways to do just that.
Ned Price, a spokesman for the National Security Council, said the meeting showed how the president was committed to fighting the Islamic State online.
"The horrific attacks in Paris and San Bernardino this winter underscored the need for the United States and our partners in the international community and the private sector to deny violent extremists like ISIL fertile recruitment ground," he said in a statement.
The meeting also spurred fresh interest in using social media data to calculate some kind of radicalization score for users — a sort of early warning system for terrorism. The concept would be similar to the work of Menczer and the other bot-spotters, only much more difficult.
"To detect a potential terrorist you have to infer intention," Menczer said, "and this is very hard even for human experts."
©2016 The Baltimore Sun
Visit The Baltimore Sun at www.baltimoresun.com
Distributed by Tribune Content Agency, LLC.