Students Create Program to Identify Fake Twitter Accounts
SAN FRANCISCO — For months, university students Ash Bhat and Rohan Phadte had been tracking about 1,500 political propaganda accounts on Twitter that appeared to have been generated by computers when they noticed something odd.
In the hours after the February school shooting in Parkland, Florida, the bots, short for robots, shifted into high gear, jumping into the debate about gun control.
The hashtag #guncontrol gained traction among the bot network. In fact, all of the top hashtags among the bots were about the Parkland shooting, Bhat and Phadte noticed.
Twitter under fire
Since the 2016 U.S. presidential election, technology companies have come under fire for how their services were used by foreign-backed operations to sow discord among Americans before and after the election.
Twitter, in particular, has been called out repeatedly for the sheer number of computerized accounts that tweet about controversial topics. The company itself has said 50,000 accounts on its service were linked to Russian propaganda efforts, and the company recently announced plans to curtail automated, computer-generated accounts.
On Monday, executives from Twitter are expected to be on Capitol Hill to brief the Senate Commerce Committee about how the service was manipulated in the wake of the Parkland shooting.
For Bhat and Phadte, students at the University of California, Berkeley, the growing public scrutiny of bots couldn’t come fast enough.
Figuring out Twitter fakes
Childhood friends from San Jose, Calif., the two work out of their shared apartment in Berkeley on ways to figure out what is real and fake on the internet and how to arm people with tools to tell the difference.
“Everyone's realizing how big of a problem this is becoming,” Bhat, co-founder of RoBhat Labs, said. “And I think we're also at a weird inflection point. It's like the calm before the storm. We're building up our defenses before the real effects of misinformation hit.”
One of their projects is Botcheck.me, a way for Twitter users to check whether an account on Twitter is real or fake. Users can download a Google Chrome extension, which puts a blue button next to every Twitter account, or they can run a Twitter account through the website botcheck.me.
Some of the characteristics of a fake Twitter persona? One is hundreds of tweets over a 24-hour period. Another is mostly retweeting other accounts. A third is thousands of followers even though the account may be relatively new.
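Those three signals can be combined into a rough score. The sketch below is a minimal, hypothetical illustration in Python; it is not the students' actual method, which the article does not describe, and the thresholds, weights and field names are assumptions chosen only to show the idea.

```python
# A minimal, hypothetical sketch of the bot signals described above.
# This is not Botcheck.me's actual classifier; the thresholds, weights
# and field names are illustrative assumptions only.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class AccountSnapshot:
    tweets_last_24h: int    # total tweets and retweets in the past day
    retweets_last_24h: int  # how many of those were retweets
    followers: int
    created_at: datetime    # account creation time (UTC)


def bot_score(acct: AccountSnapshot, now: Optional[datetime] = None) -> float:
    """Combine the three signals named in the article into a rough 0-1 score."""
    now = now or datetime.now(timezone.utc)
    age_days = max((now - acct.created_at).days, 1)

    score = 0.0
    # Signal 1: hundreds of tweets over a 24-hour period.
    if acct.tweets_last_24h >= 100:
        score += 0.4
    # Signal 2: mostly retweeting others rather than posting original content.
    if acct.tweets_last_24h and acct.retweets_last_24h / acct.tweets_last_24h > 0.8:
        score += 0.3
    # Signal 3: thousands of followers on a relatively new account.
    if acct.followers >= 1_000 and age_days < 90:
        score += 0.3
    return score


# Example: a week-old account with 5,000 followers that retweeted 450 times today.
suspect = AccountSnapshot(
    tweets_last_24h=500,
    retweets_last_24h=450,
    followers=5_000,
    created_at=datetime(2018, 2, 20, tzinfo=timezone.utc),
)
print(f"bot score: {bot_score(suspect, now=datetime(2018, 2, 27, tzinfo=timezone.utc)):.2f}")
```

A real detector would rely on many more features and a trained model rather than hand-picked thresholds, which is presumably why, as the students note later, simple rules become harder to apply as the bots evolve.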
Polarizing the debate
The result is a digital robot army ready to jump into a national debate, they say.
“The conversation around gun control was a lot more polarizing in terms of for and against gun control, as opposed to seeing in the Parkland shooting other issues, such as mental illness,” Bhat said.
The two do not speculate about who may be behind the bots or what their motives may be. Their concern is to try to bring some authenticity back into online discussions.
“Instead of being aggravated and spending an hour tweeting and retweeting, or getting madder, you can find out it’s a bot and stop engaging,” Bhat said.
In recent months, the students say, many of the Twitter accounts they have been tracking have been suspended.
But as fast as Twitter can get rid of accounts, the students say, new ones pop up. And suspicious accounts are starting to look more like humans: they may tweet about the weather or cars for a while before switching to political content.
“You can sort of see these bots evolve,” Bhat said. “And the scary thing for us is that if we aren’t keeping up on their technological progress, it’s going to be impossible to tell the difference.”