A new automated spear-phishing framework achieved a success rate of between 30 and 66 percent against its targets on Twitter, researchers claim.
Security researchers John Seymour and Philip Tully announced their tool, named “SNAP_R,” at the Black Hat USA 2016 security conference.
The framework essentially tweets out phishing posts to users – choosing targets through machine learning.
The duo could have selected another platform to test their framework, but they went with Twitter for its large user base and for its informal 140-character dialect, known colloquially as “Twitterese,” whose loose grammar means SNAP_R can make grammatical mistakes without necessarily raising red flags.
That’s not the only reason Seymour and Tully chose Twitter, however. As Seymour told Infosecurity Magazine:
“There’s also a trusting culture. No one suspects their social networks of harboring negative content. And there’s this idea of incentivized data disclosures, which makes people want to share their personal details about themselves.”
According to their research paper, SNAP_R works by first grouping Twitter users into clusters based upon their profiles, their level of activity, and their engagement metrics.
Specifically, the framework looks at how frequently a potential target posts, what topics they tend to tweet about, and what their “sentiment” is on those topics.
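The clustering step described above can be illustrated with a minimal sketch. The field names, scoring formula, and cutoff below are purely illustrative assumptions, not SNAP_R's actual code or features:

```python
from collections import namedtuple

# Hypothetical target profile; the fields mirror the signals the paper
# describes (posting frequency, topics, sentiment), not SNAP_R's API.
User = namedtuple("User", "handle tweets_per_day topics avg_sentiment")

def engagement_score(user):
    """Crude engagement proxy: frequent posters with positive sentiment
    on their topics score higher as potential targets."""
    return user.tweets_per_day * (1 + user.avg_sentiment)

def cluster_by_activity(users, high_cutoff=10.0):
    """Split users into coarse activity clusters, echoing the idea of
    grouping targets by profile, activity level, and engagement."""
    clusters = {"high": [], "low": []}
    for u in users:
        key = "high" if engagement_score(u) >= high_cutoff else "low"
        clusters[key].append(u.handle)
    return clusters

users = [
    User("@alice", 20, ["infosec"], 0.4),
    User("@bob", 1, ["cats"], 0.1),
]
print(cluster_by_activity(users))  # {'high': ['@alice'], 'low': ['@bob']}
```

A real system would use proper clustering over many more features, but the principle — rank accounts by how active and engaged they are — is the same.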
If analysis of those factors suggests the user can be successfully phished, the framework selects them as a target and generates phishing tweets using both Markov models and a long short-term memory (LSTM) neural network. The former generates text word by word from the training data collected on a target but lacks wider context, whereas the latter uses context from earlier in the sentence to predict the next word.
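The Markov side of that pairing is easy to sketch: each word is chosen based only on the word before it, which is exactly why the output lacks broader context. The training sentences below are invented, and this is a generic first-order Markov chain, not SNAP_R's implementation:

```python
import random
from collections import defaultdict

def train_markov(tweets):
    """Build a first-order Markov model: for each word, record every
    word that followed it somewhere in the target's timeline."""
    model = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for prev, nxt in zip(words, words[1:]):
            model[prev].append(nxt)
    return model

def generate(model, start, max_words=10, seed=0):
    """Generate text word by word; each step depends only on the
    previous word, so the output can drift off-topic mid-sentence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(max_words - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

timeline = [
    "check out this great security talk",
    "this great conference is in vegas",
]
model = train_markov(timeline)
print(generate(model, "this"))
```

An LSTM, by contrast, conditions each prediction on a learned summary of the whole preceding sequence, which tends to produce more coherent sentences at the cost of needing far more training data.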
The framework then posts the phishing tweet at a time when the user is most likely to see and respond to it.
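Choosing that posting time can be approximated very simply: look at when the target has historically tweeted and pick the busiest hour. The timestamps below are made up, and this histogram approach is an assumption about how such timing could work, not a description of SNAP_R itself:

```python
from collections import Counter
from datetime import datetime

def best_posting_hour(timestamps):
    """Return the hour of day when the target has historically been
    most active, as a proxy for when they will see a new tweet."""
    hours = Counter(ts.hour for ts in timestamps)
    return hours.most_common(1)[0][0]

history = [
    datetime(2016, 8, 1, 9, 15),
    datetime(2016, 8, 2, 9, 40),
    datetime(2016, 8, 2, 18, 5),
]
print(best_posting_hour(history))  # 9 — the hour that appears most often
```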
In tests, SNAP_R performed well. As the researchers observe in their paper:
“Though largescale phishing campaigns tend to have very low compromise rates, they persist because the few examples that do succeed lead to a high return on investment. On tests consisting of 90 users, we found that our automated spear phishing framework had between 30% and 66% success rate. This is more successful than the 5-14% previously reported in largescale phishing campaigns, and comparable to the 45% reported for largescale manual spear phishing efforts. We attribute our results to the unique risks associated with social media and our ability to leverage data science to target vulnerable users with a highly personalized message.”
It’s difficult for social media users to protect themselves against this type of threat, as social networking communities like Twitter are built around the notion of people sharing information with one another.
That said, users should be careful when clicking on links posted by accounts they don't know. If something doesn't seem right, they should look through the account's posting history for suspicious behavior, including patterns which might suggest the account is actually a phishing bot.
As always, we recommend that Twitter users harden their account security by enabling two-step verification.
In addition, password management software can provide protection against phishing attacks, as it should only offer to fill in login credentials after verifying the domain name of the site being visited.
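That domain check is the key defensive property: because a password manager matches the stored domain exactly, a lookalike phishing domain gets nothing autofilled. A minimal sketch of the idea, with an invented credential store and a made-up lookalike domain:

```python
from urllib.parse import urlparse

# Illustrative credential store keyed by exact hostname.
SAVED_CREDENTIALS = {"twitter.com": ("user", "secret")}

def credentials_for(url):
    """Only release saved credentials when the hostname matches exactly;
    a visually similar phishing domain fails the lookup."""
    host = urlparse(url).hostname or ""
    return SAVED_CREDENTIALS.get(host)

print(credentials_for("https://twitter.com/login"))    # matches the store
print(credentials_for("https://twiitter-login.com/"))  # lookalike: None
```

Real password managers also handle subdomains and registrable-domain rules, but the exact-match principle is what defeats a phishing page a human might not spot.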