
Identifying, thwarting insider threats before they do damage

Published 18 February 2014

Researchers argue that one way to identify and predict potential insider threats, even before these individuals begin to do damage such as stealing and leaking sensitive information, is to use Big Data to monitor changes in behavior patterns. Researchers at PARC, for example, found that individuals who exhibit a sudden decrease in participation in group activity, whether in a game like World of Warcraft or in corporate e-mail communications, are likely to withdraw from the organization. Such withdrawal reflects dissatisfaction with the organization, a common trait of individuals who are likely to engage in insider security breaches.

Last Tuesday, in testimony before the Senate Armed Services Committee, Director of National Intelligence James Clapper reiterated that hostile insiders with access to classified materials, like Edward Snowden, represent a major threat to national security. When committee members asked Clapper how the intelligence community planned to prevent future security breaches, he responded that individuals with access to national intelligence information will be monitored more tightly in the future.

Clapper said the U.S. intelligence community is planning to establish a single cloud environment to store and operate intelligence communications. Moving communications into the cloud would allow monitors to “tag the data, [and] tag the people, so that you can monitor where the data is and who has access to it on a real-time basis,” Clapper said.
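As a rough illustration of what that tagging could look like in practice, the Python sketch below (with invented class and field names; it does not describe any actual intelligence-community system) attaches classification tags to documents and clearances to people, then logs every access as it happens and flags any access that falls outside a user's clearances.

```python
# Minimal sketch of the "tag the data, tag the people" idea: every document
# and every user carries tags, and each access event is checked and logged
# in real time. All names here are illustrative, not an actual system.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Document:
    doc_id: str
    tags: set[str]          # e.g. {"TOP_SECRET", "SIGINT"}

@dataclass
class User:
    user_id: str
    clearances: set[str]    # tags this person is cleared to see
    access_log: list = field(default_factory=list)

def record_access(user: User, doc: Document) -> bool:
    """Log the access attempt and flag it if the user lacks a required tag."""
    missing = doc.tags - user.clearances
    event = {
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user.user_id,
        "doc": doc.doc_id,
        "flagged": bool(missing),
    }
    user.access_log.append(event)
    if missing:
        print(f"ALERT: {user.user_id} accessed {doc.doc_id} without {missing}")
    return not missing

# Example: an analyst cleared only for SECRET touching a TOP_SECRET report.
analyst = User("analyst42", {"SECRET"})
report = Document("report-001", {"TOP_SECRET"})
record_access(analyst, report)
```

The design point is that the access log itself, not just the allow-or-deny decision, becomes the raw material for monitoring where data is and who has touched it.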

According to Defense One, the Defense Department (DOD) is exploring methods to identify potential violations by insiders before they occur. Mark Nehmer, associate deputy director of cybersecurity and counterintelligence for DOD, said that an insider threat signal could be a combination of multiple changes in an individual’s profile.
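A minimal sketch of Nehmer's point, with invented indicators, weights, and threshold: no single change is decisive on its own, but several normalized changes in one person's profile can be combined into a composite score that triggers a review.

```python
# Hedged sketch: an insider-threat signal as a weighted combination of several
# behavioral changes. The indicators, weights, and cutoff are assumptions made
# for illustration, not a DOD scoring scheme.

def composite_risk(deltas: dict[str, float], weights: dict[str, float]) -> float:
    """Combine normalized behavioral changes into one score.

    deltas: per-indicator change relative to the person's own baseline,
            e.g. -0.6 means a 60% drop in email participation.
    """
    return sum(weights.get(k, 0.0) * abs(v) for k, v in deltas.items())

weights = {"email_volume": 0.4, "after_hours_logins": 0.3, "bulk_downloads": 0.3}
observed = {"email_volume": -0.55, "after_hours_logins": 0.8, "bulk_downloads": 1.2}

score = composite_risk(observed, weights)
if score > 0.7:   # arbitrary review threshold for the sketch
    print(f"Review case: composite score {score:.2f}")
```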

“Think of statistics and human behavior and think about correlating past and future behavior, that’s the future of insider threat, I believe,” Nehmer said at Nextgov’s Cybersecurity Series in Washington, D.C. last Tuesday. His recommendations to DOD for preventing an insider attack include stricter punishments for minor infractions involving data loss; a mandate that all software fixes comply with a single new standard; a requirement that more people with top secret clearance have at least one person sign off on work assignments involving sensitive information; and the creation of a Joint Information Environment (JIE), which would move all DOD communications into one cloud setting.

“We need to build an architecture so that a whole department can use enterprise services,” said Nehmer.

Once intelligence officials are able to monitor staff behavior patterns, they must then understand which signals predict insider threats. Oliver Brdiczka, a researcher at PARC, is using online environments such as the multiplayer online game World of Warcraft, along with corporate e-mail systems, to understand the traits of individuals likely to commit security breaches. In the World of Warcraft sample, Brdiczka was able to predict with 89 percent accuracy which players would quit the game’s organized teams within six months. In the corporate e-mail sample, he achieved 60 percent accuracy in predicting which employees were likely to quit the organization.
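PARC's published work describes graph learning and psychological context; as a loose stand-in rather than a reconstruction of that pipeline, the sketch below trains an ordinary logistic-regression classifier on synthetic per-person participation features, just to show the general shape of an attrition predictor of this kind.

```python
# Rough sketch of an attrition classifier trained on participation features.
# Features, labels, and any resulting accuracy are synthetic; this is not
# PARC's actual model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400

# Per-person features: change in weekly email volume, change in number of
# distinct contacts, trend in reply latency (all drawn at random here).
X = rng.normal(size=(n, 3))
# Synthetic label: people whose participation dropped sharply tend to quit.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) < -0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy: {model.score(X_te, y_te):.2f}")
```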

The general conclusion from both samples is that individuals who exhibit a sudden decrease in participation, whether in a gaming activity or in corporate communications, are likely to withdraw from the organization. Such withdrawal reflects dissatisfaction with the organization, a common trait of individuals who are likely to engage in insider security breaches.
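One simple way to operationalize that "sudden decrease" signal, sketched here with assumed window sizes and an assumed drop cutoff, is to compare a person's recent weekly activity to their own earlier baseline and flag a sharp relative fall.

```python
# Minimal sketch of the participation-drop signal: flag a person whose recent
# activity has fallen far below their own baseline. Window sizes and the
# cutoff are assumptions for illustration.
from statistics import mean

def withdrawal_flag(weekly_counts: list[int], recent_weeks: int = 4,
                    drop_cutoff: float = 0.5) -> bool:
    """True if average activity over the last `recent_weeks` has fallen below
    `drop_cutoff` times the earlier baseline."""
    baseline = mean(weekly_counts[:-recent_weeks])
    recent = mean(weekly_counts[-recent_weeks:])
    return baseline > 0 and recent < drop_cutoff * baseline

# Example: steady weekly email or guild activity, then a sharp falloff.
history = [42, 38, 45, 40, 44, 41, 39, 43, 18, 12, 9, 7]
print(withdrawal_flag(history))   # True: recent activity is well under half the baseline
```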

Defense One notes that Brdiczka’s research is part of the Anomaly Detection at Multiple Scales (ADAMS) program, funded by a grant from the Defense Advanced Research Projects Agency (DARPA) and designed to identify patterns and anomalies in very large data sets. The program aims to detect and prevent insider threats from “good” insiders turned “malevolent.”

— Read more in Oliver Brdiczka et al., “Proactive Insider Threat Detection through Graph Learning and Psychological Context” (Palo Alto Research Center [PARC], Palo Alto, California, no date); and Akshay Patil et al., “Modeling Attrition in Organizations From Email Communication” (Palo Alto Research Center [PARC], Palo Alto, California, no date)
