
The Insider Track on Cyber Security

Posted August 26, 2014 | Leadership | Technology

In June 2013, the actions of Edward Snowden set off a firestorm of revelations about the inner workings of one of the US's most secretive organizations, the National Security Agency (NSA). As the country began debating the spy versus whistleblower status of Mr. Snowden, a second, equally chilling dialogue began: how was one person, a contractor, able to walk so easily out the door of a heavily monitored facility with a treasure trove of secrets? For all organizations, it served as a sharp reminder of how much damage one insider can generate. But despite the far-reaching consequences to US federal agencies -- presidential executive orders, budget reallocations, technology reviews -- the Snowden incident has done surprisingly little to affect the near-term strategies and implementations of cyber security in private industry.

That's not to say private industry isn't aware of the vulnerability. A recent survey performed by the Ponemon Institute found that 88% of the 693 IT professionals surveyed recognize the potential for significant harm from insider threats.1 As it turns out, businesses had harbored concerns for some time before the NSA affair -- they just hadn't been very vocal about them. Despite these concerns, plans to mitigate insider threats seem to hover at the bottom end of the cyber security priority list. The common sentiment of business owners, executive teams, and IT administrators goes like this: "We've spent the last two years building a layered perimeter defense, and we're almost done. As long as the threat doesn't come from inside, we're good."

SIGNIFICANT RISK, SIGNIFICANT COSTS

As the Snowden incident demonstrates, statements like the above are misguided at best. An organization's cyber defense strategy cannot be comprehensive without a mitigation strategy for addressing insider threats. Otherwise, it's like knowingly leaving a back door into the computer network (except that insiders get to use the front door). Insider incidents don't receive nearly the daily attention that breaches caused by external attacks do. However, sources such as the Brooklyn Law School's Trade Secrets Institute website2 have pages filled with insider threat legal cases.

From a numbers viewpoint, it has been difficult to get a clear picture of the severity of insider incidents relative to the higher-profile external attacks. That has begun to change over the last few years as the major benchmark studies have led to a better understanding of how to parse out the data. External attacks will always outnumber insider incidents, given that the number of external actors is much larger, but the industry statistics show that insider threats play a significant role in breaches:

  • The 2014 Verizon Data Breach Investigations Report indicates that 18% of the collected security incidents are attributed to insider misuse,3 trailing only crimeware and miscellaneous errors.

  • According to the 2013 Ponemon Cost of Cyber Crime Study, 42% of companies surveyed admit to an insider incident in the last year.4

  • SafeNet keeps a running tally of breaches at the data record level. Its data indicates that while external attacks outnumber insider incidents, insiders can do more damage. The Q1 2014 report, for instance, shows that while 11% of successful breaches were carried out by an insider, those insider breaches accounted for 52% of the 200 million data records exfiltrated.5

REASONS FOR THE LACK OF INSIDER THREAT PROGRAMS

If business leaders and IT professionals continue to experience a general unease at the lack of strong insider threat programs, and industry numbers appear to justify those concerns, why hasn't this translated into more action? There are any number of reasons, of course, but two primary themes can be seen across private industry. First, insider threats are poorly understood, and little is known about the remedies that might be available to address the vulnerability. Second, there is an emotional discomfort that comes from thinking employees might betray the trust that's been granted them by the organization.

The emotional issue is certainly understandable, but it shouldn't be allowed to delay or prevent addressing a very real threat. Most businesses will be affected by an insider at some point, probably several times. When it happens, it's not just profits that get impacted; it can affect customers' lives, the reputation of the business, and, ultimately, all the employees in the organization.

It might be said that neither of these issues is at play in larger organizations, especially in regulated industries like banking and healthcare. Indeed, regulators do take some of the heat off businesses; it's the regulatory bodies that require controls be put in place for insider threats, not the employers. And larger organizations are more likely to have the resources to investigate the insider threat risk more thoroughly. Still, that doesn't explain why insider threats are so often deferred, to be addressed only after the perimeter has been fully secured against external threats.

INSIDER THREATS ARE DIFFERENT

It's not surprising that organizations have difficulty understanding the insider threat; it's not your typical cyber security adversary. For one, the average insider is not a professional hacker and generally lacks the technical sophistication needed to use the tools of that particular trade. But it doesn't require a lot of sophistication to do great harm to an organization when you are an insider.

To better understand effective strategies and technologies for combating insider threats, let's start by defining what an insider threat is. An insider threat arises when someone with authorized access to the network and the electronic assets located on it uses that access to do something unauthorized, such as commit fraud, steal intellectual property (IP), or sabotage IT systems. Those with authorized access could include employees, contractors, vendors, and even business partners and corporate executives.

There are two key components to keep in mind with regard to the definition. First, we're talking about insiders, not outsiders trying to breach the perimeter defenses. The importance of this goes beyond their authorized use of the computer network. It means that they work alongside others in the office, they understand and are part of the corporate culture, they get trained in the same procedures and policies as everyone else, they know if they're being monitored and probably how, and they know where the company tends to be lax and where it is strict in its policies. They are also likely to know exactly where and how the organization's sensitive data is stored, as well as the true value of the data. Second, they're authorized. In order to be productive, they need access to the data that the organization is attempting to protect. These characteristics make it very difficult for traditional cyber security products to address insider threats, as we can see from the following example of attempted trade secret theft.

Hanjuan Jin, an American citizen, had been an employee of Motorola in Chicago for nine years. In 2007, as she prepared to board a flight to China, she was stopped for a random customs search. Officials found over US $30,000 cash in her luggage and more than 1,000 documents containing Motorola IP related to the company's proprietary iDEN technology. Jin had been able to sneak the documents out without Motorola's knowledge, using an unsophisticated method, only to be stopped by chance at the airport.6 She apparently didn't seek out Motorola with the intent to steal its secrets; the attempted theft occurred nine years after she began working at the company. Such circumstances aren't the exception when it comes to insider attacks. A report on insider threats in the financial industry found that the typical malicious insider has worked for his or her organization for five years before committing a crime, and goes on committing the crime for two and a half years before being caught.7 Clearly, insiders know how to conceal their criminal activity in the everyday noise of their work environments.

TRADITIONAL TECHNOLOGIES AND NEW ATTEMPTS

Technology has been used in an attempt to detect insider threats for some time. A study by the US-based Intelligence and National Security Alliance (INSA) found that many of the insider threat programs run by the 13 private industry companies interviewed are technology-centric.8 Given the lack of successful detection of insider threats in general, it's pretty clear that these technologies haven't been as effective as one would hope.

This lack of success stems from the nature of the insider threat, which is a human problem, and the fact that traditional cyber security products applied to the insider threat, such as security information and event management (SIEM) and data loss prevention (DLP) products, aren't aligned to address it. Most cyber security systems use rules engines and pattern recognition to detect activity that matches known methods of theft or misuse. But insiders have authorized access to the data and the systems being protected, making it difficult to develop rules that don't also produce numerous false positives resulting from legitimate work activity. How does one write a rule to distinguish an authorized insider's use of data for legitimate purposes from similar use of the data for something malicious? What if an organization wishes to support the use of cloud sharing services, even if such services can be used for exfiltration? Compounding the problem, insiders are often well aware of the ways in which they might be monitored and have proven adept at avoiding attention.
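
To see why such rules misfire, consider a minimal sketch in Python (the event fields, threshold, and destinations are hypothetical; none of this is drawn from any particular SIEM or DLP product) of a static "large external upload" rule. Because the rule keys only on volume and destination, a legitimate bulk transfer to a cloud share that simply hasn't been whitelisted yet looks exactly like a malicious exfiltration of the same size:

    # A naive, static exfiltration rule of the kind described above.
    # The event schema and the ~500 MB threshold are illustrative assumptions.
    UPLOAD_THRESHOLD_BYTES = 500 * 1024 * 1024
    APPROVED_DESTINATIONS = {"sharepoint.corp.example.com"}

    def rule_flags_event(event: dict) -> bool:
        """Return True if the event matches the 'large external upload' rule."""
        return (
            event["action"] == "upload"
            and event["bytes"] > UPLOAD_THRESHOLD_BYTES
            and event["destination"] not in APPROVED_DESTINATIONS
        )

    events = [
        # Legitimate work: an engineer syncs a build to a sanctioned cloud
        # share that isn't on the approved list yet -> false positive.
        {"user": "engineer1", "action": "upload", "bytes": 900_000_000,
         "destination": "drive.cloud.example.com"},
        # Malicious exfiltration of the same shape -> true positive.
        {"user": "insider1", "action": "upload", "bytes": 850_000_000,
         "destination": "personal-host.example.net"},
    ]

    for e in events:
        if rule_flags_event(e):
            print(f"ALERT: {e['user']} -> {e['destination']} ({e['bytes']} bytes)")

The rule fires on both events, and the only way to silence the false positive is to keep widening the approved list, which is exactly the maintenance spiral that makes rule-based detection of authorized users so brittle.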

As big data has spawned advances in data analytics and machine-learning techniques, organizations have attempted to apply these more sophisticated approaches to defending against insider threats. Network flow analysis tools use statistical methods to establish baselines and identify anomalies in the network traffic data. The problem is that networks, and especially the people using them, generate too many anomalies for this to be a practical means of identifying insider threats. Others have tried to employ predictive modeling to discern an insider threat from a non-threat. Predictive models, like random forests and support vector machines, can yield impressive results in domains where the algorithms can be trained with large data sets containing numerous examples of each possible outcome (e.g., a data set with tens of thousands of labeled examples of spam and normal email). But even a large organization will have only a few historical cases of insider threat incidents -- not nearly enough to train a viable predictive model.
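
As a rough illustration of the noise problem, the following sketch (Python, with invented per-day activity counts) applies a common two-standard-deviation anomaly threshold to one user's daily document pulls. The only days it flags are legitimate bursts of work, while a patient insider who stays within normal bounds never trips the threshold at all:

    import statistics

    # Hypothetical daily counts of documents one user pulled from a repository.
    # Days 13 and 19 are legitimate spikes (a deadline crunch, a new project),
    # yet they are exactly what the baseline flags; a careful insider taking a
    # few extra documents per day would never stand out.
    daily_docs = [12, 9, 14, 11, 10, 13, 8, 12, 15, 9,
                  11, 10, 44, 12, 13, 9, 10, 11, 38, 12]

    mean = statistics.mean(daily_docs)
    stdev = statistics.stdev(daily_docs)

    for day, count in enumerate(daily_docs, start=1):
        z = (count - mean) / stdev
        if abs(z) > 2.0:  # the usual "two sigma" anomaly threshold
            print(f"day {day}: {count} documents (z = {z:.1f}) flagged as anomalous")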

INSIDER THREAT PROGRAMS: TRAINING, POLICIES, AND ASSURANCES

Given the difficulty that technologies have historically had in addressing the insider threat, businesses have established a number of procedural mitigations. Some examples include:

  • Background checks. As part of the hiring process, high-risk indicators -- such as a criminal record -- can be discovered early to avoid potential problems.

  • Confidentiality agreements. Having employees sign legal documents can serve to educate them on the policies of the organization and reinforce the consequences of IP theft.

  • Awareness training and banners. Regular reminders reinforce expectations for the proper handling of sensitive data.

  • Compartmentalization. Operating on a need-to-know basis limits the amount of sensitive data exposed to insiders and thus the damages that could be caused by an insider threat.

  • Reporting suspicious activity. Organizations should establish the idea that security is everyone's responsibility, and everyone should remain diligent. If something is out of place, or someone is doing something suspicious, it should be reported to management or HR.

  • Restricting the use of technology. New technologies such as cloud storage, mobile devices, social media platforms, and removable storage have sprung up everywhere to make us more productive and connected at the office and elsewhere. They also make it easier for a malicious employee to collect and exfiltrate sensitive data. Many organizations restrict the use of these resources to minimize the risk.

There are a number of additional controls that organizations have used in various insider threat programs. CERT, a division of the Software Engineering Institute at Carnegie Mellon University, has run a robust insider threat research program for years. Its "Common Sense Guide to Mitigating Insider Threats" publication describes several additional mitigation procedures and policies.9

As with the application of technologies for combating insider threats, these procedural controls have their limitations and noted failures, too -- a topic often discussed in the trade secret legal community. A notable example of this is the recent incident at the pharmaceutical company Eli Lilly. Guoqing Cao, an American citizen living near Indianapolis, was a research scientist at Eli Lilly for seven years. In October 2012, he was arrested on charges of theft of trade secrets.10 Cao, along with a codefendant, was accused of emailing numerous confidential documents to a contact at a foreign competitor. A significant aspect of the case is that Eli Lilly had a number of procedural controls in place to mitigate insider threats. Specifically, the company "limited access through security cards, required employee confidentiality agreements, restricted access to Lilly confidential information on a need-to-know basis, limited access to computer networks, and utilized data security banners and policies."11 Despite these and many other policy controls, the codefendants were able to plan their crime and execute the collection and exfiltration of confidential documents. It's clear that neither traditional cyber security technology nor procedural controls have adequately addressed insider threats.

NEEDS ARE DRIVING NEW TECHNOLOGIES

Even if the NSA incident wasn't a game changer for private industry, it certainly awakened technology companies to the preexisting concerns of business leaders and their need for better solutions. The market is beginning to see a number of new products dedicated to detecting and mitigating insider threats. Some products focus on monitoring and recording all insider activity -- including keystrokes, social media posts, and use of removable media -- thereby acting as a virtual security camera. They are similar in purpose to the camera behind the counter of a convenience store; they are there to record evidence of a crime when it occurs. Most new products, however, are reengaging with the problem through some form of data analytics technology. Hoping to overcome the lackluster results of prior attempts, the new products try to address the issue of noisy data in different ways. For instance, one new approach takes the form of a data analytics platform or toolkit. Analysts treat the various sources of security and network information like big data, employing data analytics techniques to explore and detect patterns that might indicate a threat. Here it's the analyst drawing conclusions about possible threats, using a more investigative process of asking questions of and getting answers from the data, as opposed to previous approaches that relied on algorithms to draw the conclusions.

Another group of emerging data analytics-based technologies is significantly improving the detection rate of insider threats by focusing on how each individual in an organization uses the computer network. These systems capture network transmissions and events each individual generates on the network and use them to establish a behavior profile for the individual. Analytics are then used to identify the individual's behavior patterns, such as how often he typically accesses a document repository, which computers or devices he uses to access the network, and which websites he regularly visits.
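
One plausible shape for such a profile is sketched below in Python; the event schema and field names are invented for illustration, and commercial products build far richer models, but the per-individual aggregation step is conceptually similar:

    from collections import Counter, defaultdict

    def build_profiles(events):
        """Aggregate raw network events into a simple per-user behavior profile.

        Each profile counts how often the user performs each type of activity
        and records which hosts the user works from. The event schema here is
        an illustrative assumption, not any product's actual format.
        """
        profiles = defaultdict(lambda: {"actions": Counter(), "hosts": set()})
        for e in events:
            p = profiles[e["user"]]
            p["actions"][e["action"]] += 1
            p["hosts"].add(e["host"])
        return profiles

    events = [
        {"user": "alice", "action": "repo_read", "host": "ws-014"},
        {"user": "alice", "action": "repo_read", "host": "ws-014"},
        {"user": "alice", "action": "web_visit", "host": "ws-014"},
        {"user": "bob",   "action": "repo_read", "host": "ws-221"},
        {"user": "bob",   "action": "usb_write", "host": "lab-server-3"},
    ]

    for user, profile in build_profiles(events).items():
        print(user, dict(profile["actions"]), sorted(profile["hosts"]))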

If an effective behavior profile can be defined, one that is sensitive enough to pick up on subtle changes that indicate a threat but robust enough to allow for the variances in everyday work activity, then this method could provide a powerful means of observation and detection. For instance, one mechanism could compare current behavior profiles to their historical baselines in order to detect behavior changes, which often occur when an insider makes the decision to commit a crime. Meanwhile, a second mechanism could evaluate one behavior profile against other behavior profiles in a cohort group. This latter control can detect when one person on a team is quietly concealing an activity while otherwise performing her job responsibilities, even if that activity is part of her historical baseline (an important factor if, say, the behavior monitoring capability was put in place after the malicious activity had begun).
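
The two mechanisms can be sketched as follows (Python, with invented weekly counts). The first compares a user's current activity against her own historical baseline; the second compares her against the rest of her team. A real system would track many features and use more robust statistics; the sketch only shows the two directions of comparison:

    import statistics

    def deviates_from_history(history, current, k=3.0):
        """Mechanism 1: flag a change from the user's own historical baseline."""
        mean, stdev = statistics.mean(history), statistics.stdev(history)
        return abs(current - mean) > k * stdev

    def deviates_from_peers(peer_values, value, k=3.0):
        """Mechanism 2: flag an outlier relative to the user's cohort group."""
        mean, stdev = statistics.mean(peer_values), statistics.stdev(peer_values)
        return abs(value - mean) > k * stdev

    # Weekly counts of sensitive-repository reads (illustrative numbers only).
    alice_history = [22, 25, 19, 24, 21, 23, 20, 22]   # her own past weeks
    alice_this_week = 64                                # a sharp change
    teammates_this_week = [18, 25, 21, 23, 20]          # the rest of her team

    print("change vs. own baseline:", deviates_from_history(alice_history, alice_this_week))
    print("outlier vs. cohort:     ", deviates_from_peers(teammates_this_week, alice_this_week))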

The data sources for behavior-based technologies are varied. Some use security events collected by SIEM products, while others use captured network traffic, text data mined from social feeds, and even manually keyed-in events documented by HR in personnel files. Given the sensitive nature of such data, users of these technologies will have to weigh their inherent invasiveness against the need to protect the organization's own data.

BUSINESSES NEED TO ACT ON INSIDER THREAT VULNERABILITIES

Addressing the insider threat is a difficult task for many reasons:

  • It's never pleasant to think about one of your own using the trust he has been given against the organization.

  • Past technologies and policy controls have had a spotty track record.

  • Privacy implications and the response of employees to monitoring technologies have to be taken into account.

  • Unlike investigating an alert arising from a possible external attack, a false positive on an employee amounts to accusing her falsely. This can have personal and professional consequences for the employee and HR consequences for the employer.

Nevertheless, failure to address insider threats can have devastating consequences for a business and all those that depend on it: employees, customers, suppliers, and others. Focusing only on external threats still leaves the most vulnerable part of the network open to attack, so a comprehensive cyber security plan needs to mitigate internal threats. The plan should include:

  • Policy controls. At a minimum, everyone should be under nondisclosure agreements, and expectations should be clearly communicated for the proper handling of sensitive information. Other controls may be included to some effect, although they can have drawbacks. For instance, compartmentalization may reduce the exposure of confidential data, but it also limits the exchange of ideas and information across teams. Overemphasis on reporting policies can be detrimental to trust and the culture in the office. And restricting the use of personal devices, personal email access, and social media in the office doesn't play well with younger members of the workforce, who have integrated these technologies into their personal and professional lives. Industry regulations will also play a big role in which policy controls are required. Each organization must assess its requirements and weigh the benefits and drawbacks of various policy controls for the environment.
  • Monitoring technology. Some form of comprehensive monitoring capability should be deployed -- one that isn't rules- or signature-based, is able to apply some level of machine learning, and can detect behavior changes and outliers. If the network is a tightly controlled environment with policies against removable media, outside email, and cloud-based personal storage, rules can be an extra layer to detect some policy violations. Forensics is also important for a monitoring technology. When an incident occurs, it is important to be able to collect evidence and determine what confidential information or systems may have been affected.
  • Incident response plans. Insider incidents will occur, and it is important to plan in advance how the organization will respond to them. Incident response plans should be developed with executive direction and support, as well as input from the IT, legal, and HR departments.

With these angles covered, organizations can help secure the most vulnerable part of the network -- the part inside the perimeter.

ENDNOTES

1 "2013 Cost of Cyber Crime Study: United States." Ponemon Institute, October 2003.

2 Trade Secrets Institute, Brooklyn Law School.

3 "2014 Data Breach Investigations Report." Verizon Enterprise Solutions, 2013.

4 Ponemon Institute (see 1).

5 "Breach Level Index: First Quarter Recap." SafeNet, 2014.

6 "Suburban Chicago Woman Sentenced to Four Years in Prison for Stealing Motorola Trade Secrets Before Boarding Plane to China." Press release, US Federal Bureau of Investigation, 29 August 2012.

7 Cummings, Adam, et al. "Insider Threat Study: Illicit Cyber Activity Involving Fraud in the US Financial Services Sector." Carnegie Mellon University, July 2012.

8 "A Preliminary Examination of Insider Threat Programs in the US Private Sector." Intelligence and National Security Alliance, 2013.

9 Silowash, George, et al. "Common Sense Guide to Mitigating Insider Threats, 4th Edition." Carnegie Mellon University, December 2012.

10 Beyer, Justin K. "Two Former Eli Lilly Scientists Accused of Stealing $55 Million in Trade Secrets on Behalf of Chinese Pharmaceutical Company in Southern District of Indiana Indictment." Trading Secrets, 28 October 2013.

11 Beyer (see 10).

 

About The Author
Chris Kauffman
Chris Kauffman is founder and CEO of Personam, Inc., a technology company that provides insider threat detection products and services. He has 20 years of experience in software product development, specializing in data analytics, machine learning, and the domain of the insider threat problem. Mr. Kauffman was formerly a Managing Partner at Sphere of Influence, where he directed a diverse R&D team of software developers and scientists…