
Threat Hunting: Is Your Security Operation Ready to Launch Such a Program?

Feb 24, 2020

As published in the November/December edition of InfoSecurity Professional Magazine.

It could be a blended attack as slick as a multichannel marketing campaign. Or a spontaneous crime of opportunity by a single disgruntled employee. It could even be an innocent configuration error. When a threat exists, there will be indicators. The perennial challenge is to hunt for signs in the right places and to isolate the signal from the noise. How best to find—and remove, where possible—such threats remains up for debate.

Lance Cottrell, chief scientist at Ntrepid, approaches threat hunting less as a specific set of techniques than as a set of high-level goals. “From the 50,000-foot view, we’re trying to understand the threat landscape,” he says. “Writ large, you are trying to figure out what the things are that are coming after you.”

The breadth of that mandate can make it difficult to define a threat hunting practice, or even to draw bright lines around where it borders with other security measures. For example, a specific threat identified through threat hunting may be investigated using existing general processes for incident analysis.

SEASONING THE ATTACK SURFACE

Threat hunting relies on both active and passive measures. Honeypot machines that no other system will ever legitimately connect to can be set up inside the firewall. This inward-looking measure can provide 100% confidence that every connection attempt is nefarious.

Another pre-positioning measure is salting production databases with false data to mark provenance. Because outsiders can hardly discern it as illegitimate, watermarked data found in the wild tells administrators that a specific data store in their environment has been breached.

Using such deception to detect wrongdoing has a much longer history than IT does. This salting practice harks back to the fake “trap streets” inserted into maps so their creators could detect plagiarism of their work. —Matt Gillespie
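The sidebar's inward-looking honeypot idea reduces to very little code. The sketch below is a minimal illustration, not a production tool; the port number and the print-as-alert behavior are assumptions for the example, and a real deployment would forward events to a SIEM.

```python
import socket
import datetime

# A minimal internal honeypot listener: no legitimate system should ever
# connect here, so every connection attempt is a high-confidence alert.
HONEYPOT_PORT = 2222  # illustrative; any otherwise-unused internal port works

def run_honeypot(port: int = HONEYPOT_PORT) -> None:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind(("0.0.0.0", port))
        server.listen()
        while True:
            conn, (src_ip, src_port) = server.accept()
            conn.close()  # no banner, no service: just record the attempt
            # Printing stands in for shipping the event to a SIEM.
            print(f"{datetime.datetime.utcnow().isoformat()}Z "
                  f"ALERT honeypot connection from {src_ip}:{src_port}")

if __name__ == "__main__":
    run_honeypot()
```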

Likewise, threat hunting inputs run the gamut—from eavesdropped conversations among criminal gangs to analyses of server logs and user behavior. Some threats are malicious, while others are not. An organization’s concept of threat hunting should encompass this whole scope, even if its coverage is limited.

SETTING UP A THREAT HUNTING PRACTICE STARTS FROM THE TOP

Launching a formal program can be daunting. Even finding the right people to staff the practice is difficult, because of the breadth of skills involved.

“I think it requires quite a team effort, and I don’t think you’re going to find a unicorn that can handle the full gamut of what needs to be done in a threat hunting program,” says Tom Gorup, vice president of security and support operations at Alert Logic.

The skills required range from server and network administration to data science, so aligning the organization toward threat hunting needs to come from upper management, enabled from the top down. “Before you even think about hiring a threat hunter, you need to get your culture in check. Once you do that, it opens a lot of doors, and then it’s about investment in time and tools,” Gorup suggests.

“If I were a CISO and the long-term strategy was to get threat hunting in place,” he continues, “I would want to be sure that all our basics were in place first. We’re able to centralize data, we have a good incident analysis process, we’re able to access information quickly and easily.”

As an open-ended, data-driven activity, threat hunting depends on access to information and on collection methods designed with machine readability in mind, with characteristics such as key-value pairs and consistently parseable formats. Data silos must be broken down so that threat hunters can draw on the information they need.
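As a hedged illustration of what machine-readable collection can look like, the sketch below emits log records as JSON key-value pairs rather than free text; the logger name and field names are invented for the example.

```python
import json
import logging
import datetime

class JsonFormatter(logging.Formatter):
    """Emit each record as one JSON object so a hunting pipeline can parse it reliably."""
    def format(self, record: logging.LogRecord) -> str:
        payload = {
            "ts": datetime.datetime.utcnow().isoformat() + "Z",
            "level": record.levelname,
            "event": record.getMessage(),
        }
        payload.update(getattr(record, "fields", {}))  # arbitrary key-value pairs
        return json.dumps(payload)

logger = logging.getLogger("auth")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)

# Key-value fields instead of prose: trivial for downstream tools to query.
logger.info("login_failed", extra={"fields": {"user": "jdoe", "src_ip": "10.0.0.8"}})
```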

Data access also speaks to the cultural component of the process. For instance, a developing investigation might need access to specific log reports. The wealth of information they contain—from failed logins and lockouts to unusual data movement—can make them invaluable. Having the CEO and CISO sign off on the threat-hunting initiative can be the difference between threat hunters meeting with resistance versus cooperation when trying to get internal information.

In a world of limited resources, executive buy-in is critical to make threat hunting efficient enough to be sustainable. To extend that efficiency, it is also critical to operationalize the spoils of threat hunting so that teams can free themselves up to focus on novel issues.

Aamir Lakhani, a security strategist and researcher at Fortinet, identifies that requirement as a best practice. “The job of the threat hunter is really to get as many things off their plates as they can and make it as automated and scripted as possible,” he says. In a job that requires looking at many places simultaneously, that efficiency is essential.
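To make that concrete, a recurring hunt can often be collapsed into a script that runs on a schedule. The sketch below sweeps a log file for known-bad indicators; the file names and the simple substring-matching approach are assumptions for illustration, not anything the interviewees prescribe.

```python
from pathlib import Path

# Hypothetical inputs: a flat log file and a list of known-bad indicators
# (IPs, domains, hashes) gathered from earlier hunts or threat intel feeds.
IOC_FILE = Path("iocs.txt")
LOG_FILE = Path("proxy.log")

def sweep(log_path: Path, ioc_path: Path) -> list[str]:
    """Return every log line containing a known indicator of compromise."""
    iocs = {line.strip() for line in ioc_path.read_text().splitlines() if line.strip()}
    return [line for line in log_path.read_text().splitlines()
            if any(ioc in line for ioc in iocs)]

if __name__ == "__main__":
    for hit in sweep(LOG_FILE, IOC_FILE):
        print("IOC hit:", hit)  # once scripted, the check runs without a human
```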

TARGETING INTERNAL THREATS, WHETHER MALICIOUS OR NOT

Alongside other security and IT practices, inward-facing threat hunting reveals truths that would otherwise remain hidden. A primary tool in this area is to use human intelligence gathered from human resources departments and direct observation to define typical behavior for specific user groups and to identify when users step outside those norms.
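As a toy sketch of such baselining, typical behavior can be summarized statistically and deviations flagged; the activity metric, the sample numbers and the threshold below are all invented for illustration.

```python
from statistics import mean, stdev

# Hypothetical daily after-hours logon counts for one user group.
baseline = [2, 1, 3, 2, 2, 1, 3, 2]  # historical observations
today = 11                            # the value under investigation

def is_anomalous(history: list[int], value: float, z_threshold: float = 3.0) -> bool:
    """Flag values more than z_threshold standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and abs(value - mu) / sigma > z_threshold

if is_anomalous(baseline, today):
    print("Activity deviates from this group's baseline; worth a closer look.")
```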

“We’re combining human psychology to define behavior and how that corresponds and interacts with IT,” Lakhani explains. “We had one customer’s employee where a few issues caught our eye. We saw that he wasn’t cashing paychecks, he was active on some really curious forum boards, and those things caught our attention in areas that we wouldn’t notice just doing pen testing or scans.”

It turned out that the employee had signed an offer with a competitor and was trying to steal information. The indicators didn’t paint a straight line to the threat, but they showed up as an aberration from expected behavior, which eventually led investigators to the truth.

In addition, loyal employees can innocently create internal threats. For example, employees participating on discussion boards may inadvertently give out more information than they intend to. This is especially true if others on the board can determine where they work.

Lakhani explains, “They may be a leader in that community, and they’re trying to do good answering questions on an Oracle system or an Apache system, but people can start putting together a profile on a given company.” Helping potential attackers map out internal IT systems may be the last thing on such users’ minds, but it shouldn’t be.

By providing input into user-awareness training, a threat-hunting team could remediate the threat, closing the information loop by communicating back to the end users.

OBSERVING THREATS IN THEIR NATIVE HABITATS

Hunting outside the company for cyber threats is boundless in scope. Understanding the likely sources of threats and developing ways to monitor them can be an elaborate challenge in itself. For example, a defense contractor might study the priorities of foreign national research institutions in unfriendly countries. That information could suggest potential areas of interest where the country might level cyberattacks.

A threat hunter may also elect to participate directly in the forums and marketplaces frequented by threat actors of interest. That requires building a trusted false identity, which is a complicated thing to do.

“They need to make friends with the right people, demonstrate the right competencies and knowledge and speak the right language, with the right kind of slang and the right behaviors,” Cottrell says. “And technologically they need to look right; they can’t be using their office-issued Windows desktop.”

Once accepted into that community, threat hunters have access to conversations ranging from emerging malware to the specific targets and data being sought. In addition, looking at what’s offered in a marketplace can reveal indicators of compromise, such as a customer list, credit card numbers or passwords that indicate a breach.
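As a rough sketch of that last point, a surfaced dump can be compared against internal records to gauge whether it is yours. All data below is invented, and real comparisons would typically use hashed values rather than plaintext.

```python
# Hypothetical: a sample of records from a marketplace dump, and a set of
# internal customer emails.
dump_sample = {"alice@example.com", "mallory@evil.test", "bob@example.com"}
our_customers = {"alice@example.com", "bob@example.com", "carol@example.com"}

overlap = dump_sample & our_customers
if overlap:
    # Substantial overlap suggests the dump may come from a breach of our own
    # data store; salted honeytoken records would make the call conclusive.
    print(f"{len(overlap)} of our records found in dump: {sorted(overlap)}")
```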

Most insights that turn up in external threat hunting are ambiguous, and many are valueless to a specific organization. Cottrell notes, “The advantage of an inward-looking approach is that all of the information you find is going to be relevant to your organization. … If you are trying to hang out in hacker forums to look for threats, the vast majority of what you’re going to learn is probably not important to you.”

ANALYZING THE THREAT LANDSCAPE

Sometimes threats are unambiguous, such as a confirmed case of your purloined data on offer in a cyber souk or discussion of an upcoming DDoS attack. More often, they are detected in subtle patterns of events or behaviors, as with the example of the malicious employee digging up dirt for a competitor.

That example also reveals how broad the scope of information needed can be and how vague the indicators. Gorup remarks, “You’re dealing with a lot of ambiguity … because you’re often dealing with an alert from your SIEM [security information and event management software] that doesn’t have a full picture for one reason or another.”

He cites the case of a large company that missed a pattern of such alerts. “They received [large numbers of alerts] from their endpoint solution that they marked as ambiguous, and if they were looking at their data more in the aggregate, they would [have seen] an increase in these unknown-type alerts.”
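In the aggregate, that kind of pattern is easy to surface. The sketch below trends “unknown” alert counts by day and flags a sustained rise; the alert data is invented for illustration.

```python
from collections import Counter

# Hypothetical stream of (day, alert_type) pairs from an endpoint solution.
alerts = [
    ("mon", "known_malware"), ("mon", "unknown"),
    ("tue", "unknown"), ("tue", "unknown"),
    ("wed", "unknown"), ("wed", "unknown"), ("wed", "unknown"),
]

per_day = Counter(day for day, kind in alerts if kind == "unknown")
days = ["mon", "tue", "wed"]
counts = [per_day[d] for d in days]

# Any single "unknown" alert looks ambiguous; the day-over-day rise is the signal.
if all(a < b for a, b in zip(counts, counts[1:])):
    print("Unknown-type alerts are trending up:", dict(zip(days, counts)))
```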

That search for patterns brings data analytics to the fore, and visualization tools play a valuable role. Visualization can also be used to create playbooks that describe the patterns of notifications seen in specific incident types, presenting that data in a way that’s easy to consume.

In the analysis of future ambiguous events, those playbook records can be compared against emerging sets of notifications to help diagnose threats. “Data science plays a big part in that, because we want to be able to understand what’s abnormal when we’ve applied it against these particular use cases,” Gorup explains.
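One hedged way to compare an emerging set of notifications against playbook records is to treat each as a vector of notification counts and score their similarity; the notification types and playbooks below are hypothetical.

```python
import math

def cosine(a: dict[str, int], b: dict[str, int]) -> float:
    """Cosine similarity between two sparse count vectors."""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical playbooks: typical notification mixes for known incident types.
playbooks = {
    "credential_stuffing": {"failed_login": 50, "lockout": 10, "new_ip": 5},
    "data_exfiltration":   {"large_upload": 20, "odd_hours": 8, "new_ip": 2},
}

emerging = {"failed_login": 35, "lockout": 6, "new_ip": 4}  # current notifications

best = max(playbooks, key=lambda name: cosine(playbooks[name], emerging))
print("Closest playbook:", best, f"({cosine(playbooks[best], emerging):.2f})")
```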

THE POTENTIAL FOR AI TO DEVELOP INSIGHTS

The emerging role of artificial intelligence (AI) stretches the boundaries of what’s possible with modeling and statistical methods. Detecting patterns and anomalies in the context of threat hunting is broadly similar to the use of AI by mainstream antivirus solutions. Indeed, malware detection based on files’ behaviors has become more capable in recent years, as detection models have become more sophisticated.

On the other hand, Lakhani suggests a judicious perspective on the outer limits of present technology. “If you’re tracking expenses and expense behavior, the right machine learning models can definitely say, ‘Hey, this type of expense is very odd for this user.’”
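A minimal sketch of the kind of model Lakhani describes, assuming scikit-learn’s IsolationForest and invented expense features (the features, data and review step are all assumptions for illustration):

```python
from sklearn.ensemble import IsolationForest

# Hypothetical per-user expense features: [amount, days_since_last_expense].
history = [[42, 7], [55, 6], [38, 8], [60, 7], [45, 9], [50, 6], [41, 7], [58, 8]]
new_expense = [[900, 1]]  # the kind of outlier Lakhani describes

model = IsolationForest(contamination="auto", random_state=0).fit(history)
if model.predict(new_expense)[0] == -1:  # -1 means "anomalous" in scikit-learn
    print("This expense is very odd for this user; route it for review.")
```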

He is cautious about generalizing that success too far, though. The broad use of AI to detect patterns in alerts and behaviors, while promising, is in its infancy. “AI definitely has a broader place in the future, but it’s far from being a magic bullet … sometimes it seems like marketing teams have watched too many Terminator movies.”

On the current state of analyzing live threats using AI, Cottrell says: “You may be surprised how much of it is manual. Say you’ve infiltrated a criminal cyber souk; there aren’t tens of thousands of big data dumps per day going into these things. So, you may be wanting to follow up every time someone says they have a new big chunk of data.”

That’s a role for manual involvement and relationship building. The prospect of removing humans from their primary role in the threat-hunting kill chain is still a long way off. Security decision makers are well advised to give those hunters the authority, tools and data to make them successful.

MATT GILLESPIE is a technology writer based in Chicago. He can be found at www.linkedin.com/in/mgillespie1.