Blog

The First Line of Defense: Are Humans Doing a Good Enough Job?

May 14, 2020

As published in the March/April 2020 edition of InfoSecurity Professional Magazine

By Crystal Bedell

Humans have long been touted as the weakest link in security. But in many ways that axiom oversimplifies the issue of the human element and makes end users collectively the bad guy when, for the most part, they’re only trying to do their jobs.

Understanding why humans behave the way they do, and allowing them to inform a security strategy, can strengthen the human element so that people aren’t the weakest link but a helpful component of your security arsenal.

“We put people in front of computers, and we expect them to behave in specific ways that are in line with the functionality and operations of those systems, as well as our security requirements,” says Alex Blau, practicing behavioral scientist and vice president at ideas42, a nonprofit consultancy. “But oftentimes, people don’t behave the way security professionals would want them to, and that’s when they create vulnerabilities that allow attackers and entry points to exist.”

Understanding human nature

As any cybersecurity professional knows, you can’t apply technical controls to human behavior. By their nature, people are creative, emotional and often unpredictable. Those characteristics apply equally to end users and to the cyber criminals who leverage human behavior to advance their attacks.

“There only needs to be one way to breach your network, and it may be a human that creates that opportunity by misconfiguring a security tool, by misusing a communications system, the network, or email, or inappropriately responding to social engineering,” says Bob Hillery, chief operations officer and chief research officer for InGuardians, Inc., an information security consulting firm.

“The reason social engineering has always worked is because people want to help each other. That’s not going to change,” Hillery says. “That’s human nature, and we hire people because we want them to help each other and be innovative in how they find solutions to make things work.”

Roselle Safran, president at Rosint Labs and entrepreneur in residence at Lytical Ventures, agrees. “Attackers evolve and up their game when what they’re doing is being thwarted. To a large extent, what they’re doing with social engineering just works as it is. They don’t have to improve their capabilities,” she says.

It’s important for both end users and security professionals to understand that nothing is off limits to cyber criminals. “The real cyber attackers don’t care about rules. They don’t care about being nice. They don’t care about proper techniques,” Hillery explains.

Consider, for example, a phishing email that instructs recipients to click on a link to learn about proper procedures if there’s an active shooter in the building. In Hillery’s experience, everyone clicks on the link, but companies don’t want to use this type of content for a phishing exercise because it’s the very thing they should click on if there’s an active shooter. That wouldn’t, however, stop an attacker from using it.

“Unfortunately, people are going to fall for those types of attacks,” Safran says. “That’s going to happen if you’re relying on the end user to always get it right, and that’s why I feel the burden of making sure that doesn’t happen needs to fall on the security team to make sure that email doesn’t reach them in the first place or, if it does, the user will be stopped when they click and prevented from being able to enter their credentials.”

Safran continues: “That makes the task of the security team even more challenging, because they can’t rely on the end user to be that last line of defense. But, in my opinion, it’s unrealistic to think that an end user can be a line of defense that’s going to be infallible.”

Hillery agrees. “There’s no way to stop all the risks of the human element. Any time the human can be a single point of failure for your overall security posture, you have a design problem. And you need to make sure it’s not a single point of failure; there needs to be a second person check, a software check, something,” he says.

That said, a solution cannot be implemented in a vacuum. “No matter how advanced the technology you’re implementing, no matter how simple the processes, if people are not aware and if they are not committed individually to that security control … no matter what you do, it will fail,” says J. Eduardo Campos, president and managing partner at business consulting firm Embedded-Knowledge Inc.

Context, context, context

That’s where security awareness training can help.

“User awareness training is helpful, but it needs to go beyond the minutia of what a phishing email looks like,” Safran says. Otherwise, all too often, users assume that they personally aren’t a target. “User awareness training needs to first start with providing an understanding of why attackers are interested in their organization and why users are targets. Once people understand the context of why cyberattacks are happening to their organization and potentially to them personally, then it’s easier to go to the next step of identifying when those attacks are coming in.”

Security professionals can also benefit from having a better understanding of context.

“Human behavior is motivated by context, so rarely will you find that decision-making happens on a lone island. The context you put someone in will dictate the behavior they exhibit. The technical controls, policies, anything that’s visible to them will be most important to how they actually operate. If you take that lens, it opens up a lot of opportunity,” Blau explains.

Security awareness training might make users smarter, but “a deeper diagnosis about behavior and the context in the environment that’s increasing or decreasing that behavior will be the lever you need to pay attention to,” he says. “That’s where a policy can be changed, or you literally need to write something down so that people will be more attentive to it.”

Hillery also stresses the importance of context in terms of security policies. “Across my careers, one of the challenges has always been writing a policy that does what you want it to—and that users can actually follow. We often write policies so constrictively that they impede work, and many times people are breaking policy either because they don’t understand the security implication, or they had to break it in order to actually get their work done.

“Policies must be functional,” he continues. “The people who write the policies are often not the ones doing the work. The people doing the work need to have a say so that they understand what the policy says, and the policy writers need to understand what’s doable.”

In fact, security policies are only effective if they take into account the end user. “Engage your stakeholders. Talk to people. Listen to their needs. Listen to what moves them. And then, design a solution that speaks to their minds and hearts,” Campos advises. “The solution is not yours as a security provider. If the program fails when you leave, it’s because it’s not theirs. It was yours. The first thing you do is come at the problem from the end user’s perspective. Make whatever you’re implementing—the security policy or data transfer policy—their solution. And then find champions, people who can advocate for you without you being in the room.”

Where techies tend to fail

Departmental champions are important, but they don’t negate the need for support from “the top.”

“It’s critical that there is buy-in and support from leadership in order for the security program to be as effective as it can be. What happens at the top cascades down. If senior leadership is not paying attention to cybersecurity, that will be reflected in the organization’s policies. If the organization’s policies are not weaving in cybersecurity, then the security team has very little ability to enforce what needs to be enforced in order to have a secure posture,” Safran says.

“That’s where techies fail,” Campos argues. “We get excited about the technology, and we forget about the risks. At the end of the day, someone in the food chain will make a decision to invest in cybersecurity or technology or training, and that decision is based on a risk assessment. If the risk assessment is well done or not, that’s another thing. Someone has the power of approval to avoid, mitigate or ignore risk. [Cybersecurity professionals] need to work with users and senior leadership to put policies in place that will empower users to carry out their functions, deliver their goals and the company commitment, and at the same time protect the assets.”

Bottom line: “Cybersecurity is a continuous effort. We’re never going to be at the point where we can just say, ‘Alright, we have this handled. No more need to focus on cybersecurity,’ because the attackers are constantly evolving, and we have to as well,” Safran says. “The human element makes mistakes. … That’s inevitable and that has to be factored into the equation.”

CRYSTAL BEDELL is a longtime magazine contributor who lives and works in Washington state.