Monitoring for insider threats is becoming increasingly commonplace as organizations recognize the damaging consequences insider activity can have, whether accidental or malicious. In fact, a recent report has highlighted that 74% of security professionals say that attacks are becoming more frequent, and 66% are concerned about the likelihood of inadvertent data leaks.
To combat these threats, organizations are employing methods to track user logins, file access and data transfers, and to flag anomalies that need attention. Solutions typically analyze user behavior to establish normal patterns of activity and to detect any deviation from regular routines. However, security teams have often prioritized the technical capabilities of monitoring tools over ethical considerations, unwittingly disregarding the potential for biased outcomes.
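To illustrate the general approach, here is a minimal sketch of per-user behavioral baselining; the data, field names and threshold are invented for the example, not taken from any particular monitoring product:

```python
from statistics import mean, stdev

# Hypothetical daily data-transfer volumes (MB) per user, e.g. from audit logs.
history = {
    "user_a": [120, 95, 110, 130, 105, 115, 125],
    "user_b": [20, 25, 18, 22, 30, 24, 21],
}

def is_anomalous(user: str, todays_volume: float, threshold: float = 3.0) -> bool:
    """Flag a day whose transfer volume deviates from the user's own baseline
    by more than `threshold` standard deviations (a simple z-score test)."""
    baseline = history[user]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return todays_volume != mu
    return abs(todays_volume - mu) / sigma > threshold

print(is_anomalous("user_a", 118))  # False: within this user's normal range
print(is_anomalous("user_b", 400))  # True: far outside this user's baseline
```

The important point is that the baseline is each user's own history, so deviations are judged against behavior rather than against who the user is.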
Ramifications of prejudice in tools and teams
Bias within insider threat monitoring programs is a serious issue. Its ramifications can increase overall security risk as well as have a detrimental effect on organizational culture, productivity and staff well-being.
At its most damaging, biased monitoring can lead to wrongly identifying employees as bad actors by using characteristics such as race, nationality, gender and job role. It erodes trust, creating a negative environment and fostering resentment among individuals who, justifiably, feel unfairly targeted and marginalized. This can lead to decreased morale throughout a workforce, reducing output and loyalty, and lowering employee retention. From an HR perspective, biased monitoring may expose organizations to legal action, regulatory scrutiny, reputational harm and a raft of unwanted press coverage.
Bias can also result in dangerous situations where real threats go undetected or are ignored because of misconceptions about who is most likely to be involved in malicious activities, leaving organizations exposed to incidents that should have been dealt with. The problem is exacerbated if the security tools deployed rely on inherently biased algorithms, which will perpetuate discrimination and reinforce existing stereotypes.
To effectively address bias, organizations must first understand its different forms and how they might manifest themselves.
Recognizing that bias is an issue
In short, ‘monitoring bias’ occurs when unjustified focus is placed on certain employees or groups regardless of their actual behavior when accessing corporate systems.
As a starting point, organizations should review how they are monitoring threats and whether any associated tools are perpetuating bias. There are several indicators to look out for, including placing too much weight on what are termed ‘selective behaviors’. For example, accessing applications during unconventional hours might trigger alerts if a system is pre-programmed to associate unusual work patterns with suspicious activity, ignoring agreements that permit flexible hours. Or it might flag an employee who accesses the network from different countries, misinterpreting this behavior as potentially criminal without considering legitimate reasons such as business travel, holidays or international projects.
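To make the distinction concrete, the following sketch contrasts a naive selective-behavior rule with one that accounts for legitimate context. The record fields (flexible_hours, travel_approved) are hypothetical stand-ins for HR and travel data an organization might actually hold:

```python
from dataclasses import dataclass

@dataclass
class LoginEvent:
    user: str
    hour: int              # 0-23, local time
    country: str
    home_country: str
    flexible_hours: bool   # hypothetical HR flag: flexible-hours agreement exists
    travel_approved: bool  # hypothetical flag: travel logged with HR

def naive_rule(e: LoginEvent) -> bool:
    # Biased: treats all off-hours or foreign logins as suspicious.
    return e.hour < 6 or e.hour > 22 or e.country != e.home_country

def context_aware_rule(e: LoginEvent) -> bool:
    # Only flags behavior that remains unexplained after legitimate context.
    off_hours = (e.hour < 6 or e.hour > 22) and not e.flexible_hours
    abroad = e.country != e.home_country and not e.travel_approved
    return off_hours or abroad

event = LoginEvent("user_c", hour=23, country="DE", home_country="UK",
                   flexible_hours=True, travel_approved=True)
print(naive_rule(event))          # True: flagged despite legitimate reasons
print(context_aware_rule(event))  # False: flexible hours and approved travel
```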
Another factor to consider is whether groups with particular backgrounds or demographic characteristics have been bracketed as high risk through prejudice. Certain employees may also be the victims of attribution bias, where they are monitored more closely based on an isolated incident such as a minor data breach, without looking at their overall track record. Sometimes this goes as far as unnecessary investigations and disciplinary action against innocent employees. It can create a situation where security teams are preoccupied with identifying and categorizing certain people or groups they wrongly perceive as high risk, heightening the potential for breaches from areas with less scrutiny.
By contrast, some staff members may be given too much freedom, perhaps based on their seniority or length of tenure, and allowed to engage in activity that would usually be considered very risky or not in line with company policy.
When security teams are distracted from the bigger picture, they may also rely on insider threat monitoring data to justify their actions, even though it may be inherently biased. Unfortunately, this kind of confirmation bias can continue to misinform decision-making, even after analysis has shown the underlying data to be flawed.
Why unbiased threat protection must be data-driven
Eliminating bias from insider threat detection helps improve overall cybersecurity, ensuring that focus is directed consistently at the riskiest behaviors without prejudging users. Modern threat monitoring solutions minimize bias by using a data-driven approach that establishes a baseline for normal behavior. Any deviation is highlighted for remediation.
Without revealing the identity of the user, these systems can automatically detect and mitigate threats, ensuring that employees can usually continue working without interruption. If further investigation is required, authorized IT staff can request additional data in accordance with the organization's privacy policy, so that serious threats are dealt with effectively and efficiently.
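One plausible way to implement this kind of identity-blind alerting, sketched here with an invented key, schema and helper names, is to pseudonymize user identifiers before alerts ever reach analysts:

```python
import hashlib
import hmac

# Hypothetical secret held by the privacy function, not by security analysts.
PSEUDONYM_KEY = b"rotate-me-regularly"

def pseudonymize(user_id: str) -> str:
    """Keyed hash so analysts see a stable pseudonym, not the real identity.
    Re-identification requires the key, which is held under the privacy policy."""
    return hmac.new(PSEUDONYM_KEY, user_id.encode(), hashlib.sha256).hexdigest()[:12]

def raise_alert(user_id: str, reason: str) -> dict:
    # The alert carries only the pseudonym; triage proceeds without names.
    return {"subject": pseudonymize(user_id), "reason": reason}

alert = raise_alert("jane.doe@example.com", "transfer volume far above baseline")
print(alert)
```

Re-identification then becomes a deliberate, auditable step governed by the privacy policy rather than a default.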
Unbiased monitoring based on real data ensures precise threat detection. This level of accuracy strengthens security by focusing on genuine risks while preserving the privacy and reputation of employees. Moreover, a bias-free stance promotes a fair and inclusive work environment, reinforcing trust. This, in turn, contributes to a positive company culture where individuals feel valued and respected, ultimately helping to raise morale, productivity and organizational well-being.