
Why AI Bias Is A Growing Cybersecurity Concern

Here’s what organizations can do about it: Make sure there is continuous human involvement in your artificial intelligence efforts.

Artificial intelligence has proved useful in helping humans find patterns they could not otherwise connect. By sifting through troves of data, AI can surface the necessary information in a matter of seconds, acting far faster than a person could. The problem is that AI relies on data provided by humans, and unlike humans, it cannot apply judgment to recognize when some of those details are wrong.

This phenomenon is frequently referred to as “bias.” When the data pool or the algorithm is incomplete, it can produce false positives and false negatives that skew results. With hackers becoming more sophisticated every year, there is a good chance this bias will become a growing threat to cybersecurity.

Security Threats Might Be Overlooked in the Future

Security threats can come from many directions. That said, China, Russia and India rank among the countries with the highest numbers of cybercriminals. This marks them as a “danger,” which means an AI defense system will concentrate most of its scrutiny on traffic coming from those countries.

The problem is that many countries we frequently consider low priority are slowly but surely developing a cybercrime problem. For example, Japan was previously considered a country with few cyberattacks and, thus, a low-priority one. However, in 2012, the country showed an 8.3 percent increase in cyberattacks, marking the highest point in over 15 years.

Humans know this, but AI has yet to be trained to pay attention to these emerging sources of attacks. That gap can cause malware detection systems to overlook a threat simply because it came from a place that was not initially considered a problem. Without regular updates to the database and the algorithm, this can significantly undermine an organization’s cybersecurity efforts.

Hackers Are Learning to Take Advantage

With more and more companies relying on AI systems to detect threats, hackers will likely learn to take advantage of this flaw. Many are beginning to use VPNs to conceal where they are attacking from, choosing to appear in countries with low cybercrime rates. This plays directly into the AI defense system’s bias, so a threat may not be flagged until it is too late.
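To make the mechanics concrete, here is a minimal sketch of how a geography-weighted risk score could be fooled by a VPN exit node. The COUNTRY_RISK table, the weights and the score_alert function are hypothetical illustrations, not taken from any real security product.

```python
# Hypothetical geography-weighted risk score (illustrative values only).
# The country prior reflects historical attack volume baked into training data;
# an attacker exiting through a VPN in a "low-risk" country inherits that prior.

COUNTRY_RISK = {
    "CN": 0.9, "RU": 0.9, "IN": 0.8,   # heavily scrutinized origins
    "JP": 0.2, "CH": 0.1,              # historically "low-risk" origins
}

def score_alert(signature_score: float, source_country: str) -> float:
    """Blend a payload-based score with a country prior (weights are assumptions)."""
    prior = COUNTRY_RISK.get(source_country, 0.3)
    return 0.6 * signature_score + 0.4 * prior

# The same suspicious payload, routed through different exit countries:
print(score_alert(0.55, "RU"))  # ~0.69 -> likely flagged
print(score_alert(0.55, "CH"))  # ~0.37 -> may slip under a 0.5 alert threshold
```

In this toy setup, nothing about the payload changes; only the apparent origin does, yet the verdict flips. That is the kind of blind spot a VPN-hopping attacker can exploit.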

The biggest issue here is that development teams might not even realize their system carries this type of prejudice. Should they rely only on the AI system to detect these threats, malware can slip into the network unnoticed. This is one of the main reasons why mixing AI with human intelligence is recommended: this sort of collaboration keeps the bias to a minimum.

The Increasing Risks of a False Positive

So far we have discussed how AI bias can lead to false negatives, classifying an actual threat as a non-issue. However, the opposite can also happen. AI bias can lead to false positives, flagging a problem where there is none.

This factor is particularly easy to overlook, especially now that many companies are using AI detection tools precisely to reduce false positives. That said, it can also lead to over-classification: overly broad training data can leave detection systems unable to distinguish benign messages from malicious ones. This becomes highly problematic now that social media has made slang and coded shorthand commonplace.

For instance, someone developing an AI threat detection algorithm could train it to associate slang and abbreviations with phishing. That could cause important emails to be classified as spam, leading to delays in production. When employees communicate casually through email or chat, a phishing warning might be triggered unnecessarily, sending a ticket to the cybersecurity team.
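A simplified sketch of that failure mode follows. The keyword list, weights and flag_message function are invented for illustration; real filters are far more sophisticated, but the over-weighting of casual language works the same way.

```python
# Hypothetical keyword-weighted phishing heuristic that over-penalizes slang.
# Terms and weights are illustrative assumptions, not any real filter's rules.

SUSPICIOUS_TERMS = {
    "urgent": 0.4, "verify": 0.4, "password": 0.5,   # classic phishing cues
    "asap": 0.3, "pls": 0.3, "lmk": 0.3,             # casual slang swept in during training
}

def flag_message(text: str, threshold: float = 0.5) -> bool:
    """Return True if the message would be ticketed to the security team."""
    tokens = text.lower().split()
    score = sum(SUSPICIOUS_TERMS.get(tok.strip(".,!?"), 0.0) for tok in tokens)
    return score >= threshold

# A routine internal chat message trips the filter and generates a ticket:
print(flag_message("pls send the Q3 deck asap, lmk when it is ready"))  # True (false positive)
print(flag_message("Attached is the agenda for tomorrow"))              # False
```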

This might seem like a good thing because the system is “at least detecting” something. However, these false positives draw attention away from actual threats. Because the biased AI cannot differentiate between spam and legitimate communication between teams, it places unnecessary strain on the security department. Those are exactly the moments hackers are likely to exploit to launch an attack.

Continuously Evolving Cybersecurity Landscape

Perhaps the greatest threat AI bias poses to cybersecurity is its inability to keep up with the changing dynamics of the threat landscape. As technology develops faster than ever, so do cybersecurity threats. Hackers are also becoming more ingenious with their attacks, with over 150,000 attacks occurring per hour. Some of those attacks follow a pattern, but others look for new ways to bypass security.

Training an AI model can take months, sometimes even years, before it reliably recognizes a new threat. This creates blind spots in a company’s security system, leading to breaches that the malware detection system never flags. The problem is compounded by how heavily people rely on the AI system’s ability to sift through substantial amounts of data quickly. Human error can pose a significant cybersecurity threat, but so can relying on a system that is slow to change.

AI technology is continuously evolving, especially when it comes to deep learning models. These models can be highly opaque and complex, making them challenging to interpret. Finding where the bias is rooted can therefore be very demanding, which makes it difficult to mitigate. Removing all bias is not the ideal path either, since the obvious threats are still real and should not be ignored. This is why a hybrid model of human intelligence and AI should be used: it can prevent the bias from growing out of control.
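One common way to wire humans into the loop is confidence-based triage: the model acts on its own only at the extremes, and borderline verdicts go to an analyst. The sketch below is a minimal illustration of that idea; the Detection class, the triage function and the thresholds are assumptions for this example, not a specific vendor’s API.

```python
# Hypothetical human-in-the-loop triage: the model's verdict is trusted only
# when its confidence is high; borderline cases go to a human analyst queue.

from dataclasses import dataclass

@dataclass
class Detection:
    alert_id: str
    model_score: float  # 0.0 (benign) .. 1.0 (malicious)

def triage(d: Detection, block_at: float = 0.9, dismiss_at: float = 0.2) -> str:
    if d.model_score >= block_at:
        return "auto-block"     # model is confident enough to act alone
    if d.model_score <= dismiss_at:
        return "auto-dismiss"   # clearly benign, no analyst time spent
    return "human-review"       # the gray zone where bias does its damage

for det in [Detection("a1", 0.95), Detection("a2", 0.55), Detection("a3", 0.05)]:
    print(det.alert_id, triage(det))
```

The design choice here is deliberate: the thresholds can be tuned as analysts confirm or overturn the model’s borderline calls, which is exactly the kind of continuous human involvement that keeps bias from hardening into blind spots.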

The Bottom Line

Addressing AI bias can be challenging, especially as the threat landscape is evolving on more fronts than one. However, with frequent testing, the bias can be mitigated before an attack grows out of proportion. While bias cannot be eliminated entirely, it can be kept under control with appropriate human involvement.

