
Is AI Impacting Cybersecurity Risks?

Chaz Hager, April 12, 2024

How Cybercriminals Are Using AI to Attack Harder

As technology keeps expanding, cybersecurity keeps growing more complex, and AI is adding to that complexity. Without a doubt, AI is already increasing cybersecurity risks, and has been for several years.

Sometime last year, one of our customers was hit hard by a cyberattack enhanced by artificial intelligence (AI). This customer is international, with teams and equipment spread across several continents, and attackers gained access to their systems in a country where they were still running what we call a “legacy” system.

When we say they were hit hard, here’s what we mean: 

  • They had to send every employee home; nobody could log on and work. For weeks, they had no functional systems and no way to even log in and access company resources. Employees’ personal lives were impacted, as they were pulled in to manage the crisis.

  • Our customer first reported the attack to us in the middle of the month, and after six weeks the crisis finally started to settle down a bit.

  • Nothing could be remediated in the short term. Instead, it took a massive effort not only to get their business back up and running, but also to ensure the attack wasn’t still active and present somewhere. Multiple components were impacted, and we had to be certain this attack wasn’t just the first wave of a multi-phase attack.

Cyberattacks, even unaided by AI, can be devastating to businesses, and have been. So what is it about AI that makes them even trickier and more dangerous?

Different Types of AI Cybercrime 

There are—as was even referenced in a recent episode of the show Billions—two ways for cybercriminals to attack: they can attack a system, or they can attack people. Sometimes they may attack people to gain access to a system, but these are the two essential vectors of a cybercriminal attack. 

Systems Attack:

When cybercriminals attack your systems, they’re going after vulnerabilities within the system—often known ones. One of the most common ways companies get hacked this way is by not running software updates. Software companies release updates for multiple reasons, but often they include patches and fixes for different bugs, errors, and vulnerabilities in their code. When you don’t run these updates, or don’t run them in a timely fashion, it’s easy for cybercriminals to seek out those openings and attack. 

Attack through People:

The official term for these types of attacks is social engineering: cybercriminals target people through phishing, baiting, and similar tactics to trick and manipulate them into giving up critical information, access, or even money. As we mentioned above, attackers can also manipulate employees into unknowingly granting them entry into your systems, so they can then find a vulnerability that’s further downstream.

The simplest way to think about AI is as a general-purpose tool that can be applied to just about any task. At its core, artificial intelligence refers to “a machine’s ability to combine computers, datasets, and sets of instructions to perform tasks that usually require human intelligence, such as reasoning, learning, decision-making, and problem-solving.”

That means that while there are a multitude of exciting possibilities for AI in business, including in cybersecurity, cybercriminals are excited about its uses too. Cybercriminals can now use AI to make their attacks faster, more efficient, and deeper, just like what happened to our customer. They’re using AI to:

  • Enhance attacks: by making it harder for cyber-defenses such as spam filters and antivirus software to detect a threat.

  • Create better manipulations: AI can be used to create even more realistic impersonations and fake data that can confuse and trick employees.

  • Automate and scale attacks: Just as you may be excited at AI’s potential to automate simple tasks and help you go faster at scale, so are cybercriminals. Cybercriminals can use AI to run and automate very large attacks with little to no extra effort. 

Some more detailed examples of the types of attacks that are possible include: 

Deepfakes:

The term “deepfake” is a combination of “deep learning” and “fake,” and deepfakes are one way AI can be used to create better manipulations and impersonations for social engineering attacks. For example, the first documented case of AI-enabled cybercrime (all the way back in 2019) involved cybercriminals creating a voice deepfake that led the CEO of a UK-based energy firm to believe he was on the phone with his boss, the chief executive of the firm’s German parent company, and tricked him into transferring €220,000 (the equivalent of approximately $243,000 at the time). Part of what made the phone call so convincing, as shared by the firm’s insurance company, was that the fake voice not only captured the German CEO’s subtle accent, but also “carried his ‘melody’ in his manner of speaking.”

Business Email Compromise (BEC):

Business email compromise is one of the most common phishing attacks out there. It involves cybercriminals sending emails that impersonate an executive or another individual at a company, with the goal of deceiving employees into making unauthorized transactions, such as transferring large sums of money, or into disclosing sensitive and valuable information. With AI, cybercriminals can analyze communication patterns and, as with deepfakes, craft more convincing fraudulent emails.

Password Cracking and Other Types of Hacking:

A key part of AI’s power is its ability to consume and rapidly analyze enormous amounts of data. Cybercriminals can use AI and machine learning to build faster, better algorithms that analyze large datasets of passwords and generate more likely password variations, making them better at cracking employee passwords. AI algorithms can also help them scan for software and system vulnerabilities and develop adaptive malware.

Advanced Persistent Threats (APTs):

As we mentioned in our opening, one of the biggest threats in our customer’s breach was the possibility that the attack they experienced was still ongoing, undetected deep within their systems. This is what’s known as an advanced persistent threat, or APT. With AI, these become even scarier, as AI algorithms enable criminals to adapt their attack tactics once they’ve already breached your system, continue to evade detection, and extract sensitive, valuable data over long periods of time.


How to Fight Back Against AI-Enabled Cybercrime 

The good news is, just by reading this blog, you’re already engaging with the critical first step: awareness. AI is real and already used more than we may want to admit; after all, the first reported deepfake attack happened back in 2019. No matter where you fall on the spectrum of personal or business adoption of AI, to robustly defend against cybercrime in 2024 and beyond, you must be aware of the AI factor.

You must also understand the impact AI-enabled cybercrime can have on your business. Nobody is immune, either; while the headlines primarily cover major attacks on large companies, one in every five cyberattack victims is a small or medium-sized business (SMB). And according to the National Cyber Security Alliance, 60% of SMBs close their business operations within six months of a cyber incident.

Whether it’s the loss of a single large sum of money or the complete, multi-week shutdown of your business, cybercriminals aim to hurt when they attack. And beyond the direct hits, cyberattacks also damage your relationships with your customers, vendors, and partners, especially if it’s their data or sensitive information that’s breached. It’s critical to your reputation and relationships alike that you do everything you can, and everything you’re supposed to be doing, to secure your defenses against AI-enabled cybercrime. Cyberattacks do happen, and they can be forgiven if you are appropriately prepared and respond effectively. But if you’re not? Your customers will think twice about continuing to do business with you.

Lastly, when it comes to AI, you must fight fire with fire: the cybersecurity industry is already adapting to use AI itself, and this should be a critical piece of your cybersecurity defense plan.

Arm Yourself with the Essential Steps for Defense 

Cybersecurity isn’t something you want to take on yourself, especially now that AI is involved. To help you understand the essential pieces of an effective cybersecurity strategy, we’ve prepared a Cybersecurity Checklist to Safeguard Against AI, so you can move from awareness to action.

CHECKLIST: 6 Essential Steps to Managing AI Cybersecurity Risks

Download Your Checklist Now
