What Are the Risks of Artificial Intelligence in Cybersecurity and to your SAP System?
Aug 21, '23 by Joerg Schneider-Simon
Ransomware continues to be one of the top varieties of malicious software. And it’s wreaking havoc, grinding company operations to a halt as the ransomware renders mission-critical data and systems inaccessible, while also exposing companies to huge regulatory penalties.
One of the primary vehicles for ransomware is phishing attacks, where hackers trick email recipients into clicking on malicious links or opening files that contain malware.
Even scarier: cybercriminals are now using a powerful new weapon to ramp up these attacks — generative artificial intelligence tools, like ChatGPT.
AI, Ransomware, and Phishing
Artificial intelligence (AI), like many technologies past and present, is morally neutral, or “dual-use.” This means it can be used for purposes both noble and nefarious, depending on the goals of its developers. It can even be wielded against itself: An e-commerce fraud prevention company can use AI to detect certain markers that indicate fraud, while criminals can also use AI to commit fraud in the first place.
A team of 26 researchers from 14 institutions explored the dire cybersecurity implications of AI in their February 2018 report, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.”
In this report, the authors determined there are “three high-level implications of progress in AI for the threat landscape.” They predict that progress in AI will:
- Expand existing threats.
- Introduce new threats.
- Alter the typical character of threats.
One of the existing threats expected to expand? Phishing attacks.
How AI Is Expanding Beyond Phishing Attacks
AI is driving an increase in phishing’s targeted cousin: spear phishing.
Spear phishing is a variant of phishing in which victims receive an email that appears to be from a highly trusted source, like friends, coworkers, or loved ones. And whereas it’s easy to ignore an email from a stranger, a busy employee could very easily be fooled by an email ostensibly from their boss, asking them to make some changes to an attached report.
Such was the type of attack leveled at John Podesta, chairman of Hillary Clinton’s 2016 presidential campaign, in March 2016: An email that appeared to be from Google was in fact an attempt to breach his entire email account — and unfortunately, the attack was successful.
This seems to be the new landscape moving forward. In fact, as AI becomes more advanced, phishing attacks are expected to expand in both sophistication and scope.
Unlimited Possibilities for Replicating Human Behavior with AI
Truly, the capabilities of AI are breathtaking. In a 2023 human-versus-machine test, an enhanced iteration of ChatGPT outperformed the average human test taker on the comprehensive two-day bar exam by a margin of 7%.
According to Reuters, GPT-4 (the advanced AI model developed by OpenAI with support from Microsoft) achieved a remarkable score of 297 on the bar exam during an experiment led by two law professors and assisted by two professionals from the legal technology company Casetext. This achievement positions GPT-4 within the 90th percentile of human test takers, making it eligible for legal practice in the majority of states, as concluded by the researchers.
This increased sophistication means that AI can generate content (like emails) that is indistinguishable from content written by a human. As a result, these phony emails are much more likely to slip past email filters. Combine that with previously stolen contact-list data containing first names, and you have the recipe for a sophisticated phishing attack.
But the threat of AI goes well beyond email. Tools like ChatGPT emulate human interactions in chat sessions, convincing humans that they are corresponding with a human when they are actually corresponding with a faceless bot.
Another threat comes in the form of voice messages created with voice cloning. All that Microsoft's voice-cloning AI needs to simulate a speaker's voice is a three-second clip of that person talking. Then it clones them with remarkable accuracy.
AI is also expected to expand the scope of future phishing attacks by automating many of the tasks that slow down hackers working manually. The report’s researchers explain:
The most advanced spear phishing attacks require a significant amount of skilled labor, as the attacker must identify suitably high-value targets, research these targets’ social and professional networks, and then generate messages that are plausible within this context. If some of the relevant research and synthesis tasks can be automated, then more actors may be able to engage in spear phishing.
The anticipated increase in both the volume and sophistication of phishing/malware attacks will undoubtedly result in even more people falling victim to cyberattacks — including people who consider themselves otherwise technically savvy.
AI-Powered Threats to your SAP System
Because SAP is such a desirable target for cyberattacks, it stands to reason that SAP systems will face a sizeable portion of this AI-fueled increase in phishing and spear phishing.
There is a bit of good news: Ransomware delivered via phishing will typically not affect SAP, because SAP applications and their data live largely outside the regular operating system and file system. (It’s a double-edged sword, however: this same separation explains why OS-level antivirus programs can’t scan content inside SAP.)
However, that doesn’t mean that phishing can’t affect SAP in other ways. Besides getting people to click on attachments, another main goal of phishing is the theft of login credentials. As an example, you might see an email in your inbox from PayPal, saying that there’s an issue with your account. If you click the link, you’ll see a legitimate-looking login screen. Enter your email and password, and voila — cybercriminals now have access to your PayPal account (and the banking information contained therein). This same technique can easily be used to steal SAP login credentials, allowing cybercriminals to log in to SAP as legitimate users and steal data or engage in sabotage.
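A common tell in these credential-theft emails is a link whose visible text shows a trusted domain while the underlying `href` points somewhere else entirely. As a minimal sketch (the class and function names here are illustrative, not a real product's API), a mail filter could flag that mismatch using only Python's standard library:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collects (href, visible_text) pairs for every <a> tag in an HTML email body."""
    def __init__(self):
        super().__init__()
        self.links = []    # finished (href, visible_text) pairs
        self._href = None  # href of the <a> currently being parsed
        self._text = []    # text fragments seen inside the current <a>

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

def suspicious_links(html_body):
    """Return links whose visible text names one domain but whose href points to another."""
    auditor = LinkAuditor()
    auditor.feed(html_body)
    flagged = []
    for href, text in auditor.links:
        text_l = text.lower().strip()
        # Only compare when the visible text itself looks like a domain or URL.
        if " " in text_l or "." not in text_l:
            continue
        href_host = urlparse(href).hostname or ""
        text_host = urlparse(text_l if "://" in text_l else "//" + text_l).hostname or ""
        if text_host and href_host and not href_host.endswith(text_host):
            flagged.append((href, text))
    return flagged

email_body = '<p>Please verify: <a href="https://paypal.example-evil.io/login">www.paypal.com</a></p>'
print(suspicious_links(email_body))
# → [('https://paypal.example-evil.io/login', 'www.paypal.com')]
```

Real mail gateways do far more (reputation lookups, sandboxing, URL rewriting), but this illustrates why the displayed link text alone proves nothing about where a click will land.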
This was notably put into practice in 2013, when a variant of Carberp, an infamous banking trojan, was discovered to be targeting the logon client for SAP, recording critical user input. A decade later, hackers have an even more effective weapon in their arsenal — generative artificial intelligence.
According to Verizon’s DBIR 2023 report, pretexting has overtaken phishing as the most prevalent method of social engineering. Cybercriminals are discovering that ChatGPT is adept at creating realistic fake social media profiles that trick people into clicking on malicious links or persuading them to share personal information — and all with next to no input from human hackers.
Considering how mission-critical SAP is to its users and how much data is contained in the average SAP system, the thought of a successful phishing attack is enough to keep many a CIO awake at night. Their biggest weapon in this fight? Awareness. Organizations must make a point of educating all employees about spear phishing, advising them to carefully check the full email address — not just the sender’s name — before opening any attachments or clicking links. It may take extra time and effort, but it is a drop in the bucket compared to the time and effort needed to recover from a successful phishing and/or ransomware attack.
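The "check the full address, not just the display name" advice above can itself be partially automated. As a minimal sketch (assuming a hypothetical `TRUSTED_DOMAINS` allow-list for your organization), a script can parse the `From:` header and warn when a friendly display name hides an external sending address:

```python
from email.utils import parseaddr

# Hypothetical allow-list: the domains your organization actually sends from.
TRUSTED_DOMAINS = {"mycompany.com"}

def check_sender(from_header):
    """Flag a From: header whose friendly display name hides an external address."""
    display_name, address = parseaddr(from_header)
    domain = address.rsplit("@", 1)[-1].lower() if "@" in address else ""
    if domain not in TRUSTED_DOMAINS:
        return f"WARNING: '{display_name}' actually sends from @{domain}"
    return f"OK: {address}"

print(check_sender('"Jane Doe" <ceo@mycompany.com>'))
# → OK: ceo@mycompany.com
print(check_sender('"Jane Doe" <ceo.mycompany@mail-relay.example>'))
# → WARNING: 'Jane Doe' actually sends from @mail-relay.example
```

A display-name check like this catches only the crudest impersonations — `From:` headers can be forged outright, which is why layered controls such as SPF, DKIM, and DMARC exist — but it mirrors exactly the manual habit employees should build.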
Artificial intelligence is transforming the business landscape, both for good and for ill. Fortunately, AI-powered phishing attacks on SAP systems don’t have to be a part of your company’s landscape, as long as steps are taken to mitigate the risk.
Learn more about how malware can be hidden in seemingly innocent attachments by watching our webinar: “Protecting Your SAP Applications from Content-Based Cyberthreats.”