What better way to demonstrate the reality of AI – its capabilities and limitations, boundless potential and risk – than in an article about AI-powered social engineering that OpenAI “wrote.” Generated from specific input parameters entered by Rehmann’s senior manager of security solutions, Mark Spaak, the article below is an unedited product of OpenAI’s take on the subject, save for the single human-inserted reference to the very real 2023 Verizon Data Breach report.
While the validity of all other information and incidents presented within the article can’t be confirmed – they are, after all, lacking names, places, dates, and/or citations to any primary or secondary source – the article’s theme, recommendations, and call to action are (perhaps ironically) on point.
In the relentless landscape of cybersecurity, small and medium-sized businesses (SMBs) find themselves in the crosshairs of a stealthy adversary: artificial intelligence-driven social engineering. Recent high-profile incidents vividly illustrate the tangible and severe consequences of business email compromise (BEC) and wire fraud, showcasing how AI is now a weapon of choice for cybercriminals targeting the human element within organizations.
In fact, the 2023 Verizon Data Breach Report indicated that business email compromises have doubled and now account for 50 percent of all incidents within the social engineering category. Social engineering continues to be quite lucrative for threat actors, and its high success rate underscores the need for organizations to roll up their sleeves.
Consider the case of a regional manufacturing firm that fell victim to a meticulously orchestrated BEC attack. Cybercriminals, armed with AI, studied the company’s communication patterns, mimicking the CEO’s writing style and tone flawlessly. The convincing email instructed the finance department to initiate a payment of $350,000 to what appeared to be a legitimate vendor account. By the time the deception was uncovered, the funds had vanished into untraceable cryptocurrency accounts, leaving the company grappling with a substantial financial loss. Threat actors often linger quietly in a compromised environment, attempting transaction after transaction until the deception is uncovered; it is not uncommon for an organization to sustain multiple transactional losses before the scam is detected.
Similarly, a tech startup became a casualty of wire fraud when an employee unknowingly responded to an AI-crafted phishing email. The email, seemingly from a trusted client, requested an urgent change in bank account details for an upcoming transaction. Unbeknownst to the employee, the funds were rerouted to an offshore account controlled by cybercriminals. The startup incurred not only a six-figure financial hit but also endured reputational damage as clients questioned the integrity of its security measures and process controls.
These real-world examples underscore the urgency not only for SMBs but all organizations to fortify their defenses against social engineering. Technological solutions, such as advanced email security platforms leveraging AI for anomaly detection, are critical.
Equally important are organizational internal controls, including regular monthly phishing simulations and security awareness training for all staff to sharpen their discernment, and requiring multi-factor authentication on all email accounts. In procedural terms, it also means instituting and consistently following rigorous verification processes: confirming any requested change to financial transactions or accounts through a separate communication channel and engaging trusted individuals in the validation process.
In navigating the evolving landscape of cyber threats, the integration of AI into social engineering tactics poses a formidable challenge for organizations. Gone are the days of misspelled phishing emails and easy-to-spot scams; today, the grammar is correct, the formatting is clean, and the messaging and tone are on point and relevant.
It is imperative that organizations acknowledge this threat and adopt a comprehensive strategy that combines cutting-edge technology with employee training and rigorous internal process controls. By doing so, businesses can significantly enhance their resilience against the rising tide of AI-powered business email compromise and wire fraud. In the era of AI-assisted threats and deception, our ability to be educated about and alert to its influence is critical, as this very article demonstrates.
Had this article failed to disclose up front that it was almost entirely generated by AI, would you have been fooled? Would your employees? Imagine, then, if this piece were instead an AI-generated “personal” email sent to one of your employees and seemingly authored by you, a known vendor, or a client. Imagine, too, if that email requested payment, information, or some other transaction that, despite being valuable, wasn’t particularly unusual.
Are you confident that your employees, your IT team, and your technology infrastructure could detect AI-driven social engineering at work – and, just as importantly, respond quickly and properly to deflect it?
As the above article demonstrates, the age of digital deception is upon us. AI is only going to get smarter, and whether or not you or your employees are interested in using its tools, threat actors will continue to capitalize on its ever-growing capabilities every step of the way. To protect your organization – its data, dollars, and reputation – you and your employees must be vigilant and have the necessary controls and procedures in place. Cutting corners could mean cutting into your balance sheet or collapsing all you’ve built. Want to make sure you are protected? Contact Rehmann today to learn more.