
Bank security, fraud prevention, compliance, and IT professionals are pressed to manage multiple challenges when combating financial fraud, but the malicious use of generative AI (GAI), and the rapidly escalating losses it is causing, suggests that preventing and combating AI-related financial fraud must be a top priority now and in the future.
Deloitte’s Center for Financial Services, which tallied AI-related fraud losses at $12.3 billion in 2023, predicts that AI-related fraud losses in the U.S. could reach $40 billion by 2027.
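For perspective, growing $12.3 billion into $40 billion over four years implies a compound annual growth rate of roughly 34 percent, since (40 / 12.3)^(1/4) ≈ 1.34. That is a back-of-the-envelope figure derived from the two numbers above, not a rate published in the Deloitte report.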
Why the rocket-like rise? Credit the nature of the beast.
GAI and deepfakes
The execution of AI and GAI scams is driven by fraudsters’ endless imaginations and by the continuous learning capabilities of the tools themselves. “Deepfakes” are videos, audio, images, or documents that are digitally altered or generated to sound or look like someone or something else. Because the generative models behind them learn continuously, fraudsters can monitor automated fraud detection systems and adapt their output to bypass them. That constant evolution lets fraudsters alter and expand the scope of their financial crimes, making it difficult for banks and their customers to anticipate the “when and how” of each next attack.
Some types of AI-enabled financial fraud may make a financial institution more vulnerable than others. For example, business email compromise (BEC) is a common scam in which a fraudster hacks executives’ business email accounts and then, impersonating those executives, sends seemingly legitimate instructions to transfer or wire funds into criminals’ accounts.
With GAI, multiple victims can be targeted with fake executive personas at the same time, exponentially increasing the risk of fraud. In 2022, the FBI’s Internet Crime Complaint Center reported 21,832 instances of BEC that resulted in $2.7 billion in financial losses.
Keep pace with AI developments
In its 2023 report, the Financial Stability Oversight Council (FSOC) noted the importance of monitoring AI vulnerabilities and safety-and-soundness risks, underscoring your board’s critical role in overseeing plans to address growing threats from the ill-intended use of AI to commit financial fraud. Suggested priorities:
- Stay aware of emerging and trending types of fraud, and train and upskill employees to identify and report suspicious activity.
- Invest in processes, procedures, and technologies that build 360-degree customer views, comparing actual to expected relationship behaviors so potential red flags surface faster (a simplified illustration follows this list).
- Encourage enterprise-wide, cross-functional engagement and collaboration to improve risk assessments and to support the internal teams and third-party vendors directly responsible for fraud detection and protection systems and processes.
- Plan for larger operating budgets to cover the higher costs associated with risk management activities, regulatory examinations, and compliance requirements.
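To make the “actual versus expected behavior” idea in the second bullet concrete, here is a minimal, purely illustrative Python sketch. The Transaction fields, the red_flags helper, and the 3-sigma threshold are all assumptions for illustration, not any bank’s or vendor’s actual implementation; the point is simply that a customer’s incoming activity is compared against that customer’s own historical baseline, and deviations are queued for human review.

```python
# Hypothetical sketch: surface red flags by comparing a customer's
# incoming activity to their expected (historical) baseline.
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    customer_id: str
    amount: float
    country: str

def red_flags(history: list[Transaction], incoming: Transaction) -> list[str]:
    """List reasons the incoming transaction deviates from this customer's baseline."""
    reasons = []
    amounts = [t.amount for t in history]
    if len(amounts) >= 2:
        mu, sigma = mean(amounts), stdev(amounts)
        # Flag amounts far above the customer's historical norm (3-sigma rule).
        if sigma > 0 and (incoming.amount - mu) / sigma > 3:
            reasons.append(f"amount {incoming.amount:,.2f} is more than 3 std devs above baseline")
    # Flag activity from a country never seen in this relationship before.
    if incoming.country not in {t.country for t in history}:
        reasons.append(f"first-ever transaction from {incoming.country}")
    return reasons

# A customer who normally sends small domestic payments...
history = [Transaction("c1", a, "US") for a in (120.0, 95.0, 140.0, 110.0, 130.0)]
# ...suddenly wires a large amount abroad: both checks fire for review.
print(red_flags(history, Transaction("c1", 25_000.0, "NG")))
```

Production systems draw on far richer signals (device, channel, counterparty, transaction velocity, and more), but the underlying comparison of observed behavior to an expected baseline is the same.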
While AI can provide operational efficiency and support business development strategies by deepening knowledge of consumer behavior, it can also be easily exploited to commit financial fraud. Institutions that address AI concerns not only bolster their resilience against cyberattacks but also contribute to the strength and stability of global financial networks.
Read about AI regulatory guidance for financial institutions here.
Contact your Rehmann advisor for a personal consultation, or reach Beth A. Behrend, CBCCO, CBAP, at 616.975.4100 or [email protected], or Jessica Dore, CISA, at 989.797.8391 or [email protected].