
Your guide to navigating AI-enabled AML ID&V risks

Five examples of how AI and other advanced technologies can be abused to falsify identification, and how banks can mitigate those risks
 
Jesper Kristensen
Associate Vice President, Digital Process Operations
3 minutes 30 seconds read

Artificial Intelligence (AI) and other next-generation technologies hold immense potential to protect against fraud and enhance security. At the same time, criminals are using the same technologies for illegal, unethical activities such as creating fake identities to launder money and commit fraud.

For example, the Identification and Verification (ID&V) part of a bank’s Anti-Money Laundering (AML) process might contain vulnerabilities that criminals can exploit, using the very technologies the bank itself relies on, to overcome barriers to entry and gain unauthorized access to systems.

Below are examples of these potential risks, along with the mitigating steps banks can take to remain secure. Bear in mind that neither the risks nor the mitigations listed here are exhaustive.

 


 

  1. Synthetic identity creation

    AI can generate ultra-realistic fake identities (name, date of birth, etc.) and forge documents.

    Technologies involved:

    1. AI/ML models that can generate realistic but synthetic personal information
    2. Generative adversarial networks (GANs) for creating synthetic facial images and documents

    Mitigations:

    1. Deploy AI-driven document verification services to detect anomalies in ID documents (a minimal check-digit sketch follows this list)
    2. Deploy advanced analytics services that can distinguish between synthetic and real identities
    3. Utilize biometric verification and liveness detection technologies to ensure the identity being claimed is real and present
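
To make the document verification idea in the first mitigation concrete, here is a minimal sketch of one anomaly check: validating a check digit from a passport’s machine-readable zone (MRZ) as defined in ICAO Doc 9303. A crudely forged or synthetically generated document often fails even this basic arithmetic. The function names are illustrative, and a real verification service layers many additional checks on top.

```python
# Minimal sketch: validate an ICAO 9303 MRZ check digit.
# Real document-verification services combine many such checks
# (fonts, holograms, cross-field consistency); this shows just one.

def mrz_char_value(ch: str) -> int:
    """Map an MRZ character to its numeric value per ICAO 9303."""
    if ch.isdigit():
        return int(ch)
    if ch == "<":                      # filler character counts as 0
        return 0
    return ord(ch) - ord("A") + 10     # A=10 ... Z=35

def mrz_check_digit(field: str) -> int:
    """Weighted sum of character values (weights 7, 3, 1 repeating) mod 10."""
    weights = (7, 3, 1)
    return sum(mrz_char_value(c) * weights[i % 3] for i, c in enumerate(field)) % 10

def field_is_consistent(field: str, claimed_digit: str) -> bool:
    return mrz_check_digit(field) == int(claimed_digit)

# Sample document number and check digit from the ICAO 9303 specification.
print(field_is_consistent("L898902C3", "6"))  # True for a well-formed MRZ
```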
  2. Deepfakes and voice synthesis

    Deepfake technology can create highly realistic, synthesized images and videos, and voice synthesis can duplicate voices.

    Technologies involved:

    1. Deepfakes and voice synthesis for replicating facial and voice characteristics of a real individual

    Mitigations:

    1. Use voice biometrics coupled with anti-spoofing measures
    2. Implement multi-factor authentication, combining something the user knows (a password), something the user has (a token or phone) and something the user is (biometrics); a minimal sketch of the second factor follows this list
    3. Use liveness detection to ensure that a real, live person is providing the biometric traits
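
As a sketch of the “something the user has” factor in the multi-factor mitigation above, the snippet below generates a time-based one-time password (TOTP, RFC 6238) using only the Python standard library. The shared secret is a placeholder; a real deployment provisions secrets securely and combines this factor with biometrics and liveness checks.

```python
# Minimal TOTP sketch (RFC 6238) using only the standard library:
# one factor of a multi-factor flow, "something the user has".
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval           # moving time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Placeholder secret for illustration only; never hard-code real secrets.
print(totp("JBSWY3DPEHPK3PXP"))  # six digits, changes every 30 seconds
```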
  3. Manipulating behavioral biometrics

    AI models can analyze and mimic user behavior, enabling unauthorized access by imitating legitimate usage patterns.

    Technologies involved:

    1. AI models that can mimic user behavior, mouse movements and typing patterns to pass behavioral biometrics verification

    Mitigations:

    1. Continuously monitor user behavior throughout the session and employ anomaly detection models to identify deviations from established patterns (a minimal sketch follows this list)
    2. Combine behavioral biometrics with other forms of verification to build a more robust identity verification system
    3. Enforce step-up authentication when anomalies are detected
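
The sketch below shows the continuous-monitoring idea from the first mitigation in its simplest possible form: comparing a session’s inter-keystroke timings against a user’s enrolled baseline with a z-score. The threshold and sample data are illustrative; production behavioral biometrics model far richer features.

```python
# Minimal sketch: z-score anomaly check on inter-keystroke timings (ms).
# Real systems model many behavioral features; this shows the principle.
from statistics import mean, stdev

def is_anomalous(baseline_ms: list[float], session_ms: list[float],
                 threshold: float = 3.0) -> bool:
    """Flag the session if its mean timing deviates too far from baseline."""
    mu, sigma = mean(baseline_ms), stdev(baseline_ms)
    return abs(mean(session_ms) - mu) / sigma > threshold

# Illustrative data: an enrolled user's typing cadence vs. a bot-like session.
baseline = [180.0, 175.0, 190.0, 185.0, 178.0, 182.0, 188.0]
bot_session = [45.0, 44.0, 46.0, 45.0]   # unnaturally fast and regular

if is_anomalous(baseline, bot_session):
    print("Deviation detected: trigger step-up authentication")
```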
  4. Automation in account creation

    Criminals can use bots to automate the account creation process with fake or stolen identities.

    Technologies involved:

    1. Automation bots and scripts can create multiple accounts quickly, using stolen or synthesized identities

    Mitigations:

    1. Implement CAPTCHAs and other bot-detection mechanisms
    2. Deploy AML risk assessments that flag suspicious sign-up patterns and trigger additional verification steps
    3. Implement rate limiting on account creation attempts from the same IP address (a minimal sketch follows this list)
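
As a sketch of the rate-limiting mitigation, here is a simple sliding-window limiter keyed by IP address. The window and limit are arbitrary examples, and a real deployment would enforce this at the gateway with a shared store rather than in-process memory.

```python
# Minimal sketch: sliding-window rate limiting of sign-ups per IP.
import time
from collections import defaultdict

WINDOW_SECONDS = 3600    # illustrative: one-hour window
MAX_SIGNUPS = 3          # illustrative: three account creations per window

_attempts: dict[str, list[float]] = defaultdict(list)

def allow_signup(ip: str) -> bool:
    now = time.time()
    # Keep only attempts that fall inside the current window.
    _attempts[ip] = [t for t in _attempts[ip] if now - t < WINDOW_SECONDS]
    if len(_attempts[ip]) >= MAX_SIGNUPS:
        return False     # over the limit: block or require extra verification
    _attempts[ip].append(now)
    return True

for _ in range(5):
    print(allow_signup("203.0.113.7"))  # True, True, True, False, False
```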
  5. Social engineering attacks

    Advanced AI models can craft persuasive phishing messages and impersonate bank officials in communications.

    Technologies involved:

    1. AI tools for natural language processing and generation can be used for crafting convincing phishing emails and messages

    Mitigations:

    1. Implement advanced email filtering and secure communication channels between banks and customers (a minimal lookalike-domain sketch follows this list)
    2. Facilitate employee training and awareness programs to recognize and report suspicious activities
    3. Develop ongoing customer education programs on security best practices
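
One small building block of the email-filtering mitigation above is catching lookalike sender domains. The sketch below flags domains within a small edit distance of the bank’s genuine domain; the domain names are placeholders, and real filters layer many more signals (SPF/DKIM/DMARC, URL reputation, content analysis) on top.

```python
# Minimal sketch: flag typosquatted sender domains by edit distance.

def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def looks_spoofed(sender_domain: str, legit_domain: str) -> bool:
    d = edit_distance(sender_domain.lower(), legit_domain.lower())
    return 0 < d <= 2    # close to, but not exactly, the genuine domain

# Placeholder domains for illustration.
print(looks_spoofed("examp1ebank.com", "examplebank.com"))  # True
print(looks_spoofed("examplebank.com", "examplebank.com"))  # False
```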

Conclusion

For every technological advancement, there is an equal push to secure systems against malicious uses of such technologies. Banks must stay vigilant, keep abreast of the latest advancements in security and collaborate with cybersecurity experts, other financial institutions, partners and regulatory bodies to ensure the integrity and security of their systems. Regular security audits, employee training and customer awareness programs are crucial to maintaining a secure and trustworthy financial environment.

TAGS:
Financial Services