As banks accelerate their use of AI, they are quietly crossing a new regulatory fault line: emotional AI. While the industry focuses on predictive analytics, the use of AI to interpret and react to customer emotions is creating a significant compliance gap between the GDPR¹ and the new AI Act².
This isn't a future-gazing exercise. The AI Act, whose prohibitions on certain AI practices began to apply in February 2025, explicitly bans emotion-recognition systems in workplaces and educational institutions as posing an "unacceptable risk"³. Financial services aren't explicitly named, but the regulatory direction is clear.
The European Court of Justice has affirmed, in its SCHUFA judgment, that credit scoring falls under Article 22 of the GDPR¹, which grants individuals the right not to be subject to solely automated decisions with legal or similarly significant effects. If an AI system denies a loan based on its interpretation of a customer's emotional state during an interaction, does that violate Article 22? Most institutions haven't conducted the legal analysis needed to answer the question.
Furthermore, the GDPR¹ requires that consent be freely given, specific, informed, and unambiguous (Articles 4(11) and 7). Are customers genuinely consenting to their emotional state being analyzed when they click "I agree" on lengthy terms and conditions?
The fairness principle under Article 5 of the GDPR¹ and anti-discrimination laws like the UK's Equality Act 2010⁴ raise further questions. If an emotional AI system disproportionately affects individuals with certain mental health conditions or from specific cultural backgrounds, it could constitute illegal discrimination. The FCA's Consumer Duty⁵ also requires firms to act in good faith and avoid foreseeable harm, which includes the potential for emotional profiling to disadvantage vulnerable customers.
With states in the US now moving to regulate "AI companions" and therapeutic chatbots⁶, the regulatory gap around emotional AI in banking will not last long. Banks are moving from AI experiments to production-level autonomous agents⁷. The question is whether their compliance frameworks have kept pace with their technological ambitions.
References
¹ Regulation (EU) 2016/679 (GDPR)
² Regulation (EU) 2024/1689 (AI Act)
³ AI Act, Article 5 (Prohibited Artificial Intelligence Practices)
⁴ Equality Act 2010 (UK)
⁵ FCA Consumer Duty (PRIN 2A)
⁶ New York AI Companion Law (2026)
⁷ SAS Banking Predictions 2026
This article was originally published on LinkedIn.
Solicitor | Fintech Law Specialist
Gavin is a specialist solicitor with over 25 years of experience in financial technology regulation, digital assets law, and emerging technology compliance. He advises premier financial institutions and innovative technology companies on complex regulatory matters across 33 jurisdictions.
Qualifications: PhD (Cryptocurrency & Stablecoin Policy), LLM (Commercial Law), Solicitor of England & Wales
Experience: £750M+ transaction value | 33 jurisdictions | Trusted adviser to Morgan Stanley, American Express, Visa, Citibank, and leading fintech innovators