Federated Learning: Fighting Financial Fraud with Privacy-Compliant AI

Having your credit card stolen is a very frustrating experience, and sadly, one that many can relate to. In 2019 alone, fraudulent transactions involving credit cards issued in Europe accounted for €1.87 billion – a staggering amount, and one that is only expected to increase as online shopping grows. With its advanced capability to analyse large amounts of data and create forecasts, artificial intelligence (AI) could provide financial institutions with significant help in fighting fraud. However, using AI to detect financial crimes requires access to sensitive data, which raises concerns about data privacy and security. That’s where federated learning comes in.
“Federated learning allows the training of an AI model on your own data, without sharing the data with someone else; you just share updates to a global model. The data stays on each client’s premises – in the case of a bank, down to a single branch – to avoid any risks, but the system allows every participant to leverage their collective intelligence to train the global model. It’s a win-win: everyone is better off by collaborating, while ensuring data security and privacy at the same time,” said Prof. Radu State, head of the Services and Data Management (SEDAN) research group at SnT, and principal investigator of the research project on federated learning for PSD2-compliant data analytics, an initiative launched in February 2022 in partnership with LUXHUB.
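To make the idea concrete, here is a minimal, hypothetical sketch of federated averaging with a simple logistic-regression model: each simulated “branch” trains on its own synthetic data, and only the resulting model weights leave the client, to be averaged into a global model. The client setup, data, and function names (`local_update`, `federated_average`) are illustrative assumptions, not the system built by SnT and LUXHUB.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few steps of logistic-regression gradient descent on local data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # gradient of the logistic loss
        w -= lr * grad
    return w

def federated_average(client_weights, client_sizes):
    """Aggregate client models, weighting each by its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Synthetic local datasets standing in for branch-level transaction data.
clients = []
for _ in range(3):
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # toy "fraud" label
    clients.append((X, y))

global_w = np.zeros(4)
for _ in range(10):                                   # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    global_w = federated_average(updates, sizes)      # only weights are shared

print("global model weights:", global_w)
```

The raw transaction data never leaves a client in this loop; the server only ever sees weight vectors, which is the core privacy argument behind the approach described above.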
In fact, SnT and LUXHUB have been working together to build value-added, service-based offerings on top of financial data. The partnership focuses on applying groundbreaking artificial intelligence technology while respecting the industry’s central concern: the safety and privacy of sensitive data. Experts from FinTech/ICT research and from the financial sector are jointly developing a federated learning model for the benefit of the entire financial services industry.