The third session of DataTalk, the IIF’s interactive discussion forum with knowledge partner the Oliver Wyman Forum, focused on artificial intelligence (AI) and machine learning (ML) explainability. This note briefly summarizes the key themes that emerged in the discussion; as the session was conducted under the Chatham House Rule, comments are unattributed.
Participants highlighted the importance of demystifying explainability in order to build trust with customers and regulators. Financial institutions are increasingly using AI/ML to manage risk, gain competitive advantage, better leverage data, and increase efficiency. Many start by establishing high-level principles for the use of AI/ML, and several have since broadened this approach into a company-wide data ethics framework that extends beyond AI/ML.
Participants discussed the various approaches firms can take to tackle explainability, including a layered approach tied to the risk and materiality of the specific use case, the choice between inherent and post-hoc explainability techniques, and the importance of governance in this context. Participants also discussed the respective roles of regulation and supervision amid the continued development of new techniques and technologies, with the prevailing view favoring robust supervision over prescriptive regulation.
For more information on this forum, please contact email@example.com.