The IIF is pleased to share our paper on "Bias and Ethical Implications in Machine Learning," the second paper in our three-part Thematic Series on issues related to Machine Learning (ML). This paper builds on one of the themes that emerged in our IIF Machine Learning in Credit Risk Report (March 2018) and on the IIF's subsequent engagements with the regulatory community and with the 60 banks and mortgage insurers that participated in the original IIF survey.
In this paper, we discuss the concept and sources of bias, how firms identify bias, and the specific responses and solutions financial institutions have adopted to ensure fairness. This includes a substantial focus on governance processes for ML and on the technical measures undertaken by many financial institutions, and it details a number of cases where ML is helping to overcome existing biases.
It is our view that maximizing predictive performance should be subject to a fairness constraint, to ensure that ML algorithms are fair and support firms' ethical standards. Concurrently, policymakers should carefully consider how different legal, technical, and policy-related perspectives could create challenges for the use of ML, and where these could potentially stifle innovation.
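The idea of treating fairness as a constraint on performance maximization can be sketched in code. The following minimal Python example is illustrative only: the model names, toy data, and the choice of demographic parity as the fairness metric are assumptions for this sketch, not methods prescribed by the paper. It selects the most accurate candidate model whose gap in positive-prediction rates across demographic groups stays within a tolerance.

```python
# Illustrative sketch (hypothetical data and model names): maximize
# accuracy subject to a fairness constraint on demographic parity.

def demographic_parity_gap(preds, groups):
    """Largest difference in positive-prediction rates across groups."""
    rates = []
    for g in set(groups):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        rates.append(sum(members) / len(members))
    return max(rates) - min(rates)

def select_model(candidates, groups, labels, max_gap=0.10):
    """Return the most accurate model whose parity gap is <= max_gap."""
    best_name, best_acc = None, -1.0
    for name, preds in candidates.items():
        if demographic_parity_gap(preds, groups) > max_gap:
            continue  # violates the fairness constraint; skip it
        acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_name, best_acc = name, acc
    return best_name, best_acc

# Toy data: eight applicants in two groups A/B, with true outcomes.
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]
candidates = {
    # More accurate overall, but approves group B at a higher rate.
    "high_acc_skewed": [1, 0, 1, 0, 1, 0, 1, 1],
    # Slightly less accurate, but approval rates are equal across groups.
    "fair":            [1, 0, 1, 0, 1, 0, 0, 1],
}
name, acc = select_model(candidates, groups, labels)
# The constrained selection prefers "fair" despite its lower accuracy.
```

In this sketch the unconstrained optimum ("high_acc_skewed") is rejected because its parity gap exceeds the tolerance, so the fairness constraint changes which model is deployed, which is the trade-off the paragraph above describes.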