The proliferation of automated decision-making systems has yielded much commercial success, but the potential of such systems to systematically generate biased decisions threatens to exacerbate the vulnerability of certain subgroups. Especially as the aim of machine learning algorithms shifts from making predictions for human consumption to making the decisions themselves, it becomes critical to design algorithms that are robust to bias and to ensure their adoption in relevant areas. With the support of a 2018 CLTC research grant, PIs Olfat and Aswani have developed a hierarchical framework for fair machine learning that extends to classification, unsupervised learning, and decision problems. By targeting the score functions that underlie many machine learning algorithms, this framework obtains solutions that are fairer and more robust to noise in data. This renewal will study the use of the developed framework to ensure fairness in key social science and public policy domains. As an example, work is already underway applying the framework to identify false news stories with certifiable impartiality with regard to political ideology. The framework has additional applications in critical areas such as the fair placement of heart defibrillators and the unbiased allocation of resources in law enforcement. This work will thus reduce the cost of lessening social inequity and promote communal trust in public goods, services, and policy.
Findings, Papers, Presentations