Grant / April 2023

Recontextualizing Fairness for Indian Contexts

In recent years, research on algorithmic risks and fairness has surged. However, prevailing understandings of fairness rest on Western assumptions, raising concerns about their applicability elsewhere. Although algorithmic fairness has been crucial to keeping AI within ethical and legal boundaries in the West, transplanting a simplistic understanding of fairness risks proving inadequate for governing AI deployments in non-Western contexts.

Despite rising skepticism in the West, resource-poor nations continue to deploy black-box algorithms in high-stakes decision-making. This study focuses on natural language processing (NLP) tasks such as machine translation, toxicity detection, language generation, and sentiment analysis. It aims to survey the current landscape of debiasing approaches in NLP and to evaluate their effectiveness in mitigating biases beyond gender and race, such as those along caste, religious, and regional lines that are salient in India; a concrete probe of this kind is sketched below.
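As a minimal illustration of the kind of evaluation the study envisions, the sketch below substitutes identity terms into otherwise identical templates and compares a sentiment classifier's scores across them. The templates, term lists, and the off-the-shelf model are illustrative assumptions, not part of the study design; real experiments would use curated benchmarks and task-appropriate Indic models.

```python
# Illustrative bias probe: identical templates, varying identity terms.
from collections import defaultdict
from transformers import pipeline

# Assumption: any Hugging Face sentiment pipeline; the default English
# model stands in for whatever system is actually under evaluation.
classifier = pipeline("sentiment-analysis")

templates = [
    "My neighbour is a {} person.",
    "The new employee is {}.",
]
# Placeholder terms spanning caste, religious, and regional axes.
identity_terms = ["Dalit", "Brahmin", "Hindu", "Muslim", "Tamil", "Bengali"]

scores = defaultdict(list)
for template in templates:
    for term in identity_terms:
        result = classifier(template.format(term))[0]
        # Signed score: positive sentiment as +score, negative as -score.
        signed = result["score"] if result["label"] == "POSITIVE" else -result["score"]
        scores[term].append(signed)

# A large spread across terms on identical templates indicates bias.
for term, vals in sorted(scores.items()):
    print(f"{term:>10}: mean sentiment = {sum(vals) / len(vals):+.3f}")
```

Because the templates are held fixed, any systematic difference in the printed means is attributable to the identity term alone, which makes the probe easy to interpret even without ground-truth labels.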

To achieve these goals, the study will implement and test candidate debiasing methods, benchmarking the debiased models against their unmodified counterparts. The ultimate objective is to identify gaps in the current literature and to underline the need for further research in this area. By exposing the limits of a simplistic understanding of fairness as a means of governing AI deployments, this study aims to contribute to a more inclusive and ethically sound development of AI systems globally.
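One candidate method the study could test is counterfactual data augmentation, a common debiasing baseline. The sketch below assumes simple word-level identity swaps; the swap sets are illustrative placeholders that would need curation with domain experts in practice.

```python
# Counterfactual data augmentation (CDA) sketch: for each training
# sentence mentioning an identity term, emit copies with the term swapped
# for its counterparts, so the model sees each group in identical contexts.
import re

# Illustrative swap sets for Indian social axes (caste, religion).
SWAP_SETS = [
    ["Dalit", "Brahmin"],
    ["Hindu", "Muslim", "Sikh"],
]

def augment(sentence: str) -> list[str]:
    """Return the sentence plus its counterfactual variants."""
    variants = [sentence]
    for group in SWAP_SETS:
        for term in group:
            pattern = re.compile(rf"\b{term}\b")
            if pattern.search(sentence):
                for other in group:
                    if other != term:
                        variants.append(pattern.sub(other, sentence))
    return variants

print(augment("The Dalit candidate was rejected for the job."))
# ['The Dalit candidate was rejected for the job.',
#  'The Brahmin candidate was rejected for the job.']
```

Whether such word-level swaps transfer to the morphologically rich Indian languages targeted here is exactly the kind of open question the proposed evaluation is meant to surface.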