Accurate, Focused Research on Law, Technology and Knowledge Discovery Since 2002

Discriminatory AI and the Law: Legal Standards for Algorithmic Profiling

von Ungern-Sternberg, Antje, Discriminatory AI and the Law – Legal Standards for Algorithmic Profiling (June 29, 2021). Draft chapter, in: Silja Vöneky, Philipp Kellmeyer, Oliver Müller and Wolfram Burgard (eds.), Responsible AI, Cambridge University Press (forthcoming). Available at SSRN:

“Artificial Intelligence is increasingly used to assess people (profiling) and helps employers to find qualified employees, internet platforms to distribute information or to sell goods, and security authorities to single out suspects. Apart from being more efficient than humans in processing huge amounts of data, intelligent algorithms – which are free of human prejudices and stereotypes – would also prevent discriminatory decisions, or so the story goes. However, many studies show that the use of AI can lead to discriminatory outcomes. From a legal point of view, this raises the question whether the law as it stands prohibits objectionable forms of differential treatment and detrimental impact. In the legal literature dealing with automated profiling, some authors have suggested that we need a “right to reasonable inferences”, i.e. a certain methodology for AI algorithms affecting humans. This paper takes up this idea with respect to discriminatory AI and claims that such a right already exists in antidiscrimination law. It argues that the need to justify differential treatment and detrimental impact implies that profiling methods must correspond to certain standards. It is now a major challenge for lawyers as well as data and computer scientists to develop and establish those methodological standards in order to guarantee compliance with antidiscrimination law (and other legal regimes), as the paper outlines.”
