Artificial intelligence systems and machine-learning algorithms have come under fire recently because they can pick up and reinforce existing biases in our society, depending on what data they are programmed with. But an interdisciplinary group of Stanford scholars turned this problem on its head in a new Proceedings of the National Academy of Sciences paper published April 3.

The researchers used word embeddings – an algorithmic technique that can map relationships and associations between words – to measure changes in gender and ethnic stereotypes over the past century in the United States. They analyzed large databases of American books, newspapers and other texts and looked at how those linguistic changes correlated with actual U.S. Census demographic data and major social shifts, such as the women's movement in the 1960s and the increase in Asian immigration, according to the research.

"Word embeddings can be used as a microscope to study historical changes in stereotypes in our society," said James Zou, an assistant professor of biomedical data science. "Our prior research has shown that embeddings effectively capture existing stereotypes and that those biases can be systematically removed. But we think that, instead of removing those stereotypes, we can also use embeddings as a historical lens for quantitative, linguistic and sociological analyses of biases."

Zou co-authored the paper, "Word embeddings quantify 100 years of gender and ethnic stereotypes," with history Professor Londa Schiebinger, linguistics and computer science Professor Dan Jurafsky and electrical engineering graduate student Nikhil Garg, who was the lead author.
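To make the "microscope" idea concrete, here is a minimal Python sketch of how an embedding can surface a gender association. It is not the paper's pipeline: the study trained embeddings per decade on historical corpora and used a relative-distance bias score, whereas this sketch loads off-the-shelf GloVe vectors through gensim and compares cosine similarities. The anchor and occupation word lists are illustrative choices, not the paper's word sets.

```python
# A minimal sketch (assumptions noted above, not the paper's exact method):
# score each occupation word by whether it sits closer in embedding space
# to a female or a male group of anchor words.
import numpy as np
import gensim.downloader as api

# Small pretrained embedding standing in for the paper's historical models.
model = api.load("glove-wiki-gigaword-50")

def group_vector(words):
    """Average the embedding vectors of a list of anchor words."""
    return np.mean([model[w] for w in words], axis=0)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

female = group_vector(["she", "her", "woman", "female"])
male = group_vector(["he", "his", "man", "male"])

# Positive score: closer to the female anchor; negative: closer to the male one.
for occupation in ["nurse", "engineer", "teacher", "carpenter", "librarian"]:
    score = cosine(model[occupation], female) - cosine(model[occupation], male)
    print(f"{occupation:>10s}  {score:+.3f}")
```

Run against embeddings trained on texts from different decades, a score like this one is what lets the researchers plot how an occupation's gender association shifts over time and compare it against Census data.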