The New Yorker: “As we teach computers to use natural language, we are bumping into the inescapable biases of human communication…Artificial intelligence is an ethical quagmire. Its power can be more than a little nauseating. But there’s a kind of unique horror to the capabilities of natural language processing. In 2016, a Microsoft chatbot called Tay lasted sixteen hours before launching into a series of racist and misogynistic tweets that forced the company to take it down. Natural language processing brings a series of profoundly uncomfortable questions to the fore, questions that transcend technology: What is an ethical framework for the distribution of language? What does language do to people?…”