Quartz: “More than 60 years after philosopher Ludwig Wittgenstein’s theories on language were published, the artificial intelligence behind Google Translate has provided a practical example of his hypotheses. Patrick Hebron, who works on machine learning in design at Adobe and studied philosophy with Wittgenstein expert Garry Hagberg for his bachelor’s degree at Bard College, notes that the networks behind Google Translate are a very literal representation of Wittgenstein’s work. Google employees have previously acknowledged that Wittgenstein’s theories gave them a breakthrough in making their translation services more effective, but somehow, this key connection between philosophy of language and artificial intelligence has long gone under-celebrated and overlooked.
Crucially, Google Translate functions by making sense of words in their context. The translation service relies on an algorithm created by Google employees called word2vec, which creates “vector representations” for words, meaning that each word is represented numerically. For the translations to work, programmers then have to create a “neural network,” a form of machine learning, that is trained to understand how these words relate to each other. Most words have several meanings (“trunk,” for example, can refer to part of an elephant, tree, luggage, or car, notes Hebron), and so Google Translate has to understand the context. The neural network will read millions of texts, focusing on the two words before and after any given word, so as to be able to predict a word from the words surrounding it. The artificial intelligence calculates probabilistic connections between each word, which form the coordinates of an impossible-to-imagine multi-dimensional vector space…”
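To make the idea concrete, here is a minimal sketch of training word2vec-style embeddings with a symmetric two-word context window. It uses the gensim library and a toy corpus, both of which are my own illustrative assumptions rather than anything described in the Quartz piece or Google’s actual pipeline; the point is simply to show how “trunk” ends up as a point in a vector space shaped by the words around it.

```python
# A minimal sketch (not Google's actual pipeline) of word2vec-style training
# with a context window of two words on each side, using the gensim library.
# Corpus, window size, and vector size are illustrative assumptions.
from gensim.models import Word2Vec

# Toy corpus: each sentence is a list of tokens. A real model would be
# trained on millions of sentences, not four.
sentences = [
    ["the", "elephant", "raised", "its", "trunk"],
    ["she", "packed", "the", "trunk", "for", "the", "trip"],
    ["the", "tree", "trunk", "was", "covered", "in", "moss"],
    ["he", "put", "the", "bags", "in", "the", "car", "trunk"],
]

# window=2: the model looks at the two words before and after each target
# word. sg=0 selects CBOW, which predicts a word from its surrounding words.
model = Word2Vec(
    sentences,
    vector_size=50,   # dimensionality of each word vector
    window=2,         # two words of context on either side
    min_count=1,      # keep every word in this tiny corpus
    sg=0,             # CBOW: predict the word from its context
)

# Each word is now a point in a 50-dimensional vector space, and words that
# appear in similar contexts end up near each other.
print(model.wv["trunk"].shape)          # (50,)
print(model.wv.most_similar("trunk"))   # neighbours ranked by cosine similarity
```

On a corpus this small the neighbours are meaningless, but the mechanics match the article’s description: the model never sees a definition of “trunk,” only the words that surround it, and its meaning emerges from those contexts.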