Accurate, Focused Research on Law, Technology and Knowledge Discovery Since 2002

A.I. Is Mastering Language. Should We Trust What It Says?

The New York Times: “OpenAI’s GPT-3 and other neural nets can now write original prose with mind-boggling fluency — a development that could have profound implications for the future… GPT-3 belongs to a category of deep learning known as a large language model, a complex neural net that has been trained on a titanic data set of text: in GPT-3’s case, roughly 700 gigabytes of data drawn from across the web, including Wikipedia, supplemented with a large collection of text from digitized books. GPT-3 is the most celebrated of the large language models, and the most publicly available, but Google, Meta (formerly known as Facebook) and DeepMind have all developed their own L.L.M.s in recent years. Advances in computational power — and new mathematical techniques — have enabled L.L.M.s of GPT-3’s vintage to ingest far larger data sets than their predecessors, and employ much deeper layers of artificial neurons for their training…”

See also Fast Company: OpenAI’s DALL-E AI is becoming a scary-good graphic artist – “DALL-E 2, a new text-to-image AI model, can make images based on either language or image input.”
