The New York Times: “In late November, OpenAI, a San Francisco artificial intelligence lab, unveiled a bot called ChatGPT that left more than a million people feeling as if they were chatting with another human being. Similar technologies are under development at Google, Meta and other tech giants. Some companies have been reluctant to share the technology with the wider public. Because these bots learn their skills from data posted to the internet by real people, they often generate untruths, hate speech and language that is biased against women and people of color. If misused, they could become a more efficient way of running the kind of misinformation campaign that has become commonplace in recent years.

“‘Without any additional guardrails in place, they are just going to end up reflecting all the biases and toxic information that is already on the web,’ said Margaret Mitchell, a former A.I. researcher at Microsoft and Google, where she helped start its Ethical A.I. team. She is now with the A.I. start-up Hugging Face.

“But other companies, including Character.AI, are confident that the public will learn to accept the flaws of chatbots and develop a healthy distrust of what they say.

“Mr. Thiel found that the bots at Character.AI had both a talent for conversation and a knack for impersonating real-life people. ‘If you read what someone like Kautsky wrote in the 19th century, he does not use the same language we use today,’ he said. ‘But the A.I. can somehow translate his ideas into ordinary modern English.’”