The Markup – A conversation with Arvind Narayanan

“If you have been reading all the hype about the latest artificial intelligence chatbot, ChatGPT, you might be excused for thinking that the end of the world is nigh. The clever AI chat program has captured the imagination of the public for its ability to generate poems and essays instantaneously, its ability to mimic different writing styles, and its ability to pass some law and business school exams. Teachers are worried students will use it to cheat in class (New York City public schools have already banned it). Writers are worried it will take their jobs (BuzzFeed and CNET have already started using AI to create content). The Atlantic declared that it could “destabilize white-collar work.” Venture capitalist Paul Kedrosky called it a “pocket nuclear bomb” and chastised its makers for launching it on an unprepared society. Even the CEO of the company that makes ChatGPT, Sam Altman, has been telling the media that the worst-case scenario for AI could mean “lights out for all of us.”

But others say the hype is overblown. Meta’s chief AI scientist, Yann LeCun, told reporters ChatGPT was “nothing revolutionary.” University of Washington computational linguistics professor Emily Bender warns that “the idea of an all-knowing computer program comes from science fiction and should stay there.”

So, how worried should we be? For an informed perspective, I turned to Princeton computer science professor Arvind Narayanan, who is currently co-writing a book on “AI snake oil.” In 2019, Narayanan gave a talk at MIT called “How to recognize AI snake oil” that laid out a taxonomy of AI from legitimate to dubious. To his surprise, his obscure academic talk went viral, and his slide deck was downloaded tens of thousands of times; his accompanying tweets were viewed more than two million times.”