
AI Scientists Have a Problem: AI Bots Are Reviewing Their Work

Chronicle of Higher Education: “When Arjun Guha submitted a paper to a conference on artificial intelligence last year, he got feedback that made him roll his eyes. ‘The document is impeccably articulated,’ one peer-reviewer wrote, ‘boasting a lucid narrative complemented by logically sequenced sections and subsections.’ Guha, an associate professor of computer science at Northeastern University, knew this ‘absurd’ remark could stem from only one source: an AI chatbot. ‘If I wanted to know what ChatGPT thought of our paper,’ Guha complained on X, ‘I could have asked myself.’ AI is upending peer review, the time-honored tradition in which academics help judge which research should be elevated to publication — and which should go in the reject pile. Under the specter of ChatGPT, no one can be sure anymore that their intellectual labor is being read and judged by humans. Scientists, even those who think generative AI can be a helpful tool, say it’s demoralizing to be on the receiving end of an evaluation blatantly outsourced to a robot. And in an ironic twist, this blow to the ego appears to be hitting the AI field most of all: Up to 17 percent of reviews submitted to prestigious AI conferences in the last year were substantially written by large language models (LLMs), a recent study estimated…”
