The Scholarly Kitchen: “There is broad consensus in scholarly publishing that AI tools will make ensuring the integrity of the scientific record a Herculean task. However, many publishers are still struggling to figure out how to address the new issues and challenges that these AI tools present. Current publisher policies fall well short of providing a robust framework for assessing the risk of different AI tools, and researchers are left guessing how they should use AI in their research and subsequent writing. Do publishers really understand what tools researchers are using and how they are using them? Can we do more to create better policies based on real use cases rather than hypothetical conjecture about what AI might do in the future? Are there existing frameworks that we can borrow to beef up policy and ensure we continue to uphold research integrity without stifling researcher creativity? While scholarly publishers put together committees, organize working groups, and plan conferences to discuss how AI might impact the industry, some authors are already off to the races, trying out every new tool they can get their hands on. Recent surveys by Oxford University Press (OUP) and Elsevier demonstrate the prevalence of author use of these tools, researchers’ high motivations for using AI, and even how researchers are using them. Sometimes researchers’ AI experiments go as planned, improving quality and efficiency across their workflow, including literature reviews, data analysis, writing, and revision. Other times they fall short, leaving researchers with inaccurate or inexact results they must revise using more traditional, “old-fashioned” methods…”