Harvard Business Review: “Hot on the heels of OpenAI releasing its GenAI chatbot ChatGPT to the public in November 2022, Google released its own chatbot, Bard (now Gemini). During Bard’s first public demonstration, it made a major factual error in response to a question about discoveries by the James Webb Space Telescope. That incorrect answer led to a 9% drop in the stock price of Alphabet, Google’s parent company, wiping out roughly $100 billion in market value at the time. Incidents demonstrating the risks of chatbots are occurring across professions, too. In 2023, two lawyers were fined by a federal district court in New York for submitting a legal brief containing fictitious cases and citations generated by ChatGPT. And in journalism, a number of well-known publications have been embarrassed after using chatbot-generated content. Sports Illustrated, for instance, published several articles attributed to authors with fake names and AI-generated headshots. In both cases, professionals and companies uncritically used chatbot output, and these examples are only the tip of the iceberg. In the rush to release large language model (LLM) chatbots to the public, these tools have repeatedly generated falsehoods and misinformation. As a result, managers and organizations are confronting a growing array of new risks tied to expectations and professional standards around the accuracy of information. In this article, we look at the nature of these risks and offer research-based guidance on how to manage them.”