
Ten Legal and Business Risks of Chatbots and Generative AI

By Matthew F. Ferraro, a senior fellow at the National Security Institute at George Mason University and Counsel at WilmerHale; Natalie Li, a Senior Associate at WilmerHale; and Haixia Lin and Louis W. Tompros, Partners at WilmerHale.

It took just two months from its introduction in November 2022 for the artificial intelligence (AI)-powered chatbot ChatGPT to reach 100 million monthly active users—the fastest growth of a consumer application in history.

Chatbots like ChatGPT are Large Language Models (LLMs), a type of artificial intelligence known as “generative AI.” Generative AI refers to algorithms that, after training on massive amounts of input data, can create new outputs, be they text, audio, images or video. The same technology fuels applications like Midjourney and DALL-E 2 that produce synthetic digital imagery, including “deepfakes.”

Powered by the language model Generative Pre-trained Transformer 3 (GPT-3), ChatGPT is one of today’s largest and most powerful LLMs. It was developed by San Francisco-based startup OpenAI—the brains behind DALL-E 2—with backing from Microsoft and other investors, and was trained on over 45 terabytes of text from multiple sources, including Wikipedia, raw webpage data and books, to produce human-like responses to natural language inputs. LLMs like ChatGPT interact with users in a conversational manner, allowing the chatbot to answer follow-up questions, admit mistakes, and challenge premises and queries. Chatbots can write and improve code, summarize text, compose emails and engage in protracted colloquies with humans. The results can be eerie; in extended conversations with journalists in February 2023, chatbots grew lovelorn and irascible and expressed dark fantasies of hacking computers and spreading misinformation.

The promise of these applications has spurred an “arms race” of investment in chatbots and other forms of generative AI. Microsoft recently announced a new $10 billion investment in OpenAI, and Google announced plans to launch an AI-powered chatbot called Bard later this year. The technology is advancing at breakneck speed. As Axios put it, “The tech industry isn’t letting fears about unintended consequences slow the rush to deploy a new technology.” That approach is good for innovation, but it poses its own challenges.

As generative AI advances, companies will face a number of legal and ethical risks, both from malicious actors leveraging this technology to harm businesses and from businesses themselves seeking to implement chatbots or other forms of AI in their operations. This is a quickly developing area, and new legal and business dangers—and opportunities—will arise as the technology advances and use cases emerge. Government, business and society can take the early lessons from the explosive popularity of generative AI to develop guardrails against its worst behaviors and use cases before this technology pervades all facets of commerce. To that end, businesses should be aware of the following top 10 risks and how to address them.
