Tech Policy Press: “The enthusiasm for generative AI systems has taken the world by storm. Organizations of all sorts, including businesses, governments, and nonprofits, are excited about their applications, while regulators and policymakers show varying levels of desire to regulate and govern them. Old hands in cybersecurity and governance, risk & compliance (GRC) functions see a much more practical challenge as organizations move to deploy ChatGPT, DALL-E 2, Midjourney, Stable Diffusion, and dozens of other products and services to accelerate their workflows and gain productivity. An upsurge of unreported and unsanctioned generative AI use has brought forth the next iteration of the classic “Shadow IT” problem: Shadow AI…
Shadow AI refers to the AI systems, solutions, and services used or developed within an organization without explicit organizational approval or oversight. It can include anything from using unsanctioned software and apps to developing AI-based solutions in a skunkworks-like fashion. Wharton School professor Ethan Mollick has called such users the hidden AI cyborgs.”