Computer World: “Adoption of generative AI is happening at a breakneck pace, but potential threats posed by the technology will require organizations to set up guardrails to protect sensitive data and customer privacy — and to avoid running afoul of regulators.

As a large number of companies continue to test and deploy generative artificial intelligence (genAI) tools, many are at risk of AI errors, malicious attacks, and regulatory violations — not to mention the potential exposure of sensitive data.

For example, in April, after Samsung’s semiconductor division allowed engineers to use ChatGPT, workers using the platform leaked trade secrets in at least three instances, according to published accounts. One employee pasted confidential source code into the chat to check for errors, while another worker shared code with ChatGPT and “requested code optimization.” ChatGPT is hosted by its developer, OpenAI, which asks users not to share any sensitive information because it cannot be deleted.

“It’s almost like using Google at that point,” said Matthew Jackson, global CTO at systems integration provider Insight Enterprises. “Your data is being saved by OpenAI. They’re allowed to use whatever you put into that chat window. You can still use ChatGPT to help write generic content, but you don’t want to paste confidential information into that window.”
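One common form the guardrails mentioned above take is a screening step that scans outbound prompts for sensitive material before anything reaches an external chat service. The sketch below is illustrative only, not any vendor’s actual implementation; the pattern list and the `redact` helper are assumptions, and a real deployment would rely on a dedicated data-loss-prevention tool rather than a handful of regexes.

```python
import re

# Illustrative patterns only -- a real guardrail would use a vetted
# DLP service or secrets scanner, not this short hand-rolled list.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "aws_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def redact(prompt: str) -> tuple[str, list[str]]:
    """Replace any matches with placeholders and report which patterns fired."""
    hits = []
    for name, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub(f"[REDACTED:{name}]", prompt)
    return prompt, hits

if __name__ == "__main__":
    text = "Contact alice@example.com, key AKIAABCDEFGHIJKLMNOP."
    cleaned, found = redact(text)
    print(cleaned)  # both values replaced with [REDACTED:...] placeholders
    print(found)
```

A wrapper like this could either block the request outright when `hits` is non-empty or forward only the redacted text, which is the trade-off most proxy-style genAI gateways expose as policy.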