TechHQ: “Being careful with whom you share your secrets is always good advice. And that tip applies to advanced chatbots such as OpenAI’s ChatGPT. In fairness, OpenAI does provide user guidelines in its FAQs. “We are not able to delete specific prompts from your history,” writes the developer. “Please don’t share any sensitive information in your conversations.” The advice comes 8th on the list and, a few bullet points earlier, OpenAI mentions that it reviews conversations between ChatGPT and users to improve its systems and to ensure the content complies with the company’s policies and safety requirements. But this may be of little comfort if you’ve overlooked the possibility of chatbots remembering what you type. The prospect of one of the fastest-growing consumer applications ever developed gobbling up user secrets has security agencies concerned, including the UK’s National Cyber Security Centre (NCSC). Today, the NCSC issued a warning on the risks of advanced chatbots based on large language models (LLMs) such as ChatGPT. Advanced chatbots have grabbed the attention of firms as well as the public at large. ChatGPT can generate documents and other plausible business correspondence in seconds, but if the prompts used to generate that text could cause issues were they to be made public, then business users might want to think again…”