Wired – OpenAI’s newest model is “a data hoover on steroids,” says one expert—but there are still ways to use it while minimizing risk. [unpaywalled]:

“…On the face of it, OpenAI’s privacy policy does show a large amount of data collection, including personal information, usage data, and content provided when you use it. ChatGPT uses the data you share to train its models, unless you turn it off in the settings or use the enterprise version.

OpenAI is quick to say in its privacy policy that individual data is “anonymized,” but the approach on the whole seems to be “take everything now and sort it out later,” says Angus Allan, senior product manager at digital consultancy CreateFuture, which advises firms on ways to use AI and data analytics. “Their privacy policy explicitly states they collect all user input and reserve the right to train their models on this.”

The catch-all “user content” clause likely covers images and voice data too, says Allan. “It’s a data hoover on steroids, and it’s all there in black and white. The policy hasn’t changed significantly with ChatGPT-4o, but given its expanded capabilities, the scope of what constitutes ‘user content’ has broadened dramatically.”
OpenAI’s privacy policies are clear that ChatGPT does not have access to any data on your device beyond what you explicitly input into the chat. However, by default, ChatGPT does collect lots of other data about you, says Jules Love, founder at Spark, a consultancy that advises companies on how to build AI tools including ChatGPT into their workflows while addressing data privacy. “It uses everything from prompts and responses to email addresses, phone numbers, geolocation data, network activity, and what device you’re using.”