OpenAI says its latest GPT-4o model is ‘medium’ risk

The Verge: “OpenAI has released its GPT-4o System Card, a research document that outlines the safety measures and risk evaluations the startup conducted before releasing its latest model. GPT-4o was launched publicly in May of this year. Before its debut, OpenAI used an external group of red teamers, or security experts trying to find weaknesses in a system, to find key risks in the model (which is a fairly standard practice). They examined risks like the possibility that GPT-4o would create unauthorized clones of someone’s voice, erotic and violent content, or chunks of reproduced copyrighted audio. Now, the results are being released. According to OpenAI’s own framework, the researchers found GPT-4o to be of “medium” risk. The overall risk level was taken from the highest risk rating of four overall categories: cybersecurity, biological threats, persuasion, and model autonomy. All of these were deemed low risk except persuasion, where the researchers found some writing samples from GPT-4o could be better at swaying readers’ opinions than human-written text — although the model’s samples weren’t more persuasive overall.

An OpenAI spokesperson, Lindsay McCallum Rémy, told The Verge that the system card includes preparedness evaluations created by an internal team, alongside external testers listed on OpenAI’s website as Model Evaluation and Threat Research (METR) and Apollo Research, both of which build evaluations for AI systems…”