
OpenAI’s Recent Announcement: What Went Wrong, and How It Could Be Better

EFF: “Earlier this month, OpenAI revealed an impressive language model that can generate paragraphs of believable text. The company declined to fully release its research “due to concerns about malicious applications of the technology.” OpenAI released a much smaller model and a technical paper, but not the fully trained model, training code, or full dataset, citing concerns that bad actors could use the model to fuel turbocharged disinformation campaigns. Whether or not OpenAI’s decision to withhold most of its model was correct, its “release strategy” could have been much better.

The risks and dangers of models that can automate the production of convincing, low-cost, realistic text are an important debate to bring forward. But the risks of hinting at dangers without backing them up with detailed analysis, while refusing public and academic access, also need to be considered. OpenAI appears to have considered one set of risks without fully considering, or justifying, the risks it has taken in the opposite direction. Here are the concerns we have, and how OpenAI and other institutions should handle similar situations in the future…”
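For readers who want a concrete sense of what “generate paragraphs of believable text” means, below is a minimal sketch, not part of the EFF post, that samples a continuation from the small GPT-2 model OpenAI did release, using the Hugging Face transformers library. The model identifier “gpt2” (the 124M-parameter public release), the prompt, and the sampling parameters are all illustrative assumptions.

    # A minimal sketch (not from the EFF post): sampling text from the
    # small GPT-2 model that OpenAI did release, via the Hugging Face
    # "transformers" library. Model name, prompt, and sampling settings
    # are illustrative assumptions, not the withheld full model.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # 124M-parameter release
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "Earlier this month, researchers announced"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Sample one continuation; top-k sampling keeps the output fluent
    # while still varying from run to run.
    output_ids = model.generate(
        **inputs,
        max_length=60,
        do_sample=True,
        top_k=50,
        pad_token_id=tokenizer.eos_token_id,  # silences the pad-token warning
    )
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))

Even this small released model produces fluent-sounding prose, which is part of why the debate over withholding the larger versions matters.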
