
UK AI Safety Institute releases new AI safety evaluations platform

GOV.UK: “Global AI safety evaluations are set to be enhanced as the UK AI Safety Institute’s evaluations platform is made available to the global AI community today (Friday 10 May), paving the way for safe innovation of AI models. After establishing the world’s first state-backed AI Safety Institute, the UK is continuing the drive towards greater global collaboration on AI safety evaluations with the release of the AI Safety Institute’s homegrown Inspect evaluations platform. By making Inspect available to the global community, the Institute is helping accelerate the work on AI safety evaluations being carried out across the globe, leading to better safety testing and the development of more secure models. This will allow for a consistent approach to AI safety evaluations around the world. Inspect is a software library which enables testers – from start-ups, academia and AI developers to international governments – to assess specific capabilities of individual models and then produce a score based on their results. Inspect can be used to evaluate models in a range of areas, including their core knowledge, ability to reason, and autonomous capabilities. Released under an open source licence, Inspect is now freely available for the AI community to use. The platform is available from today – the first time that an AI safety testing platform spearheaded by a state-backed body has been released for wider use…”
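The pattern the release describes – running a model against a set of test samples and producing a score from the results – can be sketched in a few lines. This is a generic, hypothetical illustration of that evaluate-and-score pattern, not Inspect's actual API; the names (`Sample`, `evaluate`, `toy_model`) are invented for this sketch.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Sample:
    """One evaluation item: a prompt and the expected answer."""
    prompt: str
    target: str

def evaluate(model: Callable[[str], str], samples: List[Sample]) -> float:
    """Run a model over the samples and return the fraction answered correctly."""
    correct = sum(1 for s in samples if model(s.prompt).strip() == s.target)
    return correct / len(samples)

# Stand-in "model" with canned answers, purely for demonstration.
def toy_model(prompt: str) -> str:
    return "4" if "2 + 2" in prompt else "unknown"

samples = [
    Sample("What is 2 + 2?", "4"),
    Sample("What is the capital of France?", "Paris"),
]
score = evaluate(toy_model, samples)  # one of two samples correct -> 0.5
```

In a real evaluations platform the `model` callable would wrap an actual AI system, the dataset would target a specific capability (core knowledge, reasoning, autonomy), and the scorer could be more sophisticated than exact matching; the overall shape – dataset, solver, scorer, aggregate score – is the same.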
