New AI report: "Defining AI incidents and related terms" (OECD (2024), OECD Artificial Intelligence Papers, No. 16, OECD Publishing, Paris, https://doi.org/10.1787/d1a8d965-en). It is a must-read for everyone working in AI. Key points:
- An AI incident is defined as: "an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to any of the following harms:
  ➵ injury or harm to the health of a person or groups of people;
  ➵ disruption of the management and operation of critical infrastructure;
  ➵ violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights;
  ➵ harm to property, communities or the environment."
- An AI hazard is defined as: "An AI hazard is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems could plausibly lead to an AI incident, i.e., any of the following harms:
  ➵ injury or harm to the health of a person or groups of people;
  ➵ disruption of the management and operation of critical infrastructure;
  ➵ violations of human rights or a breach of obligations under the applicable law intended to protect fundamental, labour and intellectual property rights;
  ➵ harm to property, communities or the environment."
- Types of harm listed by the report:
  ➵ Physical harm
  ➵ Environmental harm
  ➵ Economic or financial harm, including harm to property
  ➵ Reputational harm
  ➵ Harm to public interest
  ➵ Harm to human rights and to fundamental rights
  ➵ Psychological harm

  The report states: "A further step would be to establish clear taxonomies to categorise incidents for each dimension of harm. Assessing the 'seriousness' of an AI incident, harm, damage, or disruption (e.g., to determine whether an event is classified as an incident or a serious incident) is context-dependent and is also left for further discussion."
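To make the definitions concrete, here is a minimal sketch of how the report's harm dimensions and its incident/hazard distinction could be encoded as a simple taxonomy for triaging events. This is not from the report; the class, field, and enum names are my own illustrative assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class HarmType(Enum):
    """Dimensions of harm listed in the OECD report."""
    PHYSICAL = auto()
    ENVIRONMENTAL = auto()
    ECONOMIC_OR_FINANCIAL = auto()   # includes harm to property
    REPUTATIONAL = auto()
    PUBLIC_INTEREST = auto()
    HUMAN_AND_FUNDAMENTAL_RIGHTS = auto()
    PSYCHOLOGICAL = auto()


@dataclass
class AIEvent:
    """An event involving one or more AI systems, recorded for triage."""
    description: str
    harms_realised: set[HarmType] = field(default_factory=set)   # harms that actually occurred
    harms_plausible: set[HarmType] = field(default_factory=set)  # harms that could plausibly occur

    def classify(self) -> str:
        # Following the report's definitions: realised harm -> incident;
        # plausible but not (yet) realised harm -> hazard.
        if self.harms_realised:
            return "AI incident"
        if self.harms_plausible:
            return "AI hazard"
        return "neither incident nor hazard"


# Hypothetical usage:
event = AIEvent(
    description="Scheduling model outage disrupts hospital operations",
    harms_realised={HarmType.PHYSICAL},
)
print(event.classify())  # -> "AI incident"
```

As the report notes, grading the seriousness of an event (incident vs. serious incident) is context-dependent, so a real classification scheme would need additional, domain-specific criteria beyond this sketch.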