
Salesforce offers 5 guidelines to reduce AI bias

TechRepublic: “Salesforce, which last year introduced its Einstein AI framework behind its Customer 360 platform, has published what it says is the industry’s first Guidelines for Trusted Generative AI. Written by Paula Goldman, chief ethical and humane use officer, and Kathy Baxter, principal architect of ethical AI at the company, the guidelines are meant to help organizations prioritize AI-driven innovation around ethics and accuracy — including where bias leaks can spring up and how to find and cauterize them. Baxter, who also serves as a visiting AI fellow at the National Institute of Standards and Technology, said there are several entry points for bias in machine learning models used for job screening, market research, healthcare decisions, criminal justice applications and more. However, she noted, there is no easy way to measure what constitutes a model that is ‘safe’ or has exceeded a certain level of bias or toxicity.”
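
Baxter’s point about the lack of an agreed “safe” threshold can be made concrete with a simple fairness measure. The sketch below is a minimal illustration, assuming a hypothetical job-screening scenario that is not drawn from Salesforce’s guidelines; it computes demographic parity difference (the gap in positive-outcome rates across groups) and shows that the number alone does not tell you whether a model has “exceeded a certain level of bias.”

# Minimal sketch of one common bias metric: demographic parity difference.
# All names and data below are hypothetical, not from Salesforce's guidelines.

def demographic_parity_difference(predictions, groups, positive_label=1):
    """Absolute gap in positive-outcome rates between the groups."""
    rates = {}
    for group in set(groups):
        selected = [p for p, g in zip(predictions, groups) if g == group]
        rates[group] = sum(1 for p in selected if p == positive_label) / len(selected)
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical screening outcomes for applicants from groups "A" and "B".
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5 gap; whether that is "safe" is a policy judgment the metric cannot make

The metric quantifies a disparity, but the acceptable threshold is a policy choice rather than a property of the model, which is the difficulty Baxter describes.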
