“…I agree with Dr. Narayanan’s assessment that using AI to predict social outcomes is “fundamentally dubious,” but I don’t believe that AI is doomed to always be worse than humans at assessing job candidates for quality. This is not because we have reason to believe AI will ever be particularly good at making hiring decisions, but because, in my view, humans are pretty bad at making these hiring assessments to begin with. So long as AI can perform slightly better than a checklist plus a coin flip, it seems like AI would be at least on par with humans. I also don’t believe that AI is doomed to forever discriminate in hiring. I think it’s hypothetically possible to design an AI that doesn’t have a single racist or sexist kilobyte in it.
What’s less clear is why anyone should believe that the average AI system will be better at reducing hiring discrimination than the average traditional hiring practice. The idea AI proponents seem to have is that AI is more “objective,” and because there is less discretion involved, it is therefore less discriminatory. This is silly logic: if an employer made a rule that said, “hire the 5th white person who walks through the door,” that rule would be trivial to enforce objectively. But obviously it is the rule itself that is discriminatory, not any discretionary wiggle room you can squeeze out of the rule. If your response to this is “actually, the choice of which rule to use was subjective, so subjectivity is still the problem,” then yes, that’s the point! Nothing in the process of building, training, and implementing an AI is ordained by God. The use of AI is up to an employer’s discretion, and the AI may just end up enforcing racist rules objectively…”
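To make the commenter’s point concrete, here is a minimal sketch in Python of a decision rule that is perfectly “objective” in the proponents’ sense (deterministic, reproducible, discretion-free) and yet discriminatory by construction. Everything here (the Candidate class, its fields, the rule itself) is hypothetical, mirroring the thought experiment above rather than any real hiring system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Candidate:
    name: str
    race: str  # hypothetical field, for illustration only


def hire(candidates: list[Candidate]) -> Optional[Candidate]:
    """Enforce the hypothetical rule: hire the 5th white candidate
    to walk through the door.

    Deterministic and discretion-free, i.e. perfectly "objective,"
    and discriminatory by construction.
    """
    seen = 0
    for candidate in candidates:  # candidates in arrival order
        if candidate.race == "white":
            seen += 1
            if seen == 5:
                return candidate
    return None  # fewer than five such candidates walked in
```

Two runs on the same applicant stream always return the same hire: consistency and auditability are properties of how a rule is enforced, not of whether the rule itself is fair.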