The New York Times – “Some Facebook employees recently told their managers that they were concerned about answering difficult questions about their workplace from friends and family over the holidays. What if Mom or Dad accused the social network of destroying democracy? Or what if they said Mark Zuckerberg, Facebook’s chief executive, was collecting their online data at the expense of privacy?

So just before Thanksgiving, Facebook rolled out something to help its workers: a chatbot that would teach them official company answers for dealing with such thorny questions.

If a relative asked how Facebook handled hate speech, for example, the chatbot — which is a simple piece of software that uses artificial intelligence to carry on a conversation — would instruct the employee to answer with these points:

- Facebook consults with experts on the matter.
- It has hired more moderators to police its content.
- It is working on A.I. to spot hate speech.
- Regulation is important for addressing the issue.
It would also suggest citing statistics from a Facebook report about how the company enforces its standards. The answers were put together by Facebook’s public relations department, parroting what company executives have publicly said. And the chatbot has a name: the “Liam Bot.” (The provenance of the name is unclear.)…”
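For readers curious what a bot like this might look like under the hood, here is a minimal, purely illustrative sketch in Python: a lookup table mapping topic keywords to canned talking points. Nothing here reflects Facebook’s actual Liam Bot; the topic keywords, the function name, and the fallback message are assumptions made up for illustration, and the talking points are simply the ones quoted in the article above.

```python
# Purely illustrative sketch of a "talking points" chatbot.
# This does NOT reflect how Facebook's Liam Bot actually works; the
# keyword matching, structure, and fallback text are invented here.

TALKING_POINTS = {
    # Topic keyword -> canned answers (taken from the article's list above).
    "hate speech": [
        "Facebook consults with experts on the matter.",
        "It has hired more moderators to police its content.",
        "It is working on A.I. to spot hate speech.",
        "Regulation is important for addressing the issue.",
        "Cite statistics from Facebook's report on how it enforces its standards.",
    ],
}

def suggest_answers(question: str) -> list[str]:
    """Return canned talking points whose topic keyword appears in the question."""
    q = question.lower()
    for topic, points in TALKING_POINTS.items():
        if topic in q:
            return points
    return ["No guidance available for that topic."]  # hypothetical fallback

if __name__ == "__main__":
    for point in suggest_answers("How does Facebook handle hate speech?"):
        print("-", point)
```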