
What did they know, and when did they know it? The Microsoft Bing edition

Gary Marcus – Substack – The Road to AI We Can Trust – A new discovery that makes a curious story a whole lot more curious. “We all know by now just how off the rails Bing can get. Here’s a timeline, deliberately leaving one surprising thing out until the end.

  • March 23, 2016: Microsoft releases the chatbot Tay. Under the malign influence of some users, it quickly begins to spout racist rhetoric. Tay is retracted after 16 hours, for being a “racist asshole”.
  • Over the next several years: Lessons are learned. Don’t release stuff too soon; be careful when your products can learn from the open web. That sort of thing.
  • June 21, 2022: Microsoft releases their Responsible AI Standard, “an important step in [the] journey to develop better, more trustworthy AI”.
  • November 2022: OpenAI releases ChatGPT to incredible acclaim.
  • Also November 2022: That other thing that I am saving for the end.
  • Feb 2, 2023: Microsoft President Brad Smith, an attorney who was once its General Counsel, positions Microsoft as the responsible AI company, with the first boldfaced principle being “First, we must ensure that AI is built and used responsibly and ethically”, and noting “First, these issues are too important to be left to technologists alone.”
  • Feb 6, 2023: Bing chat is released, in tandem with OpenAI, to enormous initial acclaim.
  • Week of Feb 15, 2023: Internet goes wild with reports of Bing chat (code-named Sydney) going off the rails.

Many of us knew all that already. But then… someone on Twitter sends me a tip this morning. My first reaction was to think it was a hoax. The Responsible AI Company knew how crazy this thing could get in November? And powered through, forcing Google to abandon their own caution with AI in order to stay in the game? No, can’t be true. That would be too crazy, and too embarrassing…”
