Futurism: “Microsoft has finally spoken out about its unhinged AI chatbot. In a new blog post, the company admitted that its Bing Chat feature is not really being used to find information — after all, it’s unable to consistently tell truth from fiction — but for “social entertainment” instead. The company found that “extended chat sessions of 15 or more questions” can lead to “responses that are not necessarily helpful or in line with our designed tone.” As to why that is, Microsoft offered up a surprising theory: it’s all the fault of the app’s pesky human users. “The model at times tries to respond or reflect in the tone in which it is being asked to provide responses that can lead to a style we didn’t intend,” the company wrote. “This is a non-trivial scenario that requires a lot of prompting so most of you won’t run into it, but we are looking at how to give you more fine-tuned control.” The news comes after a growing number of users had truly bizarre run-ins with the chatbot in which it did everything from making up horror stories to gaslighting users, acting passive-aggressive, and even recommending the occasional Hitler salute. But can all of these unhinged conversations be traced back to the original prompt of the user? Is Microsoft’s AI really just mimicking our tone and intent in its off-the-rails answers, a mirror of our desires to mess with new technology?..”
- See also another article retreating from the rapturous embrace and endorsement of Microsoft’s version of ChatGPT, this one from the New York Times – “A very strange conversation with the chatbot built into Microsoft’s search engine led to it declaring its love for me. Last week, after testing the new, A.I.-powered Bing search engine from Microsoft, I wrote that, much to my shock, it had replaced Google as my favorite search engine. But a week later, I’ve changed my mind (oops, really). I’m still fascinated and impressed by the new Bing, and the artificial intelligence technology (created by OpenAI, the maker of ChatGPT) that powers it. But I’m also deeply unsettled, even frightened, by this A.I.’s emergent abilities. It’s now clear to me that in its current form, the A.I. that has been built into Bing — which I’m now calling Sydney, for reasons I’ll explain shortly — is not ready for human contact. Or maybe we humans are not ready for it. This realization came to me on Tuesday night, when I spent a bewildering and enthralling two hours talking to Bing’s A.I. through its chat feature, which sits next to the main search box in Bing and is capable of having long, open-ended text conversations on virtually any topic. (The feature is available only to a small group of testers for now, although Microsoft — which announced the feature in a splashy, celebratory event at its headquarters — has said it plans to release it more widely in the future.)..”
- See also The Atlantic – AI Search Is a Disaster. Microsoft and Google believe chatbots will change search forever. “So far, there’s no reason to believe the hype… Last week, both Microsoft and Google announced that they would incorporate AI programs similar to ChatGPT into their search engines—bids to transform how we find information online into a conversation with an omniscient chatbot. One problem: These language models are notorious mythomaniacs.”