Business Insider – But it won’t blame AI:
- “Microsoft took down a string of embarrassing and offensive travel articles last week.”
- “The company said the articles were not published by ‘unsupervised AI’ and blamed ‘human error.’”
- “But the scope of the errors should concern anyone worried about AI’s impact on the news.”
See also TechCrunch: “…The major AI players that fill today’s headlines and feed stock market frenzies — OpenAI, Google, Microsoft — operate their platforms on black-box models. A query goes in one side and an answer spits out the other side, but we have no idea what data or reasoning the AI used to provide that answer. Most of these black-box AI platforms are built on a decades-old technology framework called a ‘neural network.’ These AI models are abstract representations of the vast amounts of data on which they are trained; they are not directly connected to training data. Thus, black-box AIs infer and extrapolate based on what they believe to be the most likely answer, not actual data. Sometimes this complex predictive process spirals out of control and the AI ‘hallucinates.’ By nature, black-box AI is inherently untrustworthy because it cannot be held accountable for its actions. If you can’t see why or how the AI makes a prediction, you have no way of knowing if it used false, compromised, or biased information or algorithms to come to that conclusion…”
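To make the “most likely answer, not actual data” point concrete, here is a minimal, illustrative sketch in plain Python: a toy bigram model, deliberately simplified and not how production LLMs are built. After “training,” only word-co-occurrence counts remain; the original sentences are discarded, so generation is a statistical guess with no link back to any source text.

```python
from collections import defaultdict, Counter

# Toy bigram "language model" illustrating the black-box behavior the
# TechCrunch excerpt describes. It is a deliberately simplified stand-in,
# not a real neural network: after training, only co-occurrence counts
# remain, so generation cannot consult the original sentences.
corpus = [
    "the model predicts the most likely next word",
    "the model does not retrieve the original data",
    "the answer is a statistical guess not a lookup",
]

# "Training": count which word follows which.
transitions = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, cur in zip(words, words[1:]):
        transitions[prev][cur] += 1

def generate(start: str, length: int = 8) -> str:
    """Emit a continuation by always taking the most probable next word."""
    word, out = start, [start]
    for _ in range(length):
        if word not in transitions:
            break
        # The model picks the statistically most likely successor; it has
        # no notion of whether the resulting sentence is true.
        word = transitions[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

# Greedy generation quickly locks into a fluent-sounding loop:
# "the model predicts the model predicts ..." -- plausible-looking text
# produced purely from learned probabilities, a toy analogue of the
# "hallucination" the excerpt mentions.
print(generate("the"))
```

Swapping the greedy `most_common` pick for weighted random sampling would vary the output but not the underlying point: the answer comes from probabilities over training statistics, never from a lookup of the data itself.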