Accurate, Focused Research on Law, Technology and Knowledge Discovery Since 2002

Category Archives: AI

Perplexity will show live US election results despite AI accuracy warnings

Ars Technica: “On Friday, Perplexity launched an election information hub that relies on data from The Associated Press and Democracy Works to provide live updates and information about the 2024 US general election, which takes place on Tuesday, November 5. “Starting Tuesday, we’ll be offering live updates on elections using data from The Associated Press so you can stay informed on presidential, senate, and house races at both a state and national level,” Perplexity wrote in a blog post. The site will pull data from special data sources (called APIs) hosted by the two organizations. As of Monday, Perplexity’s hub currently provides interactive information on voting requirements, poll times, and summaries about ballot measures, candidates, policy positions, and endorsements. Users can ask questions about the information similar to using a chatbot like ChatGPT…”
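The excerpt's mention of "special data sources (called APIs)" describes a standard pattern: a client periodically polls an election-data API and re-renders the returned race totals. The sketch below illustrates that pattern in Python only as an assumption-laden example; the endpoint URL, authentication header, and response fields are hypothetical stand-ins, not the actual AP Elections or Democracy Works interfaces.

```python
import time
import requests

# Hypothetical endpoint and key; the real AP Elections and Democracy Works
# APIs have their own URLs, authentication schemes, and response formats.
API_URL = "https://api.example.com/elections/2024-11-05/races"
API_KEY = "YOUR_API_KEY"
POLL_INTERVAL_SECONDS = 60


def fetch_race_results():
    """Fetch the current snapshot of race results from the (hypothetical) API."""
    response = requests.get(
        API_URL,
        params={"level": "state", "format": "json"},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json().get("races", [])


def summarize(race):
    """Build a one-line summary, e.g. 'US Senate (PA): Smith 51.2% (83% reporting)'."""
    leader = max(race["candidates"], key=lambda c: c["vote_pct"])
    return (f"{race['office']} ({race['state']}): "
            f"{leader['name']} {leader['vote_pct']:.1f}% "
            f"({race['precincts_reporting_pct']:.0f}% reporting)")


if __name__ == "__main__":
    # Poll on a fixed interval, the way a live-results hub might refresh its data.
    while True:
        for race in fetch_race_results():
            print(summarize(race))
        time.sleep(POLL_INTERVAL_SECONDS)
```

Simple interval polling is shown here because most election-data feeds are pull-based; a production hub would also cache responses and handle rate limits and partial outages.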

ChatGPT rolls out a Google competitor with a skewed view of the news

Poynter: “When I asked the latest artificial intelligence-powered search engine from ChatGPT what’s happening in my city of St. Petersburg, or Orlando and Miami, I didn’t find links to the largest newspapers in Florida. Instead, I found articles from the New York Post, The Sun, People Magazine — and real estate blogs St. Pete Rising… Continue Reading

LLRX October 2024 Columns and Articles

Artificial Intelligence and Unconscious Bias Risk – Elizabeth Sweetland reviews Meredith Broussard, More than a Glitch: Confronting Race, Gender, and Ability Bias in Tech (MIT Press 2023), 248 pages.
Trump’s Election Lawyers Must Heed Their Ethical Duties – Attorneys Stephen Marcus and Bruce Kuhlik discuss the ethical responsibilities of lawyers in the context of predicted attempts… Continue Reading

The chatbot optimisation game: can we trust AI web searches?

The Guardian: “…Looking into the sort of evidence that large language models (LLMs, the engines on which chatbots are built) find most convincing, three computer science researchers from the University of California, Berkeley, found current chatbots overrely on the superficial relevance of information. They tend to prioritise text that includes pertinent technical language or is… Continue Reading

Pete Recommends – Weekly highlights on cyber security issues, November 2, 2024

Via LLRX – Pete Recommends – Weekly highlights on cyber security issues, November 2, 2024 – Privacy and cybersecurity issues impact every aspect of our lives – home, work, travel, education, finance, health and medical records – to name but a few. On a weekly basis Pete Weiss highlights articles and information that focus on… Continue Reading

Annoyed Redditors tanking Google Search results illustrates perils of AI scrapers

Ars Technica: “A trend on Reddit that sees Londoners giving false restaurant recommendations in order to keep their favorites clear of tourists and social media influencers highlights the inherent flaws of Google Search’s reliance on Reddit and Google’s AI Overview. In May, Google launched AI Overviews in the US, an experimental feature that populates the top… Continue Reading

Exploiting Meta’s Weaknesses, Deceptive Political Ads Thrived on Facebook and Instagram in Run-Up to Election

ProPublica: Reporting Highlights –
Deceptive Political Ads: Eight deceptive advertising networks have placed over 160,000 election and social issues ads across more than 340 Facebook pages in English and Spanish.
Harmed Users: Some of the people who clicked on ads were unwittingly signed up for monthly credit card charges or lost health coverage, among other consequences… Continue Reading

Prebunking Elections Rumors: Artificial Intelligence Assisted Interventions Increase Confidence in American Elections

Mitchell Linegar, arXiv:2410.19202, 24 Oct 2024 – Large Language Models (LLMs) can assist in the prebunking of election misinformation. Using results from a preregistered two-wave experimental study of 4,293 U.S. registered voters conducted in August 2024, we show that LLM-assisted prebunking significantly reduced belief in specific election myths, with these effects persisting for at least one… Continue Reading