Forbes [free to read]: “Social Links, a surveillance company that had thousands of accounts banned after Meta accused it of mass-scraping Facebook and Instagram, is now using ChatGPT to make sense of data its software grabs from social media. Most people use ChatGPT to answer simple queries, draft emails, or produce useful (and useless) code. But spyware companies are now exploring how to use it and other emerging AI tools to surveil people on social media. In a presentation at the Milipol homeland security conference in Paris on Tuesday, online surveillance company Social Links demonstrated ChatGPT performing “sentiment analysis,” in which the AI assesses the mood of social media users or highlights commonly discussed topics within a group. That analysis can then help predict whether online activity will spill over into physical violence and require law enforcement action.

Founded by Russian entrepreneur Andrey Kulikov in 2017, Social Links now has offices in the Netherlands and New York. Meta dubbed the company a spyware vendor in late 2022, banning 3,700 Facebook and Instagram accounts it allegedly used to repeatedly scrape the social sites. Social Links denies any link to those accounts, and the Meta claim hasn’t harmed its reported growth: company sales executive Rob Billington said the company had more than 500 customers, half of them based in Europe and just over 100 in North America.

That Social Links is using ChatGPT shows how OpenAI’s breakout tool of 2023 can empower a surveillance industry keen to tout artificial intelligence as a tool for public safety. But according to the American Civil Liberties Union’s senior policy analyst Jay Stanley, using AI tools like ChatGPT to augment social media surveillance will likely “scale up individualized monitoring in a way that could never be done with human monitors,” he told Forbes…”
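For readers unfamiliar with what “sentiment analysis” via a chat model looks like in practice, below is a minimal, hypothetical sketch: a batch of posts is sent to the OpenAI chat API with a classification prompt and a one-word label comes back for each. The prompt, model name, and label set are illustrative assumptions; this is not Social Links’ actual pipeline or code.

```python
# Hypothetical sketch of LLM-based sentiment classification over scraped posts.
# Assumes the openai Python package (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def classify_sentiment(posts: list[str]) -> list[str]:
    """Return one assumed label per post: positive, neutral, or negative."""
    labels = []
    for post in posts:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model choice, any chat model would work
            messages=[
                {
                    "role": "system",
                    "content": (
                        "Classify the sentiment of the following social media post "
                        "as exactly one word: positive, neutral, or negative."
                    ),
                },
                {"role": "user", "content": post},
            ],
        )
        labels.append(response.choices[0].message.content.strip().lower())
    return labels

if __name__ == "__main__":
    sample = [
        "Great turnout at the rally today!",
        "This policy is a disaster and people are furious.",
    ]
    print(classify_sentiment(sample))
```

The same pattern (prompt plus batch of posts) could be pointed at other tasks the article mentions, such as surfacing commonly discussed topics, simply by changing the instruction in the system message.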