
Researchers developed an AI system that predicts the likelihood people will spread misinformation based on words they use

Poynter: “University of Sheffield researchers Yida Mu and Dr. Nikos Aletras report they’ve developed an artificial intelligence system to help identify Twitter users who are more likely to share unreliable news sources. In their study, published in the journal PeerJ Computer Science, the researchers found strong correlations between specific language patterns and the propensity to share false information. Users who shared dubious information tended to use words such as “media,” “government,” “truth,” “Israel,” “liberal,” “Muslim” and “Islam” in their tweets. Users who shared more reliable information sources tended to use more personal words such as “myself,” “feel,” “excited,” “mood,” “mom” and “okay.” Topics related to politics, such as political ideology, government and justice, were correlated with users who propagate unreliable sources. “We also observe a high correlation of such users with the topic related to impolite personal characterizations. This corroborates results of a recent study that showed political incivility on Twitter is correlated to political polarization,” the study authors wrote. The researchers based their findings on the analysis of over 1 million tweets from approximately 6,200 Twitter users. This data helped the researchers develop “new natural language processing methods.”…”
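The excerpt describes correlating the words in users’ tweets with whether those users tend to share unreliable sources. The snippet below is a minimal sketch of that general idea only, not the authors’ published method: it assumes a toy dataset, a user-level binary label, and a bag-of-words linear classifier whose weights serve as a rough proxy for word–class correlation.

```python
# Minimal sketch (NOT the study's method): correlate word usage with a
# user-level "shares unreliable sources" label via a bag-of-words model.
# Data, labels, and the model choice are assumptions for illustration.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical per-user text: each entry is one user's tweets concatenated.
user_texts = [
    "the media and the government hide the truth",        # 1 = shared unreliable sources
    "liberal media lies about the government again",      # 1
    "feeling excited today, mood is great, love you mom", # 0 = shared reliable sources
    "i feel okay, treating myself to coffee",              # 0
]
labels = np.array([1, 1, 0, 0])

# Turn each user's tweet history into word-frequency features.
vectorizer = TfidfVectorizer(lowercase=True)
X = vectorizer.fit_transform(user_texts)

# Fit a linear classifier; positive weights point toward words associated
# with users who share unreliable sources, negative toward reliable sharers.
clf = LogisticRegression().fit(X, labels)

vocab = np.array(vectorizer.get_feature_names_out())
order = np.argsort(clf.coef_[0])
print("words leaning toward reliable sharers:  ", vocab[order[:5]])
print("words leaning toward unreliable sharers:", vocab[order[-5:]])
```

On real data one would work with the roughly 6,200 users and 1 million tweets mentioned above and validate the correlations rather than read them off a toy model; the sketch only illustrates how word features can be tied to a sharing-behavior label.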
