Accurate, Focused Research on Law, Technology and Knowledge Discovery Since 2002

Deepfake Bot Submissions to Federal Public Comment Websites Cannot Be Distinguished from Human Submissions

Technology Science – Max Weiss – Original Research

“Abstract: The federal comment period is an important way that federal agencies incorporate public input into policy decisions. Now that comments are accepted online, public comment periods are vulnerable to attacks at Internet scale. For example, in 2017, more than 21 million (96% of the 22 million) public comments submitted regarding the FCC’s proposal to repeal net neutrality were discernible as being generated using search-and-replace techniques [1]. Publicly available artificial intelligence methods can now generate “Deepfake Text,” computer-generated text that closely mimics original human speech. In this study, I tested whether federal comment processes are vulnerable to automated, unique deepfake submissions that may be indistinguishable from human submissions. I created an autonomous computer program (a bot) that successfully generated and submitted a high volume of human-like comments during October 26-30, 2019 to the federal public comment website for the Section 1115 Idaho Medicaid Reform Waiver.

Results summary: The bot generated and submitted 1,001 deepfake comments to the public comment website at Medicaid.gov over a period of four days. These comments comprised 55.3% (1,001 out of 1,810) of the total public comments submitted. Comments generated by the bot were often highly relevant to the Idaho Medicaid waiver application, including discussion of the proposed waiver’s consequences on coverage numbers, its impact on government costs, unnecessary administrative burdens, and relevant personal experience. Finally, in order to test whether humans can distinguish deepfake comments from other comments submitted, I conducted a survey of 108 respondents on Amazon’s Mechanical Turk. Survey respondents, who were trained and assessed through exercises in which they distinguished more obvious bot versus human comments, were only able to correctly classify the submitted deepfake comments half (49.63%) of the time, which is comparable to the expected result of random guesses or coin flips. This study demonstrates that federal public comment websites are highly vulnerable to massive submissions of deepfake comments from bots and suggests that technological remedies (e.g., CAPTCHAs) should be used to limit the potential of abuse…”
