“U.S. Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO), Chair and Ranking Member of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law, announced a bipartisan legislative framework to establish guardrails for artificial intelligence. The framework lays out specific principles for upcoming legislative efforts, including the establishment of an independent oversight body, ensuring legal accountability for harms, defending national security, promoting transparency, and protecting consumers and kids. The announcement follows multiple hearings in the Subcommittee featuring witness testimony from industry and academic leaders, including OpenAI CEO Sam Altman, Anthropic CEO Dario Amodei, and Microsoft President and Vice Chair Brad Smith, who will testify before the Subcommittee on Tuesday. Specifically, the framework would:
- Establish a Licensing Regime Administered by an Independent Oversight Body. Companies developing sophisticated general purpose AI models (e.g., GPT-4) or models used in high-risk situations (e.g., facial recognition) should be required to register with an independent oversight body, which would have the authority to audit companies seeking licenses and to cooperate with other enforcers such as state Attorneys General. The entity should also monitor and report on technological developments and economic impacts of AI.
- Ensure Legal Accountability for Harms. Congress should require AI companies to be held liable through entity enforcement and private rights of action when their models and systems breach privacy, violate civil rights, or cause other harms such as non-consensual explicit deepfake imagery of real people, production of child sexual abuse material from generative AI, and election interference. Congress should clarify that Section 230 does not apply to AI and ensure enforcers and victims can take companies and perpetrators to court.
- Defend National Security and International Competition. Congress should utilize export controls, sanctions, and other legal restrictions to limit the transfer of advanced AI models, hardware, and other equipment to China, Russia, other adversary nations, and countries engaged in gross human rights violations.
- Promote Transparency. Congress should promote responsibility, due diligence, and consumer redress by requiring transparency from companies. Developers should be required to disclose essential information about training data, limitations, accuracy, and safety of AI models to users and other companies. Users should also have a right to an affirmative notice when they are interacting with an AI model or system, and the new agency should establish a public database to report when significant adverse incidents occur or failures cause harms.
- Protect Consumers and Kids. Consumers should have control over how their personal data is used in AI systems, and strict limits should be imposed on generative AI involving kids. Companies deploying AI in high-risk or consequential situations should be required to implement safety brakes and give notice when AI is being used to make adverse decisions.
A copy of the bipartisan framework can be found here.”