Accurate, Focused Research on Law, Technology and Knowledge Discovery Since 2002

Oversight of A.I.: Rules for Artificial Intelligence

Senate Judiciary Committee Hearing, May 15, 2023 – Oversight of A.I.: Rules for Artificial Intelligence – Hearing video

  • Witnesses – Samuel Altman, CEO, OpenAI, San Francisco, CA – Download Testimony: “…OpenAI is a leading developer of large language models (LLMs) and other AI tools. Fundamentally, the current generation of AI models are large-scale statistical prediction machines – when a model is given a person’s request, it tries to predict a likely response. These models operate similarly to auto-complete functions on modern smartphones, email, or word processing software, but on a much larger and more complex scale. The model learns from reading or seeing data about the world, which improves its predictive abilities until it can perform tasks such as summarizing text, writing poetry, and crafting computer code. Using variants of this technology, AI tools are also capable of learning statistical relationships between images and text descriptions and then generating new images based on natural language inputs…”
  • Christina Montgomery, Chief Privacy & Trust Officer, IBM, Cortlandt Manor, NY – Download Testimony: “…IBM has strived for more than a century to bring powerful new technologies like artificial intelligence into the world responsibly, and with clear purpose. We follow long-held principles of trust and transparency that make clear the role of AI is to augment, not replace, human expertise and judgement. We were one of the first in our industry to establish an AI Ethics Board, which I co-chair, and whose experts work to ensure that our principles and commitments are upheld in our global business engagements…This period of focused public attention on AI is precisely the time to define and build the right guardrails to protect people and their interests. It is my privilege to share with you IBM’s recommendations for those guardrails…”
  • Gary Marcus, Professor Emeritus, New York University, Vancouver, BC, Canada – Download Testimony: “…We all more or less agree on the values we would like for our AI systems to honor. We want, for example, for our systems to be transparent, to protect our privacy, to be free of bias, and above all else to be safe. But current systems are not in line with these values. Current systems are not transparent, they do not adequately protect our privacy, and they continue to perpetuate bias. Even their makers don’t entirely understand how they work. Most of all, we cannot remotely guarantee they are safe…”
  • See also WSJ [free article] – ChatGPT’s Sam Altman Faces Senate Panel Examining Artificial Intelligence. Congress looks to impose AI regulations, if it can reach consensus.
  • See also Advocating for Open Models in AI Oversight: Stability AI’s Letter to the United States Senate. Today, the United States Senate held a hearing to consider the future of AI oversight. Ahead of the hearing, Stability AI was pleased to share a detailed paper from CEO Emad Mostaque emphasizing the importance of open models for a transparent, competitive, and resilient digital economy. You can read the paper here.
  • See also Fortune – OpenAI CEO Sam Altman tells senators he wants to see A.I. licensed. That might be good for us. It’s definitely good for OpenAI.
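The testimony’s description of LLMs as “large-scale statistical prediction machines” akin to auto-complete can be illustrated with a toy bigram model – a deliberately simplified sketch (the corpus and `predict` helper below are hypothetical), not how production LLMs actually work, but it shows the same underlying idea of predicting a likely next word from observed frequencies:

```python
from collections import Counter, defaultdict

# Toy stand-in for training data (hypothetical example text).
corpus = "the cat sat on the mat the cat ran".split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the statistically most likely next word after `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("the"))  # "cat" follows "the" most often in the toy corpus
```

Real LLMs replace these raw counts with learned probabilities over vast datasets, but the core operation – predicting a likely continuation – is the same.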
