Accurate, Focused Research on Law, Technology and Knowledge Discovery Since 2002

What policy makers need to know about AI (and what goes wrong if they don’t)

Answer.AI: “Many policy makers in the US are being lobbied by various well-funded groups to introduce ‘AI safety’ legislation. As a result, a number of pieces of legislation are now being drafted and voted on. For instance, SB 1047 is a bill currently working its way through the process in California, and it introduces a number of new regulations that would affect AI developers and researchers. Few, if any, of the policy makers working on this kind of legislation have a background in AI, and it is therefore hard for them to fully understand the real practical implications of the legislative language brought to them for consideration.

This article will endeavor to explain the foundations of how AI models are trained and used that are necessary to create effective legislation. I will use SB 1047 throughout as a case study, because at the time of writing (mid-June 2024) the actual impact of this piece of legislation is very different to what its primary backer, Senator Scott Wiener, appears to have in mind. The aims of SB 1047 can be understood through the open letter that he wrote. This article will not comment on whether the stated goals are appropriate or accurate, since plenty of other commentators have already written at length about these social and political issues. Here we will instead look only at how they can be implemented.

Another reason to focus on this legislation is that one of the current strongest open source models in the world comes from the jurisdiction it covers: Llama 3 70b, created by the Californian company Meta. (The other current top open source models are Tongyi Qianwen (Qwen) 2 72b and DeepSeek-Coder-V2; these models are both generally stronger than Llama 3, and both are created by Chinese companies.) A stated goal of SB 1047 is to ensure that large models cannot be released without first being confirmed to be safe, and to do this whilst ensuring open source can continue to thrive.

However, this law as written will cover hardly any large models at all, and if it were modified or interpreted so that it did, it would entirely block all large open source model development. The gap between the goals and the reality of SB 1047 is due to critical technical details that can only be grasped by understanding the technology being regulated more deeply. In the remainder of this article, I’ll go step by step through the technical details (with examples) of these issues, along with simple, easy-to-implement recommendations to resolve them…”