The Ethics of Advanced AI Assistants

Google DeepMind – “First, because LLMs display immense modeling power, there is a risk that the model weights encode private information present in the training corpus. In particular, it is possible for LLMs to ‘memorise’ personally identifiable information (PII) such as names, addresses and telephone numbers, and subsequently leak such information through generated text outputs (Carlini et al., 2024)…

This paper focuses on the opportunities and the ethical and societal risks posed by advanced AI assistants. We define advanced AI assistants as artificial agents with natural language interfaces, whose function is to plan and execute sequences of actions on behalf of a user – across one or more domains – in line with the user’s expectations. The paper starts by considering the technology itself, providing an overview of AI assistants, their technical foundations and potential range of applications. It then explores questions around AI value alignment, well-being, safety and malicious uses. Extending the circle of inquiry further, we next consider the relationship between advanced AI assistants and individual users in more detail, exploring topics such as manipulation and persuasion, anthropomorphism, appropriate relationships, trust and privacy. With this analysis in place, we consider the deployment of advanced assistants at a societal scale, focusing on cooperation, equity and access, misinformation, economic impact, the environment and how best to evaluate advanced AI assistants. Finally, we conclude by providing a range of recommendations for researchers, developers, policymakers and public stakeholders.

Our analysis suggests that advanced AI assistants are likely to have a profound impact on our individual and collective lives. To be beneficial and value-aligned, we argue that assistants must be appropriately responsive to the competing claims and needs of users, developers and society. Features such as increased agency, the capacity to interact in natural language and high degrees of personalisation could make AI assistants especially helpful to users. However, these features also make people vulnerable to inappropriate influence by the technology, so robust safeguards are needed. Moreover, when AI assistants are deployed at scale, knock-on effects that arise from interaction between them, and questions about their overall impact on wider institutions and social processes, rise to the fore. These dynamics likely require technical and policy interventions in order to foster beneficial cooperation and to achieve broad, inclusive and equitable outcomes. Finally, given that the current landscape of AI evaluation focuses primarily on the technical components of AI systems, it is important to invest in the holistic sociotechnical evaluations of AI assistants, including human–AI interaction, multi-agent and societal-level research, to support responsible decision-making and deployment in this domain.”
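The memorisation risk the authors describe is concrete enough to sketch a mitigation for. Below is a minimal, illustrative example – not anything from the paper itself – of a naive post-generation filter that redacts two rigidly formatted PII types (telephone numbers and email addresses) from model output. The names PII_PATTERNS and redact_pii are hypothetical, and regex matching of this kind catches only predictable formats; reliably detecting personal names or street addresses requires trained entity-recognition models and is far harder in practice.

```python
import re

# Naive patterns for two rigidly formatted PII types. Real-world PII
# detection (personal names, street addresses) needs NER models and is
# far harder than this sketch suggests.
PII_PATTERNS = {
    "phone": re.compile(
        r"(?:\+?\d{1,3}[ .-]?)?(?:\(\d{3}\)|\d{3})[ .-]?\d{3}[ .-]?\d{4}\b"
    ),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def redact_pii(text: str) -> str:
    """Replace every matched PII span with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

if __name__ == "__main__":
    sample = "Call Jane at (555) 123-4567 or write to jane.doe@example.com."
    print(redact_pii(sample))
    # -> Call Jane at [PHONE REDACTED] or write to [EMAIL REDACTED].
```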

What Homeowners Insurance Actually Covers

Lifehacker: “While Los Angeles County continues to battle devastating wildfires, you may be wondering about the safety of your own home. More specifically, how are you covered when the unthinkable happens? Unfortunately, insurance doesn’t function like a gas or electric company; even in the face of disaster, insurers aren’t obligated to service your home. And…”

Map: How big are the LA fires? Use this tool to overlay them atop where you live

CalMatters is joining with our public media partners at PBS SoCal, LAist and KCRW to bring you reliable, essential, free information to support people affected by the wildfires and keep all Californians up to date. Sign up for the Daily Wildfire Updates newsletter. “The fires sweeping across Los Angeles County for the past week have…”
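For readers curious what this kind of overlay involves technically, here is a minimal sketch of the same idea using the open-source folium mapping library. This is not CalMatters’ actual tool: the home coordinates and the fire-perimeter polygon below are invented placeholders, and a real tool would geocode the user’s address and load published perimeter data from an incident GIS feed.

```python
import folium

# Hypothetical home coordinates (a downtown-LA placeholder); a real tool
# would geocode the address the user types in.
HOME = (34.0522, -118.2437)

# Placeholder fire perimeter as a GeoJSON Feature. Note that GeoJSON
# coordinates are [longitude, latitude], the reverse of folium's order.
fire_perimeter = {
    "type": "Feature",
    "properties": {"name": "Example fire perimeter"},
    "geometry": {
        "type": "Polygon",
        "coordinates": [[
            [-118.30, 34.05], [-118.25, 34.10],
            [-118.20, 34.06], [-118.25, 34.02],
            [-118.30, 34.05],  # first and last points must coincide
        ]],
    },
}

m = folium.Map(location=HOME, zoom_start=12)
folium.Marker(HOME, tooltip="Your home").add_to(m)
folium.GeoJson(
    fire_perimeter,
    name="fire perimeter",
    style_function=lambda feature: {"color": "red", "fillOpacity": 0.3},
).add_to(m)
m.save("fire_overlay.html")  # open in a browser to inspect the overlay
```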