Tech Policy Press: “…Just a couple of weeks before Pichai took the stage, in April, Google DeepMind published a paper that boasts 57 authors, including experts from a range of disciplines from different parts of Google, including DeepMind, Jigsaw, and Google Research, as well as researchers from academic institutions such as Oxford, University College London, Delft University of Technology, University of Edinburgh, and a think tank at Georgetown, the Center for Security and Emerging Technology. The paper speculates about the ethical and societal risks posed by the types of AI assistants Google and other tech firms want to build, which the authors say are “likely to have a profound impact on our individual and collective lives.” The paper considers the potential nature of the technology itself, giving a broad overview of these imagined AI assistants, their technical roots, and the wide array of potential applications. It delves into questions about values and safety, and how to guard against malicious uses. Then, it takes a closer look at how these imagined advanced AI assistants interact with individual users, discussing issues like manipulation, persuasion, anthropomorphism, trust, and privacy. The paper then moves on to the collective, examining the broader societal implications of deploying advanced AI assistants, including on cooperation, equity and access, misinformation, economic impact, environmental concerns, and methods for evaluating these technologies. The paper also offers a series of recommendations for researchers, developers, policymakers, and public stakeholders to consider…”