Publication Library
State-of-play and future trends on the development of oversight frameworks for emerging technologies - Part 2
Description: Part 2 highlights the challenges and opportunities in regulating emerging technologies, emphasising the need for updated frameworks that address ethical, privacy, and collaboration issues. In Part 2, we use a mixed-methods approach, combining desk research, interviews, SWOT analysis and expert elicitation, to examine existing and developing oversight mechanisms. We provide insights into legislative measures, non-regulatory standards, ethical guidelines and self-regulatory frameworks relevant to key debates on the oversight of emerging technologies, including the lack of specific regulatory frameworks for organoids, ethical challenges in human embryology, fragmented oversight in engineering biology, and privacy concerns in neurotechnology. The study also discusses potential dual-use scenarios in neurotechnology and the need for international collaboration in managing biosecurity threats in engineering biology.
Created At: 05 April 2025
Updated At: 05 April 2025
U.S. Tort Liability for Large-Scale Artificial Intelligence Damages - A Primer for Developers and Policymakers
Description: Leading artificial intelligence (AI) developers and researchers, as well as government officials and policymakers, are investigating the harms that advanced AI systems might cause. In this report, the authors describe the basic features of U.S. tort law and analyze their significance for the liability of AI developers whose models inflict, or are used to inflict, large-scale harm. Highly capable AI systems are a growing presence in widely used consumer products, industrial and military enterprises, and critical societal infrastructure. Such systems may soon become a significant presence in tort cases as well, especially if their ability to engage in autonomous or semi-autonomous behavior, or their potential for harmful misuse, grows over the coming years. The authors find that AI developers face considerable liability exposure under U.S. tort law for harms caused by their models, particularly if those models are developed or released without rigorous safety procedures and industry-leading safety practices. At the same time, developers can mitigate their exposure by taking rigorous precautions and exercising heightened care in developing, storing, and releasing advanced AI systems. By taking due care, developers can reduce both the risk that their activities will cause harm to other people and the risk that they will be held liable if such harm does occur.
Created At: 05 April 2025
Updated At: 05 April 2025
Steps Toward AI Governance - Insights and Recommendations from the 2024 EqualAI Summit
Description: EqualAI's 2024 artificial intelligence (AI) summit, cosponsored by RAND, was convened in Washington, D.C., to facilitate dialogue among corporate stakeholders from multiple industries, functions, and roles about AI development, acquisition, and integration. The purpose of the summit was to identify and align on common practices, discuss challenges, and share lessons learned in establishing and evaluating metrics in AI governance. These conference proceedings describe key insights derived from summit discussions about best practices, metrics, and tools for evaluating the standards and performance of AI systems. The authors highlight two themes related to developing effective AI governance: (1) technical challenges, such as uncertainty about the rigor of external model evaluations and complications related to differing use cases and risk levels, and (2) organizational factors, such as how misaligned organizational goals create disincentives for investing in the implementation of appropriate AI processes and the crucial role that company culture plays in adopting and implementing AI governance standards. These conference proceedings are intended to help organizations foster a cohesive approach to AI governance.
Created At: 05 April 2025
Updated At: 05 April 2025
Securing AI Model Weights - Preventing Theft and Misuse of Frontier Models
Description: As frontier artificial intelligence (AI) models (that is, models that match or exceed the capabilities of the most advanced models at the time of their development) become more capable, protecting them from theft and misuse will become more important. The authors of this report explore what it would take to protect model weights, the learnable parameters that encode the core intelligence of an AI, from theft by a variety of potential attackers. Specifically, the authors (1) identify 38 meaningfully distinct attack vectors; (2) explore a variety of potential attacker operational capacities, from opportunistic (often financially driven) criminals to highly resourced nation-state operations; (3) estimate the feasibility of each attack vector being executed by different categories of attackers; and (4) define five security levels and recommend preliminary benchmark security systems that roughly achieve each level.
Created At: 05 April 2025
Updated At: 05 April 2025
Applying History to Inform Anticipatory AI Governance
Description: How might lessons from previous technologically driven transformations, such as the Industrial Revolution, inform today’s AI governance challenges? These conference proceedings address this question by combining historical analysis with a foresight approach called backcasting, which works backward from a desired future to identify the pathways that could lead to it. Workshop participants (12 individuals from diverse backgrounds, including business leaders, policymakers, and technologists) were presented with two scenarios, each depicting a different future of AI-enabled human flourishing, and three historical case studies focusing on the societal impacts of general-purpose technologies in the 19th and early 20th centuries. During the workshop, the participants discussed the scenarios in breakout groups to develop their backcasting pathways and then used the historical case studies to refine and rework those pathways.
Created At: 05 April 2025
Updated At: 05 April 2025