Publication Library
State-of-play and future trends on the development of oversight frameworks for emerging technologies - Part 1
Description: As technologies become more pervasive and form a critical part of our societal infrastructure, governance and wider oversight mechanisms have a key role to play in ensuring that the benefits of technology are maximised and that risks are managed proactively. The goal of technology oversight is to ensure that technology is developed, deployed and used in a responsible and ethical manner, and that it does not pose undue risks or harm to individuals or to society as a whole. Wellcome commissioned RAND Europe to undertake a study on the state-of-play and future trends in the development of oversight frameworks for emerging technologies. The specific objective of the study is to identify and analyse a suite of oversight frameworks and mechanisms (including associated emerging trends and novel approaches) that are in use, in development or under debate in jurisdictions across the globe for a set of emerging technologies. The technologies of interest are genomics (specifically engineering biology), human embryology, organoids, neurotechnology, artificial intelligence (AI) (specifically its application and use as a research tool) and data platforms. The study findings are presented in two related documents: the global technology landscape review report and the technology oversight report (this report). The two reports should be read alongside each other. This report examines notable oversight mechanisms that are either established or under development across a selection of global jurisdictions, offering key learnings and insights that could inform future technology oversight discussions.
Created At: 05 April 2025
Updated At: 05 April 2025
State-of-play and future trends on the development of oversight frameworks for emerging technologies - Part 2
Description: Part 2 highlights the challenges and opportunities in regulating these technologies, emphasising the need for updated frameworks that address ethical, privacy and collaboration issues. In Part 2, we use a mixed-methods approach, including desk research, interviews, SWOT analysis and expert elicitation, to examine existing and developing oversight mechanisms. We provide insights into legislative and non-regulatory standards, ethical guidelines and self-regulatory frameworks relevant to key debates on the oversight of emerging technologies, including the lack of specific regulatory frameworks for organoids, ethical challenges in human embryology, fragmented oversight in engineering biology, and privacy concerns in neurotechnology. The study also discusses the potential for dual-use scenarios in neurotechnology and the need for international collaboration in managing biosecurity threats in engineering biology.
Created At: 05 April 2025
Updated At: 05 April 2025
U.S. Tort Liability for Large-Scale Artificial Intelligence Damages - A Primer for Developers and Policymakers
Description: Leading artificial intelligence (AI) developers and researchers, as well as government officials and policymakers, are investigating the harms that advanced AI systems might cause. In this report, the authors describe the basic features of U.S. tort law and analyze their significance for the liability of AI developers whose models inflict, or are used to inflict, large-scale harm. Highly capable AI systems are a growing presence in widely used consumer products, industrial and military enterprises, and critical societal infrastructure. Such systems may soon become a significant presence in tort cases as well, especially if their ability to engage in autonomous or semi-autonomous behavior, or their potential for harmful misuse, grows over the coming years. The authors find that AI developers face considerable liability exposure under U.S. tort law for harms caused by their models, particularly if those models are developed or released without rigorous safety procedures and industry-leading safety practices. At the same time, however, developers can mitigate their exposure by taking rigorous precautions and exercising heightened care in developing, storing, and releasing advanced AI systems. By taking due care, developers can reduce the risk that their activities will cause harm to other people and reduce the risk that they will be held liable if their activities do cause such harm.
Created At: 05 April 2025
Updated At: 05 April 2025
Steps Toward AI Governance - Insights and Recommendations from the 2024 EqualAI Summit
Description: EqualAI's 2024 artificial intelligence (AI) summit, cosponsored by RAND, was convened in Washington, D.C., to facilitate dialogue among corporate stakeholders from multiple industries, functions, and roles about AI development, acquisition, and integration. The purpose of the summit was to identify and align on common practices, discuss challenges, and share lessons learned in establishing and evaluating metrics in AI governance. These conference proceedings describe key insights derived from summit discussions about best practices, metrics, and tools for evaluating the standards and performance of AI systems. The authors highlight two themes related to developing effective AI governance: (1) technical challenges, such as uncertainty about the rigor of external model evaluations and complications related to differing use cases and risk levels, and (2) organizational factors, such as how misaligned organizational goals create disincentives for investing in the implementation of appropriate AI processes and the crucial role that company culture plays in adopting and implementing AI governance standards. These conference proceedings are intended to help organizations foster a cohesive approach to AI governance.
Created At: 05 April 2025
Updated At: 05 April 2025
Securing AI Model Weights - Preventing Theft and Misuse of Frontier Models
Description: As frontier artificial intelligence (AI) models — that is, models that match or exceed the capabilities of the most advanced models at the time of their development — become more capable, protecting them from theft and misuse will become more important. The authors of this report explore what it would take to protect model weights — the learnable parameters that encode the core intelligence of an AI — from theft by a variety of potential attackers. Specifically, the authors (1) identify 38 meaningfully distinct attack vectors, (2) explore a variety of potential attacker operational capacities, from opportunistic (often financially driven) criminals to highly resourced nation-state operations, (3) estimate the feasibility of each attack vector being executed by different categories of attackers, and (4) define five security levels and recommend preliminary benchmark security systems that roughly achieve the security levels.
Created At: 05 April 2025
Updated At: 05 April 2025