Publication Library
Securing AI Model Weights: Preventing Theft and Misuse of Frontier Models
Description: As frontier artificial intelligence (AI) models — that is, models that match or exceed the capabilities of the most advanced models at the time of their development — become more capable, protecting them from theft and misuse will become more important. The authors of this report explore what it would take to protect model weights — the learnable parameters that encode the core intelligence of an AI — from theft by a variety of potential attackers. Specifically, the authors (1) identify 38 meaningfully distinct attack vectors, (2) explore a variety of potential attacker operational capacities, from opportunistic (often financially driven) criminals to highly resourced nation-state operations, (3) estimate the feasibility of each attack vector being executed by different categories of attackers, and (4) define five security levels and recommend preliminary benchmark security systems that roughly achieve the security levels.
Created At: 05 April 2025
Updated At: 05 April 2025
Applying History to Inform Anticipatory AI Governance
Description: How might lessons from previous technologically driven transformations, such as the Industrial Revolution, inform today’s AI governance challenges? These conference proceedings address this question by combining historical analysis with an approach to foresight called backcasting, which examines pathways to hopeful futures. Workshop participants—12 individuals from diverse backgrounds, including business leaders, policymakers, and technologists—were presented with two scenarios, each depicting a different future of AI-enabled human flourishing, and three historical case studies focusing on the societal impacts of general-purpose technologies in the 19th and early 20th centuries. During the workshop, the participants discussed the scenarios in breakout groups to develop their backcasting pathways and then used the historical case studies to refine and rework those pathways.
Created At: 05 April 2025
Updated At: 05 April 2025
The Root Causes of Failure for Artificial Intelligence Projects and How They Can Succeed
Description: To investigate why artificial intelligence and machine learning (AI/ML) projects fail, the authors interviewed 65 data scientists and engineers with at least five years of experience in building AI/ML models in industry or academia. The authors identified five leading root causes for the failure of AI projects and synthesized the experts' experiences to develop recommendations to make AI projects more likely to succeed in industry settings and in academia.
Created At: 04 April 2025
Updated At: 04 April 2025
AI's Power Requirements Under Exponential Growth
Description: Larger training runs and widespread deployment of future artificial intelligence (AI) systems may demand a rapid scale-up of computational resources (compute) requiring unprecedented amounts of power. In this report, the authors extrapolate two exponential trends in AI compute to estimate AI data center power demand and assess its geopolitical consequences. They find that globally, AI data centers could need ten gigawatts (GW) of additional power capacity in 2025, which is more than the total power capacity of the state of Utah. If exponential growth in chip supply continues, AI data centers will need 68 GW in total by 2027 — almost a doubling of global data center power requirements from 2022 and close to California's 2022 total power capacity of 86 GW.
Created At: 04 April 2025
Updated At: 04 April 2025
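The extrapolation approach described in the report above — projecting demand forward under constant exponential growth — can be sketched as follows. The starting value and growth rate here are illustrative assumptions chosen for demonstration, not the report's fitted parameters.

```python
# Minimal sketch of exponential-trend extrapolation, the general technique
# the report applies to AI compute and power demand. The inputs below are
# illustrative assumptions, not figures taken from the report.

def extrapolate(start_gw: float, annual_growth: float, years: int) -> float:
    """Project a power-demand figure forward under constant exponential growth."""
    return start_gw * (1.0 + annual_growth) ** years

# Example: a quantity doubling every year (100% annual growth) quadruples
# over two years.
print(extrapolate(10.0, 1.0, 2))  # 40.0
```

Fitting `annual_growth` to two observed data points, then reading off later years, is the standard use of such a model; the report's contribution is the trend data itself, not the arithmetic.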
Strategic competition in the age of AI
Description: Artificial intelligence (AI) holds the potential to usher in transformative changes across all aspects of society, economy and policy, including in the realm of defence and security. The United Kingdom (UK) aspires to be a leading player in the rollout of AI for civil and commercial applications, and in the responsible development of defence AI. This necessitates a clear and nuanced understanding of the emerging risks and opportunities associated with the military use of AI, as well as how the UK can best work with others to mitigate these risks and exploit these opportunities.
Created At: 04 April 2025
Updated At: 04 April 2025