Publication Library
Causality for Machine Learning
Description: Graphical causal inference as pioneered by Judea Pearl arose from research on artificial intelligence (AI), and for a long time had little connection to the field of machine learning. This article discusses where links have been and should be established, introducing key concepts along the way. It argues that the hard open problems of machine learning and AI are intrinsically related to causality, and explains how the field is beginning to understand them.
Created At: 14 December 2024
Updated At: 14 December 2024
The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence
Description: Recent research in artificial intelligence and machine learning has largely emphasized general-purpose learning, ever-larger training sets, and ever-greater compute. In contrast, I propose a hybrid, knowledge-driven, reasoning-based approach, centered around cognitive models, that could provide the substrate for a richer, more robust AI than is currently possible.
Created At: 14 December 2024
Updated At: 14 December 2024
Making Contextual Decisions with Low Technical Debt
Description: Applications and systems are constantly faced with decisions that require picking from a set of actions based on contextual information. Reinforcement-based learning algorithms such as contextual bandits can be very effective in these settings, but applying them in practice is fraught with technical debt, and no general system exists that supports them completely. We address this and create the first general system for contextual learning, called the Decision Service. [Figure 1: Complete loop for effective contextual learning, showing the four system abstractions we define.] Existing systems often suffer from technical debt that arises from issues like incorrect data collection and weak debuggability, issues we systematically address through our ML methodology and system abstractions. The Decision Service enables all aspects of contextual bandit learning using four system abstractions which connect together in a loop: explore (the decision space), log, learn, and deploy. Notably, our new explore and log abstractions ensure the system produces correct, unbiased data, which our learner uses for online learning and to enable real-time safeguards, all in a fully reproducible manner. The Decision Service has a simple user interface and works with a variety of applications: we present two live production deployments for content recommendation that achieved click-through improvements of 25-30%, another with an 18% revenue lift on the landing page, and ongoing applications in tech support and machine failure handling. The service makes real-time decisions and learns continuously and scalably, while significantly lowering technical debt.
Created At: 14 December 2024
Updated At: 14 December 2024
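The explore/log/learn/deploy loop described above can be sketched in a few lines. This is a minimal illustration, not the Decision Service's actual API: the class and method names are invented here, and it uses a simple epsilon-greedy exploration policy with inverse-propensity-scored (IPS) logging to show why recording each action's sampling probability yields unbiased data for the learner.

```python
import random

class ContextualBanditLoop:
    """Illustrative sketch of an explore -> log -> learn loop.
    All names are hypothetical, not the Decision Service API."""

    def __init__(self, n_actions, epsilon=0.1):
        self.n_actions = n_actions
        self.epsilon = epsilon
        self.values = {}  # (context, action) -> (count, mean IPS reward)

    def explore(self, context):
        """Pick an action and record the probability it was chosen with."""
        greedy = self._greedy_action(context)
        if random.random() < self.epsilon:
            action = random.randrange(self.n_actions)
        else:
            action = greedy
        if action == greedy:
            prob = (1 - self.epsilon) + self.epsilon / self.n_actions
        else:
            prob = self.epsilon / self.n_actions
        return action, prob

    def log(self, context, action, prob, reward):
        """Produce an unbiased logged example via inverse propensity scoring."""
        return {"context": context, "action": action,
                "prob": prob, "ips_reward": reward / prob}

    def learn(self, example):
        """Update a running mean of IPS-weighted rewards (online learning)."""
        key = (example["context"], example["action"])
        n, mean = self.values.get(key, (0, 0.0))
        self.values[key] = (n + 1, mean + (example["ips_reward"] - mean) / (n + 1))

    def _greedy_action(self, context):
        return max(range(self.n_actions),
                   key=lambda a: self.values.get((context, a), (0, 0.0))[1])
```

Because the logged probability reflects the policy that actually made the decision, dividing the observed reward by it corrects for the exploration bias, which is the property the paper's explore and log abstractions are designed to guarantee at the system level.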
Algorithms for Causal Reasoning in Probability Trees
Description: Probability trees are one of the simplest models of causal generative processes. They possess clean semantics and—unlike causal Bayesian networks—they can represent context-specific causal dependencies, which are necessary for e.g. causal induction. Yet, they have received little attention from the AI and ML community. Here we present concrete algorithms for causal reasoning in discrete probability trees that cover the entire causal hierarchy (association, intervention, and counterfactuals), and operate on arbitrary propositional and causal events. Our work expands the domain of causal reasoning to a very general class of discrete stochastic processes.
Created At: 14 December 2024
Updated At: 14 December 2024
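The first two rungs of the causal hierarchy mentioned above (association and intervention) can be illustrated on a toy probability tree. The encoding below is a hypothetical sketch, not the paper's algorithms or notation: each node carries a partial variable assignment, association is a weighted sum over root-to-leaf paths, and an intervention do(X=x) reroutes all probability mass to the branch consistent with x wherever the tree resolves X.

```python
class Node:
    """A probability-tree node: a variable assignment made at this node,
    plus (probability, child) transitions."""
    def __init__(self, assignment=None, children=None):
        self.assignment = assignment or {}
        self.children = children or []

def prob(node, event, state=None):
    """Association: P(event) as the total probability of root-to-leaf
    paths whose accumulated assignments satisfy the event predicate."""
    state = {**(state or {}), **node.assignment}
    if not node.children:
        return 1.0 if event(state) else 0.0
    return sum(p * prob(child, event, state) for p, child in node.children)

def intervene(node, var, value):
    """Intervention do(var=value): wherever the tree branches on `var`,
    send all probability mass down the branch consistent with it."""
    branches_on_var = any(var in c.assignment for _, c in node.children)
    children = []
    for p, child in node.children:
        if branches_on_var:
            p = 1.0 if child.assignment.get(var) == value else 0.0
        children.append((p, intervene(child, var, value)))
    return Node(node.assignment, children)

# Toy tree: X is resolved first (fair coin), then Y depends on X.
x1 = Node({"X": 1}, [(0.9, Node({"Y": 1})), (0.1, Node({"Y": 0}))])
x0 = Node({"X": 0}, [(0.2, Node({"Y": 1})), (0.8, Node({"Y": 0}))])
root = Node({}, [(0.5, x0), (0.5, x1)])
```

On this tree, P(Y=1) = 0.5 * 0.2 + 0.5 * 0.9 = 0.55, while under do(X=1) it becomes 0.9, since all mass is routed through the X=1 branch. Because each node is free to branch on different variables in different contexts, the same machinery expresses the context-specific dependencies the abstract highlights.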
Data Fitting vs. Data Interpreting Approaches to Data Science
Description: I contrast the “data fitting” vs “data interpreting” approaches to data science along three dimensions: Expediency, Transparency, and Explainability. “Data fitting” is driven by the faith that the secret to rational decisions lies in the data itself. In contrast, the data-interpreting school views data not as the sole source of knowledge but as an auxiliary means for interpreting reality, where “reality” stands for the processes that generate the data. I argue for restoring balance to data science through a task-dependent symbiosis of fitting and interpreting, guided by the Logic of Causation.
Created At: 14 December 2024
Updated At: 14 December 2024