Publication Library
Carrot and Stick: Eliciting Comparison Data and Beyond
Description: Comparison data elicited from people are fundamental to many machine learning tasks, including reinforcement learning from human feedback for large language models and estimating ranking models. They are typically subjective and not directly verifiable. How can such comparison data be truthfully elicited from rational individuals? We design peer prediction mechanisms for eliciting comparison data using a bonus-penalty payment [11]. Our design leverages the strong stochastic transitivity of comparison data [60, 13] to create symmetrically strongly truthful mechanisms in which truth-telling 1) forms a strict Bayesian Nash equilibrium, and 2) yields the highest payment among all symmetric equilibria. In our mechanism, each individual only needs to evaluate one pair of items and report her comparison. We further extend the bonus-penalty payment concept to eliciting networked data, designing a symmetrically strongly truthful mechanism when agents' private signals are sampled according to the Ising model. We provide the necessary and sufficient conditions for our bonus-penalty payment to have truth-telling as a strict Bayesian Nash equilibrium. Experiments on two real-world datasets further support our theoretical findings.
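For intuition, here is a minimal Python sketch of the generic bonus-penalty idea from peer prediction: an agent's report is rewarded when it agrees with a designated "bonus" peer report and penalized when it agrees with a designated "penalty" peer report. The pairing rule (which in the paper rests on strong stochastic transitivity across overlapping item pairs) and the exact payment form may differ; all names below are illustrative.

```python
def bonus_penalty_payment(report, bonus_peer_report, penalty_peer_report,
                          bonus=1.0, penalty=1.0):
    """Generic bonus-penalty payment (illustrative sketch, not the paper's
    exact mechanism). `report` is the agent's comparison (e.g. +1 if she
    prefers item a over item b, -1 otherwise); the two peer reports come
    from pairs chosen by the mechanism's pairing rule."""
    payment = 0.0
    if report == bonus_peer_report:    # reward agreement with the bonus peer
        payment += bonus
    if report == penalty_peer_report:  # penalize agreement with the penalty peer
        payment -= penalty
    return payment
```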
Created At: 14 December 2024
Updated At: 14 December 2024
Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System
Description: Large Language Model (LLM) based multi-agent systems (MAS) show remarkable potential in collaborative problem-solving, yet they still face critical challenges: low communication efficiency, poor scalability, and a lack of effective parameter-updating optimization methods. We present OPTIMA, a novel framework that addresses these issues by significantly enhancing both communication efficiency and task effectiveness in LLM-based MAS through LLM training. OPTIMA employs an iterative generate, rank, select, and train paradigm with a reward function balancing task performance, token efficiency, and communication readability. We explore various RL algorithms, including Supervised Fine-Tuning, Direct Preference Optimization, and their hybrid approaches, providing insights into their effectiveness-efficiency trade-offs. We integrate Monte Carlo Tree Search-inspired techniques for DPO data generation, treating conversation turns as tree nodes to explore diverse interaction paths. Evaluated on common multi-agent tasks, including information-asymmetric question answering and complex reasoning, OPTIMA shows consistent and substantial improvements over single-agent baselines and vanilla MAS based on Llama 3 8B, achieving up to a 2.8x performance gain with less than 10% of the tokens on tasks requiring heavy information exchange. Moreover, OPTIMA's efficiency gains open new possibilities for leveraging inference compute more effectively, leading to improved inference-time scaling laws. By addressing fundamental challenges in LLM-based MAS, OPTIMA demonstrates the potential for scalable, efficient, and effective MAS.
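As a rough illustration of the kind of reward the abstract describes (balancing task performance, token efficiency, and communication readability), a hypothetical scalar reward might look like the sketch below; the functional form, weights, and readability measure are assumptions, not the paper's definition.

```python
def optima_style_reward(task_score, num_tokens, readability_score,
                        w_token=1e-3, w_read=0.1):
    """Hypothetical reward for ranking multi-agent conversation trajectories:
    higher task score and readability are rewarded, token usage is penalized.
    Weights and terms are illustrative assumptions only."""
    return task_score - w_token * num_tokens + w_read * readability_score

# Example: rank candidate trajectories by this reward before selecting
# training data in a generate-rank-select-train loop.
candidates = [
    {"task_score": 0.9, "num_tokens": 800, "readability_score": 0.7},
    {"task_score": 0.8, "num_tokens": 300, "readability_score": 0.9},
]
best = max(candidates, key=lambda c: optima_style_reward(**c))
```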
Created At: 14 December 2024
Updated At: 14 December 2024
Agent S: An Open Agentic Framework that Uses Computers Like a Human
Description: We present Agent S, an open agentic framework that enables autonomous interaction with computers through a Graphical User Interface (GUI), aimed at transforming human-computer interaction by automating complex, multi-step tasks. Agent S aims to address three key challenges in automating computer tasks: acquiring domain-specific knowledge, planning over long task horizons, and handling dynamic, non-uniform interfaces. To this end, Agent S introduces experience-augmented hierarchical planning, which learns from external knowledge search and internal experience retrieval at multiple levels, facilitating efficient task planning and subtask execution. In addition, it employs an Agent-Computer Interface (ACI) to better elicit the reasoning and control capabilities of GUI agents based on Multimodal Large Language Models (MLLMs). Evaluation on the OSWorld benchmark shows that Agent S outperforms the baseline by 9.37% on success rate (an 83.6% relative improvement) and achieves a new state-of-the-art. Comprehensive analysis highlights the effectiveness of individual components and provides insights for future improvements. Furthermore, Agent S demonstrates broad generalizability to different operating systems on the newly released WindowsAgentArena benchmark. Code available at https://github.com/simular-ai/Agent-S.
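A very rough sketch of what experience-augmented hierarchical planning could look like in code, based only on the description above; every function and object name here is hypothetical, and the actual Agent S implementation (see the linked repository) will differ.

```python
def experience_augmented_plan(task, web_search, experience_store, planner, executor):
    """Hypothetical sketch: fuse external knowledge with retrieved internal
    experience, plan subtasks at a high level, then execute each subtask
    through a GUI-facing interface (the ACI in Agent S)."""
    context = web_search(task) + experience_store.retrieve(task)
    subtasks = planner(task, context)            # high-level plan
    for subtask in subtasks:
        sub_context = experience_store.retrieve(subtask)
        executor(subtask, sub_context)           # grounded execution via the ACI
        experience_store.add(task, subtask)      # keep new experience for reuse
    return subtasks
```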
Created At: 14 December 2024
Updated At: 14 December 2024
Difference-in-Differences for Health Policy and Practice: A Review of Modern Methods
Description: Difference-in-differences (DiD) is the most popular observational causal inference method in health policy, employed to evaluate the real-world impact of policies and programs. To estimate treatment effects, DiD relies on the “parallel trends assumption”, that on average treatment and comparison groups would have had parallel trajectories in the absence of an intervention. Historically, DiD has been considered broadly applicable and straightforward to implement, but recent years have seen rapid advancements in DiD methods. This paper reviews and synthesizes these innovations for medical and health policy researchers. We focus on four topics: (1) assessing the parallel trends assumption in health policy contexts; (2) relaxing the parallel trends assumption when appropriate; (3) employing estimators to account for staggered treatment timing; and (4) conducting robust inference for analyses in which normal-based clustered standard errors are inappropriate. For each, we explain challenges and common pitfalls in traditional DiD and modern methods available to address these issues.
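For reference, the textbook two-group, two-period DiD estimand and the parallel trends condition it relies on (a standard statement, not a result from this review):

```latex
% Canonical 2x2 difference-in-differences estimand:
\[
\widehat{\tau}_{\mathrm{DiD}}
  = \left(\bar{Y}_{\mathrm{treat,post}} - \bar{Y}_{\mathrm{treat,pre}}\right)
  - \left(\bar{Y}_{\mathrm{comp,post}} - \bar{Y}_{\mathrm{comp,pre}}\right)
\]
% identifies the average treatment effect on the treated under parallel trends:
\[
\mathbb{E}\!\left[Y_{\mathrm{post}}(0) - Y_{\mathrm{pre}}(0) \mid \text{treated}\right]
  = \mathbb{E}\!\left[Y_{\mathrm{post}}(0) - Y_{\mathrm{pre}}(0) \mid \text{comparison}\right].
\]
```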
Created At: 14 December 2024
Updated At: 14 December 2024
Few-Shot Task Learning through Inverse Generative Modeling
Description: Learning the intents of an agent, defined by its goals or motion style, is often extremely challenging from just a few examples. We refer to this problem as task concept learning and present our approach, Few-Shot Task Learning through Inverse Generative Modeling (FTL-IGM), which learns new task concepts by leveraging invertible neural generative models. The core idea is to pretrain a generative model on a set of basic concepts and their demonstrations. Then, given a few demonstrations of a new concept (such as a new goal or a new action), our method learns the underlying concepts through backpropagation without updating the model weights, thanks to the invertibility of the generative model. We evaluate our method in five domains: object rearrangement, goal-oriented navigation, motion capture of human actions, autonomous driving, and real-world table-top manipulation. Our experimental results demonstrate that, via the pretrained generative model, we successfully learn novel concepts and generate agent plans or motion corresponding to these concepts in (1) unseen environments and (2) composition with training concepts.
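The "backpropagation without updating the model weights" step can be pictured as optimizing only a concept embedding against a frozen pretrained generative model. The sketch below assumes a PyTorch-like model exposing a log_prob(demos, concept) method; that interface and all hyperparameters are assumptions for illustration, not the paper's code.

```python
import torch

def infer_concept(model, demos, concept_dim=64, steps=200, lr=1e-2):
    """Illustrative sketch of few-shot concept inference with a frozen
    generative model: only the concept embedding receives gradient updates."""
    for p in model.parameters():          # freeze pretrained weights
        p.requires_grad_(False)
    concept = torch.zeros(concept_dim, requires_grad=True)
    optimizer = torch.optim.Adam([concept], lr=lr)
    for _ in range(steps):
        loss = -model.log_prob(demos, concept)  # assumed interface
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return concept.detach()
```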
Created At: 14 December 2024
Updated At: 14 December 2024