Publication Library
CowPilot - A Framework for Autonomous and Human-Agent Collaborative Web Navigation
Description: While much work on web agents emphasizes the promise of autonomously performing tasks on behalf of users, in reality, agents often fall short on complex tasks in real-world contexts and at modeling user preferences. This presents an opportunity for humans to collaborate with the agent and leverage the agent's capabilities effectively. We propose CowPilot, a framework supporting both autonomous and human-agent collaborative web navigation, with evaluation across task success and task efficiency. CowPilot reduces the number of steps humans need to perform by allowing agents to propose next steps, while users are able to pause, reject, or take alternative actions. During execution, users can interleave their actions with the agent by overriding suggestions or resuming agent control when needed. We conducted case studies on five common websites and found that the human-agent collaborative mode achieves the highest success rate of 95% while requiring humans to perform only 15.2% of the total steps. Even with human interventions during task execution, the agent drives up to half of the successful task completions on its own. CowPilot can serve as a useful tool for data collection and agent evaluation across websites, which we believe will enable research on how users and agents can work together. Video demonstrations are available at https://oaishi.github.io/cowpilot.html
Created At: 30 January 2025
Updated At: 30 January 2025
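The collaborative control scheme described in the CowPilot entry above, where the agent proposes each next step while the user can accept, override, or stop, amounts to a small event loop. A minimal sketch of such a loop, not CowPilot's actual implementation (all class and method names here are hypothetical):

```python
from dataclasses import dataclass

# Hypothetical sketch of a human-agent collaborative navigation loop.
# Agent, user, and env are duck-typed stand-ins; none of these names
# come from the CowPilot paper itself.

@dataclass
class Step:
    actor: str   # "agent" or "human"
    action: str  # e.g. "click #submit" or "type 'shoes' into #search"

def run_collaborative_episode(agent, user, env, max_steps=50):
    """Agent proposes each step; the user may accept, override, or stop."""
    trajectory = []
    for _ in range(max_steps):
        proposal = agent.propose_action(env.observation())
        decision = user.review(proposal)  # "accept" | "override" | "stop"
        if decision == "stop":
            break
        if decision == "override":
            action, actor = user.take_action(env.observation()), "human"
        else:
            action, actor = proposal, "agent"
        env.execute(action)
        trajectory.append(Step(actor, action))
        if env.task_complete():
            break
    # Share of steps the human performed (cf. the 15.2% reported above).
    human_share = sum(s.actor == "human" for s in trajectory) / max(len(trajectory), 1)
    return trajectory, human_share
```

Logging which actor performed each step, as the sketch does, is what makes metrics like the reported 15.2% human-step share directly computable from recorded trajectories.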
Value Function Decomposition in Markov Recommendation Process
Description: Recent advances in recommender systems have shown that user-system interaction essentially constitutes a long-term optimization problem, and online reinforcement learning can be adopted to improve recommendation performance. The general solution framework incorporates a value function that estimates the user's expected cumulative rewards in the future and guides the training of the recommendation policy. To avoid local maxima, the policy may explore potential high-quality actions during inference to increase the chance of finding better future rewards. To accommodate the stepwise recommendation process, one widely adopted approach to learning the value function is learning from the difference between the values of two consecutive states of a user. However, we argue that this paradigm involves an incorrect approximation in the stochastic process. Specifically, between the current state and the next state in each training sample, there exist two separate random factors: the stochastic policy and the uncertain user environment. Standard temporal difference (TD) learning under these mixed random factors may result in a suboptimal estimation of the long-term rewards. As a solution, we show that these two factors can be separately approximated by decomposing the original temporal difference loss. The disentangled learning framework can achieve a more accurate estimation with faster learning and improved robustness against action exploration. As empirical verification of our proposed method, we conduct offline experiments with online simulated environments built from public datasets.
Created At: 30 January 2025
Updated At: 30 January 2025
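For reference, the "difference between the values of two consecutive states" paradigm this entry critiques is standard TD(0) learning. A minimal PyTorch sketch of that baseline objective follows; the paper's decomposed loss is not spelled out in the abstract, so only the vanilla loss it starts from is shown:

```python
import torch
import torch.nn.functional as F

# Vanilla TD(0) objective for a state-value network V. The target
# r + gamma * V(s') mixes two random factors in a single sample --
# which action the stochastic policy chose, and how the uncertain
# user environment responded -- which is the approximation the
# paper above proposes to decompose.

def td0_loss(V, state, reward, next_state, done, gamma=0.99):
    """Squared TD error (V(s) - (r + gamma * V(s')))^2 with a detached target."""
    with torch.no_grad():
        target = reward + gamma * (1.0 - done) * V(next_state)
    return F.mse_loss(V(state), target)
```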
LLMs-as-Judges - A Comprehensive Survey on LLM-based Evaluation Methods
Description: The rapid advancement of Large Language Models (LLMs) has driven their expanding application across various fields. One of the most promising applications is their role as evaluators based on natural language responses, referred to as "LLMs-as-judges". This framework has attracted growing attention from both academia and industry due to its effectiveness, ability to generalize across tasks, and interpretability in the form of natural language. This paper presents a comprehensive survey of the LLMs-as-judges paradigm from five key perspectives: Functionality, Methodology, Applications, Meta-evaluation, and Limitations. We begin by providing a systematic definition of LLMs-as-Judges and introduce their functionality (Why use LLM judges?). Then we address methodology to construct an evaluation system with LLMs (How to use LLM judges?). Additionally, we investigate the potential domains for their application (Where to use LLM judges?) and discuss methods for evaluating them in various contexts (How to evaluate LLM judges?). Finally, we provide a detailed analysis of the limitations of LLM judges and discuss potential future directions. Through a structured and comprehensive analysis, we aim to provide insights on the development and application of LLMs-as-judges in both research and practice. We will continue to maintain the relevant resource list at https://github.com/CSHaitao/Awesome-LLMs-as-Judges
Created At: 30 January 2025
Updated At: 30 January 2025
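As a concrete illustration of the pointwise judging setup this survey covers, an LLM judge can be as small as one scoring prompt plus one model call. A minimal sketch assuming the OpenAI Python client; the prompt wording, score scale, and model name are illustrative choices, not taken from the survey:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative pointwise judging prompt; not a prompt from the survey.
JUDGE_PROMPT = """You are an impartial judge. Rate the response to the
question on a 1-5 scale for correctness and helpfulness.
Question: {question}
Response: {response}
Reply with only the integer score."""

def judge_score(question: str, response: str, model: str = "gpt-4o-mini") -> int:
    """Pointwise LLM-as-judge: one model call returns a scalar score."""
    completion = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": JUDGE_PROMPT.format(question=question, response=response),
        }],
        temperature=0,  # deterministic judging reduces score variance
    )
    return int(completion.choices[0].message.content.strip())
```

Pairwise-comparison and rubric-based judges, which the survey also covers under Methodology, follow the same pattern with different prompts and output parsing.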
Trends and Reversion in Financial Markets on Time Scales from Minutes to Decades
Description: We empirically analyze the reversion of financial market trends with time horizons ranging from minutes to decades. The analysis covers equities, interest rates, currencies and commodities and combines 14 years of futures tick data, 30 years of daily futures prices, 330 years of monthly asset prices, and yearly financial data since medieval times. Across asset classes, we find that markets are in a trending regime on time scales that range from a few hours to a few years, while they are in a reversion regime on shorter and longer time scales. In the trending regime, weak trends tend to persist, which can be explained by herding behavior of investors. However, in this regime trends tend to revert before they become strong enough to be statistically significant, which can be interpreted as a return of asset prices to their intrinsic value. In the reversion regime, we find the opposite pattern: weak trends tend to revert, while those trends that become statistically significant tend to persist. Our results provide a set of empirical tests of theoretical models of financial markets. We interpret them in the light of a recently proposed lattice gas model, where the lattice represents the social network of traders, the gas molecules represent the shares of financial assets, and efficient markets correspond to the critical point. If this model is accurate, the lattice gas must be near this critical point on time scales from 1 hour to a few days, with a correlation time of a few years.
Created At: 29 January 2025
Updated At: 29 January 2025
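The regime classification above reduces to one question per time scale: does a past trend predict the next move (trending) or oppose it (reverting)? A hedged pandas sketch of one such diagnostic; this is an illustrative measurement, not the paper's exact statistical methodology:

```python
import numpy as np
import pandas as pd

def trend_persistence(prices: pd.Series, horizon: int) -> float:
    """Correlation between the trailing `horizon`-period trend and the
    following `horizon`-period return. Positive values indicate a trending
    regime at this time scale, negative values a reversion regime.
    (Illustrative diagnostic, not the paper's exact test.)"""
    log_p = np.log(prices)
    past_trend = log_p.diff(horizon)
    future_return = log_p.diff(horizon).shift(-horizon)
    return past_trend.corr(future_return)

# Usage sketch with daily prices: per the entry above, one would expect
# positive values at intermediate horizons (hours to a few years) and
# negative values at shorter and longer extremes.
# for h in (1, 5, 21, 252):
#     print(h, trend_persistence(daily_prices, h))
```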
DeepSeek-R1 - Incentivizing Reasoning Capability in LLMs via Reinforcement Learning
Description: We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero naturally develops numerous powerful and intriguing reasoning behaviors. However, it encounters challenges such as poor readability and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates multi-stage training and cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217 on reasoning tasks. To support the research community, we open-source DeepSeek-R1-Zero, DeepSeek-R1, and six dense models (1.5B, 7B, 8B, 14B, 32B, 70B) distilled from DeepSeek-R1 based on Qwen and Llama.
Created At: 29 January 2025
Updated At: 29 January 2025
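Since the entry above notes that the distilled dense checkpoints are open-sourced, a minimal sketch of running one with the Hugging Face transformers library follows; the model id is assumed to match DeepSeek's published releases, and the prompt is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model id assumed from DeepSeek's open-source releases on Hugging Face.
model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

prompt = "How many prime numbers are there below 100? Think step by step."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```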