6 links tagged with all of: machine-learning + evaluation
Links
The article surveys open problems in machine learning drawn from a graduate class. It critiques current methodologies, arguing for a design-based perspective, better evaluation methods, and innovations in large language models, and encourages researchers to explore these under-addressed areas.
This article introduces WebGym, an extensive open-source environment for training visual web agents using nearly 300,000 tasks from real websites. It details a reinforcement learning approach that improves agent performance, achieving a notable increase in success rates on unseen tasks compared to other models.
AIRS-Bench evaluates the research capabilities of large language model agents across 20 tasks in machine learning. Each task includes a problem, dataset, metric, and state-of-the-art value, allowing for performance comparison among various agent configurations. The framework supports contributions from the AI research community for further development.
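As a rough illustration of the task structure described above, each benchmark entry pairs a problem with a dataset, a metric, and a state-of-the-art reference value. A minimal sketch in Python (the class name, field names, and example values here are hypothetical, not AIRS-Bench's actual schema):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ResearchTask:
    """Hypothetical sketch of one benchmark task: a problem paired with
    a dataset, an evaluation metric, and a state-of-the-art value."""
    problem: str       # short task description
    dataset: str       # dataset the agent must work with
    metric: str        # metric used to score the agent's solution
    sota_value: float  # published state-of-the-art score for comparison

    def relative_score(self, agent_value: float) -> float:
        """Agent performance as a fraction of the state-of-the-art value."""
        return agent_value / self.sota_value

# Example: compare a hypothetical agent run against the reference value.
task = ResearchTask(
    problem="image classification",
    dataset="CIFAR-10",
    metric="accuracy",
    sota_value=0.995,
)
print(round(task.relative_score(0.945), 3))  # 0.95
```

Normalizing each agent's score against the task's state-of-the-art value is one simple way such a benchmark could make performance comparable across tasks with different metrics.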
This article details the implementation of Google's Nested Learning (HOPE) architecture, focusing on its mechanism-level components and testing procedures. It provides guidance on installation, usage, and evaluation, including various training configurations and memory management strategies for machine learning models.
WavReward is a reward feedback model for evaluating spoken dialogue systems, assessing both their intelligence quotient (IQ) and emotional quotient (EQ) through audio language models. It pairs a specialized evaluator, trained with multi-sample feedback and reinforcement learning, with the ChatReward-30K dataset, and significantly outperforms existing evaluation models in accuracy and subjective testing across diverse spoken dialogue scenarios.
JudgeLRM introduces a novel approach to using Large Language Models (LLMs) as evaluators, particularly in complex reasoning tasks. By employing reinforcement learning with judge-wise rewards, JudgeLRM models significantly outperform traditional Supervised Fine-Tuning methods and current leading models, demonstrating superior performance in tasks that require deep reasoning.