Daily Arxiv
This page collects artificial-intelligence papers published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for the papers belongs to their authors and affiliated institutions; please cite the source when sharing.
Merge-of-Thought Distillation
OTESGN: Optimal Transport-Enhanced Syntactic-Semantic Graph Networks for Aspect-Based Sentiment Analysis
MESH - Understanding Videos Like Human: Measuring Hallucinations in Large Video Models
Adapting Vision-Language Models for Neutrino Event Classification in High-Energy Physics
Symmetry-Guided Multi-Agent Inverse Reinforcement Learning
AU-Harness: An Open-Source Toolkit for Holistic Evaluation of Audio LLMs
Expert-Guided Explainable Few-Shot Learning for Medical Image Diagnosis
Towards Generalized Routing: Model and Agent Orchestration for Adaptive and Efficient Inference
MachineLearningLM: Scaling Many-shot In-context Learning via Continued Pretraining
Demo: Healthcare Agent Orchestrator (HAO) for Patient Summarization in Molecular Tumor Boards
Focusing by Contrastive Attention: Enhancing VLMs' Visual Reasoning
Beyond the Pre-Service Horizon: Infusing In-Service Behavior for Improved Financial Risk Forecasting
On Synthesis of Timed Regular Expressions
TinyDef-DETR: A DETR-based Framework for Defect Detection in Transmission Lines from UAV Imagery
LiDAR-BIND-T: Improved and Temporally Consistent Sensor Modality Translation and Fusion for Robotic Applications
From Vision to Validation: A Theory- and Data-Driven Construction of a GCC-Specific AI Adoption Index
A Comprehensive Guide to Differential Privacy: From Theory to User Expectations
The Architecture of AI Transformation: Four Strategic Patterns and an Emerging Frontier
FLM-Audio: Natural Monologues Improves Native Full-Duplex Chatbots via Dual Training
Deep Learning-Based Rock Particulate Classification Using Attention-Enhanced ConvNeXt
The Information Dynamics of Generative Diffusion
Data-Augmented Few-Shot Neural Stencil Emulation for System Identification of Computer Models
Group Expectation Policy Optimization for Heterogeneous Reinforcement Learning
Pretrained Conformers for Audio Fingerprinting and Retrieval
Towards Scalable Training for Handwritten Mathematical Expression Recognition
To Theoretically Understand Transformer-Based In-Context Learning for Optimizing CSMA
Klear-CodeTest: Scalable Test Case Generation for Code Reinforcement Learning
HiD-VAE: Interpretable Generative Recommendation via Hierarchical and Disentangled Semantic IDs
MagicGUI: A Foundational Mobile GUI Agent with Scalable Data Pipeline and Reinforcement Fine-tuning
Villa-X: Enhancing Latent Action Modeling in Vision-Language-Action Models
New Kid in the Classroom: Exploring Student Perceptions of AI Coding Assistants
Can Large Language Models Understand As Well As Apply Patent Regulations to Pass a Hands-On Patent Attorney Test?
Uncertainty-aware Diffusion and Reinforcement Learning for Joint Plane Localization and Anomaly Diagnosis in 3D Ultrasound
Uncertainty Estimation by Human Perception versus Neural Models
Persistent Homology of Topic Networks for the Prediction of Reader Curiosity
Task Matters: Knowledge Requirements Shape LLM Responses to Context-Memory Conflict
Crack Path Prediction with Operator Learning using Discrete Particle System data Generation
Diffusion Graph Neural Networks for Robustness in Olfaction Sensors and Datasets
MM-Prompt: Cross-Modal Prompt Tuning for Continual Visual Question Answering
An Ontology-Driven Graph RAG for Legal Norms: A Structural, Temporal, and Deterministic Approach
Combating Falsification of Speech Videos with Live Optical Signatures (Extended Version)
Early Exit and Multi Stage Knowledge Distillation in VLMs for Video Summarization
Critical Challenges and Guidelines in Evaluating Synthetic Tabular Data: A Systematic Review
Parasite: A Steganography-based Backdoor Attack Framework for Diffusion Models
Towards Adaptive Memory-Based Optimization for Enhanced Retrieval-Augmented Generation
Entropy-Gated Branching for Efficient Test-Time Reasoning
SWI: Speaking with Intent in Large Language Models
Byzantine-Robust Federated Learning Using Generative Adversarial Networks
VeriSafe Agent: Safeguarding Mobile GUI Agent via Logic-based Action Verification
MIND: Towards Immersive Psychological Healing with Multi-agent Inner Dialogue
V-HOP: Visuo-Haptic 6D Object Pose Tracking
EgoAgent: A Joint Predictive Agent Model in Egocentric Worlds
Knowledge-Guided Biomarker Identification for Label-Free Single-Cell RNA-Seq Data: A Reinforcement Learning Perspective
MERaLiON-SpeechEncoder: Towards a Speech Foundation Model for Singapore and Beyond
RED: Unleashing Token-Level Rewards from Holistic Feedback via Reward Redistribution
IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves
DeepVoting: Learning and Fine-Tuning Voting Rules with Canonical Embeddings
Rethinking Disentanglement under Dependent Factors of Variation
Discovering physical laws with parallel symbolic enumeration
Semantic Augmentation in Images using Language
Algorithmic Collusion by Large Language Models
A minimal coalition logic
Deep Reinforcement Learning for Inventory Networks: Toward Reliable Policy Optimization
Inconsistency Handling in Prioritized Databases with Universal Constraints: Complexity Analysis and Links with Active Integrity Constraints
Directly Aligning the Full Diffusion Trajectory with Fine-Grained Human Preference
CogGuide: Human-Like Guidance for Zero-Shot Omni-Modal Reasoning
TreeGPT: Pure TreeFFN Encoder-Decoder Architecture for Structured Reasoning Without Attention Mechanisms
Robix: A Unified Model for Robot Interaction, Reasoning and Planning
KROMA: Ontology Matching with Knowledge Retrieval and Large Language Models
Scaling LLM Planning: NL2FLOW for Parametric Problem Generation and Rigorous Evaluation
Optimizing Length Compression in Large Reasoning Models
LLMs for sensory-motor control: Combining in-context and iterative learning
Effort-aware Fairness: Incorporating a Philosophy-informed, Human-centered Notion of Effort into Algorithmic Fairness Metrics
Simulating Human-like Daily Activities with Desire-driven Autonomy
Enhancing Few-Shot Transfer Learning with Optimized Multi-Task Prompt Tuning through Modular Prompt Composition
ButterflyQuant: Ultra-low-bit LLM Quantization through Learnable Orthogonal Butterfly Transforms
CDE: Curiosity-Driven Exploration for Efficient Reinforcement Learning in Large Language Models
SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning
Feasibility-Guided Fair Adaptive Offline Reinforcement Learning for Medicaid Care Management
Retrieval-Augmented Generation for Reliable Interpretation of Radio Regulations
Explaining Concept Drift through the Evolution of Group Counterfactuals
LoCoBench: A Benchmark for Long-Context Large Language Models in Complex Software Engineering
Mechanistic Learning with Guided Diffusion Models to Predict Spatio-Temporal Brain Tumor Growth
Graph Alignment via Dual-Pass Spectral Encoding and Latent Space Communication
ObjectReact: Learning Object-Relative Control for Visual Navigation
Fluent but Unfeeling: The Emotional Blind Spots of Language Models
Invisible Attributes, Visible Biases: Exploring Demographic Shortcuts in MRI-based Alzheimer's Disease Classification
An improved educational competition optimizer with multi-covariance learning operators for global optimization problems
Improving Video Diffusion Transformer Training by Multi-Feature Fusion and Alignment from Self-Supervised Vision Encoders
A modified RIME algorithm with covariance learning and diversity enhancement for numerical optimization
Towards Explainable Job Title Matching: Leveraging Semantic Textual Relatedness and Knowledge Graphs
Explainable AI for Accelerated Microstructure Imaging: A SHAP-Guided Protocol on the Connectome 2.0 scanner
Incorporating AI Incident Reporting into Telecommunications Law and Policy: Insights from India
OpenFake: An Open Dataset and Platform Toward Large-Scale Deepfake Detection
Prompt Pirates Need a Map: Stealing Seeds helps Stealing Prompts
Resource-Efficient Glioma Segmentation on Sub-Saharan MRI
ENSI: Efficient Non-Interactive Secure Inference for Large Language Models
We're Still Doing It (All) Wrong: Recommender Systems, Fifteen Years Later
LLMs Don't Know Their Own Decision Boundaries: The Unreliability of Self-Generated Counterfactual Explanations
MetaLLMix : An XAI Aided LLM-Meta-learning Based Approach for Hyper-parameters Optimization
CDE: Curiosity-Driven Exploration for Efficient Reinforcement Learning in Large Language Models
Created by
Haebom
Authors
Runpeng Dai, Linfeng Song, Haolin Liu, Zhenwen Liang, Dian Yu, Haitao Mi, Zhaopeng Tu, Rui Liu, Tong Zheng, Hongtu Zhu, Dong Yu
Overview
This paper focuses on improving exploration strategies to strengthen the reasoning ability of large language models (LLMs) trained with reinforcement learning from verifiable rewards (RLVR). To address the premature convergence and entropy collapse seen in existing RLVR methods, it proposes the Curiosity-Driven Exploration (CDE) framework, which leverages the model's intrinsic curiosity. Two curiosity signals are used as exploration bonuses within the RLVR framework: the actor's perplexity over its own generated responses and the variance of value estimates from a multi-head critic architecture. Theoretical analysis shows that the actor-based bonus penalizes errors and promotes diversity among correct answers, while the critic-based bonus is related to established exploration bonuses in RL. Experimentally, the method achieves roughly a three-point improvement over standard RLVR on the AIME benchmark, and an analysis of the calibration-collapse mechanism within RLVR reveals a common failure mode of LLMs.
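To make the two curiosity signals concrete, the minimal Python sketch below shows how an actor-perplexity bonus and a critic-variance bonus could be added to a verifiable reward. The function names, the weighting coefficients alpha and beta, and the use of per-token log-probabilities are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import numpy as np

def actor_perplexity_bonus(token_logprobs):
    """Curiosity signal from the actor: perplexity of its own sampled response.
    Higher perplexity -> the model is less certain -> larger exploration bonus.
    (Illustrative form; the paper's exact definition may differ.)"""
    avg_nll = -np.mean(token_logprobs)   # average negative log-likelihood per token
    return float(np.exp(avg_nll))        # perplexity of the sampled response

def critic_variance_bonus(head_values):
    """Curiosity signal from the critic: disagreement among the value heads
    of a multi-head critic. High variance -> the state is poorly understood."""
    return float(np.var(head_values))

def shaped_reward(verifiable_reward, token_logprobs, head_values,
                  alpha=0.1, beta=0.1):
    """Combine the verifiable (e.g. exact-match) reward with the two curiosity
    bonuses. alpha and beta are assumed weighting coefficients, not values
    taken from the paper."""
    bonus = (alpha * actor_perplexity_bonus(token_logprobs)
             + beta * critic_variance_bonus(head_values))
    return verifiable_reward + bonus

# Toy usage: one sampled response with per-token log-probs and a 4-head critic.
logprobs = np.log([0.6, 0.3, 0.8, 0.5])   # per-token probabilities of the sampled answer
heads = [0.42, 0.55, 0.37, 0.61]          # value estimates from 4 critic heads
print(shaped_reward(verifiable_reward=1.0,
                    token_logprobs=logprobs,
                    head_values=heads))
```

In an actual RLVR loop, the shaped reward would stand in for the raw verifiable reward when computing policy-gradient advantages; annealing alpha and beta over training is one plausible way to keep the bonus from dominating once exploration is no longer needed.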
Takeaways and Limitations
• Takeaways:
◦ Presents the Curiosity-Driven Exploration (CDE) framework, an effective exploration strategy for improving the reasoning ability of LLMs.
◦ Designs and theoretically analyzes new exploration bonuses based on curiosity signals from the actor and the critic.
◦ Experimentally verifies a performance improvement over standard RLVR on the AIME benchmark.
◦ Advances understanding of LLM failure modes through analysis of the calibration-collapse mechanism in RLVR.
• Limitations:
◦ The reported performance gain is limited to the AIME benchmark; generalization to other benchmarks and tasks still needs to be verified.
◦ The definition and configuration of the curiosity signals may require further study.
◦ Deeper analysis of the calibration-collapse mechanism and concrete remedies are still needed.
View PDF