Daily Arxiv
This page collects artificial intelligence papers published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright of the papers belongs to the authors and their institutions; please cite the source when sharing.
Merge-of-Thought Distillation
OTESGN: Optimal Transport-Enhanced Syntactic-Semantic Graph Networks for Aspect-Based Sentiment Analysis
MESH - Understanding Videos Like Human: Measuring Hallucinations in Large Video Models
Adapting Vision-Language Models for Neutrino Event Classification in High-Energy Physics
Symmetry-Guided Multi-Agent Inverse Reinforcement Learning
AU-Harness: An Open-Source Toolkit for Holistic Evaluation of Audio LLMs
Expert-Guided Explainable Few-Shot Learning for Medical Image Diagnosis
Towards Generalized Routing: Model and Agent Orchestration for Adaptive and Efficient Inference
MachineLearningLM: Scaling Many-shot In-context Learning via Continued Pretraining
Demo: Healthcare Agent Orchestrator (HAO) for Patient Summarization in Molecular Tumor Boards
Focusing by Contrastive Attention: Enhancing VLMs' Visual Reasoning
Beyond the Pre-Service Horizon: Infusing In-Service Behavior for Improved Financial Risk Forecasting
On Synthesis of Timed Regular Expressions
TinyDef-DETR: A DETR-based Framework for Defect Detection in Transmission Lines from UAV Imagery
LiDAR-BIND-T: Improved and Temporally Consistent Sensor Modality Translation and Fusion for Robotic Applications
From Vision to Validation: A Theory- and Data-Driven Construction of a GCC-Specific AI Adoption Index
A Comprehensive Guide to Differential Privacy: From Theory to User Expectations
The Architecture of AI Transformation: Four Strategic Patterns and an Emerging Frontier
FLM-Audio: Natural Monologues Improves Native Full-Duplex Chatbots via Dual Training
Deep Learning-Based Rock Particulate Classification Using Attention-Enhanced ConvNeXt
The Information Dynamics of Generative Diffusion
Data-Augmented Few-Shot Neural Stencil Emulation for System Identification of Computer Models
Group Expectation Policy Optimization for Heterogeneous Reinforcement Learning
Pretrained Conformers for Audio Fingerprinting and Retrieval
Towards Scalable Training for Handwritten Mathematical Expression Recognition
To Theoretically Understand Transformer-Based In-Context Learning for Optimizing CSMA
Klear-CodeTest: Scalable Test Case Generation for Code Reinforcement Learning
HiD-VAE: Interpretable Generative Recommendation via Hierarchical and Disentangled Semantic IDs
MagicGUI: A Foundational Mobile GUI Agent with Scalable Data Pipeline and Reinforcement Fine-tuning
Villa-X: Enhancing Latent Action Modeling in Vision-Language-Action Models
New Kid in the Classroom: Exploring Student Perceptions of AI Coding Assistants
Can Large Language Models Understand As Well As Apply Patent Regulations to Pass a Hands-On Patent Attorney Test?
Uncertainty-aware Diffusion and Reinforcement Learning for Joint Plane Localization and Anomaly Diagnosis in 3D Ultrasound
Uncertainty Estimation by Human Perception versus Neural Models
Persistent Homology of Topic Networks for the Prediction of Reader Curiosity
Task Matters: Knowledge Requirements Shape LLM Responses to Context-Memory Conflict
Crack Path Prediction with Operator Learning using Discrete Particle System data Generation
Diffusion Graph Neural Networks for Robustness in Olfaction Sensors and Datasets
MM-Prompt: Cross-Modal Prompt Tuning for Continual Visual Question Answering
An Ontology-Driven Graph RAG for Legal Norms: A Structural, Temporal, and Deterministic Approach
Combating Falsification of Speech Videos with Live Optical Signatures (Extended Version)
Early Exit and Multi Stage Knowledge Distillation in VLMs for Video Summarization
Critical Challenges and Guidelines in Evaluating Synthetic Tabular Data: A Systematic Review
Parasite: A Steganography-based Backdoor Attack Framework for Diffusion Models
Towards Adaptive Memory-Based Optimization for Enhanced Retrieval-Augmented Generation
Entropy-Gated Branching for Efficient Test-Time Reasoning
SWI: Speaking with Intent in Large Language Models
Byzantine-Robust Federated Learning Using Generative Adversarial Networks
VeriSafe Agent: Safeguarding Mobile GUI Agent via Logic-based Action Verification
MIND: Towards Immersive Psychological Healing with Multi-agent Inner Dialogue
V-HOP: Visuo-Haptic 6D Object Pose Tracking
EgoAgent: A Joint Predictive Agent Model in Egocentric Worlds
Knowledge-Guided Biomarker Identification for Label-Free Single-Cell RNA-Seq Data: A Reinforcement Learning Perspective
MERaLiON-SpeechEncoder: Towards a Speech Foundation Model for Singapore and Beyond
RED: Unleashing Token-Level Rewards from Holistic Feedback via Reward Redistribution
IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves
DeepVoting: Learning and Fine-Tuning Voting Rules with Canonical Embeddings
Rethinking Disentanglement under Dependent Factors of Variation
Discovering physical laws with parallel symbolic enumeration
Semantic Augmentation in Images using Language
Algorithmic Collusion by Large Language Models
A minimal coalition logic
Deep Reinforcement Learning for Inventory Networks: Toward Reliable Policy Optimization
Inconsistency Handling in Prioritized Databases with Universal Constraints: Complexity Analysis and Links with Active Integrity Constraints
Directly Aligning the Full Diffusion Trajectory with Fine-Grained Human Preference
CogGuide: Human-Like Guidance for Zero-Shot Omni-Modal Reasoning
TreeGPT: Pure TreeFFN Encoder-Decoder Architecture for Structured Reasoning Without Attention Mechanisms
Robix: A Unified Model for Robot Interaction, Reasoning and Planning
KROMA: Ontology Matching with Knowledge Retrieval and Large Language Models
Scaling LLM Planning: NL2FLOW for Parametric Problem Generation and Rigorous Evaluation
Optimizing Length Compression in Large Reasoning Models
LLMs for sensory-motor control: Combining in-context and iterative learning
Effort-aware Fairness: Incorporating a Philosophy-informed, Human-centered Notion of Effort into Algorithmic Fairness Metrics
Simulating Human-like Daily Activities with Desire-driven Autonomy
Enhancing Few-Shot Transfer Learning with Optimized Multi-Task Prompt Tuning through Modular Prompt Composition
ButterflyQuant: Ultra-low-bit LLM Quantization through Learnable Orthogonal Butterfly Transforms
CDE: Curiosity-Driven Exploration for Efficient Reinforcement Learning in Large Language Models
SimpleVLA-RL: Scaling VLA Training via Reinforcement Learning
Feasibility-Guided Fair Adaptive Offline Reinforcement Learning for Medicaid Care Management
Retrieval-Augmented Generation for Reliable Interpretation of Radio Regulations
Explaining Concept Drift through the Evolution of Group Counterfactuals
LoCoBench: A Benchmark for Long-Context Large Language Models in Complex Software Engineering
Mechanistic Learning with Guided Diffusion Models to Predict Spatio-Temporal Brain Tumor Growth
Graph Alignment via Dual-Pass Spectral Encoding and Latent Space Communication
ObjectReact: Learning Object-Relative Control for Visual Navigation
Fluent but Unfeeling: The Emotional Blind Spots of Language Models
Invisible Attributes, Visible Biases: Exploring Demographic Shortcuts in MRI-based Alzheimer's Disease Classification
An improved educational competition optimizer with multi-covariance learning operators for global optimization problems
Improving Video Diffusion Transformer Training by Multi-Feature Fusion and Alignment from Self-Supervised Vision Encoders
A modified RIME algorithm with covariance learning and diversity enhancement for numerical optimization
Towards Explainable Job Title Matching: Leveraging Semantic Textual Relatedness and Knowledge Graphs
Explainable AI for Accelerated Microstructure Imaging: A SHAP-Guided Protocol on the Connectome 2.0 scanner
Incorporating AI Incident Reporting into Telecommunications Law and Policy: Insights from India
OpenFake: An Open Dataset and Platform Toward Large-Scale Deepfake Detection
Prompt Pirates Need a Map: Stealing Seeds helps Stealing Prompts
Resource-Efficient Glioma Segmentation on Sub-Saharan MRI
ENSI: Efficient Non-Interactive Secure Inference for Large Language Models
We're Still Doing It (All) Wrong: Recommender Systems, Fifteen Years Later
LLMs Don't Know Their Own Decision Boundaries: The Unreliability of Self-Generated Counterfactual Explanations
MetaLLMix : An XAI Aided LLM-Meta-learning Based Approach for Hyper-parameters Optimization
Optimizing Length Compression in Large Reasoning Models
Created by Haebom
Authors
Zhengxiang Cheng, Dongping Chen, Mingyang Fu, Tianyi Zhou
Overview
This paper addresses the tendency of large reasoning models (LRMs) to generate unnecessary, redundant reasoning, identifying "invalid thinking" as a key issue: models repeatedly re-verify their work even after deriving the correct answer. Going beyond the coarse efficiency-effectiveness trade-off, the authors propose two fine-grained principles, Brevity and Sufficiency. Based on these principles, they present LC-R1, a post-training method built on Group Relative Policy Optimization (GRPO). LC-R1 combines a Length Reward for overall conciseness with a Compress Reward that removes the invalid portion of the reasoning process. Experiments on multiple reasoning benchmarks show that LC-R1 reduces sequence length by about 50% with only about a 2% drop in accuracy, reaching a Pareto-optimal point that favors high compression. The paper also validates the robustness of LC-R1 and offers insights toward developing more powerful yet computationally efficient LRMs. The code is publicly available at
https://github.com/zxiangx/LC-R1
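The summary above describes LC-R1 as combining a Length Reward and a Compress Reward inside a GRPO-style, group-relative update. The following minimal Python sketch only illustrates that idea: the reward definitions, the first_correct_pos bookkeeping, the weights w_len/w_comp, and MAX_TOKENS are assumptions made for illustration, not the authors' implementation (see the linked repository for that).

```python
# Illustrative sketch: group-relative (GRPO-style) advantages with a
# combined correctness / length / compress reward, in the spirit of LC-R1.
# Reward shapes and coefficients are assumptions, not the paper's code.
from dataclasses import dataclass
from statistics import mean, pstdev

MAX_TOKENS = 4096  # assumed generation budget for normalization


@dataclass
class Rollout:
    tokens: int              # total length of the generated reasoning
    correct: bool            # does the final answer match the reference?
    first_correct_pos: int   # token index where the correct answer first appears
                             # (== tokens if it never appears)


def reward(r: Rollout, w_len: float = 0.5, w_comp: float = 0.5) -> float:
    """Correctness plus brevity-oriented bonuses (illustrative weights)."""
    base = 1.0 if r.correct else 0.0
    # Length bonus: shorter sequences score higher (normalized to [0, 1]).
    length_bonus = 1.0 - min(r.tokens, MAX_TOKENS) / MAX_TOKENS
    # Compress bonus: penalize the "invalid thinking" tail generated
    # after the correct answer has already been derived.
    valid_fraction = r.first_correct_pos / max(r.tokens, 1) if r.correct else 0.0
    return base + w_len * length_bonus + w_comp * valid_fraction


def group_relative_advantages(group: list[Rollout]) -> list[float]:
    """GRPO-style advantage: standardize rewards within one sampled group."""
    rs = [reward(r) for r in group]
    mu, sigma = mean(rs), pstdev(rs) or 1.0  # avoid division by zero
    return [(x - mu) / sigma for x in rs]


if __name__ == "__main__":
    group = [
        Rollout(tokens=1800, correct=True, first_correct_pos=600),    # verbose but correct
        Rollout(tokens=650, correct=True, first_correct_pos=600),     # concise and correct
        Rollout(tokens=1200, correct=False, first_correct_pos=1200),  # incorrect
    ]
    print(group_relative_advantages(group))
```

Under these assumed rewards, the concise correct rollout receives the highest group-relative advantage, the verbose correct one a smaller advantage, and the incorrect one a negative advantage, which is the qualitative behavior the summary attributes to LC-R1.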
Takeaways, Limitations
• Takeaways:
◦ Proposes new principles (Brevity, Sufficiency) and a method (LC-R1) to address the inefficient reasoning process of large reasoning models.
◦ Presents an effective way to drastically shorten the reasoning process while minimizing the loss in accuracy.
◦ Achieves a Pareto-optimal point that combines a high compression rate with minimal performance degradation.
◦ Contributes to improving the computational efficiency of LRMs.
• Limitations:
◦ Further research is needed on the generalization of the proposed method.
◦ Applicability and performance evaluation across diverse types of LRMs are still needed.
◦ Further research is needed on defining and measuring "invalid thinking".
View PDF