Daily Arxiv
A page collecting papers on artificial intelligence published around the world.
This page is summarized using Google Gemini and operated on a non-profit basis.
Copyright of the papers belongs to the authors and their institutions; please cite the source when sharing.
BRIDGE - Building Reinforcement-Learning Depth-to-Image Data Generation Engine for Monocular Depth Estimation
VSSFlow: Unifying Video-conditioned Sound and Speech Generation via Joint Learning
UI-UG: A Unified MLLM for UI Understanding and Generation
Q-Mirror: Unlocking the Multi-Modal Potential of Scientific Text-Only QA Pairs
Conda: Column-Normalized Adam for Training Large Language Models Faster
TENET: Leveraging Tests Beyond Validation for Code Generation
FameMind: Frame-Interleaved Video Reasoning via Reinforcement Learning
Explore-Execute Chain: Towards an Efficient Structured Reasoning Paradigm
Sequence Pathfinder for Multi-Agent Pickup and Delivery in the Warehouse
MMPB: It's Time for Multi-Modal Personalization
Painless Activation Steering: An Automated, Lightweight Approach for Post-Training Large Language Models
A Meta-Analysis of LLM Effects on Students across Qualification, Socialisation, and Subjectification
Wavelet-Induced Rotary Encodings: RoPE Meets Graphs
Backdoor Attribution: Elucidating and Controlling Backdoor in Language Models
Provable Scaling Laws of Feature Emergence from Learning Dynamics of Grokking
Predicting LLM Reasoning Performance with Small Proxy Model
Beyond the Individual: Introducing Group Intention Forecasting with SHOT Dataset
Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation
Video models are zero-shot learners and reasoners
Beyond Sharp Minima: Robust LLM Unlearning via Feedback-Guided Multi-Point Optimization
U-Mamba2-SSL for Semi-Supervised Tooth and Pulp Segmentation in CBCT
Graph Coloring for Multi-Task Learning
KANO: Kolmogorov-Arnold Neural Operator
Robust LLM Training Infrastructure at ByteDance
Communications to Circulations: 3D Wind Field Retrieval and Real-Time Prediction Using 5G GNSS Signals and Deep Learning
FlowRL: Matching Reward Distributions for LLM Reasoning
DreamControl: Human-Inspired Whole-Body Humanoid Control for Scene Interaction via Guided Diffusion
Multi-Robot Task Planning for Multi-Object Retrieval Tasks with Distributed On-Site Knowledge via Large Language Models
U-Mamba2: Scaling State Space Models for Dental Anatomy Segmentation in CBCT
MindVL: Towards Efficient and Effective Training of Multimodal Large Language Models on Ascend NPUs
Inducing Uncertainty on Open-Weight Models for Test-Time Privacy in Image Recognition
Ban&Pick: Enhancing Performance and Efficiency of MoE-LLMs via Smarter Routing
LiDAR-BIND-T: Improved and Temporally Consistent Sensor Modality Translation and Fusion for Robotic Applications
Long-Horizon Visual Imitation Learning via Plan and Code Reflection
Measuring the Measures: Discriminative Capacity of Representational Similarity Metrics Across Model Families
Learning to Generate Unit Test via Adversarial Reinforcement Learning
Diffusion Language Models Know the Answer Before Decoding
Object Detection with Multimodal Large Vision-Language Models: An In-depth Review
Image-Conditioned 3D Gaussian Splat Quantization
The DNA of nuclear models: How AI predicts nuclear masses
FoundBioNet: A Foundation-Based Model for IDH Genotyping of Glioma from Multi-Parametric MRI
Learning Unified User Quantized Tokenizers for User Representation
A Survey on Code Generation with LLM-based Agents
The Ever-Evolving Science Exam
The Impact of Language Mixing on Bilingual LLM Reasoning
Mind the Gap: A Review of Arabic Post-Training Datasets and Their Limitations
Linguistic and Embedding-Based Profiling of Texts generated by Humans and Large Language Models
QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation
CADmium: Fine-Tuning Code Language Models for Text-Driven Sequential CAD Design
Scaling RL to Long Videos
On the Effectiveness of Methods and Metrics for Explainable AI in Remote Sensing Image Scene Classification
Reinforcement Fine-Tuning Naturally Mitigates Forgetting in Continual Post-Training
HumanVideo-MME: Benchmarking MLLMs for Human-Centric Video Understanding
LATTE: Latent Trajectory Embedding for Diffusion-Generated Image Detection
Deep Graph Learning for Industrial Carbon Emission Analysis and Policy Impact
DNN-Based Precoding in RIS-Aided mmWave MIMO Systems With Practical Phase Shift
SoMi-ToM: Evaluating Multi-Perspective Theory of Mind in Embodied Social Interactions
When Does Multimodality Lead to Better Time Series Forecasting?
FeDa4Fair: Client-Level Federated Datasets for Fairness Evaluation
Decoupled Classifier-Free Guidance for Counterfactual Diffusion Models
QGuard: Question-based Zero-shot Guard for Multi-modal LLM Safety
VITA: Zero-Shot Value Functions via Test-Time Adaptation of Vision-Language Models
A theoretical framework for self-supervised contrastive learning for continuous dependent data
Efficient Context Selection for Long-Context QA: No Tuning, No Iteration, Just Adaptive-$k$
Resisting Contextual Interference in RAG via Parametric-Knowledge Reinforcement
Static Word Embeddings for Sentence Semantic Representation
Negative-Guided Subject Fidelity Optimization for Zero-Shot Subject-Driven Generation
Multi Layered Autonomy and AI Ecologies in Robotic Art Installations
WorldGym: World Model as An Environment for Policy Evaluation
Personalized Subgraph Federated Learning with Differentiable Auxiliary Projections
ViewSpatial-Bench: Evaluating Multi-perspective Spatial Localization in Vision-Language Models
Finite Sample Analysis of Linear Temporal Difference Learning with Arbitrary Features
SelfReflect: Can LLMs Communicate Their Internal Answer Distribution?
Value-Guided Search for Efficient Chain-of-Thought Reasoning
LLM Agents for Interactive Exploration of Historical Cadastre Data: Framework and Application to Venice
Find the Fruit: Zero-Shot Sim2Real RL for Occlusion-Aware Plant Manipulation
AudioTrust: Benchmarking the Multifaceted Trustworthiness of Audio Large Language Models
Causal Interventions Reveal Shared Structure Across English Filler-Gap Constructions
DEBATE, TRAIN, EVOLVE: Self Evolution of Language Model Reasoning
Octic Vision Transformers: Quicker ViTs Through Equivariance
Silent Leaks: Implicit Knowledge Extraction Attack on RAG Systems through Benign Queries
ELEPHANT: Measuring and understanding social sycophancy in LLMs
Structured Agent Distillation for Large Language Model
ScSiameseClu: A Siamese Clustering Framework for Interpreting single-cell RNA Sequencing Data
DisCO: Reinforcing Large Reasoning Models with Discriminative Constrained Optimization
Modeling Saliency Dataset Bias
TensorRL-QAS: Reinforcement learning with tensor networks for improved quantum architecture search
Scalable LLM Math Reasoning Acceleration with Low-rank Distillation
Simple yet Effective Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Stochastic Layer-wise Learning: Scalable and Efficient Alternative to Backpropagation
Fair Uncertainty Quantification for Depression Prediction
Adaptive Rectification Sampling for Test-Time Compute Scaling
Lobster: A GPU-Accelerated Framework for Neurosymbolic Programming
Enabling Rapid Shared Human-AI Mental Model Alignment via the After-Action Review
CODA: Repurposing Continuous VAEs for Discrete Tokenization
Value Profiles for Encoding Human Variation
FW-Merging: Scaling Model Merging with Frank-Wolfe Optimization
A Survey on SAR ship classification using Deep Learning
Revisiting semi-supervised learning in the era of foundation models
Rethinking Diffusion Model in High Dimension
DEBATE, TRAIN, EVOLVE: Self Evolution of Language Model Reasoning
Created by
Haebom
Authors
Gaurav Srivastava, Zhenyu Bi, Meng Lu, Xuan Wang
Overview
Large language models (LLMs) have substantially improved their reasoning abilities through extensive training on massive datasets, but relying solely on additional data is becoming impractical. This paper highlights the need for models that can improve their reasoning autonomously, without external supervision, and proposes DTE (Debate, Train, Evolve), a novel ground-truth-free training framework that uses multi-agent debate traces to evolve a single language model. It also introduces Reflect-Critique-Refine, a new prompting strategy that improves debate quality by explicitly instructing agents to critique and refine their reasoning. Extensive evaluation of six open-weight models across seven reasoning benchmarks shows that the DTE framework achieves substantial gains, most notably an average accuracy improvement of 8.92% on the challenging GSM-PLUS dataset. It further delivers an average accuracy improvement of 5.8% across all other benchmarks, demonstrating strong cross-domain generalization.
Takeaways and Limitations
•
Takeaways:
◦
Proposes the DTE framework, which improves the reasoning ability of a single language model through multi-agent debate without ground truth.
◦
Introduces the Reflect-Critique-Refine prompting strategy to improve debate quality.
◦
Demonstrates strong performance and generalization, with an 8.92% accuracy gain on GSM-PLUS and a 5.8% gain across the other benchmarks.
◦
Contributes to reproducibility and dissemination of the research by releasing open-source code and models.
•
Limitations:
◦
The paper does not report the computational cost or training time of the DTE framework.
◦
Lacks a detailed explanation of the specific mechanism behind the model's improvement.
◦
Further analysis of generalization to other reasoning benchmarks is needed.
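The debate-then-train loop summarized above can be sketched roughly as follows. This is a minimal illustrative sketch only: the function names, the mock agent interface, and the majority-vote consensus are assumptions for illustration, not the paper's actual implementation, prompts, or training code.

```python
def reflect_critique_refine(agents, question, rounds=2):
    """Collect a debate trace: each agent answers, then critiques and
    refines its answer given the other agents' latest answers.
    `agents` maps a name to a callable (question, context) -> answer."""
    answers = {name: agent(question, context=[]) for name, agent in agents.items()}
    trace = [dict(answers)]
    for _ in range(rounds):
        for name, agent in agents.items():
            peer_answers = [a for n, a in answers.items() if n != name]
            answers[name] = agent(question, context=peer_answers)
        trace.append(dict(answers))
    return trace

def majority_answer(final_round):
    """Consensus answer from the last debate round (simple majority vote)."""
    votes = {}
    for ans in final_round.values():
        votes[ans] = votes.get(ans, 0) + 1
    return max(votes, key=votes.get)

def dte_iteration(agents, questions):
    """One Debate -> Train -> Evolve step: debate each question and keep
    consensus answers as ground-truth-free pseudo-labels for fine-tuning."""
    training_data = []
    for q in questions:
        trace = reflect_critique_refine(agents, q)
        training_data.append((q, majority_answer(trace[-1])))
    # In the real framework, the single model would be fine-tuned on
    # `training_data` here and then redeployed as the debating agents
    # for the next iteration ("evolve").
    return training_data
```

The key design point captured here is that supervision comes entirely from the agents' own debate consensus, which is what makes the framework ground-truth-free.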