Daily Arxiv
This page collects artificial intelligence papers published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for the papers belongs to the authors and their institutions; please cite the source when sharing.
Emotions as Ambiguity-aware Ordinal Representations
From Tabula Rasa to Emergent Abilities: Discovering Robot Skills via Real-World Unsupervised Quality-Diversity
Enhancing Model Privacy in Federated Learning with Random Masking and Quantization
Scaling Laws for Task-Stratified Knowledge in Post-Training Quantized Large Language Models
Principled Detection of Hallucinations in Large Language Models via Multiple Testing
Vocoder-Projected Feature Discriminator
ControlEchoSynth: Boosting Ejection Fraction Estimation Models via Controlled Video Diffusion
Explain Before You Answer: A Survey on Compositional Visual Reasoning
Time-Aware One Step Diffusion Network for Real-World Image Super-Resolution
PediatricsMQA: a Multi-modal Pediatrics Question Answering Benchmark
VideoEraser: Concept Erasure in Text-to-Video Diffusion Models
A Systematic Survey of Model Extraction Attacks and Defenses: State-of-the-Art and Perspectives
GeoSAM2: Unleashing the Power of SAM2 for 3D Part Segmentation
Input-Time Scaling
LinguaSafe: A Comprehensive Multilingual Safety Benchmark for Large Language Models
A Survey on Parallel Text Generation: From Parallel Decoding to Diffusion Language Models
StreetViewAI: Making Street View Accessible Using Context-Aware Multimodal AI
Putnam-AXIOM: A Functional and Static Benchmark for Measuring Higher Level Mathematical Reasoning in LLMs
From Imitation to Optimization: A Comparative Study of Offline Learning for Autonomous Driving
R-Zero: Self-Evolving Reasoning LLM from Zero Data
Human-Centered Human-AI Interaction (HC-HAII): A Human-Centered AI Perspective
GTPO: Trajectory-Based Policy Optimization in Large Language Models
Contrastive Multi-Task Learning with Solvent-Aware Augmentation for Drug Discovery
A Large-Scale Benchmark of Cross-Modal Learning for Histology and Gene Expression in Spatial Transcriptomics
Invisible Architectures of Thought: Toward a New Science of AI as Cognitive Infrastructure
Revisiting Pre-trained Language Models for Vulnerability Detection
MegaScience: Pushing the Frontiers of Post-Training Datasets for Science Reasoning
Scaling Decentralized Learning with FLock
SegQuant: A Semantics-Aware and Generalizable Quantization Framework for Diffusion Models
Apple Intelligence Foundation Language Models: Tech Report 2025
Optimistic Exploration for Risk-Averse Constrained Reinforcement Learning
PyVision: Agentic Vision with Dynamic Tooling
DATABench: Evaluating Dataset Auditing in Deep Learning from an Adversarial Perspective
RoboTwin 2.0: A Scalable Data Generator and Benchmark with Strong Domain Randomization for Robust Bimanual Robotic Manipulation
Analyzing Character Representation in Media Content using Multimodal Foundation Model: Effectiveness and Trust
MEraser: An Effective Fingerprint Erasure Approach for Large Language Models
CoQuIR: A Comprehensive Benchmark for Code Quality-Aware Information Retrieval
DreamActor-H1: High-Fidelity Human-Product Demonstration Video Generation via Motion-designed Diffusion Transformers
Pseudo-Simulation for Autonomous Driving
BinConv: A Neural Architecture for Ordinal Encoding in Time-Series Forecasting
FaceEditTalker: Controllable Talking Head Generation with Facial Attribute Editing
EnvInjection: Environmental Prompt Injection Attack to Multi-modal Web Agents
X-Sim: Cross-Embodiment Learning via Real-to-Sim-to-Real
Heat Diffusion Models - Interpixel Attention Mechanism
Bidirectional Task-Motion Planning Based on Hierarchical Reinforcement Learning for Strategic Confrontation
Multi-Type Context-Aware Conversational Recommender Systems via Mixture-of-Experts
Pricing AI Model Accuracy
Evaluating the Fitness of Ontologies for the Task of Question Generation
Utility-Focused LLM Annotation for Retrieval and Retrieval-Augmented Generation
PGAD: Prototype-Guided Adaptive Distillation for Multi-Modal Learning in AD Diagnosis
Constructing a Norm for Children's Scientific Drawing: Distribution Features Based on Semantic Similarity of Large Language Models
An Empirical Risk Minimization Approach for Offline Inverse RL and Dynamic Discrete Choice Model
Efficient PINNs via Multi-Head Unimodular Regularization of the Solutions Space
Statistical learning does not always entail knowledge
Score-based Generative Diffusion Models for Social Recommendations
PromptKeeper: Safeguarding System Prompts for LLMs
X-Prompt: Towards Universal In-Context Image Generation in Auto-Regressive Vision Language Foundation Models
Understanding Fairness-Accuracy Trade-offs in Machine Learning Models: Does Promoting Fairness Undermine Performance?
GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
Leveraging Multi-facet Paths for Heterogeneous Graph Representation Learning
Training with Explanations Alone: A New Paradigm to Prevent Shortcut Learning
Generation of Geodesics with Actor-Critic Reinforcement Learning to Predict Midpoints
TabSketchFM: Sketch-based Tabular Representation Learning for Data Discovery over Data Lakes
HoneyBee: A Scalable Modular Framework for Creating Multimodal Oncology Datasets with Foundational Embedding Models
StepWiser: Stepwise Generative Judges for Wiser Reasoning
AniME: Adaptive Multi-Agent Planning for Long Animation Generation
AppAgent-Pro: A Proactive GUI Agent System for Multidomain Information Integration and User Assistance
AI Chaperones Are (Really) All You Need to Prevent Parasocial Relationships with Chatbots
Nemori: Self-Organizing Agent Memory Inspired by Cognitive Science
General agents contain world models
Approximate Lifted Model Construction
Fitness Landscape of Large Language Model-Assisted Automated Algorithm Search
Synthesizing High-Quality Programming Tasks with LLM-based Expert and Student Agents
Preference Elicitation for Multi-objective Combinatorial Optimization with Active Learning and Maximum Likelihood Estimation
Reference-Aligned Retrieval-Augmented Question Answering over Heterogeneous Proprietary Documents
Demonstrating specification gaming in reasoning models
AirRAG: Autonomous Strategic Planning and Reasoning Steer Retrieval Augmented Generation
Think Smart, Act SMARL! Analyzing Probabilistic Logic Shields for Multi-Agent Reinforcement Learning
From Evidence to Decision: Exploring Evaluative AI
CODA: Coordinating the Cerebrum and Cerebellum for a Dual-Brain Computer Use Agent with Decoupled Reinforcement Learning
Discrete-Guided Diffusion for Scalable and Safe Multi-Robot Motion Planning
Patch Progression Masked Autoencoder with Fusion CNN Network for Classifying Evolution Between Two Pairs of 2D OCT Slices
DeepScholar-Bench: A Live Benchmark and Automated Evaluation for Generative Research Synthesis
Large Language Models (LLMs) for Electronic Design Automation (EDA)
Symphony: A Decentralized Multi-Agent Framework for Scalable Collective Intelligence
HPC Digital Twins for Evaluating Scheduling Policies, Incentive Structures and their Impact on Power and Cooling
Decomposing Behavioral Phase Transitions in LLMs: Order Parameters for Emergent Misalignment
Cross-Platform E-Commerce Product Categorization and Recategorization: A Multimodal Hierarchical Classification Approach
Linear-Time Demonstration Selection for In-Context Learning via Gradient Estimation
MathBuddy: A Multimodal System for Affective Math Tutoring
Diffusion Language Models Know the Answer Before Decoding
GLSim: Detecting Object Hallucinations in LVLMs via Global-Local Similarity
Dhati+: Fine-tuned Large Language Models for Arabic Subjectivity Evaluation
WaveHiT-SR: Hierarchical Wavelet Network for Efficient Image Super-Resolution
The Next Layer: Augmenting Foundation Models with Structure-Preserving and Attention-Guided Learning for Local Patches to Global Context Awareness in Computational Pathology
Logical Reasoning with Outcome Reward Models for Test-Time Scaling
The Information Dynamics of Generative Diffusion
AI-Powered Detection of Inappropriate Language in Medical School Curricula
Generative AI for Testing of Autonomous Driving Systems: A Survey
Multispectral LiDAR data for extracting tree points in urban and suburban areas
Harmony in Divergence: Towards Fast, Accurate, and Memory-efficient Zeroth-order LLM Fine-tuning
Created by
Haebom
Authors
Qitao Tan, Jun Liu, Zheng Zhan, Caiwei Ding, Yanzhi Wang, Xiaolong Ma, Jaewoo Lee, Jin Lu, Geng Yuan
Overview
This paper proposes DiZO (Divergence-driven Zeroth-Order optimization), a new optimization technique that overcomes the limitations of memory-efficient zeroth-order (ZO) optimization for fine-tuning large language models (LLMs). Conventional ZO methods estimate gradients using only forward passes, which makes them memory-efficient, but their convergence speed and accuracy lag significantly behind first-order (FO) methods. DiZO analyzes the differences between FO and ZO update patterns and introduces a layer-wise, divergence-driven adaptation scheme that adjusts update magnitudes to each layer's optimization needs. Experiments show that DiZO substantially reduces the number of iterations needed to converge, cutting training GPU hours by up to 48% across diverse datasets; it outperforms existing ZO techniques when fine-tuning RoBERTa-large, the OPT series, and the Llama series, and in some cases even surpasses memory-intensive FO fine-tuning.
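To make the forward-pass-only idea concrete, here is a minimal sketch of SPSA-style zeroth-order gradient estimation with a per-layer adaptive step, using only NumPy. The function names (`zo_gradient`, `dizo_style_step`) and the gradient-norm-based scaling rule are illustrative assumptions of ours, not the paper's actual DiZO projection; they only show the general shape of a layer-wise ZO update.

```python
import numpy as np

def zo_gradient(loss_fn, params, eps=1e-3, rng=None):
    """Estimate the gradient of loss_fn at params from two forward
    passes only (SPSA-style zeroth-order estimate)."""
    rng = np.random.default_rng(rng)
    u = rng.standard_normal(params.shape)       # random perturbation direction
    delta = loss_fn(params + eps * u) - loss_fn(params - eps * u)
    return (delta / (2 * eps)) * u              # directional-derivative estimate

def dizo_style_step(loss_fn, layers, lr=0.1, rng=0):
    """One illustrative layer-wise adaptive ZO step: each layer's update
    is rescaled by its own gradient-norm-based factor (a hypothetical
    stand-in for DiZO's divergence-driven adaptation)."""
    new_layers = []
    for i, w in enumerate(layers):
        # Perturb only layer i, holding the other layers fixed.
        g = zo_gradient(
            lambda v, i=i: loss_fn([v if j == i else x
                                    for j, x in enumerate(layers)]),
            w, rng=rng + i)
        scale = 1.0 / (1.0 + np.linalg.norm(g))  # hypothetical per-layer scaling
        new_layers.append(w - lr * scale * g)
    return new_layers
```

On a simple quadratic loss this step decreases the loss using forward passes alone, which is the memory-saving property the summary describes: no backpropagation state needs to be stored.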
Takeaways, Limitations
•
Takeaways:
◦
Demonstrated that memory-efficient fine-tuning of large language models is possible with zeroth-order optimization.
◦
Presents the DiZO algorithm, which improves the convergence speed and accuracy of conventional zeroth-order optimization.
◦
Demonstrated superior performance compared with existing methods across a variety of LLM models and datasets.
◦
Showed reductions in training time and cost (up to 48%).
•
Limitations:
◦
The published code link is anonymous, which may limit code access and verification.
◦
The analysis of performance variation across different hyperparameter settings may be insufficient.
◦
The results may be biased toward particular LLM types or datasets; broader experiments may be needed.
View PDF