Daily Arxiv
This page collects artificial-intelligence papers published around the world.
Summaries are generated with Google Gemini, and the page is run on a non-profit basis.
Copyright for the papers belongs to their authors and affiliated institutions; please cite the source when sharing.
BRIDGE - Building Reinforcement-Learning Depth-to-Image Data Generation Engine for Monocular Depth Estimation
VSSFlow: Unifying Video-conditioned Sound and Speech Generation via Joint Learning
UI-UG: A Unified MLLM for UI Understanding and Generation
Q-Mirror: Unlocking the Multi-Modal Potential of Scientific Text-Only QA Pairs
Conda: Column-Normalized Adam for Training Large Language Models Faster
TENET: Leveraging Tests Beyond Validation for Code Generation
FrameMind: Frame-Interleaved Video Reasoning via Reinforcement Learning
Explore-Execute Chain: Towards an Efficient Structured Reasoning Paradigm
Sequence Pathfinder for Multi-Agent Pickup and Delivery in the Warehouse
MMPB: It's Time for Multi-Modal Personalization
Painless Activation Steering: An Automated, Lightweight Approach for Post-Training Large Language Models
A Meta-Analysis of LLM Effects on Students across Qualification, Socialisation, and Subjectification
Wavelet-Induced Rotary Encodings: RoPE Meets Graphs
Backdoor Attribution: Elucidating and Controlling Backdoor in Language Models
Provable Scaling Laws of Feature Emergence from Learning Dynamics of Grokking
Predicting LLM Reasoning Performance with Small Proxy Model
Beyond the Individual: Introducing Group Intention Forecasting with SHOT Dataset
Adversarial Defense in Cybersecurity: A Systematic Review of GANs for Threat Detection and Mitigation
Video models are zero-shot learners and reasoners
Beyond Sharp Minima: Robust LLM Unlearning via Feedback-Guided Multi-Point Optimization
U-Mamba2-SSL for Semi-Supervised Tooth and Pulp Segmentation in CBCT
Graph Coloring for Multi-Task Learning
KANO: Kolmogorov-Arnold Neural Operator
Robust LLM Training Infrastructure at ByteDance
Communications to Circulations: 3D Wind Field Retrieval and Real-Time Prediction Using 5G GNSS Signals and Deep Learning
FlowRL: Matching Reward Distributions for LLM Reasoning
DreamControl: Human-Inspired Whole-Body Humanoid Control for Scene Interaction via Guided Diffusion
Multi-Robot Task Planning for Multi-Object Retrieval Tasks with Distributed On-Site Knowledge via Large Language Models
U-Mamba2: Scaling State Space Models for Dental Anatomy Segmentation in CBCT
MindVL: Towards Efficient and Effective Training of Multimodal Large Language Models on Ascend NPUs
Inducing Uncertainty on Open-Weight Models for Test-Time Privacy in Image Recognition
Ban&Pick: Enhancing Performance and Efficiency of MoE-LLMs via Smarter Routing
LiDAR-BIND-T: Improved and Temporally Consistent Sensor Modality Translation and Fusion for Robotic Applications
Long-Horizon Visual Imitation Learning via Plan and Code Reflection
Measuring the Measures: Discriminative Capacity of Representational Similarity Metrics Across Model Families
Learning to Generate Unit Test via Adversarial Reinforcement Learning
Diffusion Language Models Know the Answer Before Decoding
Object Detection with Multimodal Large Vision-Language Models: An In-depth Review
Image-Conditioned 3D Gaussian Splat Quantization
The DNA of nuclear models: How AI predicts nuclear masses
FoundBioNet: A Foundation-Based Model for IDH Genotyping of Glioma from Multi-Parametric MRI
Learning Unified User Quantized Tokenizers for User Representation
A Survey on Code Generation with LLM-based Agents
The Ever-Evolving Science Exam
The Impact of Language Mixing on Bilingual LLM Reasoning
Mind the Gap: A Review of Arabic Post-Training Datasets and Their Limitations
Linguistic and Embedding-Based Profiling of Texts generated by Humans and Large Language Models
QuestA: Expanding Reasoning Capacity in LLMs via Question Augmentation
CADmium: Fine-Tuning Code Language Models for Text-Driven Sequential CAD Design
Scaling RL to Long Videos
On the Effectiveness of Methods and Metrics for Explainable AI in Remote Sensing Image Scene Classification
Reinforcement Fine-Tuning Naturally Mitigates Forgetting in Continual Post-Training
HumanVideo-MME: Benchmarking MLLMs for Human-Centric Video Understanding
LATTE: Latent Trajectory Embedding for Diffusion-Generated Image Detection
Deep Graph Learning for Industrial Carbon Emission Analysis and Policy Impact
DNN-Based Precoding in RIS-Aided mmWave MIMO Systems With Practical Phase Shift
SoMi-ToM: Evaluating Multi-Perspective Theory of Mind in Embodied Social Interactions
When Does Multimodality Lead to Better Time Series Forecasting?
FeDa4Fair: Client-Level Federated Datasets for Fairness Evaluation
Decoupled Classifier-Free Guidance for Counterfactual Diffusion Models
QGuard: Question-based Zero-shot Guard for Multi-modal LLM Safety
VITA: Zero-Shot Value Functions via Test-Time Adaptation of Vision-Language Models
A theoretical framework for self-supervised contrastive learning for continuous dependent data
Efficient Context Selection for Long-Context QA: No Tuning, No Iteration, Just Adaptive-$k$
Resisting Contextual Interference in RAG via Parametric-Knowledge Reinforcement
Static Word Embeddings for Sentence Semantic Representation
Negative-Guided Subject Fidelity Optimization for Zero-Shot Subject-Driven Generation
Multi Layered Autonomy and AI Ecologies in Robotic Art Installations
WorldGym: World Model as An Environment for Policy Evaluation
Personalized Subgraph Federated Learning with Differentiable Auxiliary Projections
ViewSpatial-Bench: Evaluating Multi-perspective Spatial Localization in Vision-Language Models
Finite Sample Analysis of Linear Temporal Difference Learning with Arbitrary Features
SelfReflect: Can LLMs Communicate Their Internal Answer Distribution?
Value-Guided Search for Efficient Chain-of-Thought Reasoning
LLM Agents for Interactive Exploration of Historical Cadastre Data: Framework and Application to Venice
Find the Fruit: Zero-Shot Sim2Real RL for Occlusion-Aware Plant Manipulation
AudioTrust: Benchmarking the Multifaceted Trustworthiness of Audio Large Language Models
Causal Interventions Reveal Shared Structure Across English Filler-Gap Constructions
DEBATE, TRAIN, EVOLVE: Self Evolution of Language Model Reasoning
Octic Vision Transformers: Quicker ViTs Through Equivariance
Silent Leaks: Implicit Knowledge Extraction Attack on RAG Systems through Benign Queries
ELEPHANT: Measuring and understanding social sycophancy in LLMs
Structured Agent Distillation for Large Language Model
ScSiameseClu: A Siamese Clustering Framework for Interpreting single-cell RNA Sequencing Data
DisCO: Reinforcing Large Reasoning Models with Discriminative Constrained Optimization
Modeling Saliency Dataset Bias
TensorRL-QAS: Reinforcement learning with tensor networks for improved quantum architecture search
Scalable LLM Math Reasoning Acceleration with Low-rank Distillation
Simple yet Effective Semi-supervised Knowledge Distillation from Vision-Language Models via Dual-Head Optimization
Stochastic Layer-wise Learning: Scalable and Efficient Alternative to Backpropagation
Fair Uncertainty Quantification for Depression Prediction
Adaptive Rectification Sampling for Test-Time Compute Scaling
Lobster: A GPU-Accelerated Framework for Neurosymbolic Programming
Enabling Rapid Shared Human-AI Mental Model Alignment via the After-Action Review
CODA: Repurposing Continuous VAEs for Discrete Tokenization
Value Profiles for Encoding Human Variation
FW-Merging: Scaling Model Merging with Frank-Wolfe Optimization
A Survey on SAR ship classification using Deep Learning
Revisiting semi-supervised learning in the era of foundation models
Rethinking Diffusion Model in High Dimension
FW-Merging: Scaling Model Merging with Frank-Wolfe Optimization
Created by Haebom
Authors
Hao Mark Chen, Shell Xu Hu, Wayne Luk, Timothy Hospedales, Hongxiang Fan
Overview
This paper presents Frank-Wolfe Merging (FW-Merging), a new approach that addresses the limitations of model merging, a data-efficient approach to multi-task learning (MTL). Existing merging methods struggle to scale when combining many models drawn from heterogeneous sources, so FW-Merging formulates model merging as a constrained optimization problem. Inspired by Frank-Wolfe optimization, it linearly approximates the objective function at each step and iteratively selects and merges the most relevant model. FW-Merging can be integrated with existing merging methods to improve their performance, applies to diverse model sources, and keeps memory overhead constant.
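The iterative procedure can be sketched as follows. This is a minimal illustration of the Frank-Wolfe merging loop described above, not the paper's implementation: the model pool, the proxy objective, and all names (`fw_merge`, `proxy_loss_grad`, `candidates`) are hypothetical, and each model is represented as a flattened parameter vector.

```python
# Hypothetical sketch of Frank-Wolfe-style model merging.
# Assumptions (not from the paper): models are flattened 1-D parameter
# vectors, and a proxy objective (e.g., loss on a small validation set)
# provides a gradient oracle at the current merged parameters.
import numpy as np

def fw_merge(candidates, proxy_loss_grad, init, n_steps=10):
    """Merge models by Frank-Wolfe over the convex hull of a candidate pool.

    candidates: list of 1-D parameter vectors (the model pool).
    proxy_loss_grad: callable returning the proxy objective's gradient
        at the current merged parameters.
    init: starting parameter vector (e.g., the pretrained base model).
    """
    theta = init.copy()
    for t in range(n_steps):
        g = proxy_loss_grad(theta)  # linearize the objective at theta
        # Linear minimization oracle: pick the candidate most aligned
        # with the descent direction, i.e., the most "relevant" model.
        scores = [float(g @ c) for c in candidates]
        best = candidates[int(np.argmin(scores))]
        gamma = 2.0 / (t + 2.0)  # standard Frank-Wolfe step size
        # Merge step: a convex combination, so theta stays in the hull.
        theta = (1.0 - gamma) * theta + gamma * best
    return theta

# Toy usage: three "models" as 2-D vectors, quadratic proxy objective.
pool = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
target = np.array([0.4, 0.6])
grad = lambda theta: 2.0 * (theta - target)  # gradient of ||theta - target||^2
merged = fw_merge(pool, grad, init=np.zeros(2))
```

Because each iteration stores only the current merged vector and reads one selected candidate at a time, memory overhead stays constant in the number of merged models, which matches the scalability claim above.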
Takeaways, Limitations
• Takeaways:
◦ Suits diverse model sources and remains effective even when model or task information is only partially known.
◦ Maintains stable performance when merging a large number of models, demonstrating strong scalability.
◦ Can be integrated with existing merging methods to further improve performance.
◦ Unlike data-informed merging methods, it maintains constant memory overhead.
◦ Outperforms state-of-the-art model-merging techniques.
• Limitations:
◦ The paper does not state any specific limitations.
View PDF