Daily Arxiv
This page collects artificial intelligence papers published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for the papers belongs to the authors and their institutions; please cite the source when sharing.
Structure Transfer: an Inference-Based Calculus for the Transformation of Representations
Ensemble of Pathology Foundation Models for MIDOG 2025 Track 2: Atypical Mitosis Classification
AudioCodecBench: A Comprehensive Benchmark for Audio Codec Evaluation
Understanding Space Is Rocket Science - Only Top Reasoning Models Can Solve Spatial Understanding Tasks
DaMoC: Efficiently Selecting the Optimal Large Language Model for Fine-tuning Domain Tasks Based on Data and Model Compression
Modular Techniques for Synthetic Long-Context Data Generation in Language Model Training and Evaluation
EZhouNet: A framework based on graph neural network and anchor interval for the respiratory sound event detection
AImoclips: A Benchmark for Evaluating Emotion Conveyance in Text-to-Music Generation
TimeCopilot
First Order Model-Based RL through Decoupled Backpropagation
Pilot Study on Generative AI and Critical Thinking in Higher Education Classrooms
Beacon: Post-Training Quantization with Integrated Grid Selection
Is Artificial Intelligence Reshaping the Landscape of the International Academic Community of Geosciences?
Vectorized Attention with Learnable Encoding for Quantum Transformer
Transplant Then Regenerate: A New Paradigm for Text Data Augmentation
Depth-Breadth Synergy in RLVR: Unlocking LLM Reasoning Gains with Adaptive Exploration
MultiGen: Child-Friendly Multilingual Speech Generator with LLMs
StreetViewAI: Making Street View Accessible Using Context-Aware Multimodal AI
Street-Level AI: Are Large Language Models Ready for Real-World Judgments?
The KG-ER Conceptual Schema Language
LOTS of Fashion! Multi-Conditioning for Image Generation via Sketch-Text Pairing
Conditional Video Generation for High-Efficiency Video Compression
TriCLIP-3D: A Unified Parameter-Efficient Framework for Tri-Modal 3D Visual Grounding based on CLIP
Demographic-aware fine-grained classification of pediatric wrist fractures
An Analysis of Action-Value Temporal-Difference Methods That Learn State Values
Stochastic Parameter Decomposition
Auto-Regressive vs Flow-Matching: a Comparative Study of Modeling Paradigms for Text-to-Music Generation
MiniCPM4: Ultra-Efficient LLMs on End Devices
Evaluating the Efficacy of LLM-Based Reasoning for Multiobjective HPC Job Scheduling
How Can I Publish My LLM Benchmark Without Giving the True Answers Away?
Optimization of Module Transferability in Single Image Super-Resolution: Universality Assessment and Cycle Residual Blocks
Transferable Mask Transformer: Cross-domain Semantic Segmentation with Region-adaptive Transferability Estimation
RBT4DNN: Requirements-based Testing of Neural Networks
Robust Offline Imitation Learning Through State-level Trajectory Stitching
Beyond holography: the entropic quantum gravity foundations of image processing
KNighter: Transforming Static Analysis with LLM-Synthesized Checkers
FRIDA to the Rescue! Analyzing Synthetic Data Effectiveness in Object-Based Common Sense Reasoning for Disaster Response
CoDiff: Conditional Diffusion Model for Collaborative 3D Object Detection
Rapid Word Learning Through Meta In-Context Learning
Image Embedding Sampling Method for Diverse Captioning
Is an Ultra Large Natural Image-Based Foundation Model Superior to a Retina-Specific Model for Detecting Ocular and Systemic Diseases?
Extended Histogram-based Outlier Score (EHBOS)
A Survey of Graph Retrieval-Augmented Generation for Customized Large Language Models
Breaking the Context Bottleneck on Long Time Series Forecasting
Defending LVLMs Against Vision Attacks through Partial-Perception Supervision
ACING: Actor-Critic for Instruction Learning in Black-Box LLMs
Kolb-Based Experiential Learning for Generalist Agents with Human-Level Kaggle Data Science Performance
Quantifying Calibration Error in Neural Networks Through Evidence-Based Theory
Robust training of implicit generative models for multivariate and heavy-tailed distributions with an invariant statistical loss
Learning from 10 Demos: Generalisable and Sample-Efficient Policy Learning with Oriented Affordance Frames
AutoPETIII: The Tracer Frontier. What Frontier?
Long Input Sequence Network for Long Time Series Forecasting
FFHFlow: Diverse and Uncertainty-Aware Dexterous Grasp Generation via Flow Variational Inference
Unisolver: PDE-Conditional Transformers Towards Universal Neural PDE Solvers
MTP: A Meaning-Typed Language Abstraction for AI-Integrated Programming
Diffusion on language model encodings for protein sequence generation
Style Transfer to Calvin and Hobbes comics using Stable Diffusion
Autonomation, Not Automation: Activities and Needs of European Fact-checkers as a Basis for Designing Human-Centered AI Systems
Plan Verification for LLM-Based Embodied Task Completion Agents
EigenBench: A Comparative Behavioral Measure of Value Alignment
Oyster-I: Beyond Refusal - Constructive Safety Alignment for Responsible Language Models
Extending FKG.in: Towards a Food Claim Traceability Network
DeepVIS: Bridging Natural Language and Data Visualization Through Step-wise Reasoning
Theory of Mind Using Active Inference: A Framework for Multi-Agent Cooperation
CP-Bench: Evaluating Large Language Models for Constraint Modelling
Axiomatics of Restricted Choices by Linear Orders of Sets with Minimum as Fallback
DMN-Guided Prompting: A Framework for Controlling LLM Behavior
Computational Basis of LLM's Decision Making in Social Simulation
Science Across Languages: Assessing LLM Multilingual Translation of Scientific Papers
Enhancing FKG.in: automating Indian food composition analysis
WASP: A Weight-Space Approach to Detecting Learned Spuriousness
Transferable Belief Model on Quantum Circuits
PIN: A Knowledge-Intensive Dataset for Paired and Interleaved Multimodal Documents
(Ir)rationality in AI: State of the Art, Research Challenges and Open Questions
Intelligence Primer
ChronoGraph: A Real-World Graph-Based Multivariate Time Series Dataset
Delta Activations: A Representation for Finetuned Large Language Models
DEXOP: A Device for Robotic Transfer of Dexterous Human Manipulation
Towards a Unified View of Large Language Model Post-Training
No Thoughts Just AI: Biased LLM Recommendations Limit Human Agency in Resume Screening
IPA: An Information-Preserving Input Projection Framework for Efficient Foundation Model Adaptation
SSGaussian: Semantic-Aware and Structure-Preserving 3D Style Transfer
Parking Availability Prediction via Fusing Multi-Source Data with A Self-Supervised Learning Enhanced Spatio-Temporal Inverted Transformer
PARCO: Phoneme-Augmented Robust Contextual ASR via Contrastive Entity Disambiguation
AUDETER: A Large-scale Dataset for Deepfake Audio Detection in Open Worlds
From Editor to Dense Geometry Estimator
Decoupled Entity Representation Learning for Pinterest Ads Ranking
Facts Fade Fast: Evaluating Memorization of Outdated Medical Knowledge in Large Language Models
HumAIne-Chatbot: Real-Time Personalized Conversational AI via Reinforcement Learning
Reinforcement Learning for Robust Ageing-Aware Control of Li-ion Battery Systems with Data-Driven Formal Verification
An Empirical Study of Vulnerabilities in Python Packages and Their Detection
How many patients could we save with LLM priors?
Learning Active Perception via Self-Evolving Preference Optimization for GUI Grounding
MAGneT: Coordinated Multi-Agent Generation of Synthetic Multi-Turn Mental Health Counseling Sessions
VisioFirm: Cross-Platform AI-assisted Annotation Tool for Computer Vision
Crossing the Species Divide: Transfer Learning from Speech to Animal Sounds
YOLO Ensemble for UAV-based Multispectral Defect Detection in Wind Turbine Components
Attention as an Adaptive Filter
TAGAL: Tabular Data Generation using Agentic LLM Methods
Enhancing Technical Documents Retrieval for RAG
SampleAttention: Near-Lossless Acceleration of Long Context LLM Inference with Adaptive Structured Sparse Attention
Created by
Haebom
Authors
Qianchao Zhu, Jiangfei Duan, Chang Chen, Siran Liu, Guanyu Feng, Xin Lv, Xiao Chuanfu, Dahua Lin, Chao Yang
Overview
This paper addresses the long Time-to-First-Token (TTFT) latency caused by the quadratic complexity of vanilla attention in large language models (LLMs) that support very long context windows. Existing approaches require additional pretraining or fine-tuning and often sacrifice model accuracy; this work instead presents a near-lossless sparse attention grounded in both theoretical and empirical analysis. The key insight is that head-specific sparse patterns must be captured dynamically at runtime at low cost, and to this end the authors propose SampleAttention, an adaptive, structured, near-lossless sparse attention. Exploiting the pronounced sparse patterns they observe, SampleAttention attends to a fixed ratio of adjacent tokens to capture local window patterns, and uses a two-stage, query-guided key-value filtering scheme that adaptively selects a minimal key-value set at low cost to capture column stripe patterns. Comprehensive evaluations show that SampleAttention can replace vanilla attention in off-the-shelf LLMs with almost no loss of accuracy, reducing TTFT by up to 2.42x compared with FlashAttention.
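The general idea — a causal local window combined with a small set of "stripe" key columns chosen by probing a sample of queries — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function name, the ratios, and the single-head layout are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax; rows masked to -inf get zero weight.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def sample_attention(q, k, v, local_ratio=0.1, sample_ratio=0.05, keep_ratio=0.1):
    """Illustrative sparse attention: causal local window + query-sampled column stripes.

    q, k, v: (seq_len, d) arrays for a single head.
    """
    n, d = q.shape
    scale = 1.0 / np.sqrt(d)

    # Stage 1: score a small random sample of queries against all keys to
    # estimate which key columns carry the most attention mass.
    m = max(1, int(n * sample_ratio))
    rng = np.random.default_rng(0)
    sample_idx = rng.choice(n, size=m, replace=False)
    probe = softmax(q[sample_idx] @ k.T * scale, axis=-1)   # (m, n)
    col_mass = probe.sum(axis=0)                            # (n,)
    n_keep = max(1, int(n * keep_ratio))
    stripe_cols = np.argsort(col_mass)[-n_keep:]            # heaviest columns

    # Stage 2: build a sparse mask = local window ∪ selected stripes, causal.
    w = max(1, int(n * local_ratio))
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        mask[i, max(0, i - w + 1): i + 1] = True            # local window
    mask[:, stripe_cols] = True                             # column stripes
    mask &= np.tril(np.ones((n, n), dtype=bool))            # causality

    scores = q @ k.T * scale
    scores[~mask] = -np.inf
    return softmax(scores, axis=-1) @ v
```

Because each row always keeps its diagonal entry, every query attends to at least one key, so the masked softmax stays well defined even at aggressive sparsity ratios.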
Takeaways, Limitations
•
Takeaways:
◦
Presents a new sparse attention technique that effectively resolves the TTFT latency problem for LLMs with long context windows.
◦
Applicable to existing LLMs without additional pretraining or fine-tuning.
◦
Substantially reduces TTFT compared with FlashAttention, with almost no loss of accuracy.
◦
Presents an efficient method for dynamically capturing head-specific sparse patterns at runtime.
•
Limitations:
◦
Further study is needed on how well SampleAttention generalizes across different LLM architectures and context window sizes.
◦
A more comprehensive comparison with other advanced sparse attention techniques is needed.
◦
Performance on extremely long context windows is not evaluated.
View PDF