Daily Arxiv
This page collects papers on artificial intelligence published around the world.
Summaries are generated using Google Gemini, and the page is operated on a non-profit basis.
Copyright for the papers belongs to the authors and their affiliated institutions; please cite the source when sharing.
SACL: Understanding and Combating Textual Bias in Code Retrieval with Semantic-Augmented Reranking and Localization
Towards Provable (In)Secure Model Weight Release Schemes
Semantic Scene Graph for Ultrasound Image Explanation and Scanning Guidance
IndieFake Dataset: A Benchmark Dataset for Audio Deepfake Detection
These Are Not All the Features You Are Looking For: A Fundamental Bottleneck in Supervised Pretraining
In-Context Learning Strategies Emerge Rationally
Fake it till You Make it: Reward Modeling as Discriminative Prediction
Semantic Preprocessing for LLM-based Malware Analysis
PCDVQ: Enhancing Vector Quantization for Large Language Models via Polar Coordinate Decoupling
TracLLM: A Generic Framework for Attributing Long Context LLMs
TaxaDiffusion: Progressively Trained Diffusion Model for Fine-Grained Species Generation
Composite Flow Matching for Reinforcement Learning with Shifted-Dynamics Data
Explainability of Large Language Models using SMILE: Statistical Model-agnostic Interpretability with Local Explanations
Thinkless: LLM Learns When to Think
A3: an Analytical Low-Rank Approximation Framework for Attention
Search and Refine During Think: Autonomous Retrieval-Augmented Reasoning of LLMs
JointDiT: Enhancing RGB-Depth Joint Modeling with Diffusion Transformers
Energy Matching: Unifying Flow Matching and Energy-Based Models for Generative Modeling
AI-Driven Sentiment Analytics: Unlocking Business Value in the E-Commerce Landscape
Towards Adaptive Memory-Based Optimization for Enhanced Retrieval-Augmented Generation
AirCache: Activating Inter-modal Relevancy KV Cache Compression for Efficient Large Vision-Language Model Inference
Will LLMs be Professional at Fund Investment? DeepFund: A Live Arena Perspective
Revealing higher-order neural representations of uncertainty with the Noise Estimation through Reinforcement-based Diffusion (NERD) model
Zero-TIG: Temporal Consistency-Aware Zero-Shot Illumination-Guided Low-light Video Enhancement
PP-DocBee: Improving Multimodal Document Understanding Through a Bag of Tricks
CREStE: Scalable Mapless Navigation with Internet Scale Priors and Counterfactual Guidance
Markets with Heterogeneous Agents: Dynamics and Survival of Bayesian vs. No-Regret Learners
Reward-Guided Speculative Decoding for Efficient LLM Reasoning
UP-VLA: A Unified Understanding and Prediction Model for Embodied Agent
DisCoPatch: Taming Adversarially-driven Batch Statistics for Improved Out-of-Distribution Detection
Materialist: Physically Based Editing Using Single-Image Inverse Rendering
Representation Learning of Lab Values via Masked AutoEncoders
Lagrangian Index Policy for Restless Bandits with Average Reward
SIDA: Social Media Image Deepfake Detection, Localization and Explanation with Large Multimodal Model
InfiniCube: Unbounded and Controllable Dynamic 3D Driving Scene Generation with World-Guided Video Models
Pretrained Reversible Generation as Unsupervised Visual Representation Learning
MvKeTR: Chest CT Report Generation with Multi-View Perception and Knowledge Enhancement
GASP: Efficient Black-Box Generation of Adversarial Suffixes for Jailbreaking LLMs
ToolScan: A Benchmark for Characterizing Errors in Tool-Use LLMs
Recall and Refine: A Simple but Effective Source-free Open-set Domain Adaptation Framework
InterFormer: Effective Heterogeneous Interaction Learning for Click-Through Rate Prediction
Prompting with Phonemes: Enhancing LLMs' Multilinguality for Non-Latin Script Languages
Advanced computer vision for extracting georeferenced vehicle trajectories from drone imagery
Rapid Gyroscope Calibration: A Deep Learning Approach
HERMES: temporal-coHERent long-forM understanding with Episodes and Semantics
A GREAT Architecture for Edge-Based Graph Problems Like TSP
ClimateIQA: A New Dataset and Benchmark to Advance Vision-Language Models in Meteorology Anomalies Analysis
MockLLM: A Multi-Agent Behavior Collaboration Framework for Online Job Seeking and Recruiting
Is my Data in your AI Model? Membership Inference Test with Application to Face Images
PuriDefense: Randomized Local Implicit Adversarial Purification for Defending Black-box Query-based Attacks
Continual Learning as Computationally Constrained Reinforcement Learning
Efficient Image Generation with Variadic Attention Heads
Smart Ride and Delivery Services with Electric Vehicles: Leveraging Bidirectional Charging for Profit Optimisation
From Memories to Maps: Mechanisms of In-Context Reinforcement Learning in Transformers
Graphs Meet AI Agents: Taxonomy, Progress, and Future Opportunities
Taming the Untamed: Graph-Based Knowledge Retrieval and Reasoning for MLLMs to Conquer the Unknown
Exploring Big Five Personality and AI Capability Effects in LLM-Simulated Negotiation Dialogues
Doppelganger Method: Breaking Role Consistency in LLM Agent via Prompt-based Transferable Adversarial Attack
Metis-RISE: RL Incentivizes and SFT Enhances Multimodal Reasoning Model Learning
Fast Monte Carlo Tree Diffusion: 100x Speedup via Parallel Sparse Planning
NFISiS: New Perspectives on Fuzzy Inference Systems for Renewable Energy Forecasting
The State of Large Language Models for African Languages: Progress and Challenges
Structuring the Unstructured: A Multi-Agent System for Extracting and Querying Financial KPIs and Guidance
Super Co-alignment for Sustainable Symbiotic Society
Improving Human-AI Coordination through Online Adversarial Training and Generative Models
WiS Platform: Enhancing Evaluation of LLM-Based Multi-Agent Systems Through Game-Based Analysis
Review learning: Real world validation of privacy preserving continual learning across medical institutions
Whole-Body Conditioned Egocentric Video Prediction
MTSBench: Benchmarking Multivariate Time Series Anomaly Detection and Model Selection at Scale
HalluSegBench: Counterfactual Visual Reasoning for Segmentation Hallucination Evaluation
WorldVLA: Towards Autoregressive Action World Model
"What's Up, Doc?": Analyzing How Users Seek Health Information in Large-Scale Conversational AI Datasets
Potemkin Understanding in Large Language Models
SkLEP: A Slovak General Language Understanding Benchmark
Process mining-driven modeling and simulation to enhance fault diagnosis in cyber-physical systems
TITAN: Query-Token based Domain Adaptive Adversarial Learning
SmoothSinger: A Conditional Diffusion Model for Singing Voice Synthesis with Multi-Resolution Architecture
Optimising 4th-Order Runge-Kutta Methods: A Dynamic Heuristic Approach for Efficiency and Low Storage
Domain Knowledge-Enhanced LLMs for Fraud and Concept Drift Detection
Scalable Bayesian Low-Rank Adaptation of Large Language Models via Stochastic Variational Subspace Inference
Leveraging LLM-Assisted Query Understanding for Live Retrieval-Augmented Generation
Temporal-Aware Graph Attention Network for Cryptocurrency Transaction Fraud Detection
Pay Attention to Small Weights
Real-time and personalized product recommendations for large e-commerce platforms
RQdia: Regularizing Q-Value Distributions With Image Augmentation
CA-I2P: Channel-Adaptive Registration Network with Global Optimal Selection
A Systematic Review of Human-AI Co-Creativity
Holistic Surgical Phase Recognition with Hierarchical Input Dependent State Space Models
On Uniform Weighted Deep Polynomial approximation
Exploring Adapter Design Tradeoffs for Low Resource Music Generation
Detecting Referring Expressions in Visually Grounded Dialogue with Autoregressive Language Models
Small Encoders Can Rival Large Decoders in Detecting Groundedness
Hyperspherical Variational Autoencoders Using Efficient Spherical Cauchy Distribution
Integrating Vehicle Acoustic Data for Enhanced Urban Traffic Management: A Study on Speed Classification in Suzhou
DiLoCoX: A Low-Communication Large-Scale Training Framework for Decentralized Cluster
Agent-RewardBench: Towards a Unified Benchmark for Reward Modeling across Perception, Planning, and Safety in Real-World Multimodal Agents
From On-chain to Macro: Assessing the Importance of Data Source Diversity in Cryptocurrency Market Forecasting
$T^3$: Multi-level Tree-based Automatic Program Repair with Large Language Models
BitMark for Infinity: Watermarking Bitwise Autoregressive Image Generative Models
Task-Aware KV Compression For Cost-Effective Long Video Understanding
Small Encoders Can Rival Large Decoders in Detecting Groundedness
Created by
Haebom
Authors
Istabrak Abbes, Gabriele Prato, Quentin Fournier, Fernando Rodriguez, Alaa Boukhary, Adam Elwood, Sarath Chandar
Overview
This paper focuses on improving the performance of large language models (LLMs) that rely on external context. When the provided context lacks the necessary information, LLMs struggle, resorting to ungrounded guesses or falling back on internal knowledge to answer questions. Generating responses strictly grounded in the context is therefore essential for consistency and reliability in practice. This work focuses on a mechanism that detects, before an expensive LLM response is generated, whether a given question is grounded in the provided context. Such a detection mechanism can substantially reduce inference time and resource consumption. Fine-tuning lightweight, task-specific encoder models such as RoBERTa and NomicBERT on curated datasets achieves accuracy comparable to state-of-the-art LLMs such as Llama3 8B and GPT4o, while cutting inference latency several-fold. The source code is publicly available on GitHub.
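As a rough illustration of this gating idea, the sketch below shows how a lightweight RoBERTa-style encoder could be wrapped as a groundedness classifier that screens question/context pairs before an expensive LLM call. This is a minimal sketch under assumptions: the checkpoint name, label convention, and threshold are placeholders for illustration, and the paper's actual training setup and curated datasets are not reproduced here.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder checkpoint; in practice this would be an encoder fine-tuned on
# a curated groundedness dataset (e.g., a RoBERTa- or NomicBERT-based model).
MODEL_NAME = "roberta-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def is_grounded(question: str, context: str, threshold: float = 0.5) -> bool:
    """Return True if the question looks answerable from the given context."""
    inputs = tokenizer(question, context, truncation=True, max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    # Assumed label convention: index 1 = "grounded / answerable".
    prob_grounded = torch.softmax(logits, dim=-1)[0, 1].item()
    return prob_grounded >= threshold

question = "Who wrote the 2023 report?"
context = "The 2023 annual report was prepared by the internal audit team."

# Inference-time gating: only invoke the costly LLM when the cheap encoder
# judges that the context actually supports an answer.
if is_grounded(question, context):
    print("Grounded: hand off to the LLM for answer generation.")
else:
    print("Not grounded: abstain or request more context instead of guessing.")
```

The design point this sketch captures is that the encoder pass is far cheaper than LLM generation, so running it as a front-end filter saves the full generation cost whenever the question cannot be answered from the context.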
Takeaways, Limitations
• Takeaways:
◦ Demonstrates the potential of lightweight models to effectively address the problem of ungrounded response generation in LLMs.
◦ Contributes to reducing LLM inference time and resource consumption.
◦ Shows that lightweight models such as RoBERTa and NomicBERT perform on par with state-of-the-art LLMs.
• Limitations:
◦ Potentially heavy dependence on curated datasets; dataset quality and size may strongly affect model performance.
◦ Because the model is specialized for a particular task, further research is needed on generalization to other tasks.
◦ Further validation of the proposed method's real-world performance and scalability is needed.
View PDF