Daily Arxiv
A page that collects and summarizes artificial intelligence papers published around the world.
Summaries are generated with Google Gemini, and the page is run on a non-profit basis.
Copyright of the papers belongs to the authors and their institutions; please cite the source when sharing.
Photonic Fabric Platform for AI Accelerators
Achieving Robust Channel Estimation Neural Networks by Designed Training Data
Can Mental Imagery Improve the Thinking Capabilities of AI Systems?
Characterizing State Space Model (SSM) and SSM-Transformer Hybrid Language Model Performance with Long Context Length
PGT-I: Scaling Spatiotemporal GNNs with Memory-Efficient Distributed Training
Robust 3D-Masked Part-level Editing in 3D Gaussian Splatting with Regularized Score Distillation Sampling
A Lightweight and Robust Framework for Real-Time Colorectal Polyp Detection Using LOF-Based Preprocessing and YOLO-v11n
HMID-Net: An Exploration of Masked Image Modeling and Knowledge Distillation in Hyperbolic Space
Synchronizing Task Behavior: Aligning Multiple Tasks during Test-Time Training
Resolving Token-Space Gradient Conflicts: Token Space Manipulation for Transformer-Based Multi-Task Learning
Fast Bilateral Teleoperation and Imitation Learning Using Sensorless Force Control via Accurate Dynamics Model
VisualSpeaker: Visually-Guided 3D Avatar Lip Synthesis
Reviving Cultural Heritage: A Novel Approach for Comprehensive Historical Document Restoration
Interaction-Merged Motion Planning: Effectively Leveraging Diverse Motion Datasets for Robust Planning
Learning Software Bug Reports: A Systematic Literature Review
Rethinking Data Protection in the (Generative) Artificial Intelligence Era
Frequency-Aligned Knowledge Distillation for Lightweight Spatiotemporal Forecasting
TopoStreamer: Temporal Lane Segment Topology Reasoning in Autonomous Driving
"Before, I Asked My Mom, Now I Ask ChatGPT": Visual Privacy Management with Generative AI for Blind and Low-Vision People
QLPro: Automated Code Vulnerability Discovery via LLM and Static Code Analysis Integration
FedWSQ: Efficient Federated Learning with Weight Standardization and Distribution-Aware Non-Uniform Quantization
Plan for Speed: Dilated Scheduling for Masked Diffusion Language Models
Bridging the Digital Divide: Small Language Models as a Pathway for Physics and Photonics Education in Underdeveloped Regions
DaMO: A Data-Efficient Multimodal Orchestrator for Temporal Reasoning with Video LLMs
Dynamic Context Tuning for Retrieval-Augmented Generation: Enhancing Multi-Turn Planning and Tool Adaptation
Specification and Evaluation of Multi-Agent LLM Systems - Prototype and Cybersecurity Applications
PhysioWave: A Multi-Scale Wavelet-Transformer for Physiological Signal Representation
Draft-based Approximate Inference for LLMs
Label-semantics Aware Generative Approach for Domain-Agnostic Multilabel Classification
SemiOccam: A Robust Semi-Supervised Image Recognition Network Using Sparse Labels
Adversarial bandit optimization for approximately linear functions
Know Or Not: a library for evaluating out-of-knowledge base robustness
Leveraging Vision-Language Models for Visual Grounding and Analysis of Automotive UI
DualReal: Adaptive Joint Training for Lossless Identity-Motion Fusion in Video Customization
CoordField: Coordination Field for Agentic UAV Task Allocation In Low-altitude Urban Scenarios
Return Capping: Sample-Efficient CVaR Policy Gradient Optimisation
AnyTSR: Any-Scale Thermal Super-Resolution for UAV
Enhanced Pruning Strategy for Multi-Component Neural Architectures Using Component-Aware Graph Analysis
Executable Functional Abstractions: Inferring Generative Programs for Advanced Math Problems
Measuring Leakage in Concept-Based Methods: An Information Theoretic Approach
APIGen-MT: Agentic Pipeline for Multi-Turn Data Generation via Simulated Agent-Human Interplay
The Dual-Route Model of Induction
Detecting PTSD in Clinical Interviews: A Comparative Analysis of NLP Methods and Large Language Models
SWI: Speaking with Intent in Large Language Models
A Study of LLMs' Preferences for Libraries and Programming Languages
TruthLens: Explainable DeepFake Detection for Face Manipulated and Fully Synthetic Data
Sampling Decisions
Federated Continual Instruction Tuning
Fine-Tuning Diffusion Generative Models via Rich Preference Optimization
BriLLM: Brain-inspired Large Language Model
Studying Classifier(-Free) Guidance From a Classifier-Centric Perspective
RealGeneral: Unifying Visual Generation via Temporal In-Context Learning with Video Models
Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning
PLADIS: Pushing the Limits of Attention in Diffusion Models at Inference Time by Leveraging Sparsity
DynamicID: Zero-Shot Multi-ID Image Personalization with Flexible Facial Editability
Symbolic Mixture-of-Experts: Adaptive Skill-based Routing for Heterogeneous Reasoning
OMNISEC: LLM-Driven Provenance-based Intrusion Detection via Retrieval-Augmented Behavior Prompting
Too Much to Trust? Measuring the Security and Cognitive Impacts of Explainability in AI-Driven SOCs
Attend or Perish: Benchmarking Attention in Algorithmic Reasoning
Can Optical Denoising Clean Sonar Images? A Benchmark and Fusion Approach
Brain Foundation Models: A Survey on Advancements in Neural Signal Processing and Brain Discovery
Winning Big with Small Models: Knowledge Distillation vs. Self-Training for Reducing Hallucination in Product QA Agents
Detecting Benchmark Contamination Through Watermarking
MEMERAG: A Multilingual End-to-End Meta-Evaluation Benchmark for Retrieval Augmented Generation
Steering into New Embedding Spaces: Analyzing Cross-Lingual Alignment Induced by Model Interventions in Multilingual Language Models
Analyze the Neurons, not the Embeddings: Understanding When and Where LLM Representations Align with Humans
MKE-Coder: Multi-Axial Knowledge with Evidence Verification in ICD Coding for Chinese EMRs
An Overall Real-Time Mechanism for Classification and Quality Evaluation of Rice
Layerwise Recall and the Geometry of Interwoven Knowledge in LLMs
Learning in Strategic Queuing Systems with Small Buffers
BARNN: A Bayesian Autoregressive and Recurrent Neural Network
HEPPO-GAE: Hardware-Efficient Proximal Policy Optimization with Generalized Advantage Estimation
CGP-Tuning: Structure-Aware Soft Prompt Tuning for Code Vulnerability Detection
A recent evaluation on the performance of LLMs on radiation oncology physics using questions of randomly shuffled options
A Survey on Large Language Model-Based Social Agents in Game-Theoretic Scenarios
PEMF-VTO: Point-Enhanced Video Virtual Try-on via Mask-free Paradigm
Understanding the Design Decisions of Retrieval-Augmented Generation Systems
DOGR: Towards Versatile Visual Document Grounding and Referring
Ev2R: Evaluating Evidence Retrieval in Automated Fact-Checking
DualSwinUnet++: An Enhanced Swin-Unet Architecture With Dual Decoders For PTMC Segmentation
PerspectiveNet: Multi-View Perception for Dynamic Scene Understanding
AlphaDPO: Adaptive Reward Margin for Direct Preference Optimization
Continual Learning with Neuromorphic Computing: Foundations, Methods, and Emerging Applications
FlexiTex: Enhancing Texture Generation via Visual Guidance
ASMA: An Adaptive Safety Margin Algorithm for Vision-Language Drone Navigation via Scene-Aware Control Barrier Functions
The unknotting number, hard unknot diagrams, and reinforcement learning
Hierarchical Reinforcement Learning for Temporal Abstraction of Listwise Recommendation
Enhancing Natural Language Inference Performance with Knowledge Graph for COVID-19 Automated Fact-Checking in Indonesian Language
CVPT: Cross Visual Prompt Tuning
Proficient Graph Neural Network Design by Accumulating Knowledge on Large Language Models
Stimulating Imagination: Towards General-purpose "Something Something Placement"
Why Does New Knowledge Create Messy Ripple Effects in LLMs?
A Mathematical Framework and a Suite of Learning Techniques for Neural-Symbolic Systems
How to Leverage Predictive Uncertainty Estimates for Reducing Catastrophic Forgetting in Online Continual Learning
Towards the Next Frontier in Speech Representation Learning Using Disentanglement
Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles
Which Experiences Are Influential for RL Agents? Efficiently Estimating The Influence of Experiences
Oversmoothing Alleviation in Graph Neural Networks: A Survey and Unified View
OCK: Unsupervised Dynamic Video Prediction with Object-Centric Kinematics
Benchmarking Mobile Device Control Agents across Diverse Configurations
Measuring Leakage in Concept-Based Methods: An Information Theoretic Approach
Created by Haebom
Authors
Mikael Makonnen, Moritz Vandenhirtz, Sonia Laguna, Julia E Vogt
Overview
Concept Bottleneck Models (CBMs) aim to improve interpretability by structuring predictions around human-understandable concepts. However, unintended information leakage, in which predictive signals bypass the concept bottleneck, undermines their transparency. This paper presents an information-theoretic measure for quantifying leakage in CBMs, capturing how much additional, unintended information the concept embeddings encode beyond the designated concepts. Controlled synthetic experiments validate the measure, demonstrating its effectiveness at detecting leakage trends across a range of configurations. The results highlight that feature and concept dimensionality strongly influence leakage, and that the choice of classifier affects measurement stability (XGBoost proved the most stable estimator). Preliminary investigations further show that the measure behaves as expected when applied to soft joint CBMs, suggesting that leakage quantification remains reliable beyond fully synthetic settings. While this work rigorously evaluates the measure in controlled synthetic experiments, future work may extend its application to real-world datasets.
Takeaways, Limitations
•
Takeaways:
The paper proposes a novel information-theoretic measure for quantifying information leakage and validates it on synthetic datasets. It clarifies how feature and concept dimensionality and the choice of classifier affect leakage, and confirms the measure's applicability to soft joint CBMs.
•
Limitations:
The study is limited to controlled synthetic experiments; application to real-world datasets remains future work.
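The leakage idea described above can be illustrated with a small classifier-based sketch: compare how well a model predicts the label from the designated concepts alone versus from the concept embeddings. If the embeddings do better, they carry predictive information beyond the concepts, i.e. leakage. This is an illustrative reconstruction, not the paper's exact estimator; the synthetic data, the cross-entropy-gap proxy, and the use of scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost are all assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic setup: labels depend on 2 binary concepts plus a nuisance
# signal that the concept bottleneck is *not* supposed to carry.
n = 2000
concepts = rng.integers(0, 2, size=(n, 2)).astype(float)  # designated concepts
nuisance = rng.normal(size=(n, 1))                        # unintended signal
logits = 2.0 * concepts[:, 0] - 2.0 * concepts[:, 1] + 1.5 * nuisance[:, 0]
y = (logits + rng.normal(scale=0.5, size=n) > 0).astype(int)

# "Leaky" concept embeddings: they encode the concepts AND the nuisance.
embeddings = np.hstack([concepts, nuisance])

def label_cross_entropy(X, y):
    """Held-out cross-entropy (in nats) of a boosted classifier predicting y from X."""
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
    clf = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
    return log_loss(yte, clf.predict_proba(Xte))

# Leakage proxy: reduction in label cross-entropy when using the embeddings
# instead of the concepts alone. A positive gap means the embeddings encode
# predictive information beyond the designated concepts.
leakage_nats = label_cross_entropy(concepts, y) - label_cross_entropy(embeddings, y)
print(f"estimated leakage: {leakage_nats:.3f} nats")
```

In this construction the gap is clearly positive, since the nuisance signal strongly influences the label; replacing `embeddings` with `concepts` by definition drives the gap to zero.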
View PDF