Daily Arxiv
A page that collects and summarizes artificial intelligence papers published around the world.
Summaries are generated with Google Gemini, and the page is run on a non-profit basis.
Copyright of each paper belongs to its authors and affiliated institutions; please cite the source when sharing.
Efficient Federated Learning with Heterogeneous Data and Adaptive Dropout
Energy Efficiency in AI for 5G and Beyond: A DeepRx Case Study
A PBN-RL-XAI Framework for Discovering a "Hit-and-Run" Therapeutic Strategy in Melanoma
(Almost) Free Modality Stitching of Foundation Models
Prompt4Trust: A Reinforcement Learning Prompt Augmentation Framework for Clinically-Aligned Confidence Calibration in Multimodal Large Language Models
SEALGuard: Safeguarding the Multilingual Conversations in Southeast Asian Languages for LLM Software Systems
Dually Hierarchical Drift Adaptation for Online Configuration Performance Learning
Tree-Structured Parzen Estimator Can Solve Black-Box Combinatorial Optimization More Efficiently
EXPO: Stable Reinforcement Learning with Expressive Policies
Reinforcement Learning with Action Chunking
On the Effect of Instruction Tuning Loss on Generalization
Hallucination Stations: On Some Basic Limitations of Transformer-Based Language Models
Text to model via SysML: Automated generation of dynamical system computational models from unstructured natural language text via enhanced System Modeling Language diagrams
Feature-Based vs. GAN-Based Learning from Demonstrations: When and Why
DRAGON: Dynamic RAG Benchmark On News
Solar Flare Prediction Using Long Short-term Memory (LSTM) and Decomposition-LSTM with Sliding Window Pattern Recognition
Conversation Forests: The Key to Fine Tuning Large Language Models for Multi-Turn Medical Conversations is Branching
RAG-R1: Incentivize the Search and Reasoning Capabilities of LLMs through Multi-query Parallelism
Following the Clues: Experiments on Person Re-ID using Cross-Modal Intelligence
Stylometry recognizes human and LLM-generated texts in short samples
QLPro: Automated Code Vulnerability Discovery via LLM and Static Code Analysis Integration
Evaluating Multimodal Large Language Models on Educational Textbook Question Answering
FeDa4Fair: Client-Level Federated Datasets for Fairness Evaluation
Alleviating User-Sensitive bias with Fair Generative Sequential Recommendation Model
MATE: LLM-Powered Multi-Agent Translation Environment for Accessibility Applications
DeInfoReg: A Decoupled Learning Framework for Better Training Throughput
FLAME: Towards Federated Fine-Tuning Large Language Models Through Adaptive SMoE
ImpliRet: Benchmarking the Implicit Fact Retrieval Challenge
The Price of Freedom: Exploring Expressivity and Runtime Tradeoffs in Equivariant Tensor Products
The Limits of Tractable Marginalization
A quantum semantic framework for natural language processing
ProtocolLLM: RTL Benchmark for SystemVerilog Generation of Communication Protocols
Deepfake Technology Unveiled: The Commoditization of AI and Its Impact on Digital Trust
Training Dynamics Underlying Language Model Scaling Laws: Loss Deceleration and Zero-Sum Learning
Critique-GRPO: Advancing LLM Reasoning with Natural Language and Numerical Feedback
Matrix Is All You Need
Temporal Chunking Enhances Recognition of Implicit Sequential Patterns
Seven Security Challenges That Must be Solved in Cross-domain Multi-agent LLM Systems
PAN-Crafter: Learning Modality-Consistent Alignment for PAN-Sharpening
FlowAlign: Trajectory-Regularized, Inversion-Free Flow-based Image Editing
Traveling Across Languages: Benchmarking Cross-Lingual Consistency in Multimodal LLMs
FalseReject: A Resource for Improving Contextual Safety and Mitigating Over-Refusals in LLMs via Structured Reasoning
Multimodal Sentiment Analysis on CMU-MOSEI Dataset using Transformer-based Models
Nexus-Gen: Unified Image Understanding, Generation, and Editing via Prefilled Autoregression in Shared Embedding Space
Leveraging Large Language Models for Multi-Class and Multi-Label Detection of Drug Use and Overdose Symptoms on Social Media
Rethinking the Foundations for Continual Reinforcement Learning
Compositional Flows for 3D Molecule and Synthesis Pathway Co-design
Rethinking RoPE: A Mathematical Blueprint for N-dimensional Positional Embedding
Speculative Automated Refactoring of Imperative Deep Learning Programs to Graph Execution
Test-time Adaptation for Foundation Medical Segmentation Model without Parametric Updates
Style over Substance: Distilled Language Models Reason Via Stylistic Replication
AnnoPage Dataset: Dataset of Non-Textual Elements in Documents with Fine-Grained Categorization
Multi-View Node Pruning for Accurate Graph Representation
Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language-Action Models
Voting or Consensus? Decision-Making in Multi-Agent Debate
Assistance or Disruption? Exploring and Evaluating the Design and Trade-offs of Proactive AI Programming Support
A Generative Approach to LLM Harmfulness Detection with Special Red Flag Tokens
Score-of-Mixture Training: Training One-Step Generative Models Made Simple via Score Estimation of Mixture Distributions
Model Tampering Attacks Enable More Rigorous Evaluations of LLM Capabilities
Synthetic Datasets for Machine Learning on Spatio-Temporal Graphs using PDEs
Comply: Learning Sentences with Complex Weights inspired by Fruit Fly Olfaction
Inverse Reinforcement Learning with Switching Rewards and History Dependency for Characterizing Animal Behaviors
Few-Shot Radar Signal Recognition through Self-Supervised Learning and Radio Frequency Domain Adaptation
Transfer Learning Analysis of Variational Quantum Circuits
Plancraft: an evaluation dataset for planning with LLM agents
Fully Data-driven but Interpretable Human Behavioural Modelling with Differentiable Discrete Choice Model
A Review of Bayesian Uncertainty Quantification in Deep Probabilistic Image Segmentation
Is Training Data Quality or Quantity More Impactful to Small Language Model Performance?
Searching Latent Program Spaces
The Pragmatic Frames of Spurious Correlations in Machine Learning: Interpreting How and Why They Matter
ComFairGNN: Community Fair Graph Neural Network
DroidSpeak: KV Cache Sharing for Cross-LLM Communication and Multi-LLM Serving
Online Intrinsic Rewards for Decision Making Agents from Large Language Model Feedback
Large Language Models Engineer Too Many Simple Features For Tabular Data
Overcoming Slow Decision Frequencies in Continuous Control: Model-Based Sequence Reinforcement Learning for Model-Free Control
IdeaSynth: Iterative Research Idea Development Through Evolving and Composing Idea Facets with Literature-Grounded Feedback
SECURE: Semantics-aware Embodied Conversation under Unawareness for Lifelong Robot Learning
Advancing Depth Anything Model for Unsupervised Monocular Depth Estimation in Endoscopy
SA-GDA: Spectral Augmentation for Graph Domain Adaptation
The GPT Surprise: Offering Large Language Model Chat in a Massive Coding Class Reduced Engagement but Increased Adopters Exam Performances
State-Constrained Offline Reinforcement Learning
SimAD: A Simple Dissimilarity-based Approach for Time Series Anomaly Detection
Unified ODE Analysis of Smooth Q-Learning Algorithms
FairTargetSim: An Interactive Simulator for Understanding and Explaining the Fairness Effects of Target Variable Definition
Fine-grained Stateful Knowledge Exploration: Effective and Efficient Graph Retrieval with Large Language Models
Learning Safe Numeric Planning Action Models
Augmenting End-to-End Steering Angle Prediction with CAN Bus Data
EASTER: Embedding Aggregation-based Heterogeneous Models Training in Vertical Federated Learning
GRAPES: Learning to Sample Graphs for Scalable Graph Neural Networks
Acquiring and Adapting Priors for Novel Tasks via Neural Meta-Architectures
VerifyBench: A Systematic Benchmark for Evaluating Reasoning Verifiers Across Domains
Is Human-Written Data Enough? The Challenge of Teaching Reasoning to LLMs Without RL or Distillation
Working with AI: Measuring the Occupational Implications of Generative AI
Establishing Best Practices for Building Rigorous Agentic Benchmarks
An Agentic Framework for Autonomous Metamaterial Modeling and Inverse Design
Seeking to Collide: Online Safety-Critical Scenario Generation for Autonomous Driving with Retrieval Augmented Large Language Models
BOOST: Bootstrapping Strategy-Driven Reasoning Programs for Program-Guided Fact-Checking
The Odyssey of the Fittest: Can Agents Survive and Still Be Good?
Agentic Reasoning: A Streamlined Framework for Enhancing LLM Reasoning with Agentic Tools
ZebraLogic: On the Scaling Limits of LLMs for Logical Reasoning
Towards transparent and data-driven fault detection in manufacturing: A case study on univariate, discrete time series
Created by Haebom
Authors
Bernd Hofmann, Patrick Bruendl, Huong Giang Nguyen, Joerg Franke
Overview
This paper presents a new methodology for ensuring consistent product quality in manufacturing, particularly in safety-critical domains. To overcome the limitations of conventional, manually configured quality-control schemes, it adopts a data-driven machine learning approach while keeping the model explainable. A supervised learning model for multi-class defect classification is combined with Shapley Additive Explanations (SHAP), and the results are presented to operators through domain-specific visualization techniques. The paper also proposes a methodology for assessing the explanations and visualizations via quantitative perturbation analysis and expert evaluation. Applied to a univariate, discrete time-series dataset from the safety-critical crimping process, the approach achieves a defect-detection accuracy of 95.9%, and quantitative selectivity analysis together with qualitative expert assessment confirms that the explanations are appropriate and interpretable. This human-centered approach improves the trustworthiness and interpretability of data-driven defect detection and contributes to the design of applied systems for industrial quality control.
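The overview boils down to a supervised multi-class classifier over univariate, discrete time series whose predictions are explained with SHAP. The sketch below illustrates that general recipe under stated assumptions; it is not the authors' implementation, and the synthetic force-curve data, the RandomForestClassifier, and all hyperparameters are placeholders.

```python
# Minimal sketch of the general recipe described above, not the paper's code:
# a supervised multi-class defect classifier on univariate, discrete time
# series (e.g. crimp-force curves) whose predictions are explained with SHAP.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for force curves: 600 samples, 128 time steps,
# 3 classes (OK plus two hypothetical defect types).
n_samples, n_steps, n_classes = 600, 128, 3
y = rng.integers(0, n_classes, size=n_samples)
base = np.sin(np.linspace(0, np.pi, n_steps))            # nominal curve shape
X = base + 0.05 * rng.standard_normal((n_samples, n_steps))
X[y == 1, 40:60] *= 0.6                                   # simulated defect pattern 1
X[y == 2, 90:110] += 0.3                                  # simulated defect pattern 2

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# Each time step is treated as one input feature of the classifier.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# SHAP attributions per class and per time step; plotting them along the
# time axis gives the kind of operator-facing, domain-specific view the
# paper describes.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)   # per-class attributions for every time step
```

Plotting the attributions for a flagged sample over the time axis, next to the raw curve, is one simple way to realize the operator-oriented visualization the summary mentions.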
Takeaways, Limitations

• Takeaways:
  ◦ Achieves high defect-detection accuracy (95.9%) with a data-driven machine learning model.
  ◦ SHAP-based explanations and domain-specific visualizations improve model explainability, increasing acceptance on the industrial shop floor.
  ◦ Model explanations and visualizations are validated with both quantitative and qualitative evaluation methodologies (see the sketch after this list).
  ◦ Successfully applied to the safety-critical crimping process, demonstrating applicability to real industrial settings.
  ◦ The human-centered approach increases trust in data-driven defect detection.
• Limitations:
  ◦ The analysis covers univariate, discrete time series; generalization to multivariate or continuous data requires further research.
  ◦ As a single case study on the crimping process, applicability to other manufacturing processes still needs to be verified.
  ◦ Expert evaluations may involve subjectivity.
  ◦ The explanatory power of SHAP may be limited by the characteristics of the dataset.
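To make the validation point concrete, here is a hedged sketch of a perturbation-based selectivity check in the spirit of the quantitative analysis the summary describes: corrupt the time steps SHAP ranks as most important and verify that the model's confidence drops more than when randomly chosen steps are corrupted. It reuses clf, X_test, and shap_values from the previous sketch; the perturbation scheme and the value of k are assumptions, not the paper's exact protocol.

```python
# Hedged sketch of a perturbation-based selectivity check for the SHAP
# explanations from the previous snippet (clf, X_test, shap_values reused).
import numpy as np

rng = np.random.default_rng(1)
n, n_steps = X_test.shape
k = 10  # number of time steps to corrupt per sample (assumed)

# Per-sample importance per time step, summed over classes. Handles both
# shap output layouts: (classes, samples, steps) or (samples, steps, classes).
sv = np.asarray(shap_values)
importance = np.abs(sv).sum(axis=0 if sv.shape[-1] == n_steps else -1)

def mean_confidence_drop(step_indices):
    """Average drop in predicted-class probability after replacing the
    selected time steps of each sample with Gaussian noise."""
    proba = clf.predict_proba(X_test)
    pred = proba.argmax(axis=1)
    X_pert = X_test.copy()
    rows = np.arange(n)[:, None]
    X_pert[rows, step_indices] = rng.standard_normal(step_indices.shape)
    proba_pert = clf.predict_proba(X_pert)
    idx = np.arange(n)
    return float(np.mean(proba[idx, pred] - proba_pert[idx, pred]))

top_k = np.argsort(-importance, axis=1)[:, :k]                    # SHAP-ranked steps
rand_k = np.stack([rng.choice(n_steps, k, replace=False) for _ in range(n)])

# A selective explanation should yield a clearly larger drop for the
# SHAP-ranked steps than for the random baseline.
print("drop (SHAP top-k):", mean_confidence_drop(top_k))
print("drop (random k):  ", mean_confidence_drop(rand_k))
```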
View PDF