Daily Arxiv
This page collects artificial intelligence papers published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please cite the source when sharing.
Photonic Fabric Platform for AI Accelerators
Achieving Robust Channel Estimation Neural Networks by Designed Training Data
Can Mental Imagery Improve the Thinking Capabilities of AI Systems?
Characterizing State Space Model (SSM) and SSM-Transformer Hybrid Language Model Performance with Long Context Length
PGT-I: Scaling Spatiotemporal GNNs with Memory-Efficient Distributed Training
Robust 3D-Masked Part-level Editing in 3D Gaussian Splatting with Regularized Score Distillation Sampling
A Lightweight and Robust Framework for Real-Time Colorectal Polyp Detection Using LOF-Based Preprocessing and YOLO-v11n
HMID-Net: An Exploration of Masked Image Modeling and Knowledge Distillation in Hyperbolic Space
Synchronizing Task Behavior: Aligning Multiple Tasks during Test-Time Training
Resolving Token-Space Gradient Conflicts: Token Space Manipulation for Transformer-Based Multi-Task Learning
Fast Bilateral Teleoperation and Imitation Learning Using Sensorless Force Control via Accurate Dynamics Model
VisualSpeaker: Visually-Guided 3D Avatar Lip Synthesis
Reviving Cultural Heritage: A Novel Approach for Comprehensive Historical Document Restoration
Interaction-Merged Motion Planning: Effectively Leveraging Diverse Motion Datasets for Robust Planning
Learning Software Bug Reports: A Systematic Literature Review
Rethinking Data Protection in the (Generative) Artificial Intelligence Era
Frequency-Aligned Knowledge Distillation for Lightweight Spatiotemporal Forecasting
TopoStreamer: Temporal Lane Segment Topology Reasoning in Autonomous Driving
"Before, I Asked My Mom, Now I Ask ChatGPT": Visual Privacy Management with Generative AI for Blind and Low-Vision People
QLPro: Automated Code Vulnerability Discovery via LLM and Static Code Analysis Integration
FedWSQ: Efficient Federated Learning with Weight Standardization and Distribution-Aware Non-Uniform Quantization
Plan for Speed: Dilated Scheduling for Masked Diffusion Language Models
Bridging the Digital Divide: Small Language Models as a Pathway for Physics and Photonics Education in Underdeveloped Regions
DaMO: A Data-Efficient Multimodal Orchestrator for Temporal Reasoning with Video LLMs
Dynamic Context Tuning for Retrieval-Augmented Generation: Enhancing Multi-Turn Planning and Tool Adaptation
Specification and Evaluation of Multi-Agent LLM Systems - Prototype and Cybersecurity Applications
PhysioWave: A Multi-Scale Wavelet-Transformer for Physiological Signal Representation
Draft-based Approximate Inference for LLMs
Label-semantics Aware Generative Approach for Domain-Agnostic Multilabel Classification
SemiOccam: A Robust Semi-Supervised Image Recognition Network Using Sparse Labels
Adversarial bandit optimization for approximately linear functions
Know Or Not: a library for evaluating out-of-knowledge base robustness
Leveraging Vision-Language Models for Visual Grounding and Analysis of Automotive UI
DualReal: Adaptive Joint Training for Lossless Identity-Motion Fusion in Video Customization
CoordField: Coordination Field for Agentic UAV Task Allocation In Low-altitude Urban Scenarios
Return Capping: Sample-Efficient CVaR Policy Gradient Optimisation
AnyTSR: Any-Scale Thermal Super-Resolution for UAV
Enhanced Pruning Strategy for Multi-Component Neural Architectures Using Component-Aware Graph Analysis
Executable Functional Abstractions: Inferring Generative Programs for Advanced Math Problems
Measuring Leakage in Concept-Based Methods: An Information Theoretic Approach
APIGen-MT: Agentic Pipeline for Multi-Turn Data Generation via Simulated Agent-Human Interplay
The Dual-Route Model of Induction
Detecting PTSD in Clinical Interviews: A Comparative Analysis of NLP Methods and Large Language Models
SWI: Speaking with Intent in Large Language Models
A Study of LLMs' Preferences for Libraries and Programming Languages
TruthLens: Explainable DeepFake Detection for Face Manipulated and Fully Synthetic Data
Sampling Decisions
Federated Continual Instruction Tuning
Fine-Tuning Diffusion Generative Models via Rich Preference Optimization
BriLLM: Brain-inspired Large Language Model
Studying Classifier(-Free) Guidance From a Classifier-Centric Perspective
RealGeneral: Unifying Visual Generation via Temporal In-Context Learning with Video Models
Search-R1: Training LLMs to Reason and Leverage Search Engines with Reinforcement Learning
PLADIS: Pushing the Limits of Attention in Diffusion Models at Inference Time by Leveraging Sparsity
DynamicID: Zero-Shot Multi-ID Image Personalization with Flexible Facial Editability
Symbolic Mixture-of-Experts: Adaptive Skill-based Routing for Heterogeneous Reasoning
OMNISEC: LLM-Driven Provenance-based Intrusion Detection via Retrieval-Augmented Behavior Prompting
Too Much to Trust? Measuring the Security and Cognitive Impacts of Explainability in AI-Driven SOCs
Attend or Perish: Benchmarking Attention in Algorithmic Reasoning
Can Optical Denoising Clean Sonar Images? A Benchmark and Fusion Approach
Brain Foundation Models: A Survey on Advancements in Neural Signal Processing and Brain Discovery
Winning Big with Small Models: Knowledge Distillation vs. Self-Training for Reducing Hallucination in Product QA Agents
Detecting Benchmark Contamination Through Watermarking
MEMERAG: A Multilingual End-to-End Meta-Evaluation Benchmark for Retrieval Augmented Generation
Steering into New Embedding Spaces: Analyzing Cross-Lingual Alignment Induced by Model Interventions in Multilingual Language Models
Analyze the Neurons, not the Embeddings: Understanding When and Where LLM Representations Align with Humans
MKE-Coder: Multi-Axial Knowledge with Evidence Verification in ICD Coding for Chinese EMRs
An Overall Real-Time Mechanism for Classification and Quality Evaluation of Rice
Layerwise Recall and the Geometry of Interwoven Knowledge in LLMs
Learning in Strategic Queuing Systems with Small Buffers
BARNN: A Bayesian Autoregressive and Recurrent Neural Network
HEPPO-GAE: Hardware-Efficient Proximal Policy Optimization with Generalized Advantage Estimation
CGP-Tuning: Structure-Aware Soft Prompt Tuning for Code Vulnerability Detection
A recent evaluation on the performance of LLMs on radiation oncology physics using questions of randomly shuffled options
A Survey on Large Language Model-Based Social Agents in Game-Theoretic Scenarios
PEMF-VTO: Point-Enhanced Video Virtual Try-on via Mask-free Paradigm
Understanding the Design Decisions of Retrieval-Augmented Generation Systems
DOGR: Towards Versatile Visual Document Grounding and Referring
Ev2R: Evaluating Evidence Retrieval in Automated Fact-Checking
DualSwinUnet++: An Enhanced Swin-Unet Architecture With Dual Decoders For PTMC Segmentation
PerspectiveNet: Multi-View Perception for Dynamic Scene Understanding
AlphaDPO: Adaptive Reward Margin for Direct Preference Optimization
Continual Learning with Neuromorphic Computing: Foundations, Methods, and Emerging Applications
FlexiTex: Enhancing Texture Generation via Visual Guidance
ASMA: An Adaptive Safety Margin Algorithm for Vision-Language Drone Navigation via Scene-Aware Control Barrier Functions
The unknotting number, hard unknot diagrams, and reinforcement learning
Hierarchical Reinforcement Learning for Temporal Abstraction of Listwise Recommendation
Enhancing Natural Language Inference Performance with Knowledge Graph for COVID-19 Automated Fact-Checking in Indonesian Language
CVPT: Cross Visual Prompt Tuning
Proficient Graph Neural Network Design by Accumulating Knowledge on Large Language Models
Stimulating Imagination: Towards General-purpose "Something Something Placement"
Why Does New Knowledge Create Messy Ripple Effects in LLMs?
A Mathematical Framework and a Suite of Learning Techniques for Neural-Symbolic Systems
How to Leverage Predictive Uncertainty Estimates for Reducing Catastrophic Forgetting in Online Continual Learning
Towards the Next Frontier in Speech Representation Learning Using Disentanglement
Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models Aligned with Human Cognitive Principles
Which Experiences Are Influential for RL Agents? Efficiently Estimating The Influence of Experiences
Oversmoothing Alleviation in Graph Neural Networks: A Survey and Unified View
OCK: Unsupervised Dynamic Video Prediction with Object-Centric Kinematics
Benchmarking Mobile Device Control Agents across Diverse Configurations
PGT-I: Scaling Spatiotemporal GNNs with Memory-Efficient Distributed Training
Created by Haebom
Authors
Seth Ockerman, Amal Gueroudji, Tanwi Mallick, Yixuan He, Line Pouchard, Robert Ross, Shivaram Venkataraman
Overview
Spatiotemporal graph neural networks (ST-GNNs), while effective for modeling dependencies in large-scale spatiotemporal data, have mostly been applied only to small datasets because of memory constraints. Building on a scalability study of large-scale workloads, this paper presents PyTorch Geometric Temporal Index (PGT-I), an extension of PyTorch Geometric Temporal that integrates distributed data-parallel training with two new strategies: index batching and distributed index batching. The index techniques exploit the spatiotemporal structure to construct snapshots dynamically at runtime, sharply reducing memory overhead, while distributed index batching extends this processing across multiple GPUs. These techniques enable the first training of an ST-GNN on the full PeMS dataset without graph partitioning, achieving up to an 89% reduction in peak memory usage and up to an 11.78x speedup over standard DDP when using 128 GPUs.
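To make the index-batching idea concrete, below is a minimal, hypothetical sketch in plain PyTorch: instead of materializing every spatiotemporal snapshot in advance, only window start indices are stored, and each input/target window is sliced from a single shared tensor at access time. The class and parameter names (IndexBatchedWindows, window, horizon) are illustrative assumptions, not the PGT-I API; under DDP, a DistributedSampler over these indices roughly plays the role that distributed index batching does in this sketch.

```python
# Hypothetical illustration of index batching (names are assumptions, not the PGT-I API):
# keep one shared feature tensor and store only window start indices, slicing
# each snapshot on demand instead of materializing all snapshots up front.
import torch
from torch.utils.data import DataLoader, Dataset
from torch.utils.data.distributed import DistributedSampler  # used when running under DDP

class IndexBatchedWindows(Dataset):
    """Stores only integer start indices; builds each window at access time."""

    def __init__(self, features: torch.Tensor, window: int, horizon: int):
        # features: [T, num_nodes, num_feats], the full series kept once in memory
        self.features = features
        self.window = window
        self.horizon = horizon
        self.starts = list(range(features.shape[0] - window - horizon + 1))

    def __len__(self) -> int:
        return len(self.starts)

    def __getitem__(self, i: int):
        s = self.starts[i]
        x = self.features[s : s + self.window]                               # input snapshot
        y = self.features[s + self.window : s + self.window + self.horizon]  # forecast target
        return x, y

# Toy data standing in for a PeMS-like dataset: 2000 timesteps, 325 sensors, 1 feature.
data = torch.randn(2000, 325, 1)
dataset = IndexBatchedWindows(data, window=12, horizon=12)

# Under DDP, a DistributedSampler would split the index list across ranks,
# which is roughly what "distributed index batching" amounts to in this sketch:
# sampler = DistributedSampler(dataset)  # requires torch.distributed to be initialized
loader = DataLoader(dataset, batch_size=64, shuffle=True)
x, y = next(iter(loader))
print(x.shape, y.shape)  # torch.Size([64, 12, 325, 1]) torch.Size([64, 12, 325, 1])
```

Because each sample is only a pair of integer offsets until __getitem__ runs, peak memory stays close to the size of the raw series rather than growing with the number of snapshots, which is the intuition behind the memory savings reported above.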
Takeaways, Limitations
• Takeaways:
◦ Presents PGT-I, a new framework that enables ST-GNN training on large-scale spatiotemporal datasets.
◦ Improves memory efficiency and training speed through the index batching and distributed index batching strategies.
◦ Validates the performance gains through experiments on the PeMS dataset.
• Limitations:
◦ PGT-I depends on PyTorch Geometric Temporal, so compatibility with other frameworks is unclear.
◦ The reported gains may be specific to the PeMS dataset; generalization to other kinds of spatiotemporal data requires further study.
◦ The approach relies heavily on a distributed training environment.
View PDF