Daily Arxiv
A page that collects artificial-intelligence papers published around the world.
Summaries on this page are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright of each paper belongs to its authors and affiliated institutions; please cite the source when sharing.
AC-DiT: Adaptive Coordination Diffusion Transformer for Mobile Manipulation
Self-Guided Process Reward Optimization with Redefined Step-wise Advantage for Process Reinforcement Learning
Crafting Hanzi as Narrative Bridges: An AI Co-Creation Workshop for Elderly Migrants
Distributional Soft Actor-Critic with Diffusion Policy
Skywork-Reward-V2: Scaling Preference Data Curation via Human-AI Synergy
Fast AI Model Splitting over Edge Networks
From Sentences to Sequences: Rethinking Languages in Biological System
MTCNet: Motion and Topology Consistency Guided Learning for Mitral Valve Segmentation in 4D Ultrasound
Horus: A Protocol for Trustless Delegation Under Uncertainty
Mixture of Reasonings: Teach Large Language Models to Reason with Adaptive Strategies
Benchmarking Generalizable Bimanual Manipulation: RoboTwin Dual-Arm Collaboration Challenge at CVPR 2025 MEIS Workshop
Red Teaming for Generative AI, Report on a Copyright-Focused Exercise Completed in an Academic Medical Center
AirV2X: Unified Air-Ground Vehicle-to-Everything Collaboration
Semantic Structure-Aware Generative Attacks for Enhanced Adversarial Transferability
Aligning Frozen LLMs by Reinforcement Learning: An Iterative Reweight-then-Optimize Approach
Distinguishing Predictive and Generative AI in Regulation
AIn't Nothing But a Survey? Using Large Language Models for Coding German Open-Ended Survey Responses on Survey Motivation
Text-Aware Image Restoration with Diffusion Models
How Good LLM-Generated Password Policies Are?
Towards an Explainable Comparison and Alignment of Feature Embeddings
Gradient-Based Model Fingerprinting for LLM Similarity Detection and Family Classification
Empowering Intelligent Low-altitude Economy with Large AI Model Deployment
Incorporating LLMs for Large-Scale Urban Complex Mobility Simulation
Generating Hypotheses of Dynamic Causal Graphs in Neuroscience: Leveraging Generative Factor Models of Observed Time Series
Traveling Across Languages: Benchmarking Cross-Lingual Consistency in Multimodal LLMs
Threat Modeling for AI: The Case for an Asset-Centric Approach
SoccerDiffusion: Toward Learning End-to-End Humanoid Robot Soccer from Gameplay Recordings
PAD: Phase-Amplitude Decoupling Fusion for Multi-Modal Land Cover Classification
Significativity Indices for Agreement Values
Transferrable Surrogates in Expressive Neural Architecture Search Spaces
Privacy-Preserving Operating Room Workflow Analysis using Digital Twins
Uncertainty-Guided Coarse-to-Fine Tumor Segmentation with Anatomy-Aware Post-Processing
CMD-HAR: Cross-Modal Disentanglement for Wearable Human Activity Recognition
Commander-GPT: Fully Unleashing the Sarcasm Detection Capability of Multi-Modal Large Language Models
Understanding-informed Bias Mitigation for Fair CMR Segmentation
HAPI: A Model for Learning Robot Facial Expressions from Human Preferences
MaizeField3D: A Curated 3D Point Cloud and Procedural Model Dataset of Field-Grown Maize from a Diversity Panel
Illuminant and light direction estimation using Wasserstein distance method
Fundamental Limits of Hierarchical Secure Aggregation with Cyclic User Association
LLM-Powered Prediction of Hyperglycemia and Discovery of Behavioral Treatment Pathways from Wearables and Diet
Interleaved Gibbs Diffusion: Generating Discrete-Continuous Data with Implicit Constraints
EquiTabPFN: A Target-Permutation Equivariant Prior Fitted Networks
Circuit-tuning: A Mechanistic Approach for Identifying Parameter Redundancy and Fine-tuning Neural Networks
EigenLoRAx: Recycling Adapters to Find Principal Subspaces for Resource-Efficient Adaptation and Inference
Learning Traffic Anomalies from Generative Models on Real-Time Observations
Enabling Population-Level Parallelism in Tree-Based Genetic Programming for Comprehensive GPU Acceleration
Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models
Quantifying the Importance of Data Alignment in Downstream Model Performance
Quantum-enhanced causal discovery for a small number of samples
On Characterizations for Language Generation: Interplay of Hallucinations, Breadth, and Stability
Token Prepending: A Training-Free Approach for Eliciting Better Sentence Embeddings from LLMs
COEF-VQ: Cost-Efficient Video Quality Understanding through a Cascaded Multimodal LLM Framework
GeMID: Generalizable Models for IoT Device Identification
Next-Token Prediction Task Assumes Optimal Data Ordering for LLM Training in Proof Generation
Is Complex Query Answering Really Complex?
Aerial Vision-and-Language Navigation via Semantic-Topo-Metric Representation Guided LLM Reasoning
Offline Reinforcement Learning for Learning to Dispatch for Job Shop Scheduling
Reconsidering the energy efficiency of spiking neural networks
Exploring the Integration of Large Language Models in Industrial Test Maintenance Processes
Sequence-aware Pre-training for Echocardiography Probe Movement Guidance
Anatomical Foundation Models for Brain MRIs
Learning From Crowdsourced Noisy Labels: A Signal Processing Perspective
Quantifying the Cross-sectoral Intersecting Discrepancies within Multiple Groups Using Latent Class Analysis Towards Fairness
Delving into LLM-assisted writing in biomedical publications through excess vocabulary
Towards a Novel Measure of User Trust in XAI Systems
Avoiding Catastrophe in Online Learning by Asking for Help
Improving the Robustness of Distantly-Supervised Named Entity Recognition via Uncertainty-Aware Teacher Learning and Student-Student Collaborative Learning
Beyond Scale: The Diversity Coefficient as a Data Quality Metric for Variability in Natural Language Data
Kernel Density Bayesian Inverse Reinforcement Learning
Embodied AI Agents: Modeling the World
Mind2Web 2: Evaluating Agentic Search with Agent-as-a-Judge
AI Flow: Perspectives, Scenarios, and Approaches
A framework for Conditional Reasoning in Answer Set Programming
Autoformalization in the Era of Large Language Models: A Survey
Agentic AI Process Observability: Discovering Behavioral Variability
Artificial Intelligence Index Report 2025
MAPS: Advancing Multi-Modal Reasoning in Expert-Level Physical Science
XGeM: A Multi-Prompt Foundation Model for Multimodal Medical Data Generation
Direct Preference Optimization Using Sparse Feature-Level Constraints
Unsupervised Cognition
Urban Region Pre-training and Prompting: A Graph-based Approach
Road Graph Generator: Mapping roads at construction sites from GPS data
Point3R: Streaming 3D Reconstruction with Explicit Spatial Pointer Memory
LiteReality: Graphics-Ready 3D Scene Reconstruction from RGB-D Scans
Answer Matching Outperforms Multiple Choice for Language Model Evaluation
Subtyping in DHOL - Extended preprint
MOTIF: Modular Thinking via Reinforcement Fine-tuning in LLMs
USAD: An Unsupervised Data Augmentation Spatio-Temporal Attention Diffusion Network
DNN-Based Precoding in RIS-Aided mmWave MIMO Systems With Practical Phase Shift
SynapseRoute: An Auto-Route Switching Framework on Dual-State Large Language Model
Self-Correction Bench: Revealing and Addressing the Self-Correction Blind Spot in LLMs
Multi-agent Auditory Scene Analysis
Fast and Simplex: 2-Simplicial Attention in Triton
Synthesizable by Design: A Retrosynthesis-Guided Framework for Molecular Analog Generation
Linear Attention with Global Context: A Multipole Attention Mechanism for Vision and Physics
Early Signs of Steganographic Capabilities in Frontier LLMs
Meta SecAlign: A Secure Foundation LLM Against Prompt Injection Attacks
FairHuman: Boosting Hand and Face Quality in Human Image Generation with Minimum Potential Delay Fairness in Diffusion Models
APT: Adaptive Personalized Training for Diffusion Models with Limited Data
ASDA: Audio Spectrogram Differential Attention Mechanism for Self-Supervised Representation Learning
AIn't Nothing But a Survey? Using Large Language Models for Coding German Open-Ended Survey Responses on Survey Motivation
Created by Haebom
Authors
Leah von der Heyde, Anna-Carolina Haensch, Bernd Weiß, Jessica Daikeler
Overview
This paper studies the use of large language models (LLMs) to code open-ended survey responses. Unlike prior work, which has focused mainly on English-language data and simple topics, it compares several state-of-the-art LLMs and prompting approaches on German-language data about reasons for survey participation. Benchmarking against human expert coding reveals clear performance differences across LLMs; in particular, only fine-tuned LLMs reach satisfactory predictive performance. The effectiveness of a prompting approach varies by LLM, and without fine-tuning, LLMs may assign the participation-reason categories unevenly, skewing the category distribution. The paper concludes by discussing the conditions and constraints for using LLMs efficiently and accurately in survey research, and the implications for practitioners' data processing and analysis.
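As a concrete illustration of the setup the summary describes, the following is a minimal Python sketch of zero-shot LLM coding of a German open-ended answer. The category labels, prompt wording, and model name are assumptions for illustration only; the paper's actual coding scheme, prompts, and models are not reproduced here.

```python
# Minimal sketch of zero-shot LLM coding of an open-ended survey response.
# The coding scheme and prompt below are illustrative assumptions, not the
# paper's actual materials.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical categories for reasons to participate in a survey.
CATEGORIES = ["interest in topic", "incentive", "civic duty", "curiosity", "other"]

def code_response(answer_de: str) -> str:
    """Ask the model to assign exactly one category to a German answer."""
    prompt = (
        "Classify the following German survey answer about why the respondent "
        f"participates in surveys into exactly one of: {', '.join(CATEGORIES)}.\n"
        f"Answer: {answer_de}\n"
        "Reply with the category name only."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in; the paper compares several LLMs
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip().lower()

print(code_response("Ich mache mit, weil mich das Thema interessiert."))
```

In the fine-tuned condition the paper finds necessary for satisfactory performance, the same call would target a model trained on human-coded examples rather than a zero-shot prompt.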
Takeaways, Limitations
• Takeaways:
◦ Provides empirical evidence on coding open-ended survey responses through a comparative analysis of several LLMs and prompting approaches.
◦ Highlights the importance of fine-tuned LLMs and the LLM-dependence of prompting techniques.
◦ Presents both the potential utility and the constraints of LLM-based analysis of open-ended responses.
◦ Offers practical implications for using LLMs in survey research.
• Limitations:
◦ The study is limited to German survey data on a specific topic, so further research on generalizability is needed.
◦ The generalizability of the results is constrained by the particular LLMs and prompting approaches used.
◦ The need for fine-tuning may reduce practical applicability.
◦ The category-distribution skew caused by uneven LLM performance remains an open problem (see the sketch below).
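The distribution skew flagged in the last item can be made concrete: one simple way to quantify it is to compare the category frequencies assigned by an LLM against those from human expert coding, for instance with total variation distance. A minimal sketch with made-up placeholder labels (not data from the paper):

```python
# Illustrative check for category-distribution skew: compare label frequencies
# from an LLM against human expert codes using total variation distance
# (0 = identical distributions, 1 = disjoint). Labels below are placeholders.
from collections import Counter

def total_variation(human: list[str], model: list[str]) -> float:
    h, m = Counter(human), Counter(model)
    n_h, n_m = len(human), len(model)
    cats = set(h) | set(m)
    return 0.5 * sum(abs(h[c] / n_h - m[c] / n_m) for c in cats)

human_codes = ["incentive", "interest in topic", "civic duty", "incentive"]
model_codes = ["incentive", "incentive", "incentive", "civic duty"]
print(f"TV distance: {total_variation(human_codes, model_codes):.2f}")  # 0.25
```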
View PDF