Daily Arxiv
A page that collects artificial-intelligence papers published around the world.
This page is summarized using Google Gemini and operated on a non-profit basis.
Copyright for the papers belongs to the authors and their institutions; please cite the source when sharing.
NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model
Power Stabilization for AI Training Datacenters
A Systematic Study of Deep Learning Models and xAI Methods for Region-of-Interest Detection in MRI Scans
Documenting Deployment with Fabric: A Repository of Real-World AI Governance
Surya: Foundation Model for Heliophysics
Hard Examples Are All You Need: Maximizing GRPO Post-Training Under Annotation Budgets
MCLPD: Multi-view Contrastive Learning for EEG-based PD Detection Across Datasets
FinAgentBench: A Benchmark Dataset for Agentic Retrieval in Financial Question Answering
VerilogLAVD: LLM-Aided Rule Generation for Vulnerability Detection in Verilog
Kourkoutas-Beta: A Sunspike-Driven Adam Optimizer with Desert Flair
SecFSM: Knowledge Graph-Guided Verilog Code Generation for Secure Finite State Machines in Systems-on-Chip
Fortifying the Agentic Web: A Unified Zero-Trust Architecture Against Logic-layer Threats
LATTE: Learning Aligned Transactions and Textual Embeddings for Bank Clients
Preacher: Paper-to-Video Agentic System
Agoran: An Agentic Open Marketplace for 6G RAN Automation
Architectural Co-Design for Zero-Shot Anomaly Detection: Decoupling Representation and Dynamically Fusing Features in CLIP
IBPS: Indian Bail Prediction System
Diagnosing Memorization in Chain-of-Thought Reasoning, One Token at a Time
TS-Insight: Visualizing Thompson Sampling for Verification and XAI
When Better Eyes Lead to Blindness: A Diagnostic Study of the Information Bottleneck in CNN-LSTM Image Captioning Models
Seed-X: Building Strong Multilingual Translation LLM with 7B Parameters
Generation of structure-guided pMHC-I libraries using Diffusion Models
Cross-Modality Masked Learning for Survival Prediction in ICI Treated NSCLC Patients
MCA-RG: Enhancing LLMs with Medical Concept Alignment for Radiology Report Generation
KEA Explain: Explanations of Hallucinations using Graph Kernel Analysis
Empirical Evidence for Alignment Faking in a Small LLM and Prompt-Based Mitigation Techniques
A Survey of Foundation Models for IoT: Taxonomy and Criteria-Based Analysis
Deep regularization networks for inverse problems with noisy operators
LaMP-Cap: Personalized Figure Caption Generation With Multimodal Figure Profiles
On the Fundamental Impossibility of Hallucination Control in Large Language Models
Lossless Token Sequence Compression via Meta-Tokens
Versatile Cardiovascular Signal Generation with a Unified Diffusion Transformer
Flexible Tool Selection through Low-dimensional Attribute Alignment of Vision and Language
Mutarjim: Advancing Bidirectional Arabic-English Translation with a Small Language Model
MMiC: Mitigating Modality Incompleteness in Clustered Federated Learning
Edge-Cloud Collaborative Computing on Distributed Intelligence and Model Optimization: A Survey
Sadeed: Advancing Arabic Diacritization Through Small Language Model
Annif at SemEval-2025 Task 5: Traditional XMTC augmented by LLMs
CaRL: Learning Scalable Planning Policies with Simple Rewards
On the Consistency of GNN Explanations for Malware Detection
Cequel: Cost-Effective Querying of Large Language Models for Text Clustering
Kuwain 1.5B: An Arabic SLM via Language Injection
MuSeD: A Multimodal Spanish Dataset for Sexism Detection in Social Media Videos
TextSplat: Text-Guided Semantic Fusion for Generalizable Gaussian Splatting
VerifiAgent: a Unified Verification Agent in Language Model Reasoning
Embodied Long Horizon Manipulation with Closed-loop Code Generation and Incremental Few-shot Adaptation
Revisiting Out-of-Distribution Detection in Real-time Object Detection: From Benchmark Pitfalls to a New Mitigation Paradigm
A Case for Specialisation in Non-Human Entities
Pragmatic Inference Chain (PIC) Improving LLMs' Reasoning of Authentic Implicit Toxic Language
Synthetic vs. Gold: The Role of LLM Generated Labels and Data in Cyberbullying Detection
Innamark: A Whitespace Replacement Information-Hiding Method
Ontology-Guided Reverse Thinking Makes Large Language Models Stronger on Knowledge Graph Question Answering
RefineCoder: Iterative Improving of Large Language Models via Adaptive Critique Refinement for Code Generation
Setup Once, Secure Always: A Single-Setup Secure Federated Learning Aggregation Protocol with Forward and Backward Secrecy for Dynamic Users
Self-Supervised Prompt Optimization
Learning to Generate Unit Tests for Automated Debugging
Modeling Discrimination with Causal Abstraction
Large Language Models for Automated Literature Review: An Evaluation of Reference Generation, Abstract Writing, and Review Composition
Evaluation Agent: Efficient and Promptable Evaluation Framework for Visual Generative Models
Knowledge-Guided Prompt Learning for Request Quality Assurance in Public Code Review
Fine-tuning foundational models to code diagnoses from veterinary health records
Teuken-7B-Base & Teuken-7B-Instruct: Towards European LLMs
Grounded-VideoLLM: Sharpening Fine-grained Temporal Grounding in Video Large Language Models
Continual Learning for Multimodal Data Fusion of a Soft Gripper
BoostTrack++: using tracklet information to detect more objects in multiple object tracking
OPDR: Order-Preserving Dimension Reduction for Semantic Embedding of Multimodal Scientific Data
CREMA: A Contrastive Regularized Masked Autoencoder for Robust ECG Diagnostics across Clinical Domains
Generating 3D Terrain with 2D Cellular Automata
Unplug and Play Language Models: Decomposing Experts in Language Models at Inference Time
Using a cognitive architecture to consider antiBlackness in design and development of AI systems
ITL-LIME: Instance-Based Transfer Learning for Enhancing Local Explanations in Low-Resource Data Settings
ThinkTuning: Instilling Cognitive Reflections without Distillation
A "good regulator theorem" for embodied agents
Prescriptive Agents based on RAG for Automated Maintenance (PARAM)
One Subgoal at a Time: Zero-Shot Generalization to Arbitrary Linear Temporal Logic Requirements in Multi-Task Reinforcement Learning
Opus: A Prompt Intention Framework for Complex Workflow Generation
Exploring Big Five Personality and AI Capability Effects in LLM-Simulated Negotiation Dialogues
It's the Thought that Counts: Evaluating the Attempts of Frontier LLMs to Persuade on Harmful Topics
GATES: Cost-aware Dynamic Workflow Scheduling via Graph Attention Networks and Evolution Strategy
Automatic Curriculum Design for Zero-Shot Human-AI Coordination
PersonaBench: Evaluating AI Models on Understanding Personal Information through Accessing (Synthetic) Private User Data
SycEval: Evaluating LLM Sycophancy
CopyrightShield: Enhancing Diffusion Model Security against Copyright Infringement Attacks
VLASCD: A Visual Language Action Model for Simultaneous Chatting and Decision Making
Exploring the Effect of Explanation Content and Format on User Comprehension and Trust in Healthcare
On Learning Action Costs from Input Plans
Human-Object Interaction from Human-Level Instructions
Non-linear Welfare-Aware Strategic Learning
CRISPR-GPT for Agentic Automation of Gene-editing Experiments
SceneGen: Single-Image 3D Scene Generation in One Feedforward Pass
Discovering Hidden Algebraic Structures via Transformers with Rank-Aware Beam GRPO
LiveMCP-101: Stress Testing and Diagnosing MCP-enabled Agents on Challenging Queries
Neural Robot Dynamics
Dissecting Tool-Integrated Reasoning: An Empirical Study and Analysis
"Does the cafe entrance look accessible? Where is the door?" Towards Geospatial AI Agents for Visual Inquiries
End-to-End Agentic RAG System Training for Traceable Diagnostic Reasoning
Numerical models outperform AI weather forecasts of record-breaking extremes
EcomMMMU: Strategic Utilization of Visuals for Robust Multimodal E-Commerce Models
Tutorial on the Probabilistic Unification of Estimation Theory, Machine Learning, and Generative AI
StreamMem: Query-Agnostic KV Cache Memory for Streaming Video Understanding
Self-Supervised Prompt Optimization
Created by Haebom
Authors
Jinyu Xiang, Jiayi Zhang, Zhaoyang Yu, Xinbing Liang, Fengwei Teng, Jinhao Tu, Fashen Ren, Xiangru Tang, Sirui Hong, Chenglin Wu, Yuyu Luo
Overview
This paper highlights the importance of well-designed prompts for improving the reasoning ability of large language models (LLMs) and for aligning their outputs with task requirements across diverse domains. Existing prompt optimization methods rely heavily on external references such as ground-truth answers or human intervention, which limits their applicability in real-world scenarios. To address this, the paper proposes Self-Supervised Prompt Optimization (SPO), a cost-efficient prompt optimization framework that requires no external references. SPO derives both evaluation and optimization signals purely from comparisons of LLM outputs: it selects superior prompts via pairwise output comparison with an LLM evaluator, then uses an LLM optimizer to align outputs with the task requirements. Experiments show that SPO achieves performance comparable or superior to existing methods while drastically reducing cost (to 1.1%–5.6% of prior methods) and the number of required samples (as few as three).
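The reference-free loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `generate`, `evaluate`, and `optimize` are hypothetical stand-ins for the three LLM roles (task executor, pairwise judge, and prompt optimizer), and the toy callables at the bottom exist only to make the sketch runnable.

```python
def spo_optimize(task, seed_prompt, generate, evaluate, optimize, rounds=3):
    """Minimal sketch of the Self-Supervised Prompt Optimization loop.

    generate(prompt, task)          -> model output for the task
    evaluate(out_a, out_b, task)    -> True if out_a beats out_b (pairwise LLM judge)
    optimize(prompt, output, task)  -> a refined candidate prompt
    All three callables stand in for LLM calls; no gold answers are used.
    """
    best_prompt = seed_prompt
    best_output = generate(best_prompt, task)
    for _ in range(rounds):
        candidate = optimize(best_prompt, best_output, task)
        cand_output = generate(candidate, task)
        # Reference-free signal: compare the two outputs against each other,
        # and keep the prompt whose output the judge prefers.
        if evaluate(cand_output, best_output, task):
            best_prompt, best_output = candidate, cand_output
    return best_prompt

# Toy stand-ins (hypothetical, for illustration only): longer outputs "win".
generate = lambda p, t: f"answer via [{p}]"
evaluate = lambda a, b, t: len(a) > len(b)
optimize = lambda p, o, t: p + " step by step"

print(spo_optimize("math", "Solve:", generate, evaluate, optimize))
```

In the paper all three roles are played by LLM calls; the loop structure itself is what makes the method self-supervised, since the only signal is the judge's preference between pairs of outputs.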
Takeaways and Limitations
•	Takeaways:
◦	Presents SPO, a new method that optimizes prompts efficiently without any external references.
◦	Achieves strong performance at far lower cost and with far fewer samples than existing methods.
◦	Applicable to a variety of tasks, both closed-ended and open-ended.
◦	Automates prompt optimization by leveraging the capabilities of the LLM itself.
•	Limitations:
◦	Results may depend on the capabilities of the LLM evaluator and optimizer.
◦	The method may be tuned to, and work best with, specific LLMs.
◦	Generalization across diverse domains and task types still needs validation.
◦	Errors made by the LLM evaluator and optimizer can propagate into the final results.
View PDF