
Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

SimdBench: Benchmarking Large Language Models for SIMD-Intrinsic Code Generation

Created by
  • Haebom

Author

Yibo He, Shuoran Zhao, Jiaming Huang, Yingjie Fu, Hao Yu, Cunjian Huang, Tao Xie

Outline

In this paper, we propose SimdBench, a new benchmark focused on SIMD-intrinsic code generation. SimdBench contains 136 tasks covering five representative SIMD intrinsic sets: SSE, AVX, Neon, SVE, and RVV. Evaluating 18 representative LLMs on SimdBench, we find that their accuracy on SIMD-intrinsic code generation is generally lower than on scalar code generation. Based on these results, we suggest future directions for LLMs in SIMD-intrinsic code generation and contribute to the research community by open-sourcing SimdBench.

Takeaways, Limitations

Takeaways:
We present SimdBench, the first benchmark specialized for SIMD-intrinsic code generation.
We provide a systematic evaluation and analysis of the SIMD-intrinsic code generation capabilities of various LLMs.
We suggest directions for improving LLM performance on SIMD-intrinsic code generation.
We contribute to the research community by open-sourcing SimdBench.
Limitations:
The types of SIMD intrinsics currently covered by the benchmark may be limited.
The set of LLMs evaluated may be limited.
Additional tasks involving more complex and varied uses of SIMD intrinsics are needed.