Daily Arxiv

This page curates papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright of each paper belongs to its authors and their institutions; please cite the source when sharing.

Chiplet-Based RISC-V SoC with Modular AI Acceleration

Created by
  • Haebom

Author

P. Ramkumar, SS Bharadwaj

Outline

Maintaining architectural flexibility while achieving high performance, energy efficiency, and cost-effectiveness is a critical challenge in the development and deployment of edge AI devices. This paper presents a chiplet-based RISC-V SoC architecture that addresses these limitations through modular AI acceleration and intelligent system-level optimizations. The proposed architecture integrates four innovations on a 30 mm x 30 mm silicon interposer: adaptive inter-chiplet DVFS; an AI-aware UCIe protocol extension with a streaming flow control unit and compression-aware forwarding; distributed cryptographic security across heterogeneous chiplets; and intelligent sensor-based load migration. Experimental results show that, on standard benchmarks such as MobileNetV2 and ResNet-50, the proposed architecture achieves roughly 14.7% lower latency, 17.3% higher throughput, and 16.2% lower power than existing chiplet implementations, with an efficiency of about 3.5 mJ per MobileNetV2 inference (860 mW at 244 images/s).
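
The summary does not describe how the adaptive inter-chiplet DVFS controller is implemented, so the sketch below is only a minimal illustration of how such a policy could be structured: a link's operating point is scaled up when its utilization or flow-control queue approaches saturation and scaled down when both are mostly idle. The telemetry fields, operating points, and thresholds are hypothetical placeholders, not details from the paper.

```python
# Illustrative sketch only: a toy adaptive inter-chiplet DVFS policy driven by
# link telemetry. All names, thresholds, and the frequency/voltage table are
# hypothetical and are not taken from the paper.

from dataclasses import dataclass

# Hypothetical DVFS operating points for an inter-chiplet link (MHz, volts).
OPERATING_POINTS = [(400, 0.60), (800, 0.72), (1200, 0.85), (1600, 1.00)]

@dataclass
class LinkTelemetry:
    utilization: float      # fraction of link bandwidth in use, 0.0 - 1.0
    queue_occupancy: float  # fraction of the streaming flow-control buffer in use

def select_operating_point(t: LinkTelemetry,
                           current_idx: int,
                           up_threshold: float = 0.75,
                           down_threshold: float = 0.30) -> int:
    """Return the index of the next DVFS operating point.

    Scale up when the link or its flow-control queue is near saturation,
    scale down when both are mostly idle, otherwise hold the current point.
    """
    if t.utilization > up_threshold or t.queue_occupancy > up_threshold:
        return min(current_idx + 1, len(OPERATING_POINTS) - 1)
    if t.utilization < down_threshold and t.queue_occupancy < down_threshold:
        return max(current_idx - 1, 0)
    return current_idx

if __name__ == "__main__":
    idx = 1  # start at 800 MHz / 0.72 V
    for sample in [LinkTelemetry(0.85, 0.60), LinkTelemetry(0.20, 0.10)]:
        idx = select_operating_point(sample, current_idx=idx)
        freq, volt = OPERATING_POINTS[idx]
        print(f"util={sample.utilization:.2f} -> {freq} MHz @ {volt:.2f} V")
```

As a sanity check on the reported efficiency figure, 860 mW divided by 244 images/s is approximately 3.5 mJ per inference, consistent with the quoted number.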

Takeaways, Limitations

Takeaways:
  • Modular chiplet design provides cost efficiency, scalability, and upgradeability, supporting next-generation edge AI device applications.
  • AI-optimized configuration improves performance (latency, throughput) and power efficiency.
  • Efficiency improves across a variety of workloads while maintaining real-time performance (sub-5 ms).
Limitations:
  • The paper does not explicitly discuss its limitations.