Does Apple Silicon offer good performance for running LLMs?

Haebom
This post was very helpful to me personally, so I'm saving it for future reference.
Unfortunately, Apple Silicon Macs don't support eGPUs yet. (Intel-based Macs do.)
As of now, macOS has no real support for NVIDIA graphics cards, so CUDA and similar toolchains are effectively off the table.
Because of all this, when I run an LLM, I usually end up running it on Apple's own silicon (CPU+GPU).
Personally, I've been trying out various things with LLaMA 13B locally, and I haven't really run into any major issues. (It runs and can even train... though I'm not sure about the efficiency.)
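For reference, here is a minimal sketch of what such a local setup can look like, assuming the llama-cpp-python bindings for ggerganov's llama.cpp and a hypothetical quantized 13B model file; the Metal backend lets the Apple Silicon GPU handle the offloaded layers.

```python
# Minimal sketch, assuming the llama-cpp-python bindings for ggerganov's llama.cpp.
# The model path is hypothetical; any locally quantized LLaMA 13B file works the same way.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-13b.Q4_K_M.gguf",  # hypothetical local path
    n_gpu_layers=-1,  # offload all layers to the Apple Silicon GPU (Metal backend)
    n_ctx=2048,       # context window size
)

out = llm("Q: What is Apple Silicon? A:", max_tokens=64)
print(out["choices"][0]["text"])
```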
Looking at the benchmark results that ggerganov published, I can see why Apple is so confident about its silicon in the ML space.
On the other hand, it seems like Radeon’s position is becoming pretty unclear...
Apparently, those benchmarks were run with LLaMA 7B.