Does Apple Silicon perform well at running LLMs?
Haebom
This was personally very helpful to me, so I'm archiving it here for reference.
Unfortunately, Apple Silicon Macs do not yet support eGPUs (Intel-based Macs do).
macOS also has only limited support for NVIDIA graphics cards, so using CUDA is effectively off the table.
So when you run an LLM on a Mac, it usually runs on the Apple-designed silicon chip (CPU + GPU).
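As a quick illustration of what this means in practice, here is a minimal sketch (assuming a recent PyTorch build): on Apple Silicon, PyTorch exposes the GPU through the Metal-backed "mps" device rather than CUDA.

```python
import torch

# On Apple Silicon there is no CUDA; the GPU is reached via Metal ("mps").
device = "mps" if torch.backends.mps.is_available() else "cpu"
print(f"Using device: {device}")

# A matrix multiply runs on the Apple GPU when device == "mps".
x = torch.randn(1024, 1024, device=device)
y = x @ x
print(y.shape)
```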
Personally, I installed LLaMA 13B locally and tried various things, and I haven't noticed any major problems. (It runs and even trains, though I'm not sure about the efficiency.)
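For reference, a local run like this might look as follows using the llama-cpp-python bindings. This is only a sketch, not the exact setup used above: the model path and quantization level are hypothetical placeholders, and I'm assuming a GGUF-format model file.

```python
from llama_cpp import Llama

# n_gpu_layers=-1 offloads all layers to the Apple GPU via Metal.
# The model path below is a placeholder for your local weights file.
llm = Llama(model_path="./models/llama-13b.Q4_K_M.gguf", n_gpu_layers=-1)

out = llm("Q: What is Apple Silicon? A:", max_tokens=64)
print(out["choices"][0]["text"])
```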
Then I saw the benchmark that ggerganov published, which confirmed what I had only sensed intuitively, and now I understand why Apple is so confident about ML on its silicon chips.
Meanwhile, I think Radeon's position is going to become quite awkward...
The benchmark is said to have been run on LLaMA 7B.