Tried DeepSeek R1 on Raspberry Pi 5, and the speed? Not as expected! 😳
-
It's FOSS (itsfoss@mastodon.social)'s status on Tuesday, 28-Jan-2025 23:01:03 JST It's FOSS
-
デイヴ (deivudesu@mastodon.social)'s status on Tuesday, 28-Jan-2025 23:53:15 JST デイヴ
@itsfoss Interesting! (both initial post, that I missed, and your own experience)
I'd say it's not exactly surprising though: RPi without some sort of add-on is very much underequipped for AI (no GPU and 4 very light cores)…
What is surprising in your benchmark is that the 7B model performs worse than the 1B model (in terms of accuracy)… 🤔
-
el_haych2024 (el_haych2024@mastodon.social)'s status on Wednesday, 29-Jan-2025 12:21:09 JST el_haych2024
@itsfoss my experience running a distilled model locally was like experiencing that "weave" Trump's always going on about.
-
It's FOSS (itsfoss@mastodon.social)'s status on Wednesday, 29-Jan-2025 16:21:11 JST It's FOSS
@deivudesu You're right, running AI models on the Raspberry Pi 5 is a challenge. The accuracy drop with the 7B model could be due to quantization artifacts affecting precision on limited resources. I'll be testing this further on the ArmSoM AIM7 with its NPU to see if the trend holds. 🙂
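(Editor's note: a minimal sketch of the round-off effect behind "quantization artifacts" mentioned above — round a few hypothetical weight values to a 4-bit grid and measure the reconstruction error. The weight values are invented for illustration, not taken from any DeepSeek model.)

```python
# Hypothetical illustration: symmetric round-to-nearest quantization.
# Low-bit grids (e.g. 4-bit) snap each weight to the nearest level,
# so every weight can be off by up to half a grid step.

def quantize(weights, bits=4):
    """Quantize to a signed (2**(bits-1) - 1)-level symmetric grid."""
    levels = 2 ** (bits - 1) - 1          # 7 levels each side for 4-bit
    scale = max(abs(w) for w in weights) / levels
    q = [round(w / scale) for w in weights]
    return [v * scale for v in q], scale  # dequantized values + step size

weights = [0.12, -0.53, 0.91, -0.07, 0.33]  # made-up example weights
deq, scale = quantize(weights, bits=4)
err = max(abs(w - d) for w, d in zip(weights, deq))
print(f"max round-off error: {err:.4f} (half a grid step is {scale/2:.4f})")
```

At 4 bits the worst-case per-weight error stays within half a grid step, but across billions of weights those small errors can compound into the accuracy drop discussed here; whether that fully explains a 7B model scoring below a 1B model is exactly the open question in this thread.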
-
デイヴ (deivudesu@mastodon.social)'s status on Wednesday, 29-Jan-2025 16:42:10 JST デイヴ
@itsfoss You don't mention any quantisation, so I assumed the same level of quantisation for all models (otherwise it's a bit meaningless to compare them).
What quantisation did you use for each one?
-