Conversation
LS (lain@lain.com)'s status on Sunday, 08-Oct-2023 07:46:35 JST LS chatgpt is pretty neutered, but local LLMs are just too easy.
Also wow, running a 70B model locally at near chatgpt level and speed.
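For context, a setup like the one LS describes can be reproduced with a quantized 70B model and Metal offload. The sketch below uses llama-cpp-python and a hypothetical GGUF filename as assumptions for illustration; the thread itself uses faraday.dev as the frontend.

```python
# Minimal sketch of running a quantized 70B model locally on Apple Silicon.
# Assumptions: llama-cpp-python built with Metal support, and a hypothetical
# GGUF file name; this is not the faraday.dev setup from the thread.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-70b-chat.Q4_K_M.gguf",  # ~40 GB quantized weights (hypothetical file)
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to the GPU via Metal
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello from a local 70B model."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```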
LS (lain@lain.com)'s status on Sunday, 08-Oct-2023 07:48:03 JST LS @cjd this is faraday.dev running on an m2 macbook
cjd (cjd@pkteerium.xyz)'s status on Sunday, 08-Oct-2023 07:48:04 JST cjd What are you running it on?
Lelouche 🌸 (lelouchebag@shitposter.club)'s status on Sunday, 08-Oct-2023 07:48:43 JST Lelouche 🌸 @lain AI waifu bot when
LS (lain@lain.com)'s status on Sunday, 08-Oct-2023 07:48:50 JST LS @lelouchebag it's already reality
LS (lain@lain.com)'s status on Sunday, 08-Oct-2023 07:51:14 JST LS @cjd it uses the GPU, and the apple silicon macs have a unified CPU/GPU memory, so you can actually load the huge models
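On the unified-memory point: Apple Silicon has no separate VRAM, so the pool the GPU draws from is ordinary system memory. A quick way to see its size on macOS is the standard hw.memsize sysctl, as in the sketch below; note that in practice Metal reserves a slice of it for the system, so not quite all of it is usable for model weights.

```python
# Sketch: read the unified memory pool size on macOS. CPU and GPU share this
# pool, which is why large models can be loaded onto the GPU at all.
import subprocess

mem_bytes = int(subprocess.check_output(["sysctl", "-n", "hw.memsize"]).strip())
print(f"Unified memory pool: {mem_bytes / 2**30:.0f} GiB (shared by CPU and Metal GPU)")
```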
cjd (cjd@pkteerium.xyz)'s status on Sunday, 08-Oct-2023 07:51:15 JST cjd Ok and it's just CPU only? Or is it doing some little GPU acceleration with the M2 GPU? Or you don't know...
LS (lain@lain.com)'s status on Sunday, 08-Oct-2023 07:53:29 JST LS @cjd no i have 96G, this is a 40G model. but the smaller models are getting really good too (and they are blazing fast)
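The 40G figure matches a back-of-envelope estimate: 70B parameters at roughly 4 to 5 bits per weight lands near 40 GB, which is why it fits in 96 GB but not 16 GB. The bits-per-weight and overhead numbers below are rough assumptions, not measurements from this machine.

```python
# Rough size estimate for a quantized 70B model (assumed numbers).
params = 70e9            # parameter count
bits_per_weight = 4.5    # a 4-bit K-quant averages roughly this
overhead = 1.10          # KV cache, buffers, etc. (rough guess)

weights_gb = params * bits_per_weight / 8 / 1e9
total_gb = weights_gb * overhead
print(f"weights ~{weights_gb:.0f} GB, ~{total_gb:.0f} GB with runtime overhead")
# -> weights ~39 GB, ~43 GB with runtime overhead: fits comfortably in 96 GB
#    of unified memory, far beyond a 16 GB machine.
```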
cjd (cjd@pkteerium.xyz)'s status on Sunday, 08-Oct-2023 07:53:30 JST cjd Oh I see, right indeed, all APUs have that. You have 16G?
LS (lain@lain.com)'s status on Sunday, 08-Oct-2023 07:54:30 JST LS @cjd no it's a laptop
cjd (cjd@pkteerium.xyz)'s status on Sunday, 08-Oct-2023 07:54:31 JST cjd Ahh so not a laptop, that makes sense
LS (lain@lain.com)'s status on Sunday, 08-Oct-2023 07:56:56 JST LS @cjd yeah, but they make you pay for it...
cjd (cjd@pkteerium.xyz)'s status on Sunday, 08-Oct-2023 07:56:57 JST cjd You can get 96GB of memory in a laptop these days? O_O