Any hardware suggestions for DeepSeek
by m3lon - Sunday March 2, 2025 at 12:42 AM
#1
Just wondering how DeepSeek or similar LLM models may help... I've seen GitHub Copilot writing dozens of lines of code as suggestions, but I'm worried about privacy and so on, so I'm thinking about using a self-hosted LLM...

I'm thinking about using GPT4All or AnythingLLM, but I'm not sure what kind of hardware may be required to get acceptable response times for, e.g., DeepSeek. I still have to start my own research into how much it may cost, but if any of you have already tried something, that would also be useful...
Reply
#2
(Mar 02, 2025, 12:42 AM)m3lon Wrote: Just wondering how DeepSeek or similar LLM models may help... I've seen GitHub Copilot writing dozens of lines of code as suggestions, but I'm worried about privacy and so on, so I'm thinking about using a self-hosted LLM...

I'm thinking about using GPT4All or AnythingLLM, but I'm not sure what kind of hardware may be required to get acceptable response times for, e.g., DeepSeek. I still have to start my own research into how much it may cost, but if any of you have already tried something, that would also be useful...

DeepSeek, Claude, and Grok are probably the best options for coding, but self-hosting narrows it down quite a lot, since most AIs aren't really open source.

The best DeepSeek model requires a really high-end system. https://www.youtube.com/watch?v=Tq_cmN4j2yY.

Any questions, let me know; I'd be happy to try and help.
Reply
#3
(Mar 02, 2025, 09:28 AM)Spiral Wrote: The best DeepSeek model requires a really high-end system. https://www.youtube.com/watch?v=Tq_cmN4j2yY.

Oh, that was a cool watch... 2k USD is not that big... I'll definitely have to try the writeup...
Reply
#4
Hi m3lon.

I have personally used smaller models on an i5 with 6 cores and 16 GB of RAM, and, well, I was "lucky" in that they worked at all, just very slowly.

So what I'm getting at is that you will probably need a high-end computer with 32-64 GB of RAM (or more) and a high-end processor. Alternatively, you have the option of buying a graphics card with 16 GB or more of VRAM and using that to load the model.

As for "prices", here's the best part: the computer will not come cheap. I mean, I'm not saying you need an IBM server, but even so, the price could approach $3,000-4,000 (or even surpass that if we add the graphics card).

And I insist: I suggest this option with the aim of ensuring that the model you mention runs "fluidly and quickly."
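The RAM/VRAM sizing above can be sanity-checked with some back-of-the-envelope arithmetic (a rough sketch; the 1.2 overhead factor for KV cache/runtime and the bytes-per-weight figures are my own assumptions, not measured numbers):

```python
# Rough memory estimate for loading a local LLM: each parameter costs
# bytes_per_weight depending on quantization, plus an assumed ~20%
# overhead for the KV cache and runtime (the 1.2 fudge factor below).

def est_memory_gb(params_billions: float,
                  bytes_per_weight: float,
                  overhead: float = 1.2) -> float:
    """Approximate RAM/VRAM (in GB) needed to hold the model."""
    return params_billions * bytes_per_weight * overhead

# Typical quantization levels: FP16 = 2.0 bytes/weight, Q8 = 1.0, Q4 = 0.5
for name, params in [("7B", 7), ("14B", 14), ("32B", 32), ("70B", 70)]:
    fp16 = est_memory_gb(params, 2.0)
    q4 = est_memory_gb(params, 0.5)
    print(f"{name}: ~{fp16:.0f} GB at FP16, ~{q4:.0f} GB at 4-bit")
```

Under these assumptions, a 14B model at 4-bit quantization fits comfortably in 16 GB of VRAM, while a 32B model already needs around 19 GB, which matches the "16 GB or more" advice above.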
Reply
#5
(Mar 03, 2025, 02:43 PM)titohippie Wrote: And I insist: I suggest this option with the aim of ensuring that the model you mention runs "fluidly and quickly."

Oh, that's the idea... I was initially thinking about giving a second life to some gaming graphics cards, but I'm not sure the time spent will pay off, considering the estimates you've made... I guess I'll have to change my mind. At least it gives me a reference point...

I'll keep you posted man, thanks for the comments!
Reply
#6
https://medium.com/@akshaykumar12527/set...b6ac9c92c2
Reply

