
How to run DeepSeek-R1 locally

This is a quick step-by-step to run DeepSeek-R1 yourself.


What is it?

DeepSeek-R1 is an open-source AI language model from the Chinese company DeepSeek. Its chatbot app, first released in January, overtook ChatGPT in the US app stores on 27th Jan. This caused a big sell-off of US tech stocks, partly because it's an open-source rival, partly because it's from China, and partly because it seems to have been trained on a fraction of the resources Big Tech has thrown at its own AI development - e.g. a reported $6 million in training costs vs the roughly $60bn Meta plans to spend on AI this year.


Open-Source but...

Why would you want to run this model locally on your computer? Well, the model is open source, but the official DeepSeek app sends its requests to the cloud, including any data you enter. Concerns have been raised about what happens to that data, and also about censorship of topics that are not allowed to be discussed in China. (Further reading).


Using it yourself locally

There are a few ways to run this yourself. The DeepSeek-R1 model has been released in multiple versions, each with a smaller or larger number of parameters. Only the larger versions are comparable with OpenAI's o1 model - you want the 32b or 70b versions. Be aware: Large Language Models are, well, large! Expect the 32b and 70b versions to use 20 to 42GB of disk space.
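If you're not sure whether you have the room, here's a quick way to check free disk space on a Mac or Linux before you start the download (on Windows, right-click the drive in File Explorer and check Properties instead):

```shell
# Print free disk space for the drive holding your home folder,
# in human-readable units. The 32b model needs roughly 20GB free;
# the 70b needs over 40GB.
df -h "$HOME"
```

Look at the "Avail" column in the output for the space you actually have left.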


Also, a word of warning. Running LLMs requires quite a lot of computing power - unless you have a powerful graphics card, expect to wait several minutes for any response!


Step 1

Go to https://ollama.com/download. Ollama is a great tool and we've been using it for a while to test other open source models.


Download Ollama for Windows, Mac or Linux.


  • On Windows, press Start, type cmd in the search box and then click on Command Prompt.

  • On a Mac, press Command + Space, type terminal in the search and press Return.

  • On Linux, open a terminal.
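Once your terminal is open, it's worth confirming the install worked before moving on. This little check (a sketch - the exact version string printed will vary) either shows Ollama's version or tells you the command isn't available yet:

```shell
# Sanity check after installing: prints Ollama's version if the command
# is on your PATH, or a reminder if it isn't installed yet.
if command -v ollama >/dev/null 2>&1; then
  ollama --version
else
  echo "ollama not found - install it from https://ollama.com/download"
fi
```

If you see "ollama not found" right after installing, try closing and reopening your terminal so the new command is picked up.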


Step 2

Install a language model. As long as Ollama is running, you can type this command into your Command Prompt or Terminal window. It will download the language model (which may take a while) and then immediately run it:

ollama run deepseek-r1:32b
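The 32b tag isn't your only option. The deepseek-r1 page in the Ollama model library lists several sizes - the tags and approximate download sizes below are taken from that page, so do check https://ollama.com/library/deepseek-r1 for the current list:

```shell
# Same command, different tag, for smaller or larger variants:
ollama run deepseek-r1:7b     # ~4.7GB download; usable on modest hardware
ollama run deepseek-r1:14b    # ~9GB; a middle ground
ollama run deepseek-r1:70b    # ~43GB; needs a high-end GPU to be responsive

# Housekeeping once you've experimented:
ollama list                   # show downloaded models and their sizes
ollama rm deepseek-r1:7b      # delete a model to reclaim disk space
```

Inside the chat, type /bye to exit back to your terminal.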

Step 3

Further help and instructions can be found in the README at the bottom of this page: https://github.com/ollama/ollama


You can also try out some of the linked projects, which include graphical interfaces that run on top of Ollama.
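Those graphical front-ends talk to Ollama through a local HTTP API on port 11434, and you can call it directly yourself. A minimal sketch using curl - this assumes Ollama is running and you've already downloaded the 32b model:

```shell
# Send one prompt to the local model via Ollama's REST API.
# "stream": false returns a single JSON object rather than a token stream.
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:32b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```

The reply comes back as JSON with the model's answer in the "response" field - handy if you want to script the model rather than chat with it.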


And for a chat about all things tech, you can always reach out to WAM.






WAM

A Web3 agency in London and online.

71-75 Shelton Street, Covent Garden, WC2H 9JQ

©2024 WAM WORKS Ltd
