If you’re looking for a powerful AI model for reasoning and code generation that is completely free and open source, DeepSeek R1 is an excellent option. It competes with models like GPT-4, Claude 3.5, and others, offering impressive performance in logic, mathematics, and coding.
In this guide, you’ll learn how to run DeepSeek R1 directly within Visual Studio Code as a code assistant, similar to GitHub Copilot, using tools such as LM Studio, Ollama, and Jan—all without spending a penny.
Unlike many expensive AI models, DeepSeek R1 is completely free to use. It can even be accessed online via DeepSeek Chat.
In logic-driven tasks, mathematical problem-solving, and code generation, it holds its own against proprietary alternatives.
DeepSeek R1 is available in different sizes, ranging from 1.5B to 70B parameters. This allows you to choose a version based on your computer’s hardware capabilities.
DeepSeek R1 can be used inside Visual Studio Code via extensions like Cline or Roo Code.
Running the model locally eliminates token-based pricing or API fees. A dedicated GPU is recommended for better performance, but it can also run on a CPU, albeit more slowly.
The right model version depends on your system specifications:
| Model | RAM Requirement | Recommended GPU | Use Case |
|---|---|---|---|
| 1.5B parameters | ~4 GB | Entry-level GPU (e.g., GTX 1050) or a modern CPU | Simple tasks on low-end PCs |
| 7B parameters | ~8–10 GB | Dedicated GPU (e.g., GTX 1660 or higher) | Intermediate tasks on mid-range hardware |
| 70B parameters | ~40 GB | High-end GPU (e.g., RTX 3090 or better) | Complex tasks on powerful machines |
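As a rough way to check which row of the table your machine fits, you can query total physical RAM with Python's standard library. This is a minimal sketch for POSIX systems (Linux/macOS); the `sysconf` names used here are not available on Windows, and the thresholds simply mirror the table above:

```python
import os

def total_ram_gb() -> float:
    """Return total physical RAM in GiB (POSIX systems only)."""
    page_size = os.sysconf("SC_PAGE_SIZE")   # bytes per memory page
    num_pages = os.sysconf("SC_PHYS_PAGES")  # total physical pages
    return page_size * num_pages / 1024**3

if __name__ == "__main__":
    ram = total_ram_gb()
    # Rough mapping to the sizing table above.
    if ram >= 40:
        print(f"{ram:.1f} GiB: the 70B model may fit")
    elif ram >= 10:
        print(f"{ram:.1f} GiB: the 7B model is a safe choice")
    else:
        print(f"{ram:.1f} GiB: stick with the 1.5B model")
```

Keep in mind that RAM is only half the story: with GPU offloading, available VRAM matters just as much.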
If you use LM Studio, download a DeepSeek R1 model and start its local server, which listens at `http://localhost:1234` by default and exposes an OpenAI-compatible API. Point Cline or Roo Code at this address as an OpenAI-compatible endpoint.
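The URL above is LM Studio's default local-server address. As a quick sanity check that the server is reachable outside of VS Code, you can call its OpenAI-compatible chat endpoint from Python's standard library. This is a sketch under assumptions: `"deepseek-r1"` is a placeholder for whatever model identifier you actually loaded in LM Studio:

```python
import json
import urllib.request

# LM Studio's local server default address; adjust if you changed the port.
BASE_URL = "http://localhost:1234"

def build_request(prompt: str, model: str = "deepseek-r1") -> urllib.request.Request:
    """Build a chat-completion request for the local OpenAI-compatible server."""
    payload = {
        "model": model,  # placeholder: use the identifier shown in LM Studio
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def ask(prompt: str) -> str:
    """Send the prompt to the local server (requires LM Studio to be running)."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

# Example usage, with the server running:
#   print(ask("Write a Python function that reverses a string"))
```

If this call succeeds from the terminal but the VS Code extension fails, the problem is in the extension's endpoint configuration rather than the server itself.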
If you prefer Ollama, pull the model and start the server:

```shell
ollama pull deepseek-r1
ollama serve
```

The Ollama server listens at `http://localhost:11434` by default; use that address when configuring your editor extension.
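Once `ollama serve` is running, you can verify the setup independently of VS Code by hitting Ollama's REST API with Python's standard library. A minimal, non-streaming sketch (the prompt is just an illustration):

```python
import json
import urllib.request

# Ollama's REST API listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(prompt: str, model: str = "deepseek-r1") -> urllib.request.Request:
    """Build a non-streaming /api/generate request for the local Ollama server."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

def generate(prompt: str) -> str:
    """Send the prompt to Ollama (requires `ollama serve` to be running)."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]

# Example usage, with the server running:
#   print(generate("Explain recursion in one sentence"))
```

The same base address is what you enter when selecting Ollama as the provider in Cline or Roo Code.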
By following these steps, you can integrate DeepSeek R1 into your workflow within Visual Studio Code for an efficient, cost-free AI coding assistant.