How to Use DeepSeek R1 for Free in Visual Studio Code

SAMI
February 1, 2025

If you’re looking for a powerful AI model for reasoning and code generation that is completely free and open source, DeepSeek R1 is an excellent option. It competes with models such as GPT-4 and Claude 3.5, offering impressive performance in logic, mathematics, and coding.

In this guide, you’ll learn how to run DeepSeek R1 directly within Visual Studio Code as a code assistant, similar to GitHub Copilot, using tools such as LM Studio, Ollama, and Jan—all without spending a penny.

Why Is DeepSeek R1 Gaining Attention?

1. Free and Open Source

Unlike many expensive AI models, DeepSeek R1 is completely free to use. It can even be accessed online via DeepSeek Chat.

2. Strong Performance

It competes well with other AI models in logic-driven tasks, mathematical problem-solving, and code generation.

3. Multiple Model Versions

DeepSeek R1 is available in different sizes, ranging from 1.5B to 70B parameters. This allows you to choose a version based on your computer’s hardware capabilities.

4. Easy VS Code Integration

DeepSeek R1 can be used inside Visual Studio Code via extensions like Cline or Roo Code.

5. No Usage Costs

Running the model locally eliminates token-based pricing or API fees. A dedicated GPU is recommended for better performance, but it can also run on a CPU, albeit more slowly.


Important Considerations Before Starting

  • Optimize Resource Usage: If your PC is not very powerful, use the smaller versions of DeepSeek R1 (1.5B or 7B parameters) or opt for quantized models.
  • Calculate Memory Needs: Use LLM Calc to estimate the RAM required for different models (a rough rule of thumb is sketched after this list).
  • Data Privacy: Running the model locally means your data stays on your machine and is never sent to external servers.
  • API Costs: While local execution is free, accessing the hosted DeepSeek API incurs minimal fees that are significantly lower than those of competitors.
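
As a rough, back-of-envelope alternative to LLM Calc (this heuristic is an assumption, not the tool’s exact method): a quantized model’s weights take roughly parameters × bytes per weight, plus some runtime overhead. The context window and the runtime itself add more on top, which is why the table below quotes higher figures than this estimate for the weights alone.

```bash
# Back-of-envelope RAM estimate for model weights (heuristic, not LLM Calc):
#   weights ≈ parameters (billions) × bytes per weight, plus ~20% overhead
#   4-bit quantization ≈ 0.5 bytes per weight; FP16 ≈ 2 bytes per weight
params_b=7        # model size in billions of parameters
bytes_per_w=0.5   # 0.5 for 4-bit quantization, 2 for FP16
echo "$params_b $bytes_per_w" | awk '{printf "~%.1f GB for weights alone\n", $1 * $2 * 1.2}'
```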

Choosing the Right Model Version

The right model version depends on your system specifications (a quick way to check them follows the table):

Model           | RAM Requirement | Recommended GPU                                | Use Case
1.5B Parameters | ~4 GB           | Integrated GPU (e.g., GTX 1050) or modern CPU  | Suitable for simple tasks and low-end PCs
7B Parameters   | ~8–10 GB        | Dedicated GPU (e.g., GTX 1660 or higher)       | Ideal for intermediate tasks and mid-range hardware
70B Parameters  | ~40 GB          | High-end GPU (e.g., RTX 3090 or better)        | Best for complex tasks and powerful computers
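
Before committing to a size, you can check what your machine actually has. On Linux, the standard utilities below report system RAM and NVIDIA GPU VRAM (on Windows or macOS, use Task Manager or About This Mac instead):

```bash
# Total and available system RAM (Linux)
free -h

# GPU model and VRAM, if an NVIDIA card is present
nvidia-smi --query-gpu=name,memory.total --format=csv
```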

Running DeepSeek R1 Locally

Method 1: Using LM Studio

  1. Download and Install LM Studio
    • Get the appropriate version from LM Studio and install it on your system.
  2. Download the DeepSeek R1 Model
    • Open LM Studio and navigate to the “Discover” tab.
    • Search for “DeepSeek R1” and choose a version that matches your system.
    • If you’re on an Apple Silicon MacBook, select the MLX option for hardware optimization. For Windows or Linux, choose the GGUF option.
  3. Load the Model
    • Go to the “Local Models” tab in LM Studio.
    • Select DeepSeek R1 and click “Load.”
  4. Start the Local Server
    • In the “Developer” tab, enable “Start Server.”
    • The model will run at http://localhost:1234 (see the quick check below).
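
LM Studio’s local server speaks an OpenAI-compatible API, so you can verify it from a terminal before wiring it into VS Code. The model identifier below is only illustrative; use whichever name the /v1/models endpoint reports for your loaded model:

```bash
# List the models the LM Studio server is currently serving
curl http://localhost:1234/v1/models

# Send a test chat completion (swap in the model name reported above)
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-r1-distill-qwen-7b",
    "messages": [{"role": "user", "content": "Write a one-line Python hello world."}]
  }'
```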

Method 2: Using Ollama

  1. Install Ollama
    • Download and install it from Ollama.
  2. Download the DeepSeek R1 Model
    • Open a terminal and run: ollama pull deepseek-r1
    • For smaller versions, visit the Ollama Library for specific commands (example tags are shown after these steps).
  3. Start the Server
    • Run the following command in the terminal: ollama serve (if you installed the Ollama desktop app, the server may already be running in the background).
    • This launches the model at http://localhost:11434 (a quick check follows these steps).
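
The Ollama Library lists DeepSeek R1 under several size tags; the tag below is an example, so confirm it on the library page before pulling. Once the server is running, a quick test against Ollama’s REST API looks like this:

```bash
# Pull a smaller variant by tag (verify available tags on the Ollama Library page)
ollama pull deepseek-r1:1.5b

# Quick test against the local REST API
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1",
  "prompt": "Explain recursion in one sentence.",
  "stream": false
}'
```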

By following these steps and pointing an extension such as Cline or Roo Code at one of these local endpoints, you can integrate DeepSeek R1 into your Visual Studio Code workflow as an efficient, cost-free AI coding assistant.

