How to Run the DeepSeek-R1 AI Model on a Mac Locally

Learn how to run DeepSeek-R1 on a Mac Mini M4 using Ollama for efficient AI model performance. Step-by-step guide for installation, setup, and interaction.

Introduction

I recently tested DeepSeek on my new Mac Mini M4, and to my surprise, it outperformed my Windows machine with an NVIDIA GeForce RTX 2080 Super. The efficiency of Apple Silicon and its Neural Engine makes it an excellent choice for running AI models locally. If you're looking to set up DeepSeek on your Mac, here's a quick guide to get you started. Check out the specs on Amazon below.

Hardware Used

Apple 2024 Mac Mini M4 - Amazon

Prerequisites

Before installing DeepSeek, make sure you have:

  1. macOS Sonoma or later installed
  2. Apple Silicon (M4 recommended for best performance)
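
To double-check both, you can run two quick commands in Terminal (Step 1 below shows how to open it):

sw_vers -productVersion

uname -m

The first prints your macOS version (14.0 or later for Sonoma), and the second prints arm64 on Apple Silicon.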

Step 1: Install Ollama

  1. Download Ollama:
    • Visit the official Ollama website (ollama.com) and download the macOS version.
  2. Install the Application:
    • Once the download is complete, locate the Ollama application file in your Downloads folder.
    • Open it, step through the installer, and click Install. You'll be prompted for your admin password.
  3. Open Terminal:
    • Press Command + Space to open Spotlight.
    • Type Terminal and press Enter.
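
To confirm the installation, check the version from Terminal:

ollama --version

If it prints a version number, Ollama is installed and ready.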

Step 2: Run DeepSeek-R1

Run the Model:

ollama run deepseek-r1:1.5b

For machines with 16GB or more of unified memory, you can run the larger 7B model:

ollama run deepseek-r1:7b
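
If you'd rather download the weights ahead of time, or see which models are already on disk, Ollama's pull and list commands cover both:

ollama pull deepseek-r1:7b

ollama list

ollama list also shows each model's size on disk, which helps when deciding which variant fits your memory.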

Interact with the Model:

  • Wait for the model weights to finish downloading the first time you run the command.
  • Once the model is running, you can start asking it questions.
  • Use the /? command to view available options and features.
  • If you encounter any issues, use the /clear command to reset the session context.
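
Beyond the interactive prompt, Ollama also serves a local HTTP API on port 11434 while it's running, which is handy for scripting. A minimal curl sketch (stream is disabled so the full answer comes back as one JSON response):

curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Count the letter r in the word strawberry.",
  "stream": false
}'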

Example Query:

How many r's are in the word strawberry?
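
You can also pass the prompt directly on the command line instead of typing it into the interactive session:

ollama run deepseek-r1:1.5b "How many r's are in the word strawberry?"

Ollama prints the model's answer (for R1, including its visible reasoning) and returns you to the shell.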

Hardware for Ollama

If you're looking for budget-friendly hardware to run Ollama efficiently, check out my latest post for the best GPU recommendations!

Budget-Friendly Local AI Hardware for Running Ollama
Looking to run AI models locally with Ollama without breaking the bank? Here's a guide to the best budget GPUs for LLMs, from NVIDIA's RTX 3060 to AMD's RX 6700 XT.

Conclusion

If you’re running AI models and want a power-efficient yet performant setup, the Mac Mini M4 is an excellent choice. With DeepSeek running on Ollama, it’s now easier than ever to run LLMs locally without needing a high-end NVIDIA GPU.

Have you tried running DeepSeek on your Mac with Ollama? Let me know your experience in the comments!

If you want a sleek web-based interface for managing your AI models, you can integrate Open WebUI, a feature-rich self-hosted AI platform.
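
As a rough sketch, assuming you have Docker installed, Open WebUI's quick-start container can sit in front of your local Ollama instance (check the Open WebUI docs for the current image and flags):

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data --name open-webui --restart always \
  ghcr.io/open-webui/open-webui:main

Once it's up, open http://localhost:3000 in your browser; it should detect Ollama on port 11434 automatically.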