RoboDodd

Ollama AI

A collection of 4 posts
How to Run the DeepSeek-R1 AI Model on a Mac Locally
DeepSeek

Learn how to run DeepSeek-R1 locally on a Mac Mini M4 using Ollama, with step-by-step instructions for installation, setup, and interacting with the model.
14 Feb 2025 2 min read
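For the impatient, the guide's basic flow fits in a couple of terminal commands. A minimal sketch, assuming Homebrew is installed; the 8b tag is only one of several DeepSeek-R1 sizes Ollama publishes, so pick the one that fits your RAM:

    brew install ollama          # install the Ollama runtime
    ollama serve &               # start the local server (the macOS desktop app does this automatically)
    ollama run deepseek-r1:8b    # first run downloads the model, then opens an interactive chat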
Budget-Friendly Local AI Hardware for Running Ollama
Ollama AI

Looking to run AI models locally with Ollama without breaking the bank? Here’s a guide to the best budget GPUs for LLMs, from NVIDIA’s RTX 3060 to AMD’s RX 6700 XT.
29 Jan 2025 3 min read
How to Run the DeepSeek-R1 AI Model on a Windows PC Locally
DeepSeek, Featured

Learn how to run the DeepSeek-R1 AI model on a Windows machine with Ollama, and enhance the experience with Open WebUI, a sleek self-hosted interface for managing advanced AI models.
28 Jan 2025 3 min read
How to Run the DeepSeek-R1 AI Model on a Raspberry Pi Locally
Raspberry Pi

Run the DeepSeek-R1 AI model locally on your Raspberry Pi! Learn how to set up Ollama, install the model, and explore AI on affordable hardware. Fun and educational for enthusiasts!
28 Jan 2025 4 min read