This guide walks you through building and installing a complete offline-capable AI workstation for under $1500 CAD. Everything runs locally — no cloud access or API keys required.
With this setup, you'll be able to run:

- Ollama (local LLMs such as Llama 3, DeepSeek Coder, and Qwen)
- Open WebUI (a browser-based chat interface for your models)
- n8n (self-hosted workflow automation)
- whisper.cpp (speech-to-text)
- Piper (text-to-speech)
| Component | Suggested Model | Est. Price (CAD) | Link |
|---|---|---|---|
| CPU | AMD Ryzen 7 7700X | ~CA$409 | Buy |
| GPU | NVIDIA RTX 4070 (12GB) | ~CA$959 | Buy |
| RAM | 32GB DDR5 (5600 MHz) | ~CA$143 | Buy |
| Storage | 1TB NVMe Gen4 SSD | ~CA$110 | Pick brand |
| Motherboard | B650 ATX | ~CA$200 | Pick brand |
| PSU | 650W+ Gold-rated | ~CA$120 | e.g., RM650x |
| Case + Cooler | Mid-tower + tower cooler | ~CA$90 | Pick style |

Total: ~CA$2,030 at these prices — a used GPU and a budget motherboard can bring the build down toward the $1500 mark.
```bash
sudo apt update && sudo apt upgrade -y
```
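The container-based steps below assume Docker is installed, and GPU inference needs the NVIDIA driver. On Ubuntu, a minimal sketch:

```bash
# Ubuntu's driver helper installs the recommended NVIDIA driver
sudo ubuntu-drivers autoinstall

# Docker from the Ubuntu repositories; add yourself to the docker group
sudo apt install -y docker.io
sudo usermod -aG docker "$USER"   # takes effect after you log out and back in
```

Reboot so the driver loads, then check that `nvidia-smi` reports your RTX 4070.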
```bash
curl -fsSL https://ollama.com/install.sh | sh
ollama run llama3
ollama run deepseek-coder:8b
ollama run qwen:7b
```
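Beyond the interactive prompt, Ollama also exposes a local REST API on port 11434, which is what chat front-ends like Open WebUI talk to. A quick sketch of calling it from the shell — the reply streams back as newline-delimited JSON objects, and the `jq` filter simply concatenates their `response` fragments:

```bash
# Query the local Ollama server and stitch the streamed reply together
curl -s http://localhost:11434/api/generate \
  -d '{"model": "llama3", "prompt": "Why is the sky blue?"}' \
  | jq -j '.response'
echo
```

Install `jq` with `sudo apt install -y jq` if it's missing.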
```bash
git clone https://github.com/open-webui/open-webui.git
cd open-webui
./scripts/docker-run.sh
```
Open http://localhost:3000 in your browser.
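If the helper script doesn't fit your setup, the project also publishes a prebuilt image you can run directly; a sketch based on the upstream README at the time of writing (image name, volume path, and host-gateway flag are theirs):

```bash
# Run Open WebUI from the published image; the named volume keeps your
# chats, and the host-gateway entry lets the container reach Ollama on the host
docker run -d \
  -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
```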
```bash
docker run -it --rm \
  -p 5678:5678 \
  -v ~/.n8n:/home/node/.n8n \
  n8nio/n8n
```
Open http://localhost:5678 in your browser.
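The `docker run` above stops when you close the terminal. To keep n8n running across reboots, you can swap it for a small Compose file — a sketch (the timezone value is an assumption; pick your own):

```yaml
# docker-compose.yml — minimal n8n service with persistent data
services:
  n8n:
    image: n8nio/n8n
    restart: unless-stopped
    ports:
      - "5678:5678"
    environment:
      - GENERIC_TIMEZONE=America/Toronto   # assumption: set your timezone
    volumes:
      - ~/.n8n:/home/node/.n8n
```

Start it with `docker compose up -d` from the file's directory.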
```bash
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
make
./models/download-ggml-model.sh base
./main -m models/ggml-base.bin -f samples/jfk.wav
```
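whisper.cpp expects 16 kHz mono WAV input, so your own recordings usually need converting first. A sketch using ffmpeg (`sudo apt install -y ffmpeg` if missing), with `meeting.mp3` standing in for your own file:

```bash
# Convert any audio file to the 16 kHz mono WAV whisper.cpp expects
ffmpeg -i meeting.mp3 -ar 16000 -ac 1 meeting.wav

# Transcribe it; -otxt also writes the transcript to meeting.wav.txt
./main -m models/ggml-base.bin -f meeting.wav -otxt
```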
```bash
sudo apt install -y python3-pip
pip3 install piper-tts
piper --list-voices
piper --model en_US-lessac-medium.onnx --text "Hello there!"
```
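For day-to-day use, Piper can read text on stdin and write a WAV file, which you can then play with ALSA's `aplay`; a sketch using the same voice as above:

```bash
# Synthesize speech to a WAV file, then play it through the default device
echo 'Welcome to your offline AI workstation.' \
  | piper --model en_US-lessac-medium.onnx --output_file welcome.wav
aplay welcome.wav
```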
You now have a complete local AI workstation, capable of handling offline LLMs, speech-to-text, automation, and voice synthesis — all on your own machine.