This project sets up a chatbot locally using Ollama and the qwen3 model.
- Operating System: Windows, macOS, or Linux
- Ollama: installed locally (download link below)
- Terminal or command line access
- Python (Optional): For frontend/backend integration
- Model: `qwen3:0.6b`, a compact, open-source large language model from Alibaba, suited for local inference
Download and install Ollama from the official site:
👉 https://ollama.com/download
Once installed, verify it's working:
```bash
ollama --version
```

Use the following command to download the model:
```bash
ollama pull qwen3:0.6b
```

You can explore other versions with:
```bash
ollama run qwen3
```

To start a chatbot session with the qwen3 model:
```bash
ollama run qwen3:0.6b
```

You’ll enter a REPL interface where you can chat directly with the model.
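Inside the REPL, type `/?` to list the available commands and `/bye` to end the session.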
Ollama exposes a local HTTP API at http://localhost:11434.
You can make requests like this:
```bash
curl http://localhost:11434/api/generate -d '{
  "model": "qwen3:0.6b",
  "prompt": "Hello!",
  "stream": false
}'
```
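The same request can be made from Python with the requests package. This is a minimal sketch mirroring the curl example above; it assumes Ollama is already serving on the default port:

```python
import requests

# One-off generation against the local Ollama API (mirrors the curl example).
response = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwen3:0.6b", "prompt": "Hello!", "stream": False},
    timeout=120,
)
response.raise_for_status()
print(response.json()["response"])
# With "stream": true the API instead returns newline-delimited JSON chunks.
```

The chat.py referenced below is not shown in this section; here is a minimal sketch of what it could look like, assuming a Streamlit front end that keeps the conversation in session state and posts it to Ollama's /api/chat endpoint:

```python
# chat.py - hypothetical minimal Streamlit front end for the local Ollama API.
import requests
import streamlit as st

OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"  # default Ollama endpoint
MODEL = "qwen3:0.6b"

st.title("Local qwen3 Chatbot")

# Streamlit reruns the script on every interaction, so persist the history.
if "messages" not in st.session_state:
    st.session_state.messages = []

# Replay the conversation so far.
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

if prompt := st.chat_input("Say something"):
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)

    # /api/chat takes the full message history, enabling multi-turn context.
    resp = requests.post(
        OLLAMA_CHAT_URL,
        json={"model": MODEL, "messages": st.session_state.messages, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    reply = resp.json()["message"]["content"]

    st.session_state.messages.append({"role": "assistant", "content": reply})
    with st.chat_message("assistant"):
        st.markdown(reply)
```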
Run with:

```bash
streamlit run chat.py
```

Troubleshooting:

- Port Conflict: Ensure no other service is using port 11434.
- Model Not Found: If `qwen3:0.6b` isn't available, run `ollama list` to check the available models.
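If API requests fail, confirm the Ollama server itself is reachable; a running instance answers on its root URL:

```bash
curl http://localhost:11434
# A healthy server responds with: Ollama is running
```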
This project is intended for personal/local use. Check the qwen3 and Ollama licenses before distributing it or using it commercially.
