Search your files by meaning, not just filename. Drop an image into a folder, search "cherry blossoms" later, and find it instantly.
# 1. Setup
git clone <repo>
cd mnemosyne
uv venv
source .venv/bin/activate
uv sync
# 2. Configure (edit config.yaml)
# - Set provider to "ollama" or "gemini"
# - For Gemini: set GEMINI_API_KEY env var
# 3. Start watching
python main.py
# 4. In another terminal, add images
cp ~/Pictures/*.jpg input_images/
# 5. Search
python search.py "beautiful sunset"
python search.py --tag nature

File Added → Watcher → AI Analysis → Embedding → Vector DB → Semantic Search
When you drop a file into input_images/:
- Watcher detects the new file
- AI Provider (Ollama/Gemini) analyzes and describes it
- Embedder converts description to vector
- LanceDB stores it for fast semantic search
- Search finds files by meaning, not filename (a minimal sketch of this flow follows the list)
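To make that concrete, here is a minimal sketch of how such a pipeline could be wired together. It is an illustration only: it assumes the watchdog, sentence-transformers, and lancedb packages, and the function names, table name, and embedding model below are placeholders rather than this repository's actual code.

```python
import lancedb
from sentence_transformers import SentenceTransformer
from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
db = lancedb.connect("data")                        # storage_dir from config.yaml


def describe_image(path: str) -> str:
    """Placeholder for the AI provider call (Ollama or Gemini); see the sketches further down."""
    raise NotImplementedError


class ImageHandler(FileSystemEventHandler):
    def on_created(self, event):
        # Real code would also wait for the file to finish writing and skip non-images.
        if event.is_directory:
            return
        description = describe_image(event.src_path)  # AI analysis
        row = {
            "path": event.src_path,
            "description": description,
            "vector": embedder.encode(description).tolist(),  # description -> vector
        }
        if "images" in db.table_names():
            db.open_table("images").add([row])      # append to the existing table
        else:
            db.create_table("images", data=[row])   # first file creates the table


def search(query: str, limit: int = 10) -> list[dict]:
    """Embed the query text and return the nearest stored files."""
    vector = embedder.encode(query).tolist()
    return db.open_table("images").search(vector).limit(limit).to_list()


if __name__ == "__main__":
    observer = Observer()
    observer.schedule(ImageHandler(), "input_images", recursive=False)
    observer.start()  # watch input_images/ until interrupted
    observer.join()
```

The real project may split this across main.py and search.py, but the data flow is the same: describe, embed, store, query.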
Edit config.yaml:
provider:
  type: "gemini"            # or "ollama"
  gemini:
    api_key: ""             # or set GEMINI_API_KEY env var
    model: "gemini-2.5-flash"
  ollama:
    base_url: "http://localhost:11434"
    model: "llama3.2-vision"
paths:
  input_dir: "input_images"
  storage_dir: "data"
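As a rough illustration, a config file shaped like the one above could be read with PyYAML along these lines; `load_config` and the environment-variable fallback are assumptions for the sketch, not the project's actual code.

```python
import os
import yaml  # PyYAML


def load_config(path: str = "config.yaml") -> dict:
    """Read the YAML config and fall back to GEMINI_API_KEY from the environment."""
    with open(path) as f:
        config = yaml.safe_load(f)
    provider = config.setdefault("provider", {})
    gemini = provider.setdefault("gemini", {})
    if not gemini.get("api_key"):
        gemini["api_key"] = os.environ.get("GEMINI_API_KEY", "")
    return config
```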
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
# Pull a vision model
ollama pull llama3.2-vision
# Start Ollama (keep running)
ollama serve
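With the server running, a vision model can be asked to describe an image roughly like this. This is a sketch using the ollama Python package, which talks to the default base_url shown in config.yaml; the helper name and prompt are illustrative.

```python
import ollama  # pip install ollama


def describe_with_ollama(path: str, model: str = "llama3.2-vision") -> str:
    """Ask a local Ollama vision model to describe one image file."""
    response = ollama.chat(
        model=model,
        messages=[{
            "role": "user",
            "content": "Describe this image in a few sentences, then list short tags.",
            "images": [path],  # local file path; the client sends the image data
        }],
    )
    return response["message"]["content"]
```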
# Set API key
export GEMINI_API_KEY="your-key-here"
# Or add to config.yaml
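For the Gemini provider, the equivalent call could look roughly like this, assuming the google-genai SDK (pip install google-genai); the helper name and prompt are illustrative.

```python
from google import genai   # pip install google-genai
from google.genai import types


def describe_with_gemini(path: str, model: str = "gemini-2.5-flash") -> str:
    """Ask Gemini to describe one image file."""
    client = genai.Client()  # picks up GEMINI_API_KEY from the environment
    with open(path, "rb") as f:
        # Real code would detect the MIME type instead of assuming JPEG.
        image = types.Part.from_bytes(data=f.read(), mime_type="image/jpeg")
    response = client.models.generate_content(
        model=model,
        contents=[image, "Describe this image in a few sentences, then list short tags."],
    )
    return response.text
```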
python main.py

Watches input_images/ and automatically processes new files.

# Semantic search
python search.py "sunset over water"
python search.py "person smiling"
# Tag search
python search.py --tag nature
python search.py --tag portrait
# Limit results
python search.py "mountains" --limit 5

License: MIT