An Open AI Assistant with Flame
A powerful, persistent AI assistant that works with multiple LLM providers, features universal function calling, real-time streaming responses, and an extensible skills system.
All providers use the OpenAI Agents SDK for unified tool calling and streaming:
- OpenAI - GPT-4o, GPT-4 Turbo, o1, o3 series
- Anthropic - Claude 3.5 Sonnet, Claude 3 Opus/Sonnet/Haiku
- Kimi/Moonshot AI - kimi-k2.5 (latest), moonshot-v1 series
- Custom APIs - Any OpenAI-compatible API endpoint (Ollama, vLLM, etc.)
Built-in functions that work with all providers:
- Filesystem Functions - `read_file`, `read_dir`, `write_file`, `append_file`, `delete_file`, `rename_fs`, `delete_dirs`, `create_dirs`
- HTTP Functions - `http_get`, `http_post`, `http_put`, `http_patch`, `http_delete`
- Shell Functions - `get_env`, `run_cmd` for command execution
- Token Functions - `list_tokens` for authentication token management
- Skills Functions - `load_skill` for dynamic skill loading
- Team Functions - `intro_member`, `assign_tasks` for team collaboration
- Secure, configurable, and extensible
- Automatic conversation saving
- Resume sessions across restarts
- Multiple concurrent sessions
- Export to JSON
- Session-specific provider configuration
- Extensible skills following Agent Skills specification
- 10 built-in skills: weather, git, github, gitea, makefile, backend_engineering, communication, project_management, system_design, system_test
- Install from Git, local paths, or registries
- Create custom skills for domain-specific workflows
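A custom skill, following the Agent Skills specification, is a directory containing a `SKILL.md` whose YAML frontmatter declares the skill's name and description. A minimal hypothetical example (the `deploy-checklist` name and instructions are invented for illustration, not a built-in skill):

```markdown
---
name: deploy-checklist
description: Walks the agent through a pre-deployment checklist.
---

# Deploy Checklist

When the user asks to deploy, confirm that:
1. Tests pass (`pytest`)
2. The changelog is updated
3. The target environment is correct
```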
- Role-based agent collaboration (TPM, Dev, Architect, QA)
- Team member definitions with personalized configurations
- Task assignment and member introduction functions
- CLI Bridge - Terminal UI for human interaction (default)
- GitHub Bridge - Automated GitHub Issues/PRs integration
- Gitea Bridge - Gitea Issues/PRs integration
- Feishu Bridge - Feishu/Lark chat integration
- Daemon mode for background operation
- Streaming responses with real-time display
- Syntax highlighting
- Auto-scrolling chat log
- Keyboard shortcuts
- Provider/model display
- Powered by Textual
- Environment variables
- TOML configuration files
- System keyring (secure)
- Interactive setup wizard with arrow key navigation
```bash
# Clone repository
git clone https://github.com/xflops/firewood
cd firewood

# Install dependencies (using uv - recommended)
uv sync

# Or using pip
pip install -e ".[all]"
```

```bash
# Run interactive setup wizard
firewood --setup
```

The setup wizard provides a beautiful, immersive full-screen experience:
- Takes over the entire terminal for a focused setup experience
- Step-by-step flow - One step at a time, no overwhelm
- Auto-advance on Enter - Select option → press Enter → next step (automatic!)
- Smart focus - Cursor always ready where you need it (no extra Tab presses)
- Smart defaults - Current config pre-loaded and highlighted
- Previous/Next navigation - Go back to change settings anytime
- Progress indicator - Always know where you are (Step X of 6)
- Real-time validation - Catches errors before moving forward
- Full keyboard navigation with arrow keys and Enter
- Visual feedback as you configure each setting
- Modern UI using Textual framework
- 50% fewer keystrokes than traditional setup wizards
Configuration steps include:
- Provider Selection - OpenAI, Anthropic, Kimi, or Custom (OpenAI-compatible)
- API Key - Securely enter and mask your key
- Model Selection - Choose from available models
- Storage Options - Keyring, config file, or environment variable
- Function Configuration - Enable/disable filesystem and HTTP access
- Filesystem Mode - Read-only or read/write access
This will:
- Prompt for your LLM provider (OpenAI, Anthropic, Kimi/Moonshot, Custom)
- Request your API key
- Select your preferred model
- Configure function calling (filesystem, HTTP, shell)
- Offer to store settings securely
- Validate the configuration
```bash
# Start with default settings
firewood

# Or specify provider and model
firewood --provider openai
firewood --provider kimi
firewood --provider anthropic
firewood --provider custom
```

```bash
# Start new session
firewood --new

# Resume specific session
firewood --session <session-id>

# List all sessions
firewood --list-sessions

# Show session details and messages
firewood --show-session <session-id>

# Show session with all Flame task details
firewood --show-session <session-id> --full

# Limit lines per message in show-session output
firewood --show-session <session-id> --max-lines 50

# Export session with decoded Flame tasks
firewood --show-session <session-id> --export-tasks tasks.json

# Show configuration
firewood --show-config

# Export session
firewood --export --session <session-id> --output backup.json

# Clean old sessions (30+ days)
firewood --clean-sessions

# Show recent logs
firewood --show-logs

# Run with verbose logging
firewood -v

# List installed skills
firewood --list-skills

# Show skill details
firewood --show-skill <skill-id>
```

```bash
# List discovered roles
firewood --list-roles

# List team members
firewood --list-members

# Show role specification
firewood --show-role <role-name>

# Show member introduction
firewood --show-member <member-name>
```

```bash
# Run GitHub bridge
firewood --bridge github

# Run Gitea bridge
firewood --bridge gitea

# Run Feishu bridge
firewood --bridge feishu

# Run bridge as daemon
firewood --bridge github --daemon

# Stop bridge daemon
firewood --stop-bridge

# Show bridge status
firewood --show-bridge-status

# List configured bridges
firewood --list-bridges

# Debug specific issues
firewood --bridge github --debug-issues 123,456

# Clean up Flame application used by bridges
firewood --clean-bridge-app
```

- Enter - Send message
- Ctrl+C - Quit and save
- Ctrl+N - New session
- Ctrl+S - Save session
Firewood includes built-in functions that work with all LLM providers:
You: Read my config.toml file
Firewood: [reads ~/.firewood/config.toml and shows contents]
You: Create a shopping list in shopping.txt with 5 items
Firewood: [writes file using write_file function]
You: Get the current weather from https://api.weather.com/current
Firewood: [makes HTTP GET request and returns data]
You: Post {"name": "test"} to https://httpbin.org/post
Firewood: [makes HTTP POST request with JSON payload]
You: What's my OPENAI_API_KEY environment variable?
Firewood: [retrieves environment variable securely]
You: Run git status in the current directory
Firewood: [executes git status via run_cmd function]
You: What's the weather in San Francisco?
Firewood: [uses weather skill to fetch and display weather data]
`~/.firewood/config.toml`:

```toml
[provider]
provider = "openai"
model = "gpt-4"

[functions]
enabled = true

[functions.filesystem]
base_path = "~/"
read_only = false
allowed_paths = ["~/"]

[functions.http]
timeout = 30
allowed_domains = [
    "*.example.com",
    "api.trusted-service.com"
]

[skills]
enabled = true
directory = "~/.firewood/skills"
global_skills = []
```

```bash
# Provider-specific API keys
export OPENAI_API_KEY='sk-...'
export ANTHROPIC_API_KEY='sk-ant-...'
export KIMI_API_KEY='sk-...'
export CUSTOM_API_KEY='your-key'

# Skills directory
export FIREWOOD_SKILLS_DIR='~/.firewood/skills'

# Logging
export FIREWOOD_LOG_LEVEL='DEBUG'
```

| Function | Description | Parameters | Mode |
|---|---|---|---|
| `read_file` | Read file contents | `path`, `encoding` | Always |
| `read_dir` | List directory contents | `path` | Always |
| `write_file` | Write/create file | `path`, `content`, `encoding`, `create_dirs` | Write |
| `append_file` | Append to file | `path`, `content`, `encoding` | Write |
| `delete_file` | Delete a file | `path` | Write |
| `rename_fs` | Rename/move file or directory | `src`, `dst` | Write |
| `delete_dirs` | Delete directory recursively | `path` | Write |
| `create_dirs` | Create directories | `path` | Write |
Security Features:
- Path validation (no directory traversal)
- Configurable base path
- Read-only mode option
- Allowed paths whitelist
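One common way to implement such path validation is to resolve every user-supplied path and check it against the whitelist of allowed roots. This is a sketch of the general technique, not necessarily Firewood's exact logic; the `validate_path` name is illustrative, while `base_path` and `allowed` mirror the `[functions.filesystem]` config keys:

```python
from pathlib import Path

def validate_path(path: str, base_path: str, allowed: list[str]) -> Path:
    """Resolve a user-supplied path and reject directory traversal."""
    # Resolving normalizes "..", symlinks, and relative segments,
    # so "../../etc/passwd" cannot escape the allowed roots.
    resolved = (Path(base_path).expanduser() / path).resolve()
    for prefix in allowed:
        if resolved.is_relative_to(Path(prefix).expanduser().resolve()):
            return resolved
    raise PermissionError(f"path outside allowed roots: {resolved}")
```

With `allowed = ["~/"]`, a request for `notes.txt` resolves under the home directory and passes, while `../../etc/passwd` resolves outside the whitelist and raises `PermissionError`. (`Path.is_relative_to` requires Python 3.9+.)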
| Function | Description | Parameters |
|---|---|---|
| `http_get` | HTTP GET request | `url`, `headers`, `timeout`, `token_name` |
| `http_post` | HTTP POST request | `url`, `body`, `json_body`, `headers`, `timeout`, `token_name` |
| `http_put` | HTTP PUT request | `url`, `body`, `json_body`, `headers`, `timeout`, `token_name` |
| `http_patch` | HTTP PATCH request | `url`, `body`, `json_body`, `headers`, `timeout`, `token_name` |
| `http_delete` | HTTP DELETE request | `url`, `headers`, `timeout`, `token_name` |
Security Features:
- URL validation (http/https only)
- Domain whitelisting (optional)
- Configurable timeouts
- Token-based authentication support
- Safe error handling
| Function | Description | Parameters |
|---|---|---|
| `get_env` | Get environment variable | `name`, `default` |
| `run_cmd` | Run shell command | `command`, `cwd`, `timeout`, `env`, `token_name` |
Security Features:
- Workspace-constrained execution
- Configurable timeouts
- Token-based git authentication
- Safe environment variable retrieval
Token Functions:

| Function | Description | Parameters |
|---|---|---|
| `list_tokens` | List available auth tokens | None |

Skills Functions:

| Function | Description | Parameters |
|---|---|---|
| `load_skill` | Load skill file | `name`, `path` |

Team Functions:

| Function | Description | Parameters |
|---|---|---|
| `intro_member` | Get team member introduction | `name` |
| `assign_tasks` | Assign tasks to team members (parallel) | `assignments` (array of `{name, task}`) |
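The parallel semantics of `assign_tasks` can be pictured as a fan-out over members. This is a hypothetical sketch of the shape of that call, not the real implementation (which dispatches work to configured agents via the Flame runtime); `run_member` is a stand-in:

```python
import asyncio

async def run_member(name: str, task: str) -> dict:
    # Stand-in for handing the task to a configured team-member agent;
    # real work would await that member's LLM run.
    await asyncio.sleep(0)
    return {"name": name, "task": task, "status": "done"}

async def assign_tasks(assignments: list[dict]) -> list[dict]:
    """Run every {name, task} assignment concurrently and gather results."""
    return await asyncio.gather(
        *(run_member(a["name"], a["task"]) for a in assignments)
    )
```

Usage: `asyncio.run(assign_tasks([{"name": "dev", "task": "fix the bug"}, {"name": "qa", "task": "write a regression test"}]))` returns one result per member, in the order the assignments were given.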
- Registration - Functions are registered with FunctionManager on startup
- Schema Generation - Function schemas are sent to the LLM
- LLM Decision - The model decides when to call functions
- Execution - FunctionManager executes the function securely
- Result - Result is sent back to the LLM for final response
- Streaming - All responses are streamed in real-time for better UX
Works with all providers: OpenAI, Anthropic, Kimi/Moonshot, and Custom APIs!
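The flow above can be sketched as a minimal tool-calling loop. This is a simplified illustration of the pattern, not Firewood's actual FunctionManager API; the registry, schema, and `execute_tool_call` helper here are hypothetical:

```python
import json

# Step 1: a registry mapping function names to callables.
def read_file(path: str) -> str:
    with open(path, "r", encoding="utf-8") as f:
        return f.read()

TOOLS = {"read_file": read_file}

# Step 2: the schema sent to the LLM so it knows what it may call.
SCHEMAS = [{
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Read file contents",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

def execute_tool_call(name: str, arguments: str) -> str:
    """Steps 4-5: run the function the model requested, return JSON."""
    if name not in TOOLS:
        return json.dumps({"error": f"unknown function: {name}"})
    try:
        result = TOOLS[name](**json.loads(arguments))
        return json.dumps({"result": result})
    except Exception as exc:  # errors are also reported back to the model
        return json.dumps({"error": str(exc)})
```

In step 3 the model, having seen `SCHEMAS`, emits a tool call (a name plus JSON-encoded arguments); the loop executes it and feeds the JSON result back for the final, streamed response.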
```
firewood/
├── firewood/                  # Core package
│   ├── cli.py                 # Command-line interface
│   ├── config.py              # Configuration management
│   ├── models.py              # Data models
│   ├── session.py             # Session management
│   ├── storage.py             # Storage backends
│   ├── logging_config.py      # Logging setup
│   ├── setup_cli.py           # Setup wizard
│   ├── agent/                 # Agent management
│   │   ├── executor.py        # Agent execution
│   │   └── storage.py         # Agent storage
│   ├── bridges/               # Bridge implementations
│   │   ├── base.py            # Base bridge interface
│   │   ├── manager.py         # Bridge lifecycle
│   │   ├── models.py          # Bridge data models
│   │   ├── state.py           # Bridge state management
│   │   ├── cli/               # Terminal UI bridge
│   │   ├── github/            # GitHub integration
│   │   ├── gitea/             # Gitea integration
│   │   └── feishu/            # Feishu integration
│   ├── data/                  # Built-in data
│   │   ├── skills/            # Skill definitions
│   │   └── team/              # Team roles & members
│   ├── flame/                 # Flame runtime integration
│   ├── providers/             # LLM provider implementations
│   │   ├── base.py            # Base provider interface
│   │   ├── agent_provider.py  # Unified provider using OpenAI Agents SDK
│   │   └── factory.py         # Provider factory
│   ├── functions/             # Built-in functions
│   │   ├── manager.py         # Function management
│   │   ├── filesystem.py      # File operations
│   │   ├── http.py            # HTTP operations
│   │   ├── shell.py           # Shell/environment operations
│   │   ├── skills.py          # Skill loading
│   │   ├── team.py            # Team operations
│   │   └── tokens.py          # Token management
│   ├── skills/                # Skills management
│   │   ├── models.py          # Skill data models
│   │   ├── loader.py          # Skill loading
│   │   ├── manager.py         # Skill lifecycle
│   │   ├── discovery.py       # Skill discovery
│   │   ├── executor.py        # Skill execution
│   │   ├── validator.py       # Skill validation
│   │   └── internal.py        # Internal skill functions
│   ├── team/                  # Team management
│   │   ├── discovery.py       # Team discovery
│   │   ├── manager.py         # Team lifecycle
│   │   └── models.py          # Team data models
│   └── utils/                 # Utility modules
│       ├── discovery.py       # Common discovery patterns
│       └── resources.py       # Resource utilities
├── examples/                  # Usage examples
├── tests/                     # Test suite
├── docs/                      # Documentation
└── skills/                    # User-installed skills
```
```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=firewood --cov-report=html

# Run specific test file
pytest tests/test_function_manager.py

# Run with verbose output
pytest -v
```

```bash
# Lint and fix
ruff check --fix firewood/ tests/

# Format code
ruff format firewood/ tests/

# Type checking
mypy firewood/
```

```bash
# Enable verbose logging (DEBUG level)
firewood -v

# View recent logs
firewood --show-logs --log-lines 100

# Show log file location
firewood --log-path

# List all sessions with details
firewood --list-sessions

# Validate configuration
firewood --validate-api-key

# List installed and enabled skills
firewood --list-skills
```

- QUICKSTART.md - Get started in 5 minutes
- ABOUT_FIREWOOD.md - Project philosophy and architecture
- CHANGELOG.md - Version history and changes
- AGENTS.md - AI coding agent guidance
- docs/design/ - Technical design documentation
- docs/bugs/ - Bug fix documentation
- docs/testing/ - Testing documentation
Start with the Design Documentation Index for a complete overview.
Feature Documentation (consolidated, authoritative):
- Providers - Multi-provider system (OpenAI, Kimi, Custom)
- Function Calling - Universal function calling (filesystem, HTTP, shell)
- Skills - Agent Skills system and extensibility
- Streaming - Real-time response streaming
- Logging - Logging system and configuration
- CLI & Setup - CLI commands and setup wizard
Core Design Specs:
- Agent Architecture - Core agent design with session persistence
- GitHub Bridge - GitHub integration for issue automation
You: Read my Python script and suggest improvements
Firewood: [reads file, analyzes code, provides suggestions]
You: Create a README for this project
Firewood: [analyzes project, writes comprehensive README]
You: Fetch data from https://api.example.com/data
Firewood: [makes HTTP request, analyzes data]
You: Save the results to results.json
Firewood: [writes formatted JSON file]
You: Read config.yaml and make an API call with those settings
Firewood: [reads config, makes HTTP request, returns results]
You: Explain how authentication works in auth.py
Firewood: [reads and analyzes code, provides detailed explanation]
You: What's the weather like in Tokyo?
Firewood: [uses weather skill to fetch real-time weather data]
You: Should I bring an umbrella in London today?
Firewood: [checks weather and provides advice]
- Never logged or displayed
- Stored securely in system keyring (optional)
- Provider-specific environment variables
- Config file with appropriate permissions
- Path validation (no directory traversal)
- Configurable base path
- Read-only mode option
- Allowed paths whitelist
- URL validation (http/https only)
- Domain whitelisting support
- Configurable timeouts
- Safe error handling
Contributions are welcome! Please see:
- AGENTS.md for AI coding agent guidance
- docs/templates/ for documentation templates
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests
- Run test suite and linters
- Submit pull request
API Key Not Found:

```bash
# Verify API key is set
firewood --validate-api-key

# Run setup again
firewood --setup
```

Functions Not Working:

```bash
# Check if functions are enabled
firewood --show-config

# Verify functions are enabled in config file
# ~/.firewood/config.toml should have:
# [functions]
# enabled = true

# Test with simple function call
# Ask: "Read the README.md file"
# or: "Get my PATH environment variable"

# Check logs for function execution
firewood --show-logs --log-lines 50
```

Sessions Not Saving:

```bash
# Check permissions
ls -la ~/.firewood/

# Check logs
firewood --show-logs
```

- 480+ tests passing ✅
- 50%+ code coverage (CI enforced threshold)
- Comprehensive function calling tests
- Multi-provider integration tests
- Skills system tests
- Streaming implementation tests
- Team system tests
- Bridge system tests
| Provider | Status | Function Calling | Streaming | Models |
|---|---|---|---|---|
| OpenAI | ✅ Ready | ✅ Full | ✅ Yes | gpt-4o, gpt-4o-mini, gpt-4, gpt-4-turbo, gpt-3.5-turbo, o1, o1-mini, o3, o3-mini |
| Anthropic | ✅ Ready | ✅ Full | ✅ Yes | claude-3-5-sonnet, claude-3-opus, claude-3-sonnet, claude-3-haiku |
| Kimi/Moonshot | ✅ Ready | ✅ Full | ✅ Yes | kimi-k2.5, moonshot-v1-8k/32k/128k |
| Custom | ✅ Ready | ✅ Full | ✅ Yes | Any OpenAI-compatible API (Ollama, vLLM, etc.) |
Note: All providers use a unified AgentProvider implementation powered by the OpenAI Agents SDK.
- 8 Filesystem functions (`read_file`, `read_dir`, `write_file`, `append_file`, `delete_file`, `rename_fs`, `delete_dirs`, `create_dirs`): 2 always available, 6 write-enabled
- 5 HTTP functions (`http_get`, `http_post`, `http_put`, `http_patch`, `http_delete`)
- 2 Shell functions (`get_env`, `run_cmd`)
- 1 Token function (`list_tokens`)
- 1 Skills function (`load_skill`)
- 2 Team functions (`intro_member`, `assign_tasks`)
- 19 total functions (12 always available, 6 conditional on write access, 1 token)
- Weather skill - Real-time weather data via Open-Meteo API
- Git skill - Git CLI operations via run_cmd
- GitHub skill - GitHub operations via gh CLI
- Gitea skill - Gitea operations
- Makefile skill - Makefile patterns and best practices
- Backend Engineering skill - Backend development practices
- Communication skill - Communication templates
- Project Management skill - PM practices and templates
- System Design skill - System design methodology
- System Test skill - Testing practices and templates
- Custom skills - Create your own following Agent Skills spec
- 4 Roles - TPM, Dev, Architect, QA
- 10 Team Members - Pre-configured agent personas
- Multi-provider support (OpenAI, Anthropic, Kimi, Custom)
- Unified AgentProvider using OpenAI Agents SDK
- Session persistence with JSON storage
- Beautiful Terminal UI with Textual
- Universal function calling (19 functions)
- HTTP functions (GET/POST/PUT/PATCH/DELETE)
- Shell functions (environment access, command execution)
- Token-based authentication
- Streaming responses (real-time)
- Skills system with Agent Skills spec (10 built-in skills)
- Team system with roles and members
- Bridge system (CLI, GitHub, Gitea, Feishu)
- Full-screen setup wizard
- Comprehensive testing (480+ tests, 50%+ coverage)
- Additional providers (Google Gemini)
- Enhanced skills library (code review, testing, debugging)
- Skills registry and marketplace
- MCP (Model Context Protocol) integration
- SQLite storage backend option
- Session search and filtering
- Cost tracking per provider
- Web UI (browser-based interface)
- Session encryption at rest
- Multi-user support
- Cloud sync capabilities
- Plugin system for custom functions
- Advanced analytics and insights
This project is licensed under the MIT License - see the LICENSE file for details.
- Issues: GitHub Issues
- Documentation: docs/
- Examples: examples/
Built on Flame 🔥
Enhanced by Firewood 🪵
Made for you ❤️