A Streamlit web application for generating audio content using AI-powered Griptape Nodes workflows with a sophisticated multi-tab interface.
## Features

- Multi-tab interface for organizing workflow inputs (World, Character, Data Experts, Speechwriter, Music Coach, Generation)
- JSON validation and auto-formatting for game data input
- Real-time audio generation with voice and music outputs
- Voice generation controls (stability, speed, voice preset)
- Quick voice-only regeneration without re-running entire workflow
- Audio playback directly in the browser
- Persistent state across page refreshes
- Direct workflow execution (no subprocess overhead)
- Comprehensive error handling
- Full development tooling (linting, type checking, spell checking)
- VSCode debugging support
## Prerequisites

- Python 3.12+
- uv package manager
- OpenAI API key (for the Agent node)
## Installation

- Clone the repository:

  ```bash
  git clone <repository-url>
  cd griptape-nodes-example-with-app
  ```

- Install dependencies using uv:

  ```bash
  make install
  ```

- Set up your environment variables:

  ```bash
  cp .env.example .env
  ```

  Then edit `.env` and add your OpenAI API key:

  ```
  OPENAI_API_KEY=your_actual_api_key_here
  ```
## Running the App

Run the Streamlit application:

```bash
make run
```

The app will automatically open in your browser at http://localhost:8501.
## Interface Overview

The application is organized into six tabs:
- World: Define the rules and context of your world
- Character: Define who the character is and how they think
- Data Experts: Configure three data experts and a summarizer
- Speechwriter: Define the debriefing monologue style (includes audio delivery instructions)
- Music Coach: Configure music generation guidelines
- Generation: Execute the workflow and view outputs
The Generation tab has a two-column layout:

Left Column (Game Data):

- Paste your JSON game data
- Format JSON button for auto-formatting
- Real-time JSON validation with error messages
- Input disabled while workflow is running

Right Column (Voice Settings & Execution):

- Voice Settings:
  - Stability: Creative, Natural, or Robust (affects voice consistency)
  - Speed: 0.7 to 1.2 (adjustable in 0.01 increments for fine control)
  - Voice: 15 voice preset options
  - All settings disabled while workflow is running
- Execution Buttons:
  - "Run Griptape Nodes Workflow to Generate Audio" (before first run) or "Re-run entire Griptape Nodes workflow" (after first run)
  - "Re-run voice generation" (appears after first run, enabled only when voice settings change) - quickly regenerates voice audio without re-running the full workflow
- Outputs (after generation):
  - Voice and music audio players
  - Debriefing monologue (includes audio tags for TTS delivery)
  - Retrospective section with markdown support
All inputs persist across page refreshes, so you can safely reload the browser without losing your work.
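The Format JSON button and the inline validation described above can be sketched with two small helpers. These are hypothetical names and a minimal sketch, not the app's actual code in `app.py`:

```python
import json


def validate_game_data(raw: str) -> tuple[bool, str]:
    """Return (is_valid, message) for a raw JSON game-data string."""
    # Failure cases first, success path at the end.
    if not raw.strip():
        return False, "Game data is empty"
    try:
        json.loads(raw)
    except json.JSONDecodeError as err:
        return False, f"Invalid JSON at line {err.lineno}, column {err.colno}: {err.msg}"
    return True, "Valid JSON"


def format_game_data(raw: str) -> str:
    """Pretty-print a JSON string with 2-space indentation (the Format JSON button)."""
    return json.dumps(json.loads(raw), indent=2)
```

The validation message includes the line and column from `json.JSONDecodeError`, which is the kind of detail that makes the error messages in the Generation tab actionable.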
## Development

All development commands are available through the Makefile:

```bash
make install      # Install all dependencies
make check        # Run all checks (format, lint, type-check, spell-check)
make fix          # Auto-fix formatting and linting issues
make format       # Check code formatting
make lint         # Run linter
make type-check   # Run type checker
make spell-check  # Run spell checker
make clean        # Remove build artifacts and caches
make run          # Run the Streamlit app
```

Typical workflow:

- Make your changes
- Run `make check` or `make fix` to ensure code quality
- Commit your changes
### Debugging

Use VSCode's built-in debugger:

- Open the Run and Debug view (Cmd+Shift+D)
- Select "Debug Streamlit App" from the dropdown
- Press F5 or click the green play button
- Set breakpoints in `app.py` or `published_nodes_workflow.py`
### Code Style

This project follows specific code style guidelines documented in `CLAUDE.md`. Key principles:
- Evaluate all failure cases first, success path at the end
- Use simple, readable logic flow
- No lazy imports (imports at top of file)
- Specific, narrow exception handling
- Include context in error messages
## Project Structure

```
.
├── app.py                        # Streamlit application
├── published_nodes_workflow.py   # Griptape Nodes workflow definition
├── pyproject.toml                # Project dependencies and tool configuration
├── Makefile                      # Development commands
├── CLAUDE.md                     # Code style guidelines
├── .env                          # Environment variables (API keys)
├── .env.example                  # Example environment variables
└── README.md                     # This file
```
## How It Works

- The Streamlit app loads `published_nodes_workflow.py`, which defines the Griptape Nodes workflow
- User inputs are organized across multiple tabs
- Session state preserves all inputs across page refreshes
- When the user clicks "Run Griptape Nodes Workflow to Generate Audio" (or "Re-run entire Griptape Nodes workflow"):
  - All inputs from all tabs are gathered
  - JSON game data is validated before submission
  - Voice settings (stability, speed, voice_preset) are captured
  - The `LocalWorkflowExecutor` executes the workflow with all inputs and `run_voice_generation_only=False`
  - The workflow generates voice and music audio files
  - Voice parameter tracking is updated for change detection
- When the user adjusts voice settings and clicks "Re-run voice generation":
  - Only voice-related parameters are sent to the workflow
  - The workflow skips data analysis and monologue generation
  - Only the voice audio is regenerated with new settings
  - Music audio and text outputs remain unchanged
- Results are displayed in the Generation tab with:
  - Audio players for voice and music
  - Debriefing monologue text output
  - Markdown retrospective analysis
- The workflow maintains state across executions via the cached `LocalWorkflowExecutor`
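The "enabled only when voice settings change" behavior implies comparing the current widget values against the settings used in the last run. A hypothetical sketch of that change detection (the real logic in `app.py` may differ):

```python
# Voice parameters whose change should enable the "Re-run voice generation" button.
VOICE_KEYS = ("stability", "speed", "voice_preset")


def voice_settings_changed(last_run: dict, current: dict) -> bool:
    """True if any voice parameter differs from the values used in the last run."""
    return any(last_run.get(key) != current.get(key) for key in VOICE_KEYS)
```

When this returns `True`, only the voice-related parameters need to be resubmitted with `run_voice_generation_only=True`, skipping the expensive analysis and monologue stages.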
The AI-powered workflow processes data through multiple stages:
- Data Analysis: Raw JSON data is analyzed by three specialized data experts, each focusing on different aspects
- Summarization: A summarizer agent consolidates the experts' findings into a coherent analysis
- Debriefing Generation: The speechwriter (informed by character personality and world context) creates a concise debriefing monologue with audio delivery tags
- Parallel Audio Processing:
  - Voice Path: Text-to-Speech with configurable stability, speed, and voice preset → Voice audio
  - Music Path: Music coach analyzes the monologue's tone → Music generation → Music audio
- Retrospective: Data experts reconvene to identify what additional data would have improved their analysis
- Output: Returns debriefing monologue, voice audio, music audio, and retrospective
## The Workflow

The included workflow (`published_nodes_workflow.py`) orchestrates an AI-powered audio generation pipeline.
### Workflow Inputs

The workflow accepts the following inputs through the "Start Flow" node:
- `world_rules`: Context and rules for the world setting
- `character_definition`: Character traits and personality
- `data_expert_1`, `data_expert_2`, `data_expert_3`: Three data expert configurations
- `summarizer`: Summarizer configuration
- `speechwriter_rules`: Guidelines for debriefing creation (includes audio delivery instructions)
- `music_coach_rules`: Music coaching guidelines
- `game_data`: JSON string containing mission/game data
- `stability`: Voice stability setting (Creative, Natural, or Robust)
- `speed`: Voice speed (0.7 to 1.2)
- `voice_preset`: Voice preset name (e.g., "James", "Rachel", etc.)
- `run_voice_generation_only`: Boolean flag to skip the full workflow and only regenerate voice audio
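The inputs above map naturally to a plain dictionary. The keys come from the list above, but the dict shape and placeholder values are illustrative assumptions, not the app's exact code:

```python
# Illustrative input payload for the "Start Flow" node; the "..." values stand in
# for the free-text content gathered from each tab.
workflow_inputs = {
    "world_rules": "...",
    "character_definition": "...",
    "data_expert_1": "...",
    "data_expert_2": "...",
    "data_expert_3": "...",
    "summarizer": "...",
    "speechwriter_rules": "...",
    "music_coach_rules": "...",
    "game_data": '{"mission": "example"}',
    "stability": "Natural",   # Creative, Natural, or Robust
    "speed": 1.0,             # 0.7 to 1.2
    "voice_preset": "James",
    "run_voice_generation_only": False,
}
```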
### Workflow Outputs

The workflow returns through the "End Flow" node:
- `was_successful`: Boolean indicating success/failure
- `result_details`: Details about workflow execution
- `voice_audio_artifact`: AudioUrlArtifact containing voice audio
- `music_audio_artifact`: AudioUrlArtifact containing music audio
- `speechwriter_output`: Generated debriefing monologue with audio tags
- `retrospective`: Markdown-formatted analysis from data experts
## Customization

To change the default placeholder text for each tab:

- Open `app.py`
- Locate the `_initialize_session_state()` function
- Update the default values for each session state variable
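A common pattern for such an initializer, and plausibly what `_initialize_session_state()` does, is to seed each key only if it is not already set, so existing values survive Streamlit reruns. The defaults below are hypothetical:

```python
# Hypothetical defaults; the real values live in app.py.
DEFAULTS = {
    "world_rules": "Describe the rules of your world here...",
    "game_data": "",
}


def initialize_session_state(session_state: dict) -> None:
    """Seed missing keys without overwriting values the user has already entered."""
    for key, value in DEFAULTS.items():
        session_state.setdefault(key, value)
```

With Streamlit, `session_state` would be `st.session_state`, which supports the same `setdefault` access pattern as a dict.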
To use a different workflow:

- Edit `published_nodes_workflow.py` or create a new workflow file
- Update the workflow import in `app.py` if using a different workflow
- Ensure your workflow accepts the inputs defined in the "Workflow Inputs" section above
- Ensure your workflow returns the outputs defined in the "Workflow Outputs" section above
To customize the Streamlit interface:

- Open `app.py`
- Modify the `main()` function to adjust layouts, add/remove tabs, or change styling
## Troubleshooting

If you see an error about missing API keys:

- Ensure your `.env` file exists (copy from `.env.example`)
- Add your OpenAI API key: `OPENAI_API_KEY=sk-...`
- Restart the application
Check the console output for detailed error messages. Common issues:
- Invalid API key
- Network connectivity problems
- Workflow configuration issues
- Missing audio generation dependencies
If you see "Invalid JSON" in the Generation tab:
- Ensure your JSON is properly formatted
- Use a JSON validator to check syntax
- Verify all quotes are double quotes (not single quotes)
- Check for trailing commas
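Two of the issues above (single quotes and trailing commas) are easy to reproduce, and Python's `json` module reports the line and column of the first problem it hits, which is what a validator surfaces:

```python
import json

# Each input triggers one of the JSON pitfalls listed above.
bad_inputs = {
    "single quotes": "{'score': 10}",
    "trailing comma": '{"score": 10,}',
}

for label, text in bad_inputs.items():
    try:
        json.loads(text)
    except json.JSONDecodeError as err:
        print(f"{label}: {err.msg} (line {err.lineno}, column {err.colno})")
```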
If audio files don't play after generation:
- Check that the workflow returned valid file paths
- Verify the audio files exist at the returned paths
- Ensure the audio format is supported by your browser
- Check console for file path or permissions errors
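The first three checks above can be automated with a small pre-playback guard. This is a hypothetical helper (the function name and supported-format list are assumptions, not the app's code):

```python
from pathlib import Path

# Common browser-playable audio formats; adjust for your target browsers.
SUPPORTED_SUFFIXES = {".mp3", ".wav", ".ogg"}


def audio_file_playable(path_str: str) -> tuple[bool, str]:
    """Check that a returned audio path exists and has a playable extension."""
    path = Path(path_str)
    if not path.exists():
        return False, f"File not found: {path}"
    if path.suffix.lower() not in SUPPORTED_SUFFIXES:
        return False, f"Unsupported audio format: {path.suffix}"
    return True, "OK"
```

Running this on the paths the workflow returns narrows the problem down to either a missing file or an unsupported format before blaming the browser.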
If you see unexpected behavior, clear Streamlit's cache:
- Press 'C' in the app (or use the menu: Settings → Clear cache)
- Or restart the application with `make run`
## License

See LICENSE for details.