diff --git a/docs/architecture.mdx b/docs/architecture.mdx
index 8d91dfb6..ef146219 100644
--- a/docs/architecture.mdx
+++ b/docs/architecture.mdx
@@ -1,146 +1,123 @@
---
title: "Architecture"
-description: "System design and module responsibilities for ShipSec Studio"
+description: "How ShipSec Studio is designed under the hood — from the visual builder to infrastructure."
---
## What is ShipSec Studio?
-ShipSec Studio is an open-source, no-code security workflow orchestration platform. Build, execute, and monitor security automation workflows through a visual interface — focus on security, not infrastructure.
+ShipSec Studio is an **open-source, no-code security workflow orchestration platform**. Build, execute, and monitor security automation workflows through a visual canvas — focus on security logic, not infrastructure plumbing.
---
## System Overview
-```
-┌────────────────────────────────────────────────────────────────────────┐
-│ FRONTEND │
-│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
-│ │ Visual │ │ Terminal │ │ Timeline │ │ Config │ │
-│ │ Builder │ │ Viewer │ │ Replay │ │ Panel │ │
-│ │ (ReactFlow) │ │ (xterm.js) │ │ (Zustand) │ │ (Forms) │ │
-│ └─────────────┘ └─────────────┘ └─────────────┘ └─────────────┘ │
-└───────────────────────────────────┬────────────────────────────────────┘
- │ REST API + Unified SSE
-┌───────────────────────────────────▼─────────────────────────────────────┐
-│ BACKEND │
-│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
-│ │Workflows │ │ Secrets │ │ Storage │ │ Trace │ │ Auth │ │
-│ │ + DSL │ │(AES-256) │ │ (MinIO) │ │ Events │ │ (Clerk) │ │
-│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
-│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
-│ │ Webhooks │ │Schedules │ │ Agents │ │Human │ │Integr- │ │
-│ │ │ │ (CRON) │ │ │ │Inputs │ │ations │ │
-│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
-└───────────────────────────────────┬─────────────────────────────────────┘
- │ Temporal Client
-┌───────────────────────────────────▼─────────────────────────────────────┐
-│ TEMPORAL │
-│ Workflow Orchestration • Retry Logic • Durability │
-└───────────────────────────────────┬─────────────────────────────────────┘
- │ Activity Execution
-┌───────────────────────────────────▼─────────────────────────────────────┐
-│ WORKER │
-│ ┌─────────────────────────────────────────────────────────────────┐ │
-│ │ COMPONENT REGISTRY │ │
-│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ │
-│ │ │ Tools │ │ AI │ │ Human │ │ Core │ │ │
-│ │ │(Security)│ │ Agents │ │ in Loop │ │ Utils │ │ │
-│ │ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │ │
-│ └─────────────────────────────────────────────────────────────────┘ │
-│ ┌─────────────────────────────────────────────────────────────────┐ │
-│ │ SERVICE ADAPTERS │ │
-│ │ Secrets │ Storage │ Artifacts │ Trace │ Terminal │ Logs │ │
-│ └─────────────────────────────────────────────────────────────────┘ │
-└─────────────────────────────────────────────────────────────────────────┘
- │
-┌───────────────────────────────────▼─────────────────────────────────────┐
-│ INFRASTRUCTURE │
-│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │
-│ │PostgreSQL│ │ MinIO │ │ Redis │ │Redpanda │ │ Loki │ │
-│ │ (Data) │ │ (Files) │ │(Terminal)│ │ (Kafka) │ │ (Logs) │ │
-│ └──────────┘ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │
-└─────────────────────────────────────────────────────────────────────────┘
-```
+
+{/* System overview diagram: Frontend (Visual Builder, Terminal Viewer, Timeline Replay, Config Panel) → Backend services → Temporal → Worker (Component Registry, Service Adapters) → Infrastructure (PostgreSQL, MinIO, Redis, Redpanda, Loki) */}
+
---
## Technology Stack
-| Layer | Stack |
-|-------|-------|
-| **Frontend** | React 19, TypeScript, Vite, TailwindCSS, Radix UI, ReactFlow, xterm.js, Zustand |
-| **Backend** | NestJS, TypeScript, Bun, Drizzle ORM, Clerk Auth |
-| **Worker** | Node.js, TypeScript, Temporal SDK, Docker |
-| **Infrastructure** | PostgreSQL 16, Temporal, MinIO, Redis, Redpanda (Kafka), Loki |
+| Layer | Technologies |
+| ------------------ | ------------------------------------------------------------------------------- |
+| **Frontend** | React 19, TypeScript, Vite, TailwindCSS, Radix UI, ReactFlow, xterm.js, Zustand |
+| **Backend** | NestJS, TypeScript, Bun, Drizzle ORM, Clerk Auth |
+| **Worker** | Node.js, TypeScript, Temporal SDK, Docker |
+| **Infrastructure** | PostgreSQL 16, Temporal, MinIO, Redis, Redpanda (Kafka), Loki |
---
-## Core Deep-Dives
-
-To keep this guide concise, complicated subsystems are documented in their own dedicated files:
-
-- **[Workflow Compilation (DSL)](/architecture/workflow-compilation)**: How visual graphs are transformed into executable instructions.
-- **[Temporal Orchestration](/architecture/temporal-orchestration)**: How we use Temporal for durability and worker scaling.
-- **[Streaming Pipelines](/architecture/streaming-pipelines)**: How terminal, logs, and events are delivered in real-time.
-- **[Human-in-the-Loop](/architecture/human-in-the-loop)**: How we pause workflows for manual approvals and forms.
+## Core Subsystems
+
+These are the most complex parts of the system — each has its own dedicated deep-dive:
+
+<CardGroup cols={2}>
+  <Card title="Workflow Compilation (DSL)" href="/architecture/workflow-compilation">
+    How visual node graphs are compiled into executable workflow instructions.
+  </Card>
+  <Card title="Temporal Orchestration" href="/architecture/temporal-orchestration">
+    How we use Temporal for durability, retries, and worker scaling.
+  </Card>
+  <Card title="Streaming Pipelines" href="/architecture/streaming-pipelines">
+    How terminal output, logs, and events are delivered in real-time via SSE.
+  </Card>
+  <Card title="Human-in-the-Loop" href="/architecture/human-in-the-loop">
+    How workflows pause mid-execution for manual approvals and form inputs.
+  </Card>
+</CardGroup>
---
## Component Categories
-Components are the building blocks of workflows:
+Components are the drag-and-drop building blocks of every workflow.
-| Category | Description | Examples |
-|----------|-------------|----------|
-| **security** | Security scanning and enumeration tools | Subfinder, DNSX, Nuclei, Naabu, HTTPx, TruffleHog |
-| **ai** | AI/ML and agent components | LLM Generate, AI Agent, MCP Providers |
-| **core** | Utility and data processing | HTTP Request, File Loader, Logic Script, JSON Transform |
-| **notification** | Alerts and messaging | Slack, Email |
-| **manual-action** | Human-in-the-loop | Approvals, Forms, Selection |
-| **github** | GitHub integrations | Remove Org Membership |
+| Category | What it does | Examples |
+| ----------------- | --------------------------------- | ------------------------------------------------------- |
+| **security** | Security scanning and enumeration | Subfinder, DNSX, Nuclei, Naabu, HTTPx, TruffleHog |
+| **ai** | AI/LLM and agent components | LLM Generate, AI Agent, MCP Providers |
+| **core** | Utility and data processing | HTTP Request, File Loader, Logic Script, JSON Transform |
+| **notification** | Alerts and messaging | Slack, Email |
+| **manual-action** | Human-in-the-loop controls | Approvals, Forms, Selection |
+| **github** | GitHub integrations | Remove Org Membership |
---
## Security Architecture
### Authentication & Multi-tenancy
-- **Clerk Integration** — Production-ready authentication for hosted environments.
-- **Local Auth** — Default for local setup using `ADMIN_USERNAME` / `ADMIN_PASSWORD`.
-- **Organization Isolation** — All data scoped by `organization_id`.
+
+- **Clerk Integration** — Production-ready auth for hosted deployments
+- **Local Auth** — Default for local dev via `ADMIN_USERNAME` / `ADMIN_PASSWORD` env vars
+- **Organization Isolation** — Every database record is scoped by `organization_id` — no data leaks between tenants
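+
+As a rough sketch of what that scoping looks like with Drizzle ORM (table and column names here are illustrative, not the actual schema):
+
+```typescript
+import { pgTable, text, uuid } from "drizzle-orm/pg-core";
+import { eq } from "drizzle-orm";
+import type { NodePgDatabase } from "drizzle-orm/node-postgres";
+
+// Illustrative table definition; the real schema lives in the backend.
+const workflows = pgTable("workflows", {
+  id: uuid("id").primaryKey(),
+  name: text("name"),
+  organizationId: text("organization_id").notNull(),
+});
+
+// Every query filters on organization_id, so one tenant can never read another's rows.
+const listWorkflows = (db: NodePgDatabase, organizationId: string) =>
+  db.select().from(workflows).where(eq(workflows.organizationId, organizationId));
+```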
### Secrets Management
-- **AES-256-GCM** encryption at rest.
-- **Versioned secrets** with active/inactive tracking.
-- **Master key** via `SECRET_STORE_MASTER_KEY` environment variable.
+
+- **AES-256-GCM** encryption for all secrets at rest
+- **Versioned secrets** with active/inactive state tracking
+- **Master key** provided via the `SECRET_STORE_MASTER_KEY` environment variable
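+
+As a minimal sketch of the scheme (not ShipSec's actual implementation), AES-256-GCM with a master key from the environment looks like this in Node:
+
+```typescript
+import { createCipheriv, randomBytes } from "node:crypto";
+
+// Assumes SECRET_STORE_MASTER_KEY is a hex-encoded 32-byte key.
+const masterKey = Buffer.from(process.env.SECRET_STORE_MASTER_KEY!, "hex");
+
+function encryptSecret(plaintext: string) {
+  const iv = randomBytes(12); // 96-bit nonce, the GCM convention
+  const cipher = createCipheriv("aes-256-gcm", masterKey, iv);
+  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
+  // The IV and auth tag must be stored alongside the ciphertext; all three are needed to decrypt.
+  return { iv, authTag: cipher.getAuthTag(), ciphertext };
+}
+```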
### Container Isolation
-- **IsolatedContainerVolume** — Per-tenant, per-run Docker volumes. See **[Isolated Volumes](/development/isolated-volumes)**.
-- **Network isolation** — Components run with `network: none` or `bridge`.
-- **Automatic cleanup** — Volumes destroyed after execution.
+
+- **IsolatedContainerVolume** — Each workflow run gets a dedicated, per-tenant Docker volume
+- **Network isolation** — Components execute with `network: none` or `bridge` depending on requirements
+- **Automatic cleanup** — Volumes are destroyed immediately after execution completes
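+
+A sketch of that run lifecycle using dockerode (the real worker code differs; this only illustrates the isolation model):
+
+```typescript
+import Docker from "dockerode";
+
+const docker = new Docker();
+
+async function runIsolated(image: string, runId: string) {
+  const volumeName = `run-${runId}`; // one throwaway volume per run
+  await docker.createVolume({ Name: volumeName });
+  const container = await docker.createContainer({
+    Image: image,
+    HostConfig: {
+      NetworkMode: "none", // or "bridge" when the tool needs network access
+      Binds: [`${volumeName}:/workspace`],
+    },
+  });
+  await container.start();
+  await container.wait(); // block until the component finishes
+  await container.remove();
+  await docker.getVolume(volumeName).remove(); // automatic cleanup
+}
+```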
+
+<Note>
+  For a full breakdown of isolated volume behavior, see the [Isolated Volumes](/development/isolated-volumes) guide.
+</Note>
---
## Development URLs
-All application services are accessible through nginx on port 80:
+All services are accessible through **nginx on port 80** in development:
-| Service | URL |
-|---------|-----|
-| Frontend | http://localhost/ |
-| Backend API | http://localhost/api/ |
-| Analytics | http://localhost/analytics/ |
-| Temporal UI | http://localhost:8081 |
-| MinIO Console | http://localhost:9001 |
-| Redpanda Console | http://localhost:8082 |
-| Loki | http://localhost:3100 |
+| Service | URL |
+| ---------------- | --------------------------- |
+| Frontend | http://localhost/ |
+| Backend API | http://localhost/api/ |
+| Analytics | http://localhost/analytics/ |
+| Temporal UI | http://localhost:8081 |
+| MinIO Console | http://localhost:9001 |
+| Redpanda Console | http://localhost:8082 |
+| Loki | http://localhost:3100 |
- Individual service ports (5173, 3211, 5601) are available for debugging but should not be used in normal development. All traffic flows through nginx on port 80.
+ Direct service ports (`5173`, `3211`, `5601`) are available for low-level debugging only. All normal development traffic should flow through nginx on port 80.
---
-## Learn More
-
-- **Component Development**: `/development/component-development`
-- **Getting Started**: `/getting-started`
+
+
+ ← Previous
+
+
+ Next →
+
+
\ No newline at end of file
diff --git a/docs/components/ai.mdx b/docs/components/ai.mdx
index f8afb31b..7c0301bf 100644
--- a/docs/components/ai.mdx
+++ b/docs/components/ai.mdx
@@ -1,183 +1,209 @@
---
title: "AI Components"
-description: "LLM integrations for intelligent workflow automation"
+description: "Connect LLMs to your security workflows — triage alerts, investigate findings, and extract structured data automatically."
---
-AI components in ShipSec Studio follow a **Provider-Consumer** architecture.
+AI components let you plug **large language models directly into your workflows**. Summarize scan results, triage alerts, run autonomous investigations, and extract structured data — all without writing a single line of code.
-1. **Providers**: Handle credentials, model selection, and API configuration (OpenAI, Gemini, OpenRouter).
-2. **Consumers**: Execute specific tasks (Text Generation or Autonomous Agents) using the configuration emitted by a Provider.
+They follow a simple **Provider → Consumer** pattern:
+
+- **Providers** handle credentials and model selection — set it once, reuse everywhere.
+- **Consumers** do the actual AI work — text generation or autonomous agent reasoning.
---
## Providers
-Provider nodes normalize credentials and model settings into a reusable **LLM Provider Config**.
+A Provider node is always the **first step** in any AI chain. It holds your API key and model choice, and outputs a reusable config that Consumer nodes plug into.
+
+<Tip>
+  By keeping credentials and model selection in one Provider node, you can swap models for your entire workflow by changing just one thing.
+</Tip>
### OpenAI Provider
-Configures access to OpenAI or OpenAI-compatible endpoints.
+Connects to OpenAI's API (or any OpenAI-compatible endpoint).
| Input | Type | Description |
-|-------|------|-------------|
-| `apiKey` | Secret | OpenAI API key (typically from a Secret Loader) |
+|---|---|---|
+| `apiKey` | Secret | Your OpenAI API key — use **Secret Loader** to supply this safely |
| Parameter | Type | Description |
-|-----------|------|-------------|
-| `model` | Select | `gpt-5.2`, `gpt-5.1`, `gpt-5`, `gpt-5-mini` |
-| `apiBaseUrl` | Text | Optional override for the API base URL |
+|---|---|---|
+| `model` | Select | `gpt-5.2` · `gpt-5.1` · `gpt-5` · `gpt-5-mini` |
+| `apiBaseUrl` | Text | Override the API base URL — useful for self-hosted or proxy endpoints |
+
+---
### Gemini Provider
-Configures access to Google's Gemini models.
+Connects to Google's Gemini model family.
| Input | Type | Description |
-|-------|------|-------------|
-| `apiKey` | Secret | Google AI API key |
+|---|---|---|
+| `apiKey` | Secret | Your Google AI API key |
| Parameter | Type | Description |
-|-----------|------|-------------|
-| `model` | Select | `gemini-3-pro-preview`, `gemini-3-flash-preview`, `gemini-2.5-pro` |
-| `apiBaseUrl` | Text | Optional override for the API base URL |
+|---|---|---|
+| `model` | Select | `gemini-3-pro-preview` · `gemini-3-flash-preview` · `gemini-2.5-pro` |
+| `apiBaseUrl` | Text | Optional API base URL override |
| `projectId` | Text | Optional Google Cloud project identifier |
+---
+
### OpenRouter Provider
-Configures access to multiple LLM providers through OpenRouter's unified API.
+Connects to [OpenRouter](https://openrouter.ai) — a unified API that gives you access to models from Anthropic, Google, Meta, Mistral, and more through a single key.
| Input | Type | Description |
-|-------|------|-------------|
-| `apiKey` | Secret | OpenRouter API key |
+|---|---|---|
+| `apiKey` | Secret | Your OpenRouter API key |
| Parameter | Type | Description |
-|-----------|------|-------------|
-| `model` | Text | Model slug (e.g., `openrouter/auto`, `anthropic/claude-3.5-sonnet`) |
-| `apiBaseUrl` | Text | Optional override for the API base URL |
-| `httpReferer` | Text | Application URL for OpenRouter ranking |
-| `appTitle` | Text | Application title for OpenRouter ranking |
+|---|---|---|
+| `model` | Text | Model slug — e.g. `anthropic/claude-sonnet-4-5`, `openrouter/auto` |
+| `apiBaseUrl` | Text | Optional API base URL override |
+| `httpReferer` | Text | Your app URL — used for OpenRouter analytics |
+| `appTitle` | Text | Your app name — used for OpenRouter analytics |
---
## Consumers
-Consumer nodes perform the actual AI work. They require a **Provider Config** output from one of the providers above.
+Consumer nodes do the actual AI work. Every consumer has a `chatModel` input — **connect your Provider output here.**
### AI Generate Text
-Performs a one-shot chat completion.
+A single-shot prompt-response. Give it a prompt, get back text. Simple and fast.
| Input | Type | Description |
-|-------|------|-------------|
-| `userPrompt` | Text | The primary request or data to process |
-| `chatModel` | Credential | **Required.** Connect a Provider output here |
-| `modelApiKey` | Secret | Optional. supersedes the API key in the Provider Config |
+|---|---|---|
+| `chatModel` | Credential | **Required** — connect a Provider output here |
+| `userPrompt` | Text | The question, instruction, or data to process |
+| `modelApiKey` | Secret | Optional — overrides the API key from the Provider |
| Parameter | Type | Description |
-|-----------|------|-------------|
-| `systemPrompt`| Textarea | Instructions that guide the model's behavior |
-| `temperature` | Number | Creativity vs. determinism (0.0 to 2.0) |
-| `maxTokens` | Number | Maximum tokens to generate |
+|---|---|---|
+| `systemPrompt` | Textarea | High-level instructions that shape the model's behavior (e.g. "You are a senior security analyst") |
+| `temperature` | Number | Controls creativity vs. consistency — `0.0` is deterministic, `2.0` is creative |
+| `maxTokens` | Number | Cap on how long the response can be |
| Output | Type | Description |
-|--------|------|-------------|
-| `responseText`| Text | The assistant's response |
-| `usage` | JSON | Token consumption metadata |
-| `rawResponse` | JSON | Full API response for debugging |
+|---|---|---|
+| `responseText` | Text | The model's response |
+| `usage` | JSON | Token usage metadata |
+| `rawResponse` | JSON | Full API response — useful for debugging |
---
### AI SDK Agent
-An autonomous agent that uses reasoning steps and tool-calling to solve complex tasks.
+An **autonomous agent** that reasons step-by-step, calls tools, and iterates until it completes a task. Think of it as giving the AI a goal rather than a single question.
| Input | Type | Description |
-|-------|------|-------------|
-| `userInput` | Text | The task or question for the agent |
-| `chatModel` | Credential | **Required.** Connect a Provider output here |
-| `conversationState` | JSON | Optional. Connect from a previous turn for memory |
-| `mcpTools` | List | Optional. Connect tools from MCP Providers |
+|---|---|---|
+| `chatModel` | Credential | **Required** — connect a Provider output here |
+| `userInput` | Text | The task or goal for the agent |
+| `conversationState` | JSON | Optional — pass in a previous turn's state to give the agent memory |
+| `mcpTools` | List | Optional — connect tools from MCP Provider nodes |
| Parameter | Type | Description |
-|-----------|------|-------------|
-| `systemPrompt`| Textarea | Core identity and constraints for the agent |
-| `temperature` | Number | Reasoning creativity (default 0.7) |
-| `stepLimit` | Number | Max "Think -> Act -> Observe" loops (1-12) |
-| `memorySize` | Number | Number of previous turns to retain in context |
-| `structuredOutputEnabled` | Toggle | Enable to enforce a specific JSON output structure |
-| `schemaType` | Select | How to define the schema: `json-example` or `json-schema` |
-| `jsonExample` | JSON | Example JSON object for schema inference (all properties become required) |
-| `jsonSchema` | JSON | Full JSON Schema definition for precise validation |
-| `autoFixFormat` | Toggle | Attempt to extract valid JSON from malformed responses |
+|---|---|---|
+| `systemPrompt` | Textarea | Core identity and constraints — defines who the agent is and what it's allowed to do |
+| `temperature` | Number | Reasoning creativity (default: `0.7`) |
+| `stepLimit` | Number | Max Think → Act → Observe loops before stopping (range: `1–12`) |
+| `memorySize` | Number | How many previous turns to keep in context |
+| `structuredOutputEnabled` | Toggle | Force the agent to always return a specific JSON structure |
+| `schemaType` | Select | How to define the output schema: `json-example` or `json-schema` |
+| `jsonExample` | JSON | Provide an example JSON object — all fields become required |
+| `jsonSchema` | JSON | Provide a full JSON Schema for precise validation |
+| `autoFixFormat` | Toggle | Try to salvage valid JSON from a malformed response |
| Output | Type | Description |
-|--------|------|-------------|
-| `responseText`| Text | Final answer after reasoning is complete |
-| `structuredOutput` | JSON | Parsed structured output (when enabled) |
-| `conversationState` | JSON | Updated state to pass to the next agent node |
-| `reasoningTrace` | JSON | Detailed step-by-step logs of the agent's thoughts |
+|---|---|---|
+| `responseText` | Text | The agent's final answer after all reasoning steps |
+| `structuredOutput` | JSON | Parsed, validated JSON output (when Structured Output is enabled) |
+| `conversationState` | JSON | Updated state — loop this back into the next agent node for multi-turn memory |
+| `reasoningTrace` | JSON | Step-by-step log of every thought and action the agent took |
| `agentRunId` | Text | Unique session ID for tracking and streaming |
+
+<Tip>
+  Use **Structured Output** whenever you need the agent's response to feed into another component downstream. It guarantees the JSON schema is correct every time — no fragile prompt-parsing required.
+</Tip>
+
---
## MCP Tools (Model Context Protocol)
-ShipSec Studio supports the **Model Context Protocol (MCP)**, allowing AI agents to interact with external tools over HTTP.
+MCP lets your AI Agent **call external tools** — things like searching logs, querying APIs, or running lookups — as part of its reasoning loop. You define what tools are available; the agent decides when and how to use them.
### MCP HTTP Tools
-Exposes a set of tools from a remote HTTP server that implements the MCP contract.
+Connects to a remote server that exposes tools over HTTP using the MCP protocol.
-| Input | Type | Description |
-|-------|------|-------------|
-| `endpoint` | Text | The HTTP URL where the MCP server is hosted |
-| `headersJson` | Text | Optional JSON of headers (e.g., Auth tokens) |
-| `tools` | JSON | List of tool definitions available on that endpoint |
-
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `endpoint` | Text | Destination URL for tool execution |
-| `tools` | JSON | Array of tools with `id`, `title`, and `arguments` |
+| Input / Parameter | Type | Description |
+|---|---|---|
+| `endpoint` | Text | The URL of the MCP server |
+| `headersJson` | Text | Optional JSON headers (e.g. `{"Authorization": "Bearer ..."}`) |
+| `tools` | JSON | Tool definitions — each needs an `id`, `title`, and `arguments` schema |
| Output | Type | Description |
-|--------|------|-------------|
-| `tools` | List | Normalized MCP tool definitions for the AI Agent |
+|---|---|---|
+| `tools` | List | Normalized tool list ready to plug into an AI Agent |
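+
+For example, a `tools` value with a single lookup tool might look like this (the tool itself is made up; `arguments` follows JSON Schema conventions):
+
+```json
+[
+  {
+    "id": "virustotal.ip_report",
+    "title": "VirusTotal IP Report",
+    "arguments": {
+      "type": "object",
+      "properties": { "ip": { "type": "string" } },
+      "required": ["ip"]
+    }
+  }
+]
+```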
+
+---
### MCP Tool Merge
-Combines multiple MCP tool lists into a single consolidated list.
+Combines tool lists from multiple MCP servers into one unified list for the agent.
| Input | Type | Description |
-|-------|------|-------------|
-| `toolsA`, `toolsB`| List | Multiple upstream MCP tool outputs |
-
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `slots` | JSON | Configure additional input ports for merging |
+|---|---|---|
+| `toolsA`, `toolsB` | List | Outputs from two or more MCP HTTP Tools nodes |
+| `slots` | JSON | Add more input ports if you have more than two sources |
| Output | Type | Description |
-|--------|------|-------------|
-| `tools` | List | De-duplicated list of tools ready for an agent |
+|---|---|---|
+| `tools` | List | De-duplicated, merged tool list — connect directly to the Agent's `mcpTools` input |
---
-## Use Cases
+## Real-World Use Cases
+
+### Alert Triage
+> *"Is this alert real or noise?"*
+```
+OpenAI Provider → AI Generate Text
+```
+
+Pass raw alert data into the prompt and let the model filter signal from noise before it ever reaches a human.
+```
+System Prompt: "You are a security analyst. Classify alerts as TRUE_POSITIVE or FALSE_POSITIVE with a one-sentence reason."
+User Prompt: "{{alertPayload}}"
+```
-### Automated Alert Triage
-**Flow:** `Provider` → `AI Generate Text`
+---
-Analyze incoming security alerts to filter out false positives.
-**Prompt:** "Given this alert payload: {{alert}}, determine if it's a real threat or noise."
+### Autonomous Investigation
+> *"Investigate this IP using every tool available."*
+```
+OpenAI Provider + MCP HTTP Tools → AI SDK Agent
+```
-### Investigative Agent
-**Flow:** `Provider` + `MCP Tool` → `AI Agent`
+Give the agent a task and a set of tools (Splunk search, VirusTotal lookup, etc.) and let it figure out the investigation steps itself.
+```
+Task: "Investigate IP {{ip}}. Use the available tools to determine if it is malicious."
+```
-An agent that searches through logs and performs lookups to investigate a specific IP address.
-**Task:** "Investigate the IP {{ip}} using the available Splunk and VirusTotal tools."
+---
-### Structured Output for Data Extraction
-**Flow:** `Provider` → `AI Agent` (with Structured Output enabled)
+### Structured Data Extraction
+> *"Turn this messy report into clean JSON."*
+```
+OpenAI Provider → AI SDK Agent (Structured Output enabled)
+```
-Extract structured data from unstructured security reports. Enable **Structured Output** and provide a JSON example:
+Enable **Structured Output** and provide a JSON example. The agent will always return data in exactly this shape — no prompt-wrangling, no parsing errors.
```json
{
"severity": "high",
@@ -185,20 +211,26 @@ Extract structured data from unstructured security reports. Enable **Structured
"remediation_steps": ["Patch CVE-2024-1234", "Restart service"]
}
```
-The agent will always return validated JSON matching this schema, ready for downstream processing.
---
## Best Practices
-
- **The Provider Concept**: Always place a Provider node (OpenAI/Gemini/OpenRouter) at the start of your AI chain. This allows you to swap models or providers for the entire workflow by changing just one node.
-
+**Use System Prompts for behavior, User Prompts for data.** System prompts define *who the model is* and *how it should respond*. User prompts carry the actual data to process. Keep them separate.
-### Prompt Engineering
-1. **Use Structured Output**: When you need consistent JSON for downstream nodes, enable **Structured Output** instead of relying on prompt instructions. This guarantees schema compliance and eliminates parsing errors.
-2. **Use System Prompts**: Set high-level rules (e.g., "You are a senior security researcher") in the System Prompt parameter instead of the User Input.
-3. **Variable Injection**: Use `{{variableName}}` syntax to inject data from upstream nodes into your prompts.
+**Inject upstream data with `{{variableName}}` syntax.** Any output from a previous component can be dropped into a prompt — just reference it by name.
+
+**Loop `conversationState` for multi-turn memory.** Connect the `conversationState` output of one Agent node into the `conversationState` input of the next to give your agent persistent memory across steps.
+
+**Always use Secret Loader for API keys.** Never paste API keys directly into Provider parameters — always connect them via the Secret Loader component.
+
+---
-### Memory & State
-For multi-turn conversations, always loop the `conversationState` output of the AI Agent back into the `conversationState` input of the next agent invocation (or store it in a persistent variable).
+
+
+ ← Previous
+
+
+ Next →
+
+
\ No newline at end of file
diff --git a/docs/components/core.mdx b/docs/components/core.mdx
index 3ad0c765..3303467f 100644
--- a/docs/components/core.mdx
+++ b/docs/components/core.mdx
@@ -1,46 +1,48 @@
---
title: "Core Components"
-description: "Essential building blocks for workflow inputs, outputs, and data transformation"
+description: "The essential building blocks for starting workflows, moving data, transforming it, and storing results."
---
-Core components handle workflow triggers, file operations, data transformation, and output destinations.
+Core components are the **glue of every workflow**. They handle how a workflow starts, how data moves between steps, how secrets are accessed, and where results end up. Every workflow you build will use at least a few of these.
---
## Triggers
+Triggers are always the **first component** in a workflow. Nothing runs until a trigger fires.
+
### Manual Trigger
-Starts a workflow manually. Configure runtime inputs to collect data (files, text, etc.) when triggered.
+The simplest way to start a workflow — you click **Run**, fill in any required inputs, and it kicks off.
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `runtimeInputs` | JSON | Define inputs to collect at runtime |
+| Parameter | Type | What it does |
+|---|---|---|
+| `runtimeInputs` | JSON | Define what fields to show the user at runtime (files, text, numbers, etc.) |
-**Supported input types:** `file`, `text`, `number`, `json`, `array`
+**Supported input types:** `file` · `text` · `number` · `json` · `array`
-**Example use cases:**
-- Collect uploaded scope files before running security scans
-- Prompt operators for target domains or API keys
+**Good for:**
+- Asking an operator to upload a scope file before a scan starts
+- Prompting for a target domain or API key right before execution
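+
+For example, a `runtimeInputs` value that asks the operator for a scope file and a target domain might look like this (the field names are illustrative; check the config panel for the exact shape):
+
+```json
+[
+  { "name": "scopeFile", "type": "file", "label": "Scope file" },
+  { "name": "targetDomain", "type": "text", "label": "Target domain" }
+]
+```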
---
### Webhook
-Sends JSON payloads to external HTTP endpoints with retries and timeouts.
+Sends a JSON payload to an external HTTP endpoint, with built-in retries and timeouts.
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `url` | URL | Destination endpoint |
-| `method` | Select | POST, PUT, or PATCH |
-| `payload` | JSON | Request body |
-| `headers` | JSON | HTTP headers |
-| `timeoutMs` | Number | Request timeout (default: 30000) |
-| `retries` | Number | Retry attempts (default: 3) |
+| Parameter | Type | What it does |
+|---|---|---|
+| `url` | URL | Where to send the request |
+| `method` | Select | `POST`, `PUT`, or `PATCH` |
+| `payload` | JSON | The request body |
+| `headers` | JSON | Any custom HTTP headers |
+| `timeoutMs` | Number | How long to wait before giving up (default: `30000` ms) |
+| `retries` | Number | How many times to retry on failure (default: `3`) |
-**Example use cases:**
-- Send scan results to Slack or Teams
-- POST assets to a custom API
+**Good for:**
+- Sending scan results to Slack, Teams, or another chat webhook
+- Posting scan results to a custom internal API
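+
+A typical configuration, with illustrative values:
+
+```json
+{
+  "url": "https://hooks.slack.com/services/T000/B000/XXXX",
+  "method": "POST",
+  "payload": { "text": "Scan finished: {{summary}}" },
+  "headers": { "Content-Type": "application/json" },
+  "timeoutMs": 30000,
+  "retries": 3
+}
+```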
---
@@ -48,49 +50,51 @@ Sends JSON payloads to external HTTP endpoints with retries and timeouts.
### File Loader
-Loads file content from storage for use in workflows.
+Loads a previously uploaded file from storage so its contents can flow into the rest of your workflow.
| Input | Type | Description |
-|-------|------|-------------|
-| `fileId` | UUID | File ID from uploaded file |
+|---|---|---|
+| `fileId` | UUID | The ID of the file you want to load |
| Output | Type | Description |
-|--------|------|-------------|
-| `file` | Object | File metadata + base64 content |
-| `textContent` | String | Decoded UTF-8 text |
+|---|---|---|
+| `file` | Object | File metadata plus base64-encoded content |
+| `textContent` | String | The file's contents as plain readable text |
---
### Text Splitter
-Splits text into an array of strings by separator.
+Takes a block of text and breaks it into a list of items — like splitting a list of domains by line.
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `text` | String/File | Text content to split |
-| `separator` | String | Split character (default: `\n`) |
+| Parameter | Type | What it does |
+|---|---|---|
+| `text` | String / File | The text you want to split |
+| `separator` | String | What character to split on (default: `\n` — new line) |
| Output | Type | Description |
-|--------|------|-------------|
-| `items` | Array | Split strings |
-| `count` | Number | Number of items |
+|---|---|---|
+| `items` | Array | The list of split strings |
+| `count` | Number | How many items were produced |
-**Example:** Split newline-delimited subdomains before passing to scanners.
+<Tip>
+  Use this after **File Loader** to turn a newline-separated list of domains into an array you can pass to a scanner.
+</Tip>
---
### Text Joiner
-Joins array elements into a single string.
+The opposite of Text Splitter — takes a list of items and merges them into one string.
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `items` | Array | Array of strings to join |
-| `separator` | String | Join character (default: `\n`) |
+| Parameter | Type | What it does |
+|---|---|---|
+| `items` | Array | The list of strings to join |
+| `separator` | String | What to put between each item (default: `\n`) |
| Output | Type | Description |
-|--------|------|-------------|
-| `text` | String | Joined string |
+|---|---|---|
+| `text` | String | The combined result |
---
@@ -98,113 +102,135 @@ Joins array elements into a single string.
### Secret Loader
-Fetches secrets from the ShipSec-managed secret store.
+Safely fetches a stored secret (API key, password, token) from ShipSec's encrypted secret store and makes it available to downstream components — **without ever exposing the value in logs.**
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `secretName` | Secret | Secret name or UUID |
-| `version` | Number | Optional version pin |
-| `outputFormat` | Select | `raw` or `json` |
+| Parameter | Type | What it does |
+|---|---|---|
+| `secretName` | Secret | The name or UUID of the secret to fetch |
+| `version` | Number | Pin to a specific version (optional) |
+| `outputFormat` | Select | `raw` for plain text, `json` for parsed object |
| Output | Type | Description |
-|--------|------|-------------|
-| `secret` | Any | Resolved secret value (masked in logs) |
-| `metadata` | Object | Secret version info |
+|---|---|---|
+| `secret` | Any | The secret value — automatically masked in all logs |
+| `metadata` | Object | Version info and metadata about the secret |
- Secret values are automatically masked in all logs and terminal output.
+ Secret values are **automatically redacted** from all logs, terminal output, and trace events. They are never stored in plain text.
---
## Data Transformation
+These components let you reshape, filter, and debug data as it flows through your workflow.
+
### Array Pick
-Extracts specific items from an array by index.
+Pulls specific items out of an array by their position (index).
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `array` | Array | Source array |
-| `indices` | Array | Indices to pick |
+| Parameter | Type | What it does |
+|---|---|---|
+| `array` | Array | The source list |
+| `indices` | Array | Which positions to extract (e.g. `[0, 2, 4]`) |
| Output | Type | Description |
-|--------|------|-------------|
-| `picked` | Array | Selected items |
+|---|---|---|
+| `picked` | Array | Just the items you selected |
+
+---
### Array Pack
-Combines multiple values into a single array.
+Bundles multiple separate values into a single array — useful when you need to combine outputs from different components.
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `values` | Any[] | Values to pack |
+| Parameter | Type | What it does |
+|---|---|---|
+| `values` | Any[] | The values to bundle together |
| Output | Type | Description |
-|--------|------|-------------|
-| `array` | Array | Packed array |
+|---|---|---|
+| `array` | Array | The combined array |
+
+---
### Console Log
-Outputs data to workflow logs for debugging.
+Prints data to the workflow's log panel. Use this when you're building or debugging a workflow and want to inspect what's flowing between steps.
+
+| Parameter | Type | What it does |
+|---|---|---|
+| `data` | Any | The value you want to inspect |
+| `label` | String | An optional label to identify the log entry |
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `data` | Any | Data to log |
-| `label` | String | Optional label |
+
+<Note>
+  Console Log doesn't affect your workflow's execution — it's purely for visibility. Remove it before running workflows in production.
+</Note>
+
---
-## Storage Destinations
+## Storage & Destinations
+
+Where your workflow results end up.
### Artifact Writer
-Writes workflow artifacts to ShipSec storage.
+Saves a file or data blob to ShipSec's built-in storage, making it downloadable from the workflow run page.
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `content` | Any | Content to store |
-| `filename` | String | Artifact filename |
-| `mimeType` | String | Content type |
+| Parameter | Type | What it does |
+|---|---|---|
+| `content` | Any | The data to store |
+| `filename` | String | What to name the file |
+| `mimeType` | String | The file type (e.g. `text/plain`, `application/json`) |
| Output | Type | Description |
-|--------|------|-------------|
-| `artifactId` | UUID | Stored artifact ID |
-| `url` | String | Download URL |
+|---|---|---|
+| `artifactId` | UUID | A unique ID for the stored artifact |
+| `url` | String | A direct download URL |
+
+---
### File Writer
-Writes content to a file in workflow storage.
+Writes raw text content to a file path in workflow storage.
+
+| Parameter | Type | What it does |
+|---|---|---|
+| `content` | String | The text to write |
+| `path` | String | Where to save it (e.g. `results/output.txt`) |
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `content` | String | File content |
-| `path` | String | File path |
+---
### Destination S3
-Uploads files to an S3-compatible bucket.
+Uploads a file directly to any S3-compatible bucket (AWS S3, MinIO, Cloudflare R2, etc.).
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `bucket` | String | S3 bucket name |
-| `key` | String | Object key |
-| `content` | Buffer | File content |
-| `credentials` | Object | AWS credentials |
+| Parameter | Type | What it does |
+|---|---|---|
+| `bucket` | String | The bucket name |
+| `key` | String | The object path/key inside the bucket |
+| `content` | Buffer | The file content to upload |
+| `credentials` | Object | AWS credentials (use **AWS Credentials** component) |
+
+---
### AWS Credentials
-Provides AWS credentials for S3 operations.
+Provides AWS credentials to other components that need them (like **Destination S3**). Connect its output to the `credentials` input of any AWS-powered component.
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `accessKeyId` | Secret | AWS Access Key ID |
-| `secretAccessKey` | Secret | AWS Secret Access Key |
-| `region` | String | AWS region |
+| Parameter | Type | What it does |
+|---|---|---|
+| `accessKeyId` | Secret | Your AWS Access Key ID |
+| `secretAccessKey` | Secret | Your AWS Secret Access Key |
+| `region` | String | AWS region (e.g. `us-east-1`) |
| Output | Type | Description |
-|--------|------|-------------|
-| `credentials` | Object | Credential object for S3 components |
+|---|---|---|
+| `credentials` | Object | A credentials object ready to plug into S3 components |
+
+
+<Warning>
+  Always use the **Secret Loader** to supply `accessKeyId` and `secretAccessKey` — never paste keys directly into parameters.
+</Warning>
+
---
@@ -212,37 +238,48 @@ Provides AWS credentials for S3 operations.
### Analytics Sink
-Indexes workflow output data into OpenSearch for analytics dashboards, queries, and alerts. Connect the `results` port from upstream security scanners.
+Sends workflow output data into **OpenSearch** so you can query it, visualize it in dashboards, and track findings over time. Connect it to the `results` port of any scanner.
| Input | Type | Description |
-|-------|------|-------------|
-| `data` | Any | Data to index. Works best with `list` from scanner `results` ports. |
+|---|---|---|
+| `data` | Any | The data to index — works best with `list` from scanner `results` ports |
| Output | Type | Description |
-|--------|------|-------------|
-| `indexed` | Boolean | Whether data was successfully indexed |
-| `documentCount` | Number | Number of documents indexed |
-| `indexName` | String | Name of the OpenSearch index used |
-
-| Parameter | Type | Description |
-|-----------|------|-------------|
-| `indexSuffix` | String | Custom suffix for the index name. Defaults to slugified workflow name. |
-| `assetKeyField` | Select | Field to use as asset identifier. Options: auto, asset_key, host, domain, subdomain, url, ip, asset, target, custom |
-| `customAssetKeyField` | String | Custom field name when assetKeyField is "custom" |
-| `failOnError` | Boolean | When enabled, workflow stops if indexing fails. Default: false (fire-and-forget) |
-
-**How it works:**
-
-1. Each item in the input array becomes a separate document
-2. Workflow context is added under `shipsec.*` namespace
-3. Nested objects are serialized to JSON strings (prevents field explosion)
-4. All documents get the same `@timestamp`
-
-**Example use cases:**
-- Index Nuclei scan results for trend analysis
-- Store TruffleHog secrets for tracking over time
-- Aggregate vulnerability data across workflows
+|---|---|---|
+| `indexed` | Boolean | Whether indexing succeeded |
+| `documentCount` | Number | How many records were indexed |
+| `indexName` | String | The OpenSearch index that was used |
+
+| Parameter | Type | What it does |
+|---|---|---|
+| `indexSuffix` | String | Custom suffix appended to the index name. Defaults to the slugified workflow name. |
+| `assetKeyField` | Select | Which field to use as the asset identifier. Options: `auto`, `asset_key`, `host`, `domain`, `subdomain`, `url`, `ip`, `asset`, `target`, `custom` |
+| `customAssetKeyField` | String | Your own field name when `assetKeyField` is set to `custom` |
+| `failOnError` | Boolean | If `true`, the workflow stops when indexing fails. Default: `false` (fire-and-forget) |
+
+**How it works behind the scenes:**
+
+1. Each item in the input array becomes its own searchable document
+2. Workflow metadata is added automatically under the `shipsec.*` namespace
+3. Nested objects are serialized to prevent index field explosion
+4. All documents share the same `@timestamp` for time-series querying
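+
+For example, one scanner finding might be indexed roughly like this (everything outside `shipsec.*` and `@timestamp` depends on the scanner's output, and the `shipsec` field names shown are illustrative):
+
+```json
+{
+  "@timestamp": "2025-01-15T10:32:00.000Z",
+  "host": "api.example.com",
+  "severity": "high",
+  "info": "{\"name\":\"Example CVE template\",\"tags\":[\"cve\"]}",
+  "shipsec": { "workflow": "nuclei-scan", "run_id": "run_123" }
+}
+```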
+
+**Good for:**
+- Tracking Nuclei vulnerability findings over time
+- Storing TruffleHog secrets for audit trails
+- Aggregating results across multiple workflows into one dashboard
- See [Workflow Analytics](/development/workflow-analytics) for detailed setup and querying guide.
+ Analytics Sink requires OpenSearch to be configured. See the [Workflow Analytics](/development/workflow-analytics) guide for full setup instructions.
+
+---
+
+
+
+ ← Previous
+
+
+ Next →
+
+
\ No newline at end of file
diff --git a/docs/components/overview.mdx b/docs/components/overview.mdx
index ba42b9d7..0344ca7d 100644
--- a/docs/components/overview.mdx
+++ b/docs/components/overview.mdx
@@ -1,87 +1,146 @@
---
title: "Components Overview"
-description: "Drag-and-drop building blocks for security automation workflows"
+description: "The drag-and-drop building blocks you connect together to automate security workflows — no code required."
---
-Components are the building blocks of ShipSec Studio workflows. Each component performs a specific task and can be connected together to create powerful automation pipelines.
+## What Are Components?
+
+Think of components like **LEGO bricks for security automation**. Each one does one specific job — scan subdomains, probe for live servers, detect leaked secrets, send a Slack message. You drag them onto a canvas, connect them together, and ShipSec Studio runs the whole chain automatically.
+
+No scripting. No glue code. Just connect and run.
+
+---
## Component Categories
-
+
- Triggers, file handling, data transformation, and outputs
+ Triggers, file handling, data transformation, logic, and outputs. The backbone of every workflow.
- Subdomain discovery, port scanning, DNS resolution, secret detection
+ Industry-standard security tools — subdomain discovery, port scanning, DNS resolution, secret detection, and more.
- Provider configurations and autonomous agents
+ Connect LLM providers and autonomous agents to analyze, triage, and summarize findings automatically.
-## How Components Work
+---
-Each component has:
+## How a Component Works
-- **Inputs** – Data ports that accept connections from other components
-- **Outputs** – Data ports that can be connected to downstream components
-- **Parameters** – Configurable settings in the sidebar
+Every component has three parts you interact with:
-Components run inside Docker containers for isolation and reliability. Workflows are orchestrated by [Temporal](https://temporal.io/) with automatic retries and resumability.
+- **Inputs** — the left-side connection points. This is where data flows *in* from a previous component.
+- **Outputs** — the right-side connection points. This is where processed data flows *out* to the next component.
+- **Parameters** — the settings panel on the right sidebar. Configure behavior per-component without touching any code.
-## Component Structure
+
+<Note>
+  Every component runs inside its own **isolated Docker container**. This means a broken or misbehaving component can't affect the rest of your workflow. Each run is sandboxed, reproducible, and clean.
+</Note>
+
-```typescript
-interface ComponentDefinition {
- id: string; // Unique identifier
- label: string; // Display name
- category: string; // Category for grouping
- runner: RunnerConfig; // Execution configuration
- inputSchema: ZodSchema;
- outputSchema: ZodSchema