Product: Decent Bench
Type: Cross-platform desktop SQL-style app (Flutter)
License: Apache 2.0
Primary purpose: Drag-and-drop import into DecentDB, then inspect
schema, run the pinned DecentDB SQL surface, and export shaped results.
PRD reference: design/PRD.md
Pinned engine capability baseline: DecentDB v2.x
Release status: v1.0.0 is the shipped MVP release for the scope in this
document. References to "Phase 1" remain as historical delivery context.
- DecentDB: The target embedded database format used by Decent Bench workspaces.
- Import source: A file or external database used only as input to create or load data into DecentDB.
- Wizard: The guided flow launched on drag-and-drop of a non-DecentDB file to configure import.
- Workspace: An open DecentDB file plus UI state, such as tabs, recent objects, and per-tab query state.
- Schema browser: UI for catalog discovery such as tables, views, columns, indexes, triggers, and constraints.
- Cursor: A query handle returned by the DecentDB binding that allows page-based retrieval of results without full materialization.
This document distinguishes between:
- Phase 1 implementation slice: the smallest useful, runnable slice that should land first
- MVP scope: the shipped product scope for v1.0.0
- Next: explicitly deferred beyond the shipped MVP
If a feature appears in multiple documents and the scope differs, this SPEC is the source of truth for implementation scope.
Phase 1 exists to reduce delivery risk and establish a runnable architecture. It should include only:
- Open an existing DecentDB file
- Create a new DecentDB file
- Load schema browser metadata for tables and columns
- Single query editor tab
- Run and cancel a query
- Paged/streamed results grid
- CSV export from query results
- Minimal TOML configuration
- Runnable desktop scaffold with tests and CI hooks
Phase 1 intentionally excludes import wizard work, autocomplete, snippets, formatter, JSON/Parquet/Excel export, and multi-tab editing.
- Drag-and-drop a file onto the app window:
- If DecentDB, open immediately
- Otherwise, launch Import Wizard based on file type
- Desktop launch may also accept `dbench --import <path>` to open the matching import wizard on startup for a supported import source
- Single-file drop for MVP
- If multiple files are dropped, import the first and show a warning dialog
- Excel (`.xls`, `.xlsx`)
- SQLite (`.db`, `.sqlite`, `.sqlite3`)
- MariaDB/MySQL-style `.sql` dump (MVP-lite; common `CREATE TABLE` + `INSERT` patterns)
- Import transforms before commit:
- Rename columns
- Type overrides
- Computed columns are deferred to Next
- Must reflect the supported DecentDB object kinds for the pinned DecentDB version used by Decent Bench
- MVP object classes:
- tables
- views
- temp tables/views
- columns
- indexes, including rich index metadata
- constraints/triggers from rich schema snapshot metadata
- generated-column metadata from rich schema snapshot metadata
- Unsupported object kinds must degrade gracefully and must not block the rest of schema browsing
- Multi-tab editor with per-tab results panes
- Keyboard navigation between tabs and focus between editor/results
- Run/stop query with best-effort cancellation
- Execute the full SQL surface documented by the pinned DecentDB engine reference, even when dedicated UI affordances for some engine features arrive later
- Schema-aware autocomplete
- Autocomplete suggestions should appear inside the editor surface and accept with `Tab` instead of rendering in a detached tray below the editor
- User-editable snippets
- Deterministic SQL formatter
- Virtualized/paginated results grid
- Export query results:
- CSV (required for MVP)
- JSON, Parquet, and Excel exports are Next
- TOML configuration file stored locally
- Desktop `Options / Preferences` dialog reads from and writes to the TOML application configuration file for editor, export, layout, shortcut, and snippet settings
- ADRs from day one using the provided template and README policy
- Postgres custom-format backup import
- Plain Postgres import unless later added by ADR and PRD/SPEC update
- Multi-workspace support
- Collaboration features
- Full migration tooling
- External databases as first-class query targets
- ERD designer
- Query plan visualizer
- Stored procedure workflow tooling
- Script orchestration engine
- Computed-column transforms during import
Decent Bench is composed of:
- UI shell (Flutter) — windowing, navigation, tabs, dialogs, wizards
- DecentDB binding adapter — Dart-side wrapper over the upstream DecentDB Dart FFI bindings
- Import pipeline — parsers/connectors + transform planner + bulk load
- Export pipeline — cursor/grid data source to exporters
- Workspace state — current database, tabs, results, recent files
- Config + secrets — TOML config + OS secure credential storage
- ADR process — decision records for long-lived technical choices
The architecture must preserve these invariants:
- no heavy work on the UI thread
- no default full-result materialization
- cancellation is best-effort but UI responsiveness is mandatory
- imports and exports must run as jobs with explicit status and error reporting
/apps/decent-bench/
  /lib/
    /app/              # app shell, routing, theming, composition root
    /features/
      /workspace/
      /import_wizard/
      /schema_browser/
      /sql_editor/
      /results_grid/
      /export/
      /settings/
    /shared/           # shared widgets, utilities, abstractions
  /native/             # native DecentDB artifacts or packaging helpers
/design/
  IMPLEMENTATION_PHASES.md
  /adr/
    README.md
    0000-template.md
    0001-...
  PRD.md
  SPEC.md
Feature folders should avoid mixing UI and orchestration logic in the same file. As implementation grows, each complex feature should separate:
- presentation/widgets
- controllers/view models/state
- domain models/contracts
- infrastructure adapters where needed
This is guidance rather than a mandated folder taxonomy, but the separation of concerns is required.
Trigger: user drops a file onto the main window.
Detection rules (MVP):
- Extension-based detection:
  - DecentDB: `.ddb`
  - Excel: `.xls`, `.xlsx`
  - SQLite: `.db`, `.sqlite`, `.sqlite3`
  - SQL dump: `.sql`
- Lightweight signature checks may be added where safe, but extension-based detection is acceptable for MVP
- If extension is unrecognized:
- Show an "Unknown file type" wizard/screen with supported types guidance
Behavior:
- If DecentDB:
- Open workspace
- Load schema browser immediately
- Otherwise:
- Launch import wizard with source file preselected
- If launched with `--import <path>`:
  - detect the source type using the same file-kind rules as drag-and-drop
  - open the matching import wizard after app initialization completes
  - show a clear notice when the supplied path is missing or not an importable source type
Decent Bench also ships a headless import CLI mode that runs without the desktop UI:
- `dbench --in <source-path> --out <target.ddb>` runs a headless import using inferred defaults
- `dbench --in <source-path> --out <target.ddb> --plan <plan.json>` runs a headless import with explicit import options from a versioned JSON plan file (see `docs/HEADLESS_IMPORT_PLAN_DETAILS.md`)
- `dbench --silent` suppresses non-error console output in headless mode
This mode is governed by ADR-0022 and is intended for scripting and batch workflows. It is a shipped v1.0.0 feature and is not considered experimental.
Multi-drop:
- If more than one file is dropped:
- take the first
- show warning: "MVP supports importing one file at a time."
Wizard steps:
- Source selection
  Pre-filled from drag-and-drop when applicable.
- Target selection
  Create new DecentDB file or choose existing DecentDB file.
- Preview
  Show inferred schema and sample rows.
- Transforms
- Rename columns
- Adjust types to DecentDB native types
- Import execution
- progress
- cancel when feasible
- Summary
- rows imported
- errors and warnings
- actions: "Open table" / "Run a query"
Each tab owns:
- SQL text buffer
- execution state
- result data source metadata
- execution-plan metadata derived from `EXPLAIN` when available
- error panel state
- export state for the active result set
When a real workspace is auto-reopened from the recent-files list during app startup, the shell should rerun the most recent saved query for that workspace so the results area is populated immediately. If there is no saved query history, the shell should run a first-table preview query that visibly populates the results pane.
Keyboard requirements:
- `Ctrl/Cmd+Enter`: execute
- `Ctrl/Cmd+Tab`: next tab
- `Ctrl/Cmd+Shift+Tab`: previous tab
- `Tab`/`Shift+Tab`: move focus between editor and results controls
Per-tab history is optional for MVP.
The schema browser must be backed by DecentDB metadata queries or APIs and must not hardcode schema assumptions beyond the pinned engine version.
Selecting an object shows details such as:
- definition text where available
- columns and types
- constraints
- indexes
- triggers
- generated-column metadata and temp-object details
Search/filter should be responsive and operate on an in-memory metadata model derived from the latest loaded schema snapshot.
- Export action is initiated from the results pane
- User chooses format
- User configures format-specific options
- User chooses destination path
- Export runs as a background job with progress and error reporting
For MVP, only CSV is required to be implemented.
The binding strategy is governed by `design/adr/0001-decentdb-flutter-binding-strategy.md`.
Normative decision: Decent Bench uses the upstream DecentDB Dart FFI bindings as the supported integration mechanism.
This SPEC must not be interpreted as requiring a custom C shim or an alternative binding layer for MVP. If the upstream bindings prove insufficient, that gap must be addressed through:
- an ADR update or new ADR
- an implementation plan update
- corresponding PRD/SPEC changes if scope changes
Although the upstream bindings are the integration mechanism, the app should still define a local Dart-side adapter/service boundary so UI and feature code do not depend directly on raw binding calls.
That adapter must encapsulate at least:
- open/close DB
- execute SQL
- open query cursor
- fetch next page
- close cursor
- cancellation request
- schema introspection
- structured error mapping
The effective minimum capabilities required from the adapter and underlying bindings are:
- open/close DB by file path
- execute arbitrary SQL statements supported by the pinned engine version
- bind positional parameters
- query SQL with page-based retrieval
- schema introspection across supported object kinds
- best-effort cancellation
- structured error reporting where available
The pinned DecentDB compatibility line (v2.x) and its official SQL reference are the
normative source of truth for SQL capability in Decent Bench.
For the pinned engine version, Decent Bench should preserve support for the documented categories below rather than introducing an app-specific reduced SQL subset:
- DDL: tables, temp tables/views, indexes, view lifecycle, trigger lifecycle, generated columns, and supported constraints
- DML: `INSERT`, `SELECT`, `UPDATE`, `DELETE`, `ANALYZE`
- Query features: `WHERE`, scalar functions, common table expressions including recursive CTEs, set operations, joins, aggregate functions, window functions, transactions, `EXPLAIN`, `EXPLAIN ANALYZE`, table-valued functions, and positional parameters
If the pinned engine reference documents a limitation or unsupported feature, that behavior should be treated as an engine limitation rather than papered over by Decent Bench documentation.
All heavy work must be off the UI thread.
Background execution is required for:
- query execution
- cursor paging
- imports
- exports
- large metadata loads
Implementation may use Dart isolates, native background threads, or both, depending on the behavior of the upstream bindings and the surrounding adapter.
The paging model is governed by `design/adr/0002-results-paging-and-streaming-contract.md`. Until superseded, this SPEC adopts the cursor-based paging model described there.
Query execution uses this lifecycle:
- Open with `queryOpen(sql, options) -> cursor`
- Repeatedly call `queryNext(cursor, pageSize) -> page`
- Finish with `queryClose(cursor)`
A page contains:
- column metadata
- row batch
- `done` flag
- optional warnings
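The open/next/close lifecycle and the page shape above can be sketched as a bounded paging loop. This is an illustrative TypeScript sketch (the app itself is Dart/Flutter); the `queryOpen`/`queryNext`/`queryClose` names come from this SPEC, while the interfaces and the `drainCursor` helper are assumptions:

```typescript
// Illustrative page/cursor shapes; field names beyond the SPEC's column
// metadata, row batch, `done` flag, and warnings are assumptions.
interface Page {
  columns: string[];
  rows: unknown[][];
  done: boolean;
  warnings?: string[];
}

interface Cursor {
  next(pageSize: number): Page; // stands in for queryNext(cursor, pageSize)
  close(): void;                // stands in for queryClose(cursor)
}

// Drain a cursor page by page without materializing the full result set.
// `onPage` receives each bounded batch (e.g. to append rows to the grid).
function drainCursor(
  cursor: Cursor,
  pageSize: number,
  onPage: (p: Page) => void
): number {
  let rowsFetched = 0;
  try {
    while (true) {
      const page = cursor.next(pageSize);
      rowsFetched += page.rows.length;
      onPage(page);
      if (page.done) break;
    }
  } finally {
    cursor.close(); // always release the engine-side handle
  }
  return rowsFetched;
}
```

Only one page is held by the loop at a time, which is what keeps the memory rules below satisfiable even for unbounded results.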
The application must never load the entire result set into memory by default.
Allowed:
- keeping recent pages in memory for smooth scrolling
- holding export buffers in bounded chunks
- retaining prior successful result metadata for the current tab
Not allowed:
- converting an unbounded query result into an in-memory list before display
- exporting by first materializing the full result set in app memory
- Default page size is configurable in TOML
- Initial default target: 1000
- UI may adapt page size later, but fixed-size paging is acceptable for MVP
Each query tab must implement the following states:
- idle: no active execution
- running: query is open and pages may still arrive
- cancelling: user requested stop; no new user-initiated paging allowed
- completed: query finished successfully
- failed: query failed
- cancelled: query stopped before completion
State requirements:
- `idle -> running` on execute
- `running -> completed` when final page arrives and cursor closes
- `running -> failed` on execution or paging error
- `running -> cancelling` on user stop
- `cancelling -> cancelled` once cursor is closed or the run is abandoned safely
- `cancelling -> failed` if termination surfaces an actionable error
- A new execute action may start from `completed`, `failed`, or `cancelled`
- A new execute action from `cancelling` is not allowed until cleanup completes
If cancellation occurs after one or more pages have been received:
- already received rows may remain visible
- the tab must clearly indicate that the result is partial/cancelled
- partial results must not be mislabeled as complete
Pages, warnings, and errors from an older execution must be ignored once a newer execution has started for the same tab. Each execution should have a unique run identifier at the controller/state level.
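The allowed transitions and the stale-run guard can be sketched together as a small transition table plus a run identifier. The state names come from this SPEC; the class and function names are illustrative, and TypeScript is used only for the sketch:

```typescript
type TabState =
  | "idle" | "running" | "cancelling"
  | "completed" | "failed" | "cancelled";

// Allowed transitions, taken directly from the state requirements above.
const allowed: Record<TabState, TabState[]> = {
  idle: ["running"],
  running: ["completed", "failed", "cancelling"],
  cancelling: ["cancelled", "failed"],
  completed: ["running"],
  failed: ["running"],
  cancelled: ["running"],
};

function canTransition(from: TabState, to: TabState): boolean {
  return allowed[from].includes(to);
}

// Per-tab controller sketch: each execute bumps runId so pages, warnings,
// and errors from a superseded execution can be recognized and ignored.
class TabRun {
  state: TabState = "idle";
  runId = 0;

  execute(): number {
    if (!canTransition(this.state, "running")) {
      throw new Error(`cannot execute from ${this.state}`);
    }
    this.state = "running";
    return ++this.runId;
  }

  onPage(runId: number): boolean {
    // Stale pages from an older execution are dropped silently.
    return runId === this.runId && this.state === "running";
  }
}
```

Note that `execute` from `cancelling` throws, which encodes the rule that a new run must wait for cleanup to complete.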
Errors should map into a UI-safe structure containing:
- message
- engine code, if available
- SQL location, if available
- whether the error occurred during open, paging, cancellation, or close
The UI must allow copying error details.
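A UI-safe error shape covering the fields above, plus the flattening needed for the copy action, might look like the following TypeScript sketch; all names are illustrative:

```typescript
type ErrorPhase = "open" | "paging" | "cancellation" | "close";

// UI-safe error structure; never expose raw native handles or stack frames.
interface QueryError {
  message: string;
  engineCode?: string;                            // engine code, if available
  sqlLocation?: { line: number; column: number }; // SQL location, if available
  phase: ErrorPhase;                              // lifecycle stage of the failure
}

// Flatten the structure for the "copy error details" action.
function formatForClipboard(e: QueryError): string {
  const parts = [`phase=${e.phase}`, `message=${e.message}`];
  if (e.engineCode) parts.push(`code=${e.engineCode}`);
  if (e.sqlLocation) parts.push(`at=${e.sqlLocation.line}:${e.sqlLocation.column}`);
  return parts.join("\n");
}
```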
- Always map to DecentDB native types
- Wizard performs inference, but user may override
- When uncertain, prefer a safe textual representation unless a more specific mapping is clearly valid
- Mapping decisions should be visible in the summary step
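The "prefer a safe textual representation when uncertain" rule can be sketched as a fallback in the inference step. The DecentDB type names below are assumptions of this sketch, not the pinned engine's actual type list:

```typescript
// Hypothetical DecentDB native type names; the real list comes from the
// pinned engine reference, not from this sketch.
type NativeType = "INTEGER" | "REAL" | "TEXT";

// Infer a column type from sampled string values, falling back to TEXT
// whenever the samples are empty, mixed, or ambiguous.
function inferColumnType(samples: string[]): NativeType {
  const nonEmpty = samples.filter(s => s.trim() !== "");
  if (nonEmpty.length === 0) return "TEXT"; // empty column: safest default
  if (nonEmpty.every(s => /^-?\d+$/.test(s))) return "INTEGER";
  if (nonEmpty.every(s => /^-?\d+(\.\d+)?$/.test(s))) return "REAL";
  return "TEXT"; // mixed or unrecognized: textual fallback
}
```

The wizard would show the inferred type in the preview step and let the user override it, with the final decision echoed in the summary.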
Capabilities:
- choose workbook
- choose sheet(s)
- header row on/off
- type inference with override
- preview sample rows
- import into target table(s)
- accept `.xlsx` directly and normalize legacy `.xls` or parser-rejected workbooks through local background conversion when a compatible office CLI is available
Edge cases:
- empty columns
- mixed-type columns
- large sheets requiring streaming reads
- date/time columns requiring explicit mapping behavior
- legacy `.xls` workbooks must either import through temporary local conversion or fail with a clear dependency message
- `.xlsx` workbooks that the direct parser rejects may be retried through the same temporary normalization path
- formula columns on normal row-oriented sheets are imported as formula text with warnings; formula evaluation and computed-column transforms remain deferred
- aggregate-only summary sheets may be imported as read-only views when their formulas can be translated against selected workbook dependencies; otherwise they fall back to table import with warnings
Capabilities:
- choose SQLite file
- list and select tables
- copy schema and data
- map SQLite affinities to DecentDB types
- preserve column defaults, stored generated columns, and table/column `CHECK` constraints
- preserve single-column foreign keys plus `ON DELETE`/`ON UPDATE` actions
- preserve SQLite BTREE indexes that DecentDB supports, including multi-column, partial, and supported expression indexes
Edge cases:
- `STRICT` tables
- `WITHOUT ROWID` tables
- `BLOB` handling
- nullability inference
- SQLite virtual generated columns are imported as regular value columns with warnings because DecentDB currently supports stored generated columns
MVP-lite parsing scope:
- `CREATE TABLE`
- `INSERT INTO`
- common scalar types
- unsupported statements may be skipped with warnings
Wizard requirements:
- encoding detect/override
- preview parsed schema
- preview sample rows
- skipped statement count in summary
Imports should be transactional where practical.
Minimum behavior:
- a failed import must not leave the target table in an ambiguous half-finished state without surfacing that fact to the user
- summary must distinguish:
- succeeded
- partially succeeded with warnings
- failed and rolled back
- failed with manual cleanup required
Imports are background jobs with explicit state:
- queued
- running
- cancelling
- completed
- failed
- cancelled
- UI for renaming before commit
- unique-name enforcement
- collision warnings
- resulting names shown in preview
- per-column type dropdown limited to DecentDB native types
- invalid coercions must be validated before commit where possible
- optional coercion-to-null behavior may be added, but if supported it must be surfaced in the summary with counts
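Unique-name enforcement before commit can be sketched as a pre-commit validation pass over the rename plan. The function name is illustrative, and the case-insensitive comparison is an assumption of this sketch:

```typescript
// Validate a rename plan before commit: every resulting column name must
// be unique. Case-insensitive comparison is an assumption of this sketch.
function validateRenames(
  originalColumns: string[],
  renames: Map<string, string>
): string[] {
  const collisions: string[] = [];
  const seen = new Set<string>();
  for (const col of originalColumns) {
    const result = (renames.get(col) ?? col).toLowerCase();
    if (seen.has(result)) collisions.push(result); // collision warning target
    seen.add(result);
  }
  return collisions; // empty means the plan is safe to commit
}
```

A non-empty return would drive the collision warnings in the preview step rather than failing silently at commit time.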
Computed columns are not part of MVP and must not be treated as required
acceptance criteria for v0.2 scope.
These remain in MVP scope but are lower implementation priority than the Phase 1 slice.
Sources:
- schema metadata cache
- full pinned-engine DecentDB keywords/functions/operator list
Context-aware behavior should support at least:
- after `FROM` -> tables/views
- after alias + `.` -> columns
- function suggestion contexts
Autocomplete coverage should track the pinned engine reference for SQL keywords,
DDL/DML verbs, joins, CTEs, scalar functions, aggregate functions, window
functions, table-valued functions, transaction keywords, and EXPLAIN
variants.
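The first two context rules (after `FROM` -> tables/views, after alias + `.` -> columns) can be sketched as a lightweight check on the text left of the caret. This is a deliberately naive TypeScript illustration, not the real tokenizer:

```typescript
type SuggestionKind = "tables" | "columns" | "general";

// Naive context detection on the text before the caret. A real
// implementation would tokenize SQL properly; this only illustrates
// the two context rules named in the SPEC.
function suggestionContext(textBeforeCaret: string): SuggestionKind {
  // After an alias + ".", suggest columns (e.g. "SELECT t." -> columns of t).
  if (/[A-Za-z_][A-Za-z0-9_]*\.\s*$/.test(textBeforeCaret)) return "columns";
  // After FROM or JOIN, suggest tables and views.
  if (/\b(from|join)\s+$/i.test(textBeforeCaret)) return "tables";
  return "general"; // fall back to keywords/functions from the pinned list
}
```

The "general" bucket is where the full pinned-engine keyword/function/operator list would be consulted.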
- snippet store in TOML config or a separate TOML file
- include sensible defaults
- user-editable
- insertion may be via picker, shortcut, or token expansion
- deterministic formatting
- format selection or whole document
- preserve comments and string literals
- formatter dependency must be Apache-compatible
- virtualized scrolling
- responsive selection
- copy support for:
- cell
- row(s)
- selection as TSV/CSV to clipboard
Column resize and reorder are desirable but not mandatory for MVP unless later promoted by issue or ADR.
Show at minimum:
- rows fetched
- whether completion is known
- current page size
- running/cancelling/completed/cancelled status
Loading more may be automatic on scroll threshold, manual, or hybrid.
The grid/results pane must support:
- loading state without visual jank
- zero-row state
- error panel with message and details
- partial-result state after cancellation
CSV is the only required MVP export format.
CSV options:
- delimiter
- quote behavior
- include headers
The following are explicitly Next and not required for MVP:
- JSON
- Parquet
- Excel
If implemented early, they must be treated as optional stretch work, not as MVP acceptance blockers.
Exports must consume query pages/cursor data in the background and must not require full preloading of the entire result set into memory.
Export jobs must surface:
- progress when possible
- warnings
- completion state
- destination path
- actionable failure details
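Consuming cursor pages without preloading the full result set can be sketched as a chunked writer: each page is formatted and handed to the write sink before the next page is fetched. The CSV quoting here is simplified and the sink API is an assumption of this TypeScript sketch:

```typescript
interface ResultPage {
  rows: string[][];
  done: boolean;
}

// Minimal CSV quoting: wrap fields containing the delimiter, a quote,
// or a newline, and double embedded quotes.
function csvField(value: string, delimiter: string): string {
  if (value.includes(delimiter) || value.includes('"') || value.includes("\n")) {
    return '"' + value.replace(/"/g, '""') + '"';
  }
  return value;
}

// Stream pages straight to a write sink; memory stays bounded by one page.
function exportCsv(
  header: string[],
  nextPage: () => ResultPage,
  write: (chunk: string) => void,
  delimiter = ","
): number {
  let rowCount = 0;
  write(header.map(h => csvField(h, delimiter)).join(delimiter) + "\n");
  while (true) {
    const page = nextPage();
    for (const row of page.rows) {
      write(row.map(f => csvField(f, delimiter)).join(delimiter) + "\n");
      rowCount++;
    }
    if (page.done) break;
  }
  return rowCount; // reported in the export job's completion state
}
```

The returned row count feeds the job's progress/completion reporting; cancellation would break out of the loop between pages.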
Config location must follow OS-standard application config directories.
Config should include:
- recent files
- default page size
- max interactive rows guard
- editor settings
- snippets
- export defaults
The implementation must distinguish between:
- user config: global preferences and defaults
- workspace state: open-file-specific UI state
They may be stored separately even if both use TOML.
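A sketch of what the user-config file might contain; `config_version` and the default page size come from this SPEC, while every other key name is illustrative rather than normative:

```toml
# Illustrative user config; key names are assumptions, not the shipped schema.
config_version = 1

[results]
default_page_size = 1000       # initial default target from this SPEC
max_interactive_rows = 100000  # guard against runaway interactive fetches

[editor]
font_size = 13

[export.csv]
delimiter = ","
include_headers = true

[[snippets]]
name = "sel"
body = "SELECT * FROM "

[recent]
files = ["/path/to/example.ddb"]  # hypothetical path
```

Workspace state (open tabs, per-tab query text) would live in a separate file with its own schema version, per the distinction above.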
For any future external connection support:
- macOS: Keychain
- Windows: Credential Manager
- Linux: libsecret/gnome-keyring where available
Any fallback strategy requires an ADR before implementation.
Config format must include a schema version or migration mechanism before the format is considered stable.
The current TOML config format includes config_version, and workspace-state
storage includes an independent schema version for file-specific UI state.
Unit tests:
- config parsing
- query state transitions
- paging controller logic
- import type inference
- export option validation
Integration tests:
- open DecentDB
- execute `SELECT 1`
- page through a multi-page result
- cancel a query
- load schema metadata
The Phase 1 smoke suite should validate a representative slice of the pinned
DecentDB v2.x SQL surface. It is not a full compatibility suite, but it
must cover each major engine category that the app intends to preserve.
| Area | Representative operation | Minimum assertion |
|---|---|---|
| Parameters + paging | Create a table, insert/query with `$1`/`$2`, fetch via paged cursor | Parameter binding works and paged retrieval returns expected rows |
| Views + indexes | `CREATE VIEW`, `CREATE INDEX`, query the view, inspect schema metadata | SQL executes successfully and adapter metadata does not regress |
| Recursive CTEs | `WITH RECURSIVE` sequence or hierarchy query | Recursive result set is correct and cancellable through the normal query path |
| Constraints + generated columns | `CHECK`, `UNIQUE`, `DEFAULT`, `GENERATED ALWAYS AS ... STORED` | Valid rows succeed, invalid rows fail, generated values persist correctly |
| Window + aggregate functions | `ROW_NUMBER() OVER (...)` plus grouped aggregates | Result ordering and aggregate values match expectations |
| Table-valued JSON functions | `json_each(...)` and `json_tree(...)` in `FROM` | Returned row shape and representative values match expectations |
| Transactions + savepoints | `BEGIN`, `SAVEPOINT`, `ROLLBACK TO SAVEPOINT`, `COMMIT` | Only committed rows remain visible after rollback paths |
| Triggers + temp objects | `CREATE TRIGGER`, `CREATE TEMP TABLE/VIEW`, run trigger-producing DML | Trigger side effects occur and temp objects remain connection-scoped |
| Planner introspection | `EXPLAIN` and `EXPLAIN ANALYZE` on representative queries | Non-empty plan output is returned without UI hangs |
| Statistics collection | `ANALYZE table_name` outside explicit transaction | Command succeeds and leaves the database usable for subsequent queries |
UI/integration tests:
- open workspace
- run query and display results
- export CSV
- drag-and-drop launches import flow once implemented
The project should maintain reproducible scenarios for:
- opening a DB with many tables
- scrolling a large paged result set
- exporting large result sets without UI stalls
Exact performance gates may mature over time, but regressions in responsiveness are release-blocking.
When the app scaffold exists, expected validation commands are:
- `flutter analyze`
- `flutter test`
- `flutter test integration_test`
CI should run these as soon as the project becomes runnable.
- Bundle required DecentDB native libraries with desktop builds
- Ensure deterministic library discovery at app startup
- Runtime discovery order is:
  - Bundled with the app (platform-specific location)
  - System library paths (`/usr/local/lib/`, `~/.local/lib/`), or packaged staging output for development
- The desktop packaging flow may stage the DecentDB native library into the generated bundle through a repeatable helper script
- Keep packaging aligned with the upstream binding strategy from ADR-0001
- Signing/notarization and final installer formats may be staged after MVP, but packaging must not require manual developer-only steps for normal app startup
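The discovery order above can be sketched as a first-match probe over candidate paths. The library file name and the injected existence check are assumptions of this TypeScript sketch:

```typescript
// Probe candidate locations in order and return the first that exists.
// `exists` is injected so the policy is testable without touching disk.
function findNativeLibrary(
  bundledDir: string,
  homeDir: string,
  exists: (path: string) => boolean,
  libName = "libdecentdb.so" // illustrative name; varies per platform
): string | null {
  const candidates = [
    `${bundledDir}/${libName}`,         // 1. bundled with the app
    `/usr/local/lib/${libName}`,        // 2. system library paths
    `${homeDir}/.local/lib/${libName}`,
  ];
  return candidates.find(exists) ?? null;
}
```

Returning `null` rather than throwing lets the app surface a clear startup error instead of requiring manual developer-only setup.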
To reduce scope drift:
- `design/SPEC.md` is the implementation scope source of truth
- `design/PRD.md` describes product intent and user value
- Accepted ADRs govern architectural decisions
- If an accepted ADR changes implementation expectations, the SPEC must be updated in the same change or immediately afterward
- If scope changes materially, update both PRD and SPEC
A contributor can:
- launch a runnable Flutter desktop app
- open or create a DecentDB file
- see tables and columns in the schema browser
- run representative pinned-engine SQL statements in a single tab
- receive paged results without full materialization
- cancel a running query and recover the UI
- export visible query results to CSV
- pass the Phase 1 representative engine smoke-test matrix
- run analyzer and tests successfully
A user can:
- drag and drop a DecentDB file and open it
- drag and drop an Excel, SQLite, or supported `.sql` dump file and enter the import wizard
- import Excel with sheet selection, headers option, and type overrides
- import SQLite with table selection
- import at least one MariaDB/MySQL-style `.sql` dump successfully
- rename columns and adjust target types before import
- inspect supported schema objects in the schema browser
- use multi-tab query editing with paired results panes
- run and cancel queries in a responsive UI
- use schema-aware autocomplete, snippets, and formatting
- export query results to CSV
- complete the above without noticeable UI hangs in normal desktop use