
🧩 Baselinr


🌐 Visit our website →

Baselinr is a modern, open-source data profiling and drift detection framework for SQL-based data warehouses. It automatically profiles datasets, stores metadata and statistics, and detects drift over time.

🚀 Features

  • Automated Profiling: Profile tables with column-level metrics (count, null %, distinct values, mean, stddev, histograms, etc.)
  • Drift Detection: Compare profiling runs to detect schema and statistical drift with configurable strategies
  • Type-Specific Thresholds: Adjust drift sensitivity based on column data type (numeric, categorical, timestamp, boolean) to reduce false positives
  • Intelligent Baseline Selection: Automatically selects optimal baseline method (last run, moving average, prior period, stable window) based on column characteristics
  • Advanced Statistical Tests: Kolmogorov-Smirnov (KS) test, Population Stability Index (PSI), Chi-square, Entropy, and more for rigorous drift detection
  • Expectation Learning: Automatically learns expected metric ranges from historical profiling data, including control limits, distributions, and categorical frequencies for proactive anomaly detection
  • Anomaly Detection: Automatically detects outliers and seasonal anomalies using learned expectations with multiple detection methods (IQR, MAD, EWMA, trend/seasonality, regime shift)
  • Event & Alert Hooks: Pluggable event system for real-time alerts and notifications on drift, schema changes, anomalies, and profiling lifecycle events
  • Partition-Aware Profiling: Intelligent partition handling with strategies for latest, recent_n, or sample partitions
  • Adaptive Sampling: Multiple sampling methods (random, stratified, top-k) for efficient profiling of large datasets
  • Multi-Database Support: Works with PostgreSQL, Snowflake, SQLite, MySQL, BigQuery, and Redshift
  • Schema Versioning & Migrations: Built-in schema version management with migration system for safe database schema evolution
  • Metadata Querying: Powerful CLI and API for querying profiling runs, drift events, and table history
  • Dagster Integration: Built-in orchestration support with Dagster assets and schedules
  • Configuration-Driven: Simple YAML/JSON configuration for defining profiling targets
  • Historical Tracking: Store profiling results over time for trend analysis
  • CLI Interface: Comprehensive command-line interface for profiling, drift detection, querying, schema management, and dashboard UI

📋 Requirements

  • Python 3.10+
  • One of the supported databases: PostgreSQL, Snowflake, SQLite, MySQL, BigQuery, or Redshift

🔧 Installation

Install from PyPI

Install Baselinr directly from PyPI:

pip install baselinr

Install with Optional Dependencies

Baselinr supports optional dependencies for enhanced functionality:

Snowflake Support:

pip install baselinr[snowflake]

Dagster Integration:

pip install baselinr[dagster]

All Features:

pip install baselinr[all]

Development Installation

For local development, clone the repository and install in editable mode:

git clone https://github.com/baselinrhq/baselinr.git
cd baselinr
pip install -e ".[dev]"

📚 Documentation

All documentation lives in the docs/ directory.

See docs/README.md for the complete documentation index.

πŸƒ Quick Start

1. Create a Configuration File

Create a config.yml file:

environment: development

source:
  type: postgres
  host: localhost
  port: 5432
  database: mydb
  username: user
  password: password
  schema: public

storage:
  connection:
    type: postgres
    host: localhost
    port: 5432
    database: mydb
    username: user
    password: password
  results_table: baselinr_results
  runs_table: baselinr_runs
  create_tables: true
  enable_expectation_learning: true  # Learn expected ranges automatically
  learning_window_days: 30           # Use last 30 days of data
  min_samples: 5                     # Require at least 5 historical runs
  enable_anomaly_detection: true     # Detect anomalies using learned expectations

profiling:
  tables:
    # Explicit table selection (highest priority)
    - table: customers
      schema: public
    
    # Pattern-based selection (wildcard)
    - pattern: "user_*"
      schema: public
      # Matches: user_profile, user_settings, user_preferences, etc.
    
    # Schema-based selection (all tables in schema)
    - select_schema: true
      schema: analytics
      exclude_patterns:
        - "*_temp"
        - "*_backup"
    
    # Regex pattern matching
    - pattern: "^(customer|order)_\\d{4}$"
      pattern_type: regex
      schema: public
      # Matches: customer_2024, order_2024, etc.
    
    # Multi-database profiling (optional database field)
    # - table: users
    #   schema: public
    #   database: analytics_db  # Profile from analytics_db instead of source.database
    # - pattern: "order_*"
    #   schema: public
    #   database: warehouse_db  # Profile matching tables from warehouse_db
    # - select_schema: true
    #   schema: analytics
    #   database: production_db  # Profile all tables in analytics schema from production_db
  
  # Discovery options for pattern-based selection
  discovery_options:
    max_tables_per_pattern: 1000
    max_schemas_per_database: 100
    cache_discovery: true
    validate_regex: true
  
  default_sample_ratio: 1.0
  compute_histograms: true
  histogram_bins: 10

2. Preview What Will Be Profiled

baselinr plan --config config.yml

This shows you what tables will be profiled without actually running the profiler.

3. Run Profiling

baselinr profile --config config.yml

4. Detect Drift

After running profiling multiple times:

baselinr drift --config config.yml --dataset customers

5. Query Profiling Metadata

Query your profiling history and drift events:

# List recent profiling runs
baselinr query runs --config config.yml --limit 10

# Query drift events
baselinr query drift --config config.yml --table customers --days 7

# Get detailed run information
baselinr query run --config config.yml --run-id <run-id>

# View table profiling history
baselinr query table --config config.yml --table customers --days 30

6. Check System Status

Get a quick overview of recent runs and active drift:

# Show status dashboard
baselinr status --config config.yml

# Show only drift summary
baselinr status --config config.yml --drift-only

# Watch mode (auto-refresh)
baselinr status --config config.yml --watch

# JSON output for scripting
baselinr status --config config.yml --json

7. Start Dashboard UI

Launch the web dashboard to view profiling runs, drift alerts, and metrics:

# Start dashboard (foreground mode)
baselinr ui --config config.yml

# Custom ports
baselinr ui --config config.yml --port-backend 8080 --port-frontend 3001

# Localhost only
baselinr ui --config config.yml --host 127.0.0.1

Press Ctrl+C to stop the dashboard. See docs/schemas/UI_COMMAND.md for more details.

8. Manage Schema Migrations

Check and apply schema migrations:

# Check schema version status
baselinr migrate status --config config.yml

# Apply migrations to latest version
baselinr migrate apply --config config.yml --target 1

# Validate schema integrity
baselinr migrate validate --config config.yml

🐳 Docker Development Environment

Baselinr includes a complete Docker environment for local development and testing.

Start the Environment

cd docker
docker-compose up -d

This starts the full development stack defined in docker/docker-compose.yml in detached mode.

Stop the Environment

cd docker
docker-compose down

📊 Profiling Metrics

Baselinr computes the following metrics:

All Column Types

  • count: Total number of rows
  • null_count: Number of null values
  • null_ratio: Ratio of null values (0.0 to 1.0)
  • distinct_count: Number of distinct values
  • unique_ratio: Ratio of distinct values to total (0.0 to 1.0)
  • approx_distinct_count: Approximate distinct count (database-specific)
  • data_type_inferred: Inferred data type from values (email, url, date, etc.)
  • column_stability_score: Column presence stability (0.0 to 1.0)
  • column_age_days: Days since column first appeared
  • type_consistency_score: Type consistency across runs (0.0 to 1.0)
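
The ratio metrics are plain fractions over the column's values. A small illustration in plain Python (not Baselinr internals; whether nulls count toward distinct values is an implementation detail):

values = ["a", None, "b", "a", None]

count = len(values)                                         # 5
null_count = sum(v is None for v in values)                 # 2
null_ratio = null_count / count                             # 0.4
distinct_count = len({v for v in values if v is not None})  # 2
unique_ratio = distinct_count / count                       # 0.4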

Numeric Columns

  • min: Minimum value
  • max: Maximum value
  • mean: Average value
  • stddev: Standard deviation
  • histogram: Distribution histogram (optional)

String Columns

  • min: Lexicographic minimum
  • max: Lexicographic maximum
  • min_length: Minimum string length
  • max_length: Maximum string length
  • avg_length: Average string length

Table-Level Metrics

  • row_count_change: Change in row count from previous run
  • row_count_change_percent: Percentage change in row count
  • row_count_stability_score: Row count stability (0.0 to 1.0)
  • row_count_trend: Trend direction (increasing/stable/decreasing)
  • schema_freshness: Timestamp of last schema modification
  • schema_version: Incrementing schema version number
  • column_count_change: Net change in column count

See docs/guides/PROFILING_ENRICHMENT.md for detailed documentation on enrichment features.

🧠 Expectation Learning

Baselinr can automatically learn expected metric ranges from historical profiling data, creating statistical models that help identify outliers without explicit thresholds.

Key Features

  • Automatic Learning: Continuously learns expected values for metrics like mean, stddev, null_ratio, count, and unique_ratio
  • Control Limits: Calculates lower and upper control limits using Shewhart (3-sigma) method or EWMA (Exponentially Weighted Moving Average)
  • Distribution Detection: Automatically detects if metrics follow normal or empirical distributions
  • Categorical Frequencies: Tracks expected frequency distributions for categorical columns
  • Separate from Baselines: Learned expectations are stored separately from drift detection baselines, enabling proactive anomaly detection

How It Works

Expectation learning analyzes historical profiling data over a configurable window (default: 30 days) to compute:

  • Expected mean, variance, and standard deviation
  • Control limits for outlier detection (3-sigma or EWMA-based)
  • Distribution parameters (normal vs empirical)
  • Expected categorical value frequencies

These learned expectations are automatically updated after each profiling run, providing an evolving model of what "normal" looks like for your data.
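
As a sketch of the math involved (illustrative only, not Baselinr's implementation): Shewhart limits bound a metric by its historical mean plus or minus three standard deviations, while EWMA weights recent runs more heavily.

import statistics

def shewhart_limits(history: list[float]) -> tuple[float, float]:
    # Classic Shewhart control limits: mean +/- 3 standard deviations.
    mean = statistics.mean(history)
    sigma = statistics.stdev(history)
    return mean - 3 * sigma, mean + 3 * sigma

def ewma_smooth(history: list[float], lam: float = 0.2) -> float:
    # Exponentially weighted moving average; lam plays the role of ewma_lambda.
    smoothed = history[0]
    for x in history[1:]:
        smoothed = lam * x + (1 - lam) * smoothed
    return smoothed

null_ratio_history = [0.010, 0.012, 0.011, 0.013, 0.012]  # e.g., the last five runs
low, high = shewhart_limits(null_ratio_history)           # values outside [low, high] flag outliers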

Configuration

Enable expectation learning in your config.yml:

storage:
  enable_expectation_learning: true
  learning_window_days: 30      # Historical window in days
  min_samples: 5                 # Minimum runs required for learning
  ewma_lambda: 0.2              # EWMA smoothing parameter (0 < lambda <= 1)

Use Cases

  • Proactive Monitoring: Identify anomalies before they cause drift
  • Automated Alerting: Flag unexpected metric values automatically
  • Trend Analysis: Understand normal ranges for your data over time
  • Quality Assurance: Ensure metrics stay within expected operational ranges

See docs/guides/EXPECTATION_LEARNING.md for comprehensive documentation on expectation learning.

🔄 Dagster Integration

Baselinr can create Dagster assets dynamically from your configuration:

from baselinr.integrations.dagster import build_baselinr_definitions

defs = build_baselinr_definitions(
    config_path="config.yml",
    asset_prefix="baselinr",
    job_name="baselinr_profile_all",
    enable_sensor=True,  # optional
)

🔧 dbt Integration

Baselinr provides comprehensive integration with dbt for scalable profiling and drift detection.

Using dbt Refs/Selectors in Configs

Reference dbt models directly in your baselinr configuration:

profiling:
  tables:
    - dbt_ref: customers
      dbt_project_path: ./dbt_project
    - dbt_selector: tag:critical
      dbt_project_path: ./dbt_project

Installation:

  1. Install baselinr: pip install baselinr
  2. Reference dbt models in your baselinr config as shown above
  3. Run dbt compile or dbt run to generate manifest.json
  4. Run profiling: baselinr profile --config baselinr_config.yml

Note: dbt hooks can only execute SQL, not Python scripts. Run profiling after dbt run using an orchestrator or manually.

See dbt Integration Guide for complete documentation.

🐍 Python SDK

Baselinr provides a high-level Python SDK for programmatic access to all functionality.

Quick Start

from baselinr import BaselinrClient

# Initialize client
client = BaselinrClient(config_path="config.yml")

# Build execution plan
plan = client.plan()
print(f"Will profile {plan.total_tables} tables")

# Profile tables
results = client.profile()
for result in results:
    print(f"Profiled {result.dataset_name}: {len(result.columns)} columns")

# Detect drift
drift_report = client.detect_drift("customers")
print(f"Found {len(drift_report.column_drifts)} column drifts")

# Query recent runs
runs = client.query_runs(days=7, limit=10)

# Get status summary
status = client.get_status()
print(f"Active drift events: {len(status['drift_summary'])}")

Key Features

  • Simple API: All functionality through a single BaselinrClient class
  • Automatic Setup: Handles configuration loading, connection management, and event bus setup
  • Type Hints: Full type annotations for IDE support
  • Lazy Loading: Connections initialized only when needed

For complete SDK documentation including all methods, parameters, and advanced patterns, see the Python SDK Guide and the SDK examples.

🎯 Use Cases

  • Data Quality Monitoring: Track data quality metrics over time
  • Schema Change Detection: Automatically detect schema changes
  • Statistical Drift Detection: Identify statistical anomalies in your data
  • Data Documentation: Generate up-to-date metadata about your datasets
  • CI/CD Integration: Fail builds when critical drift is detected

πŸ“ Project Structure

baselinr/
├── baselinr/             # Main package
│   ├── config/           # Configuration management
│   ├── connectors/       # Database connectors
│   ├── profiling/        # Profiling engine
│   ├── storage/          # Results storage
│   ├── drift/            # Drift detection
│   ├── learning/         # Expectation learning
│   ├── anomaly/          # Anomaly detection
│   ├── integrations/
│   │   └── dagster/      # Dagster assets & sensors
│   └── cli.py            # CLI interface
├── examples/             # Example configurations
│   ├── config.yml        # PostgreSQL example
│   ├── config_sqlite.yml # SQLite example
│   ├── config_mysql.yml  # MySQL example
│   ├── config_bigquery.yml # BigQuery example
│   ├── config_redshift.yml # Redshift example
│   ├── config_with_metrics.yml # Metrics example
│   ├── config_slack_alerts.yml # Slack alerts example
│   ├── dagster_repository.py
│   └── quickstart.py
├── docker/               # Docker environment
│   ├── docker-compose.yml
│   ├── Dockerfile
│   ├── init_postgres.sql
│   ├── dagster.yaml
│   └── workspace.yaml
├── setup.py
├── requirements.txt
└── README.md

🧪 Running Examples

Quick Start Example

python examples/quickstart.py

CLI Examples

# View profiling plan (dry-run)
baselinr plan --config examples/config.yml

# View plan in JSON format
baselinr plan --config examples/config.yml --output json

# View plan with verbose details
baselinr plan --config examples/config.yml --verbose

# Profile all tables in config
baselinr profile --config examples/config.yml

# Profile with output to JSON
baselinr profile --config examples/config.yml --output results.json

# Dry run (don't write to storage)
baselinr profile --config examples/config.yml --dry-run

# Detect drift
baselinr drift --config examples/config.yml --dataset customers

# Detect drift with specific runs
baselinr drift --config examples/config.yml \
  --dataset customers \
  --baseline <run-id-1> \
  --current <run-id-2>

# Fail on critical drift (useful for CI/CD)
baselinr drift --config examples/config.yml \
  --dataset customers \
  --fail-on-drift

# Use statistical tests for advanced drift detection
# (configure in config.yml: strategy: statistical)

# Query profiling runs
baselinr query runs --config examples/config.yml --limit 10

# Query drift events for a table
baselinr query drift --config examples/config.yml \
  --table customers \
  --severity high \
  --days 7

# Get detailed run information
baselinr query run --config examples/config.yml \
  --run-id <run-id> \
  --format json

# View table profiling history
baselinr query table --config examples/config.yml \
  --table customers \
  --days 30 \
  --format csv \
  --output history.csv

# Check system status
baselinr status --config examples/config.yml

# Watch status (auto-refresh)
baselinr status --config examples/config.yml --watch

# Status with JSON output
baselinr status --config examples/config.yml --json

# Start dashboard UI
baselinr ui --config examples/config.yml

# Check schema migration status
baselinr migrate status --config examples/config.yml

# Apply schema migrations
baselinr migrate apply --config examples/config.yml --target 1

# Validate schema integrity
baselinr migrate validate --config examples/config.yml

πŸ” Drift Detection

Baselinr provides multiple drift detection strategies and intelligent baseline selection:

Available Strategies

  1. Absolute Threshold (default): Simple percentage-based thresholds
    • Low: 5% change
    • Medium: 15% change
    • High: 30% change
  2. Standard Deviation: Statistical significance based on standard deviations
  3. Statistical Tests (advanced): Multiple statistical tests for rigorous detection
    • Numeric columns: KS test, PSI, Z-score (see the PSI sketch below)
    • Categorical columns: Chi-square, Entropy, Top-K stability
    • Automatically selects appropriate tests based on column type
Intelligent Baseline Selection

Baselinr automatically selects the optimal baseline for drift detection based on column characteristics:

  • Auto Selection: Automatically chooses the best baseline method per column
    • High variance columns → Moving average (smooths noise)
    • Seasonal columns → Prior period (accounts for weekly/monthly patterns)
    • Stable columns → Last run (simplest baseline)
  • Moving Average: Average of last N runs (configurable, default: 7)
  • Prior Period: Same period last week/month (handles seasonality)
  • Stable Window: Historical window with low drift (most reliable)
  • Last Run: Simple comparison to previous run (default)

Thresholds and baseline selection are fully configurable via the drift_detection configuration. See docs/guides/DRIFT_DETECTION.md for general drift detection and docs/guides/STATISTICAL_DRIFT_DETECTION.md for statistical tests.
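
As a sketch of the moving-average case (illustrative only, with made-up numbers): the baseline is the mean of the last N runs, which damps run-to-run noise before thresholds are applied.

def moving_average_baseline(history: list[float], window: int = 7) -> float:
    # Baseline = mean of the last `window` runs (default of 7, as above).
    recent = history[-window:]
    return sum(recent) / len(recent)

row_counts = [100_000, 101_500, 99_800, 102_000, 100_900, 101_200, 100_400]
baseline = moving_average_baseline(row_counts)        # ~100,829
change_pct = 100 * (135_000 - baseline) / baseline    # ~33.9% -> high at the 30% threshold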

🔔 Event & Alert Hooks

Baselinr includes a pluggable event system that emits events for drift detection, schema changes, and profiling lifecycle events. You can register hooks to process these events for logging, persistence, or alerting.

Built-in Hooks

  • LoggingAlertHook: Log events to stdout
  • SQLEventHook: Persist events to any SQL database
  • SnowflakeEventHook: Persist events to Snowflake with VARIANT support

Example Configuration

hooks:
  enabled: true
  hooks:
    # Log all events
    - type: logging
      log_level: INFO
    
    # Persist to database
    - type: sql
      table_name: baselinr_events
      connection:
        type: postgres
        host: localhost
        database: monitoring
        username: user
        password: pass

Event Types

  • DataDriftDetected: Emitted when drift is detected
  • SchemaChangeDetected: Emitted when schema changes
  • ProfilingStarted: Emitted when profiling begins
  • ProfilingCompleted: Emitted when profiling completes
  • ProfilingFailed: Emitted when profiling fails

Custom Hooks

Create custom hooks by implementing the AlertHook protocol:

from baselinr.events import BaseEvent

class MyCustomHook:
    def handle_event(self, event: BaseEvent) -> None:
        # Process the event
        print(f"Event: {event.event_type}")

Configure custom hooks:

hooks:
  enabled: true
  hooks:
    - type: custom
      module: my_hooks
      class_name: MyCustomHook
      params:
        webhook_url: https://api.example.com/alerts

See docs/architecture/EVENTS_AND_HOOKS.md for comprehensive documentation and examples.

🔄 Schema Versioning & Migrations

Baselinr includes a built-in schema versioning system to manage database schema evolution safely.

Migration Commands

# Check current schema version status
baselinr migrate status --config config.yml

# Apply migrations to a specific version
baselinr migrate apply --config config.yml --target 1

# Preview migrations (dry run)
baselinr migrate apply --config config.yml --target 1 --dry-run

# Validate schema integrity
baselinr migrate validate --config config.yml

How It Works

  • Schema versions are tracked in the baselinr_schema_version table
  • Migrations are applied incrementally and can be rolled back
  • The system automatically detects when your database schema is out of date
  • Migrations are idempotent and safe to run multiple times

πŸ” Metadata Querying

Baselinr provides powerful querying capabilities to explore your profiling history and drift events.

Query Commands

# Query profiling runs with filters
baselinr query runs --config config.yml \
  --table customers \
  --status completed \
  --days 30 \
  --limit 20 \
  --format table

# Query drift events
baselinr query drift --config config.yml \
  --table customers \
  --severity high \
  --days 7 \
  --format json

# Get detailed information about a specific run
baselinr query run --config config.yml \
  --run-id abc123-def456 \
  --format json

# View table profiling history over time
baselinr query table --config config.yml \
  --table customers \
  --schema public \
  --days 90 \
  --format csv \
  --output history.csv

Output Formats

All query commands support multiple output formats:

  • table: Human-readable table format (default)
  • json: JSON format for programmatic use
  • csv: CSV format for spreadsheet analysis
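
Because every query command can emit JSON, the CLI is straightforward to script against. A small sketch (the commands are documented above; treat the parsing as illustrative, since the JSON structure is Baselinr-defined):

import json
import subprocess

result = subprocess.run(
    ["baselinr", "query", "runs", "--config", "config.yml",
     "--limit", "10", "--format", "json"],
    capture_output=True, text=True, check=True,
)
runs = json.loads(result.stdout)  # exact shape depends on Baselinr's JSON schema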

πŸ› οΈ Configuration Options

Source Configuration

source:
  type: postgres | snowflake | sqlite | mysql | bigquery | redshift
  host: hostname
  port: 5432
  database: database_name
  username: user
  password: password
  schema: schema_name  # Optional
  
  # Snowflake-specific
  account: snowflake_account
  warehouse: warehouse_name
  role: role_name
  
  # SQLite-specific
  filepath: /path/to/database.db
  
  # BigQuery-specific (credentials via extra_params)
  extra_params:
    credentials_path: /path/to/service-account-key.json
    # Or use GOOGLE_APPLICATION_CREDENTIALS environment variable
  
  # MySQL-specific
  # Uses standard host/port/database/username/password
  
  # Redshift-specific
  # Uses standard host/port/database/username/password
  # Default port: 5439

Profiling Configuration

profiling:
  # Table discovery and pattern-based selection
  table_discovery: true  # Enable automatic table discovery
  discovery_options:
    max_tables_per_pattern: 1000  # Limit matches per pattern
    max_schemas_per_database: 100  # Limit schemas to scan
    validate_regex: true  # Validate regex patterns at config load time
    tag_provider: auto  # Tag metadata provider: auto, snowflake, bigquery, postgres, mysql, redshift, sqlite, dbt
  
  tables:
    # Explicit table selection (highest priority)
    - table: table_name
      schema: schema_name  # Optional
    
    # Pattern-based selection (wildcard)
    - pattern: "user_*"
      schema: public
      # Matches all tables starting with "user_"
    
    # Regex pattern matching
    - pattern: "^(customer|order)_\\d{4}$"
      pattern_type: regex
      schema: public
      # Matches: customer_2024, order_2024, etc.
    
    # Schema-based selection (all tables in schema)
    - select_schema: true
      schema: analytics
      exclude_patterns:
        - "*_temp"
        - "*_backup"
    
    # Database-level selection (all schemas)
    - select_all_schemas: true
      exclude_schemas:
        - "information_schema"
        - "pg_catalog"
    
    # Multi-database profiling (optional database field)
    # When database is specified, the pattern operates on that database
    # When omitted, uses config.source.database (backward compatible)
    # - table: customers
    #   schema: public
    #   database: analytics_db
    # - select_all_schemas: true
    #   database: staging_db  # Profile all schemas in staging_db
    
    # Tag-based selection
    - tags:
        - "data_quality:critical"
        - "domain:customer"
      schema: public
    
    # Precedence override (explicit table overrides pattern)
    - pattern: "events_*"
      schema: analytics
      override_priority: 10
    
    - table: events_critical
      schema: analytics
      override_priority: 100  # Higher priority overrides pattern
  
  default_sample_ratio: 1.0
  max_distinct_values: 1000
  compute_histograms: true  # Enable for statistical tests
  histogram_bins: 10
  
  metrics:
    - count
    - null_count
    - null_ratio
    - distinct_count
    - unique_ratio
    - approx_distinct_count
    - min
    - max
    - mean
    - stddev
    - histogram
    - data_type_inferred

Drift Detection Configuration

drift_detection:
  # Strategy: absolute_threshold | standard_deviation | statistical
  strategy: absolute_threshold
  
  # Absolute threshold (default)
  absolute_threshold:
    low_threshold: 5.0
    medium_threshold: 15.0
    high_threshold: 30.0
  
  # Baseline auto-selection configuration
  baselines:
    strategy: auto  # auto | last_run | moving_average | prior_period | stable_window
    windows:
      moving_average: 7    # Number of runs for moving average
      prior_period: 7      # Days for prior period (1=day, 7=week, 30=month)
      min_runs: 3          # Minimum runs required for auto-selection
  
  # Statistical tests (advanced)
  # statistical:
  #   tests:
  #     - ks_test
  #     - psi
  #     - z_score
  #     - chi_square
  #     - entropy
  #     - top_k
  #   sensitivity: medium
  #   test_params:
  #     ks_test:
  #       alpha: 0.05
  #     psi:
  #       buckets: 10
  #       threshold: 0.2

Expectation Learning Configuration

storage:
  # Enable automatic learning of expected metric ranges
  enable_expectation_learning: true
  
  # Historical window in days for learning expectations
  learning_window_days: 30
  
  # Minimum number of historical runs required for learning
  min_samples: 5
  
  # EWMA smoothing parameter for control limits (0 < lambda <= 1)
  # Lower values = more smoothing (0.1-0.3 recommended)
  ewma_lambda: 0.2

Anomaly Detection Configuration

storage:
  # Enable automatic anomaly detection using learned expectations
  enable_anomaly_detection: true
  
  # List of enabled detection methods (default: all methods)
  anomaly_enabled_methods:
    - control_limits
    - iqr
    - mad
    - ewma
    - seasonality
    - regime_shift
  
  # IQR multiplier threshold for outlier detection
  anomaly_iqr_threshold: 1.5
  
  # MAD threshold (modified z-score) for outlier detection
  anomaly_mad_threshold: 3.0
  
  # EWMA deviation threshold (number of stddevs)
  anomaly_ewma_deviation_threshold: 2.0
  
  # Enable trend and seasonality detection
  anomaly_seasonality_enabled: true
  
  # Enable regime shift detection
  anomaly_regime_shift_enabled: true
  
  # Number of recent runs for regime shift comparison
  anomaly_regime_shift_window: 3
  
  # P-value threshold for regime shift detection
  anomaly_regime_shift_sensitivity: 0.05
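
To make the IQR and MAD thresholds above concrete, here is a sketch of the standard outlier tests they correspond to (illustrative, not Baselinr's internal code):

import statistics

def iqr_outlier(x: float, history: list[float], k: float = 1.5) -> bool:
    # Flag x outside [Q1 - k*IQR, Q3 + k*IQR]; k corresponds to anomaly_iqr_threshold.
    q1, _, q3 = statistics.quantiles(history, n=4)
    iqr = q3 - q1
    return x < q1 - k * iqr or x > q3 + k * iqr

def mad_outlier(x: float, history: list[float], threshold: float = 3.0) -> bool:
    # Modified z-score via the median absolute deviation (anomaly_mad_threshold).
    med = statistics.median(history)
    mad = statistics.median([abs(v - med) for v in history])
    if mad == 0:
        return False  # degenerate history; rely on the other methods
    return abs(0.6745 * (x - med) / mad) > threshold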

πŸ” Environment Variables

Baselinr supports environment variable overrides:

# Override source connection
export BASELINR_SOURCE__HOST=prod-db.example.com
export BASELINR_SOURCE__PASSWORD=secret

# Override environment
export BASELINR_ENVIRONMENT=production

# Run profiling
baselinr profile --config config.yml
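
The double underscore acts as a nesting separator: BASELINR_SOURCE__HOST overrides source.host. A hypothetical sketch of that mapping, inferred from the examples above (not Baselinr's actual loader):

import os

def apply_env_overrides(config: dict, prefix: str = "BASELINR_") -> dict:
    # Hypothetical illustration of the __ convention only.
    for key, value in os.environ.items():
        if not key.startswith(prefix):
            continue
        path = key[len(prefix):].lower().split("__")  # SOURCE__HOST -> ["source", "host"]
        node = config
        for part in path[:-1]:
            node = node.setdefault(part, {})
        node[path[-1]] = value
    return config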

🧪 Development

Run Tests

pytest

Code Formatting

black baselinr/
isort baselinr/

Type Checking

mypy baselinr/

πŸ“ License

Apache License 2.0 - see LICENSE file for details.

🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

📧 Contact

For questions and support, please open an issue on GitHub.


Baselinr - Modern data profiling made simple 🧩
