RunMat automatically fuses operations and intelligently routes between CPU and GPU. MATLAB syntax. No kernel code, no rewrites.
Website • Documentation
RunMat is an early build. The core runtime and GPU engine already pass thousands of tests, but some plotting features are still missing or buggy. Expect a few rough edges. Feedback and bug reports help us decide what to fix next.
With RunMat you write your math in clean, readable MATLAB-style syntax. RunMat automatically fuses your operations into optimized kernels and runs them where they run best: CPU or GPU. On GPU, it can often match or beat hand-tuned CUDA on many dense numerical workloads.
It runs on whatever GPU you have (NVIDIA, AMD, Apple Silicon, Intel) through native APIs (Metal / DirectX 12 / Vulkan). No device management. No vendor lock-in. No rewrites.
Core ideas:
- MATLAB syntax, not a new language
- Fast on CPU and GPU, with one runtime
- No device flags: Fusion automatically chooses CPU vs GPU based on data size and transfer cost heuristics

**MATLAB language**

- Familiar `.m` files, arrays, control flow
- Many MATLAB / Octave scripts run with few or no changes

**Fusion: automatic CPU+GPU choice**

- Builds an internal graph of array ops
- Fuses elementwise ops and reductions into bigger kernels
- Chooses CPU or GPU per kernel based on shape and transfer cost
- Keeps arrays on device when that is faster
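
A minimal sketch of what this means in practice (sizes are illustrative, and the exact break-even point depends on your hardware and Fusion's cost model):

```matlab
% Same expression at two sizes; no device flags in either case.
small = rand(64, 1, 'single');    % a handful of elements: expected to stay on CPU
large = rand(2e7, 1, 'single');   % tens of millions: expected to be fused and offloaded

ys = exp(-small) .* small.^2;     % elementwise chain, candidate for a single fused kernel
yl = exp(-large) .* large.^2;
fprintf('%g %g\n', sum(ys), sum(yl));   % reductions; results gathered only at the sink
```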

**Modern CPU runtime**

- Ignition interpreter for fast startup
- Turbine JIT (Cranelift) for hot paths
- Generational GC tuned for numeric code
- Memory-safe by design (Rust)
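
For loop- and scalar-heavy code that never touches the GPU, the tiered CPU path is what matters. A small illustrative example of the kind of hot loop that starts instantly in the Ignition interpreter and is a natural target for the Turbine JIT:

```matlab
% Scalar accumulation loop: runs immediately in the interpreter,
% and a loop this hot is the kind of code the JIT tier is built for.
acc = 0;
for k = 1:1e7
    acc = acc + sin(k) / k;
end
disp(acc);
```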

**Cross-platform GPU backend**

- Uses wgpu / WebGPU
- Supports Metal (macOS), DirectX 12 (Windows), Vulkan (Linux)
- Falls back to CPU when workloads are too small for GPU to win

**Plotting and tooling (pre-release)**

- Simple 2D line and scatter plots work today
- Plots that use filled shapes or meshes (box plots, violin plots, surfaces, many 3D views) are not wired up yet
- 3D plots and better camera controls are on the roadmap
- VS Code / Cursor extensions are also on the roadmap

**Open source**

- MIT License with attribution
- Small binary, CLI-first design
The benchmarks below are large workloads where Fusion chooses the GPU.
Hardware: Apple M2 Max (Metal); each point is the mean of 3 runs.
| B | RunMat (ms) | PyTorch (ms) | NumPy (ms) | NumPy ÷ RunMat | PyTorch ÷ RunMat |
|---|---|---|---|---|---|
| 4 | 217.9 | 922.9 | 548.4 | 2.52x | 4.23x |
| 8 | 270.3 | 960.1 | 989.6 | 3.66x | 3.55x |
| 16 | 317.4 | 1,040.7 | 1,859.1 | 5.86x | 3.28x |
| 32 | 520.5 | 1,178.3 | 3,698.6 | 7.11x | 2.26x |
| 64 | 893.8 | 1,379.6 | 7,434.6 | 8.32x | 1.54x |
| M | RunMat (ms) | PyTorch (ms) | NumPy (ms) | NumPy ÷ RunMat | PyTorch ÷ RunMat |
|---|---|---|---|---|---|
| 250,000 | 179.8 | 955.4 | 4,252.3 | 23.65x | 5.31x |
| 500,000 | 203.1 | 1,021.8 | 9,319.9 | 45.90x | 5.03x |
| 1,000,000 | 243.3 | 1,283.9 | 17,946.4 | 73.78x | 5.28x |
| 2,000,000 | 372.0 | 1,469.4 | 38,826.8 | 104.36x | 3.95x |
| 5,000,000 | 678.1 | 1,719.5 | 95,539.2 | 140.89x | 2.54x |
| points | RunMat (ms) | PyTorch (ms) | NumPy (ms) | NumPy ÷ RunMat | PyTorch ÷ RunMat |
|---|---|---|---|---|---|
| 1,000,000 | 197.1 | 820.8 | 68.3 | 0.35x | 4.16x |
| 2,000,000 | 211.4 | 896.2 | 76.7 | 0.36x | 4.24x |
| 5,000,000 | 207.7 | 1,104.7 | 111.9 | 0.54x | 5.32x |
| 10,000,000 | 173.8 | 1,426.1 | 166.6 | 0.96x | 8.20x |
| 100,000,000 | 170.9 | 16,878.8 | 1,098.8 | 6.43x | 98.77x |
| 200,000,000 | 202.8 | 17,393.0 | 2,188.9 | 10.79x | 85.76x |
| 500,000,000 | 171.8 | 18,880.2 | 5,946.9 | 34.61x | 109.87x |
| 1,000,000,000 | 199.4 | 22,652.0 | 12,570.0 | 63.04x | 113.61x |
On smaller arrays, Fusion keeps work on CPU so you still get low overhead and a fast JIT.
Benchmarks run on Apple M2 Max with BLAS/LAPACK optimization and GPU acceleration. See benchmarks/ for reproducible test scripts, detailed results, and comparisons against NumPy, PyTorch, and Julia.
```bash
# Quick install (Linux/macOS)
curl -fsSL https://runmat.org/install.sh | sh

# Quick install (Windows PowerShell)
iwr https://runmat.org/install.ps1 | iex

# Or install from crates.io
cargo install runmat --features gui

# Or build from source
git clone https://github.com/runmat-org/runmat.git
cd runmat && cargo build --release --features gui
```

For BLAS/LAPACK acceleration on Linux, install the system OpenBLAS package before building:

```bash
sudo apt-get update && sudo apt-get install -y libopenblas-dev
```

```bash
# Start the interactive REPL
runmat
# Or run an existing .m file
runmat script.m
# Or pipe a script into RunMat
echo "a = 10; b = 20; c = a + b" | runmat
# Check GPU acceleration status
runmat accel-info
# Benchmark a script
runmat benchmark script.m --iterations 5 --jit
# View system information
runmat info
```

```bash
# Register RunMat as a Jupyter kernel
runmat --install-kernel
# Launch JupyterLab with RunMat support
jupyter lab
```

```matlab
% RunMat automatically uses GPU when beneficial
x = rand(10000, 1, 'single');
y = sin(x) .* x + 0.5; % Automatically fused and GPU-accelerated
mean(y) % Result computed on GPU
```

```matlab
% Your existing MATLAB code just works
A = [1 2 3; 4 5 6; 7 8 9];
B = A' * A;
eigenvals = eig(B);
plot(eigenvals);
```

```matlab
% RunMat automatically fuses this chain into a single GPU kernel
% No kernel code, no rewrites, just MATLAB syntax
x = rand(1024, 1, 'single');
y = sin(x) .* x + 0.5; % Fused: sin, multiply, add
m = mean(y, 'all'); % Reduction stays on GPU
fprintf('m=%.6f\n', double(m)); % Single download at sink
```

```matlab
% Simple 2D line plot (works in the pre-release)
x = linspace(0, 2*pi, 1000);
y = sin(x);
plot(x, y);
grid on;
title("Sine wave");RunMat uses a tiered CPU runtime plus a fusion engine that automatically picks CPU or GPU for each chunk of math.
| Component | Purpose | Technology / Notes |
|---|---|---|
| `runmat-ignition` | Baseline interpreter for instant startup | HIR → bytecode compiler, stack-based interpreter |
| `runmat-turbine` | Optimizing JIT for hot code | Cranelift backend, tuned for numeric workloads |
| `runmat-gc` | High-performance memory management | Generational GC with pointer compression |
| `runmat-accelerate` | GPU acceleration subsystem | Fusion engine + auto-offload planner + wgpu backend |
| Fusion engine | Collapses op chains, chooses CPU vs GPU | Builds op graph, fuses ops, estimates cost, keeps tensors on device |
| `runmat-plot` | Plotting layer (pre-release) | 2D line/scatter plots work today; 3D, filled shapes, and full GPU plotting are on the roadmap |
| `runmat-snapshot` | Fast startup snapshots | Binary blob serialization / restore |
| `runmat-runtime` | Core runtime + 200+ builtin functions | BLAS/LAPACK integration and other CPU/GPU-accelerated operations |
- Tiered CPU execution gives quick startup and strong single-machine performance.
- Fusion engine removes most manual device management and kernel tuning.
- GPU backend runs on NVIDIA, AMD, Apple Silicon, and Intel through Metal / DirectX 12 / Vulkan, with no vendor lock-in.
RunMat automatically accelerates your MATLAB code on GPUs without requiring kernel code or rewrites. The system works through four stages:
RunMat builds an "acceleration graph" that captures the intent of your operations: shapes, operation categories, dependencies, and constants. This graph provides a complete view of what your script computes.
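
As a rough conceptual example (the node descriptions are illustrative, not RunMat's internal terminology), a short script maps onto the graph like this:

```matlab
x = rand(4096, 1, 'single');   % source node: rand, shape [4096 1], single precision
y = x .* 2 + 1;                % two elementwise nodes (.* and +) with constants 2 and 1
s = sum(y);                    % reduction node that depends on y
```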
The fusion engine detects long chains of elementwise operations and linked reductions, planning to execute them as combined GPU programs. The auto-offload planner estimates break-even points and routes work intelligently:
- Fusion detection: Combines multiple operations into single GPU dispatches
- Auto-offload heuristics: Considers element counts, reduction sizes, and matrix multiply saturation
- Residency awareness: Keeps tensors on device once they're worth it
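
For example (shapes are illustrative), a large matrix multiply feeding an elementwise chain and a reduction is the kind of pattern the planner is meant to offload as a unit, while the same code on tiny matrices would stay on CPU:

```matlab
A = rand(4096, 4096, 'single');
B = rand(4096, 4096, 'single');
C = tanh(A * B) + 1;        % matmul saturates the GPU; the elementwise tail follows it on device
s = sum(C, 'all');          % reduction stays on device; only the scalar comes back to the host
```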
RunMat generates portable WGSL (WebGPU Shading Language) kernels that work across platforms:
- Metal on macOS
- DirectX 12 on Windows
- Vulkan on Linux
Kernels are compiled once and cached for subsequent runs, eliminating recompilation overhead.
The runtime minimizes host-device transfers by:
- Uploading tensors once and keeping them resident
- Executing fused kernels directly on GPU memory
- Only gathering results when needed (e.g., for `fprintf` or display)
```matlab
% This code automatically fuses into a single GPU kernel
x = rand(1024, 1, 'single');
y = sin(x) .* x + 0.5; % Fused: sin, multiply, add
m = mean(y, 'all'); % Reduction stays on GPU
fprintf('m=%.6f\n', double(m)); % Single download at sink
```

RunMat detects the elementwise chain (sin, .*, +), fuses them into one GPU dispatch, keeps `y` resident on GPU, and only downloads `m` when needed for output.
For more details, see Introduction to RunMat GPU and How RunMat Fusion Works.
```text
runmat> .info
RunMat v0.1.0 - High-Performance MATLAB Runtime
JIT: Cranelift (optimization: speed)
GC: Generational (heap: 45MB, collections: 12)
GPU: wgpu provider (Metal/DX12/Vulkan)
Plotting: GPU-accelerated (wgpu)
Functions loaded: 200+ builtins + 0 user-defined
runmat> .stats
Execution Statistics:
Total: 2, JIT: 0, Interpreter: 2
Average time: 0.12ms
runmat> accel-info
GPU Acceleration Provider: wgpu
Device: Apple M2 Max
Backend: Metal
Fusion pipeline cache: 45 hits, 2 misses
```

- Rich output formatting with LaTeX math rendering
- Interactive widgets for parameter exploration
- Full debugging support with breakpoints
```rust
// Adding a new builtin function is trivial
#[runtime_builtin("myfunction")]
fn my_custom_function(x: f64, y: f64) -> f64 {
    x.powf(y) + x.sin()
}
```

RunMat includes a comprehensive CLI with powerful features:
```bash
# Check GPU acceleration status
runmat accel-info
# Benchmark a script
runmat benchmark my_script.m --iterations 5 --jit
# Create a snapshot for faster startup
runmat snapshot create -o stdlib.snapshot
# GC statistics and control
runmat gc stats
runmat gc major
# System information
runmat info
```

See CLI Documentation for the complete command reference.
RunMat's package system enables both systems programmers and MATLAB users to extend the runtime. The core stays lean while packages provide domain-specific functionality.
High-performance built-ins implemented in Rust:
```rust
#[runtime_builtin(
    name = "norm2",
    category = "math/linalg",
    summary = "Euclidean norm of a vector.",
    examples = "n = norm2([3,4]) % 5"
)]
fn norm2_builtin(a: Value) -> Result<Value, String> {
    let t: Tensor = (&a).try_into()?;
    let s = t.data.iter().map(|x| x * x).sum::<f64>().sqrt();
    Ok(Value::Num(s))
}
```

Native packages get type-safe conversions, deterministic error IDs, and zero-cost documentation generation.
MATLAB source packages compile to RunMat bytecode:
```matlab
% +mypackage/norm2.m
function n = norm2(v)
    n = sqrt(sum(v .^ 2));
end
```

Both package types appear identically to users: functions show up in the namespace, reference docs, and tooling (help, search, doc indexing).
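
Assuming standard MATLAB package-folder semantics (a `+mypackage` directory defines the `mypackage` namespace), calling the source-package version would look like:

```matlab
v = [3 4];
n = mypackage.norm2(v);   % package-qualified call; n is 5
```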
```toml
# Declare dependencies in .runmat
[packages]
linalg-plus = { source = "registry", version = "^1.2" }
viz-tools = { source = "git", url = "https://github.com/acme/viz-tools" }
```

```bash
# Install packages
runmat pkg install

# Publish your package
runmat pkg publish
```

Note: Package manager CLI is currently in beta. See Package Manager Documentation for design details.
RunMat follows a "minimal core, fast runtime, open extension model" philosophy:
- Full language support: The core implements the complete MATLAB grammar and semantics, not a subset
- Extensive built-ins: The standard library aims for complete base MATLAB built-in coverage (200+ functions)
- Tiered execution: Ignition interpreter for fast startup, Turbine JIT for hot code
- GPU-first math: Fusion engine automatically turns MATLAB code into fast GPU workloads
- Small, portable runtime: Single static binary, fast startup, modern CLI, Jupyter kernel support
- Toolboxes as packages: Signal processing, statistics, image processing, and other domains live as packages
**What RunMat is:**

- A modern, high-performance runtime for MATLAB code
- A minimal core with a thriving package ecosystem
- GPU-accelerated by default with intelligent CPU/GPU routing
- Open source and free forever

**What RunMat is not:**

- A reimplementation of MATLAB in full (toolboxes are packages)
- A compatibility layer (we implement semantics, not folklore)
- An IDE (use any editor: Cursor, VS Code, IntelliJ, etc.)
RunMat keeps the core small and uncompromisingly high-quality; everything else is a package. This enables:
- Fast iteration without destabilizing the runtime
- Domain experts shipping features without forking
- A smaller trusted compute base, easier auditing
- Community-driven package ecosystem
See Design Philosophy for the complete design rationale.
RunMat is built for array-heavy math in many domains.
Examples:
- **Imaging / geospatial**: 4K+ tiles, normalization, radiometric correction, QC metrics
- **Quant / simulation**: Monte Carlo risk, scenario analysis, covariance, factor models
- **Signal processing / control**: filters, NLMS, large time-series jobs
- **Researchers and students**: MATLAB background, need faster runs on laptops or clusters
If you write math in MATLAB and hit performance walls on CPU, RunMat is built for you.
RunMat is more than just software; it's a movement toward open, fast, and accessible scientific computing. We're building the future of numerical programming, and we need your help.
Whether you are a Rust developer, a domain expert, or neither, there is a way for you to contribute.
- GitHub Discussions: Share ideas and get help
- Twitter: @dystreng for updates and announcements
RunMat is licensed under the MIT License with Attribution Requirements. This means:
- ✅ **Free for everyone** - individuals, academics, most companies
- ✅ **Open source forever** - no vendor lock-in or license fees
- ✅ **Commercial use allowed** - embed in your products freely
See LICENSE.md for complete terms or visit runmat.org/license for FAQs.
Built with ❤️ by Dystr Inc. and the RunMat community
⭐ Star us on GitHub if RunMat is useful to you.
Get Started • Follow @dystr
MATLAB® is a registered trademark of The MathWorks, Inc. RunMat is not affiliated with, endorsed by, or sponsored by The MathWorks, Inc.