The archive and benchmark repository for multivariate time series classification.
Datasets · Results · Leaderboard · Evaluation · Classifiers · Contributing
The Multiverse is a new archive of multivariate time series classification datasets. This repository is for accessing, benchmarking, and extending this new archive.
It brings together datasets, published results, reproducible evaluation workflows, and leaderboard infrastructure in one place. The aim is to make it easier to:
- access the multiverse, a collection of benchmark datasets for multivariate time series classification,
- explore and compare against published results of classification algorithms,
- reproduce baseline experiments,
- evaluate new classifiers consistently,
- and contribute new algorithms and results back to the archive.
This repository is intended as both a practical resource for researchers and a public record of benchmark results.
Places 1 to 5 by rank

Further information and more extensive leaderboard views are linked here:
Install the release package from PyPI:

```bash
pip install aeon-multiverse
```

or install the development version from GitHub:

```bash
pip install git+https://github.com/aeon-toolkit/multiverse.git
```

Use aeon to download data from Zenodo and load it into memory.
```python
from aeon.datasets import load_classification

# Load the full dataset as a 3D numpy array (n_cases, n_channels, n_timepoints)
X, y = load_classification("BasicMotions")
print(X.shape)
print(y[:10])

# Load the default train/test splits separately
trainX, trainy = load_classification("BasicMotions", split="train")
testX, testy = load_classification("BasicMotions", split="test")
```

More info and links to code - [docs/leaderboard.md](docs/leaderboard.md)
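To make the data layout concrete without downloading anything: aeon collections are 3D numpy arrays shaped `(n_cases, n_channels, n_timepoints)`. Below is a minimal sketch, using synthetic data in that layout, of a 1-nearest-neighbour classifier on the flattened series; `predict_1nn` is a hypothetical helper written for this illustration, not part of aeon or this repository.

```python
import numpy as np

# Synthetic stand-in for a dataset like BasicMotions:
# 8 cases, 6 channels, 100 time points.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(8, 6, 100))
y_train = np.array(["run", "walk"] * 4)
# Test cases are near-identical copies of the training cases.
X_test = X_train + rng.normal(scale=0.01, size=X_train.shape)

def predict_1nn(X_train, y_train, X_test):
    """1-NN with Euclidean distance on the flattened (channels x time) series."""
    flat_train = X_train.reshape(len(X_train), -1)
    flat_test = X_test.reshape(len(X_test), -1)
    # Pairwise distances between each test case and each training case
    dists = np.linalg.norm(flat_test[:, None, :] - flat_train[None, :, :], axis=2)
    return y_train[dists.argmin(axis=1)]

preds = predict_1nn(X_train, y_train, X_test)
print((preds == y_train).mean())  # near-identical copies -> 1.0
```

Real aeon classifiers accept this same 3D layout directly, so no manual flattening is needed when using the toolkit.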
Train and test any aeon classifier that can handle multivariate series:
```python
from aeon.classification.deep_learning import InceptionTimeClassifier
from multiverse.classification import TimesNet  # classifier ported to this repo

clf = InceptionTimeClassifier()
clf.fit(X, y)
preds = clf.predict(X)
```

More info and links to aeon classifiers - [docs/classifiers.md](docs/classifiers.md)
Multiverse ported classifiers - multiverse/classification
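The evaluation workflow follows the standard scikit-learn-style fit/predict contract that aeon classifiers share: fit on the train split, predict on the test split, and score. A minimal self-contained sketch of that loop, using a hypothetical `MajorityClassifier` stand-in and synthetic data (so it runs without a download):

```python
import numpy as np

class MajorityClassifier:
    """Hypothetical stand-in with the fit/predict interface aeon classifiers use."""
    def fit(self, X, y):
        labels, counts = np.unique(y, return_counts=True)
        self.majority_ = labels[counts.argmax()]
        return self

    def predict(self, X):
        return np.full(len(X), self.majority_)

# Synthetic train/test splits in the (n_cases, n_channels, n_timepoints) layout
rng = np.random.default_rng(1)
trainX = rng.normal(size=(10, 6, 100))
trainy = np.array(["a"] * 7 + ["b"] * 3)
testX = rng.normal(size=(4, 6, 100))
testy = np.array(["a", "a", "b", "a"])

clf = MajorityClassifier().fit(trainX, trainy)
accuracy = (clf.predict(testX) == testy).mean()
print(accuracy)  # 3 of 4 test labels are the majority class "a" -> 0.75
```

Swapping `MajorityClassifier` for any aeon or multiverse classifier leaves the rest of the loop unchanged.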
Load results directly in code, or explore published results in this repo - [docs/results.md](docs/results.md)
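Once loaded, published results are typically aggregated per classifier across datasets. A small sketch of that aggregation over an assumed mapping of classifier to per-dataset accuracies; the names and values here are illustrative, and the real schema lives under `results/` and may differ:

```python
import numpy as np

# Hypothetical accuracies per (classifier, dataset)
results = {
    "InceptionTime": {"BasicMotions": 1.00, "Epilepsy": 0.98},
    "TimesNet":      {"BasicMotions": 0.95, "Epilepsy": 0.97},
}

# Mean accuracy per classifier over the shared set of datasets
datasets = sorted({d for accs in results.values() for d in accs})
means = {name: float(np.mean([accs[d] for d in datasets]))
         for name, accs in results.items()}
for name, mean_acc in means.items():
    print(f"{name}: mean accuracy {mean_acc:.3f}")
```

Averaging raw accuracy is only one view; the leaderboard ranks classifiers per dataset before averaging, which is less sensitive to a few easy or hard datasets.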
To reproduce a benchmark run or evaluate a new classifier, start from:
Coming soon
```
multiverse/
├── docs/          # Documentation
├── experiments/   # Benchmark and reproduction scripts
├── results/       # Submitted results and schema
└── multiverse/    # Python package source for classifiers
```

