Releases: nanoporetech/bonito

# v1.1.0

20 Feb 10:13

  • 1576fd47 Added RNA v5.3.0 models
  • 0404594c Added --no-compile option to training
  • 58e57fee Set training seed to random by default
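Randomizing the training seed by default, as in the last entry above, typically means drawing a seed when none is supplied while still allowing an explicit one for reproducibility. A minimal hypothetical sketch of that behavior (the `resolve_seed` helper is illustrative, not bonito's actual API):

```python
import random

def resolve_seed(seed=None):
    """Return the given seed, or draw a random one if none was supplied.

    Hypothetical helper: drawing an explicit seed value (rather than
    leaving the RNGs unseeded) means the chosen seed can be logged and
    passed back in to reproduce a training run exactly.
    """
    if seed is None:
        seed = random.randrange(2**31)
    random.seed(seed)  # seed the RNG the run will use
    return seed
```

A training entry point would log the returned value, so supplying it again reproduces the run while omitting it varies the seed between runs.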

# v1.0.1

14 Jan 10:18

  • 5fed0a48 Fixes #426 where model was compiled twice when loading from checkpoint
  • 41618b26 Bump torch to v2.9.1

# v1.0.0

14 Nov 14:39

  • 4df274e
    • Updated dependencies to support the latest Python (3.10-3.14), torch (2.9), and CUDA (12.8, 13.0) versions.
    • Updated the build pipeline to pyproject style.
    • Removed the dependency on FlashAttention.
    • Note: torch does not yet support torch.compile on Python 3.14+, so training performance will be degraded; see pytorch#156856.
  • f5db9e6 Only autocast in the forward pass during training.
  • 1671d49 Removed support for fast5.
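The torch.compile caveat above lends itself to a simple version guard. A minimal hypothetical sketch of the decision logic (not bonito's actual code; the `should_compile` name and `version` parameter are illustrative):

```python
import sys

def should_compile(no_compile: bool = False, version=None) -> bool:
    """Decide whether torch.compile should be applied to the model.

    Hypothetical sketch: compilation is skipped when the user opts out
    (e.g. via --no-compile) or on Python 3.14+, where torch does not yet
    support torch.compile (see pytorch#156856).
    """
    if version is None:
        version = sys.version_info
    return not no_compile and version < (3, 14)
```

A trainer would then call `torch.compile(model)` only when this returns True, falling back to eager execution otherwise.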

# v0.9.1

21 May 16:19

  • 4bfce35 Added v5.2 basecall models for DNA and RNA.

# v0.9.0

11 Apr 14:15

  • 205c5aa3: Updated dependencies to support new Python and PyTorch versions.
  • 84ebf8ed: Updated documentation on the model training process and training data loading.
  • 47690c54, 8144ddb0: Refactored the training data loading process.
  • 9573d7f0: Improved output from bonito evaluate to assist with debugging model development.
  • 4db046e1: Added support for dynamically loading different optimisers.
  • 7261dbe3: Added RNA v5.1.0 models.
  • aa5bc804: Removed mod-calling options from bonito. Please see dorado for supported mods and remora for modified basecall model development.

# v0.8.1

21 May 17:29

  • e33a860 Attention Is All You Need!
  • 454324a v5.0.0 models and example training sets for DNA & RNA.
    • dna_r10.4.1_e8.2_400bps_sup@v5.0.0
    • dna_r10.4.1_e8.2_400bps_hac@v5.0.0
    • dna_r10.4.1_e8.2_400bps_fast@v5.0.0
    • rna004_130bps_sup@v5.0.0
    • rna004_130bps_hac@v5.0.0
    • rna004_130bps_fast@v5.0.0
    • example_data_dna_r10.4.1_v0
    • example_data_rna004_v0
  • ed36968 New model configs.
  • 37a0557 The default alignment preset is now lr:hq.
  • d4f6dd2 Unpinned bonito requirements.
  • 40a9753 Added a fast5 deprecation warning.
  • 0ba190f Fixed the progress count when setting --max-reads.
  • 6c8ecb5 Batchnorm fusing for inference.
  • 302b1ce bonito view now accepts a model directory or a config.
  • a170b7c Default scaling fixes.

# v0.7.3

12 Dec 17:21

# v0.7.2

31 Jul 15:16

# v0.7.1

01 Jun 13:23

## Highlights

  • 9113e24 v4.2.0 5kHz simplex models.
    • dna_r10.4.1_e8.2_400bps_fast@v4.2.0
    • dna_r10.4.1_e8.2_400bps_hac@v4.2.0
    • dna_r10.4.1_e8.2_400bps_sup@v4.2.0
  • 8c96eb8 Made sample_id optional for fast5 input.
  • 3b4bcad Ensured the decoder runs on the same device as the nn model.
  • 8fe1f61 Fixed training data downloading.
  • 26d52d9 Set the default --valid-chunks to None.
  • ebc32a0 Fixed handling of models as a list.

Thanks to @chAwater for their collection of bug fixes in this release.

## Installation

$ pip install ont-bonito

Note: For anything other than basecaller training or method development please use dorado.

# v0.7.0

03 Apr 13:08

## Highlights

  • 66ee29a v4.1.0 simplex models.
    • dna_r10.4.1_e8.2_260bps_fast@v4.1.0
    • dna_r10.4.1_e8.2_260bps_hac@v4.1.0
    • dna_r10.4.1_e8.2_260bps_sup@v4.1.0
    • dna_r10.4.1_e8.2_400bps_fast@v4.1.0
    • dna_r10.4.1_e8.2_400bps_hac@v4.1.0
    • dna_r10.4.1_e8.2_400bps_sup@v4.1.0
  • 4cf3c6f Torch 2.0 + updated requirements.
  • 3bc338a Fixed use of TLEN.
  • 21df7d5 v4.0.0 simplex models.
    • dna_r10.4.1_e8.2_260bps_fast@v4.0.0
    • dna_r10.4.1_e8.2_260bps_hac@v4.0.0
    • dna_r10.4.1_e8.2_260bps_sup@v4.0.0
    • dna_r10.4.1_e8.2_400bps_fast@v4.0.0
    • dna_r10.4.1_e8.2_400bps_hac@v4.0.0
    • dna_r10.4.1_e8.2_400bps_sup@v4.0.0

## Installation

Torch 2.0 (from pypi.org) is now built with CUDA 11.7, so the default installation of ont-bonito can be used on Turing/Ampere GPUs.

$ pip install ont-bonito

Note: For anything other than basecaller training or method development please use dorado.