Milestones

  • This group of work covers everything required to implement our measurement bias mitigations. Once completed, we can be confident that any speed-up or slow-down is caused by the source change itself, and not by incidental code, data, or heap layout changes (one such mitigation, heap layout randomization, is sketched after this list). See the [RFC](https://github.com/bytecodealliance/rfcs/blob/main/accepted/benchmark-suite.md#incremental-milestones) for more details.

    No due date
    0/2 issues closed
  • The MVP analysis will only do a simple test of significance for the difference in performance with and without a given change. This will tell us whether any difference really exists or is just noise, but it won't tell us what we really want to know: how much faster or slower did it get? Implementing an effect size confidence interval analysis will give us exactly that information (one way to compute such an interval is sketched after this list). See the [RFC](https://github.com/bytecodealliance/rfcs/blob/main/accepted/benchmark-suite.md#incremental-milestones) for more details.

    No due date
    1/3 issues closed
  • This group of work covers everything required to build a representative set of candidate benchmark programs and to select a subset of them for inclusion in the benchmark suite. We would choose the subset so that it contains no duplicate workloads but still covers the range of workloads represented in the full set of candidates. Upon completion, we can be sure that our benchmarks make efficient use of experiment time, and that speedups to our benchmark suite translate into speedups in real-world applications. See the [RFC](https://github.com/bytecodealliance/rfcs/blob/main/accepted/benchmark-suite.md#incremental-milestones) for more details.

    No due date
    0/1 issues closed
  • This group of work covers everything we need for an excellent developer experience and an integrated workflow. When completed, we will automatically be notified of any accidental performance regressions, and we will be able to test specific PRs when we suspect they might have an impact on performance. See the [RFC](https://github.com/bytecodealliance/rfcs/blob/main/accepted/benchmark-suite.md#incremental-milestones) for more details.

    No due date
    5/8 issues closed
  • In order to reach an MVP where we can get some return on our engineering investment, we'll need an initial benchmark runner, an initial set of candidate programs, and finally a simple analysis that can tell us whether a change is statistically significant (one such significance test is sketched after this list). This is enough to be useful for some ad hoc local experiments. The MVP does not mitigate measurement bias, does not have a representative and diverse corpus, and does not integrate with GitHub Actions or automatically detect regressions on the main branch. See the [RFC](https://github.com/bytecodealliance/rfcs/blob/main/accepted/benchmark-suite.md#incremental-milestones) for more details.

    No due date
    5/6 issues closed
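
The measurement bias milestone is about incidental layout effects: where code and heap objects happen to land in memory can change timings as much as the change under test does. Below is a minimal sketch of one well-known style of mitigation, randomizing heap layout between measurements so that no single lucky (or unlucky) layout dominates the results. The function names, the stand-in workload, and the use of the `rand` crate (0.8 API) are illustrative assumptions, not the suite's actual design.

```rust
use std::hint::black_box;
use std::time::Instant;

use rand::Rng; // assumes the `rand` crate, 0.8 API

/// Time `workload` several times, inserting a randomly sized heap
/// allocation before each run so that successive measurements see
/// different heap layouts rather than one fixed, possibly lucky one.
fn measure_with_random_layout<F: FnMut()>(mut workload: F, iterations: usize) -> Vec<f64> {
    let mut rng = rand::thread_rng();
    let mut times = Vec::with_capacity(iterations);
    for _ in 0..iterations {
        // Keep the padding alive across the measurement so allocations
        // made inside the workload land at shifted addresses each time.
        let padding = black_box(vec![0u8; rng.gen_range(0..4096)]);
        let start = Instant::now();
        workload();
        times.push(start.elapsed().as_secs_f64());
        black_box(padding);
    }
    times
}

fn main() {
    let times = measure_with_random_layout(
        || {
            // Stand-in workload: build and sum a vector.
            let v: Vec<u64> = (0..100_000).collect();
            black_box(v.iter().sum::<u64>());
        },
        10,
    );
    println!("samples: {times:?}");
}
```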
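For the effect size milestone, one standard way to turn raw timings into a "how much faster?" answer with error bars is the bootstrap percentile method: resample both groups with replacement many times, recompute the relative speedup for each resample, and take the middle 95% of the results. The sketch below shows that method under assumed names (`speedup`, `bootstrap_ci`) and an assumed choice of statistic; the RFC is the authority on what the suite actually computes.

```rust
use rand::Rng; // assumes the `rand` crate, 0.8 API

fn mean(xs: &[f64]) -> f64 {
    xs.iter().sum::<f64>() / xs.len() as f64
}

/// Effect size as a relative speedup: 0.05 means `new` is 5% faster.
fn speedup(old: &[f64], new: &[f64]) -> f64 {
    (mean(old) - mean(new)) / mean(old)
}

/// 95% bootstrap percentile confidence interval for the speedup.
fn bootstrap_ci(old: &[f64], new: &[f64], rounds: usize) -> (f64, f64) {
    let mut rng = rand::thread_rng();
    // Draw a same-sized sample, with replacement, from `xs`.
    let mut resample = |xs: &[f64]| -> Vec<f64> {
        (0..xs.len()).map(|_| xs[rng.gen_range(0..xs.len())]).collect()
    };
    let mut stats: Vec<f64> = (0..rounds)
        .map(|_| speedup(&resample(old), &resample(new)))
        .collect();
    stats.sort_by(|a, b| a.partial_cmp(b).unwrap());
    // Drop the bottom and top 2.5% of the resampled statistics.
    (stats[rounds * 25 / 1000], stats[(rounds * 975 / 1000).min(rounds - 1)])
}

fn main() {
    let old = [102.1, 99.8, 101.4, 100.9, 103.0, 101.7];
    let new = [96.2, 97.5, 95.9, 98.1, 96.8, 97.0];
    let (lo, hi) = bootstrap_ci(&old, &new, 10_000);
    println!("speedup 95% CI: [{:.1}%, {:.1}%]", lo * 100.0, hi * 100.0);
}
```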
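The MVP milestone's "is this difference real or just noise?" question is a classic two-sample significance test. Below is a minimal sketch of one nonparametric option, a two-sided permutation test on the difference of means; the function names are illustrative, and this is not necessarily the exact test the MVP analysis implements.

```rust
use rand::seq::SliceRandom; // assumes the `rand` crate, 0.8 API

fn mean_diff(a: &[f64], b: &[f64]) -> f64 {
    let mean = |xs: &[f64]| xs.iter().sum::<f64>() / xs.len() as f64;
    mean(a) - mean(b)
}

/// Two-sided permutation test: approximate p-value for the null
/// hypothesis that `a` and `b` come from the same distribution.
fn permutation_test(a: &[f64], b: &[f64], rounds: u32) -> f64 {
    let observed = mean_diff(a, b).abs();
    let mut pooled: Vec<f64> = a.iter().chain(b).copied().collect();
    let mut rng = rand::thread_rng();
    let mut extreme = 0u32;
    for _ in 0..rounds {
        // Under the null hypothesis the group labels are arbitrary, so
        // shuffle the pooled samples and split them back into two groups.
        pooled.shuffle(&mut rng);
        let (pa, pb) = pooled.split_at(a.len());
        if mean_diff(pa, pb).abs() >= observed {
            extreme += 1;
        }
    }
    f64::from(extreme) / f64::from(rounds)
}

fn main() {
    let before = [102.1, 99.8, 101.4, 100.9, 103.0];
    let after = [96.2, 97.5, 95.9, 98.1, 96.8];
    let p = permutation_test(&before, &after, 10_000);
    println!("p = {p:.4}; significant at 0.05: {}", p < 0.05);
}
```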