@saksham-stack

This PR introduces a dedicated benchmarking suite to the dingo repository. As discussed in volesti#423, the goal is to provide a repeatable way to measure the performance overhead introduced by the Python interface and to audit the numerical consistency (convergence) of the sampling process.

By moving this logic into a standardized /benchmarks directory, we ensure that future changes to the C++ core or Python wrappers can be audited for performance regressions.

Changes
• Added /benchmarks directory: Established a new package for performance-related tests.

• test_samplers_bench.py: Implemented a pytest-benchmark suite that measures the wall-clock time for sampling from high-dimensional polytopes (e.g., the unit cube); a minimal sketch appears after this list.

• metrics_utils.py: Integrated ArviZ to calculate Effective Sample Size (ESS), addressing the need for automated convergence diagnostics; a sketch of the helper also appears after this list.

• Documentation: Added a README.md and requirements-bench.txt to guide contributors on how to run and interpret these benchmarks.
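
To make the structure concrete, here is a minimal sketch of the benchmark test. The helper `sample_unit_cube` is a hypothetical stand-in for whichever dingo/volesti sampler call the suite actually benchmarks; only the pytest-benchmark wiring is shown.

```python
# Sketch of benchmarks/test_samplers_bench.py (simplified).
import numpy as np
import pytest


def sample_unit_cube(dim: int, n_samples: int) -> np.ndarray:
    """Hypothetical placeholder for the dingo sampler call.

    The real suite would sample the H-polytope {x : Ax <= b} describing the
    unit cube through the dingo/volesti bindings; uniform draws are used here
    only so the sketch runs on its own.
    """
    rng = np.random.default_rng(0)
    return rng.uniform(-1.0, 1.0, size=(n_samples, dim))


@pytest.mark.parametrize("dim", [10])
def test_unit_cube_sampling_time(benchmark, dim):
    # pytest-benchmark's `benchmark` fixture repeats the call and reports
    # mean/median wall-clock time alongside the usual pytest output.
    samples = benchmark(sample_unit_cube, dim, 1000)
    assert samples.shape == (1000, dim)
```

And a sketch of the ESS helper in metrics_utils.py, assuming samples arrive as a single chain of shape (draws, dims); the function name and signature are placeholders:

```python
# Sketch of benchmarks/metrics_utils.py (simplified).
import arviz as az
import numpy as np


def effective_sample_size(samples: np.ndarray) -> np.ndarray:
    """Per-dimension ESS for a single chain of shape (draws, dims)."""
    # ArviZ expects (chains, draws, *shape), so wrap the single chain.
    chains = samples[np.newaxis, ...]
    ess = az.ess(az.convert_to_dataset(chains))
    # convert_to_dataset names the variable "x" by default.
    return ess["x"].values
```

The "Avg ESS" figure reported below would then be the mean of this per-dimension array.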

Verification Results
Running pytest benchmarks/ locally on [Your OS/Processor] yielded the following for a 10D unit cube:

Mean Sampling Time: [Insert your result from terminal, e.g., 12.5ms]

Avg ESS: [Insert your ESS result, e.g., 450.2]

Interface Overhead: [Insert your calculated overhead %]
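
For context, one way to derive the overhead figure (an assumption about the calculation, not spelled out in this PR) is to time the same random walk once through the dingo Python interface and once through the volesti core directly, then report overhead % = 100 × (t_interface − t_core) / t_core.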

GSoC 2026 Context
I am a prospective GSoC 2026 contributor. This task has helped me familiarize myself with the dingo sampler interface and the integration of volesti bindings. I plan to expand this suite to include more complex geometries (e.g., simplices) as the project progresses.
