Changes from all commits
33 commits
15743fc
feat(traceability): add coverage checker and reporting docs
FScholPer Apr 13, 2026
58ae80d
add coverage check
FScholPer Apr 13, 2026
4e9c60e
fix lint
FScholPer Apr 13, 2026
0ec5217
refactoring the coverage, metrics and dashboard
FScholPer Apr 14, 2026
764da8d
add generic filters
FScholPer Apr 14, 2026
ec2e994
Update src/extensions/score_metamodel/traceability_metrics.py
FScholPer Apr 16, 2026
a364257
Apply suggestions from code review
FScholPer Apr 17, 2026
ecd1caf
readd genai headers
FScholPer Apr 17, 2026
313ff9b
Merge branch 'main' into score-2774-traceability
FScholPer Apr 20, 2026
6287c69
changed to new json structure
FScholPer Apr 20, 2026
5876e16
Merge branch 'main' into score-2774-traceability
FScholPer Apr 27, 2026
b4ec35b
removed md and refactored gate
FScholPer Apr 27, 2026
7ce6835
Added the uml from the comment
FScholPer Apr 27, 2026
a6029b7
refactoring to the new json approach and refactoring of dashboards fo…
FScholPer Apr 27, 2026
a93233b
lint fix
FScholPer Apr 27, 2026
6e1e0aa
fixed liniting issues
FScholPer Apr 27, 2026
0d8d75c
Merge branch 'main' into score-2774-traceability
FScholPer Apr 27, 2026
c8e4058
improved description
FScholPer Apr 27, 2026
4437580
fix warnings
FScholPer Apr 27, 2026
243aa21
fix docs build
FScholPer Apr 28, 2026
cf19ba8
fixed review comments(removed coverage py to utilize extension)
FScholPer Apr 28, 2026
2b8aace
review comment fixes
FScholPer Apr 28, 2026
f137f03
replaced for loop by list
FScholPer Apr 28, 2026
e927339
fix linting
FScholPer Apr 28, 2026
a81e4ff
Merge branch 'main' into score-2774-traceability
FScholPer Apr 30, 2026
ef1f5ca
review fixes
FScholPer May 11, 2026
7745cce
added example
FScholPer May 11, 2026
0a1a885
Update docs/reference/commands.md
FScholPer May 12, 2026
15d8e91
fix: address PR #484 review comments (dashboards, implementation_stat…
FScholPer May 12, 2026
5e49964
Merge origin/main into score-2774-traceability (resolve docs.bzl conf…
FScholPer May 12, 2026
56779d3
fix: remove undocumented --fail-on-broken-test-refs flag from docs
FScholPer May 12, 2026
5403782
format fix
FScholPer May 12, 2026
fcde178
Merge branch 'main' into score-2774-traceability
FScholPer May 12, 2026
4 changes: 4 additions & 0 deletions .gitignore
@@ -1,4 +1,7 @@
# Commonly used for local settings and secrets
# ╓ ╖
# ║ Some portions generated by Github Copilot ║
# ╙ ╜
.env

# Bazel
@@ -26,3 +29,4 @@ __pycache__/

# bug: This file is created in repo root on test discovery.
/consumer_test.log
.clwb
6 changes: 4 additions & 2 deletions docs.bzl
@@ -304,10 +304,12 @@ def docs(source_dir = "docs", data = [], deps = [], scan_code = [], known_good =
"--jobs",
"auto",
"--define=external_needs_source=" + str(data),
] + metamodel_opts,
"--define=score_sourcelinks_json=$(location :sourcelinks_json)",
"--define=score_source_code_linker_plain_links=1",
],
formats = ["needs"],
sphinx = ":sphinx_build",
tools = data + metamodel_data,
tools = data + [":sourcelinks_json"],
visibility = ["//visibility:public"],
# Persistent workers cause stale symlinks after dependency version
# changes, corrupting the Bazel cache.
142 changes: 142 additions & 0 deletions docs/how-to/dashboards_and_quality_gates.rst
@@ -0,0 +1,142 @@
..
# *******************************************************************************
# Copyright (c) 2026 Contributors to the Eclipse Foundation
#
# See the NOTICE file(s) distributed with this work for additional
# information regarding copyright ownership.
#
# This program and the accompanying materials are made available under the
# terms of the Apache License Version 2.0 which is available at
# https://www.apache.org/licenses/LICENSE-2.0
#
# SPDX-License-Identifier: Apache-2.0
# *******************************************************************************

Build Dashboards and Quality Gates
==================================

Use this guide in repositories that consume docs-as-code as a Bazel
dependency.

Goals:

1. Publish traceability dashboards from repository needs.
2. Export machine-readable metrics.
3. Enforce CI thresholds with ``traceability_gate``.

What You Get
------------

With the ``docs(...)`` macro and ``score_metamodel`` extension enabled, your
repository can:

- build an HTML dashboard from its own Sphinx needs,
- include external needs from other repositories when desired,
- export ``needs.json`` and ``metrics.json`` for machine-readable reporting,
- gate CI on traceability thresholds via ``traceability_gate``.

Typical Setup
-------------

For details, see :ref:`setup`.

Minimal Configuration Example
-----------------------------

In ``docs/conf.py``:

.. code-block:: python

   score_metamodel_requirement_types = "feat_req,comp_req,aou_req"
   score_metamodel_include_external_needs = False

Use ``score_metamodel_include_external_needs = True`` (aggregate_traceability_across_dependencies)
only in repositories that intentionally aggregate requirements across module dependencies, such as
integration repositories. Use ``False`` for module repositories to gate only on local traceability.

Building the Dashboard
----------------------

Build or run any docs command; ``bazel build //:needs_json`` or
``bazel run //:docs_verify`` are the fastest. The documentation build writes
``metrics.json`` via ``score_metamodel``, and the ``needs_json`` artifact contains:

- ``bazel-bin/needs_json/_build/needs/needs.json``
- ``bazel-bin/needs_json/_build/needs/metrics.json``

The dashboard charts and the CI gate both use the same computed metrics.
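
For a quick sanity check that both artifacts were produced (paths as listed
above), a minimal sequence is:

.. code-block:: bash

   # Export the needs and metrics artifacts.
   bazel build //:needs_json

   # Both files should sit next to each other in the output tree.
   ls bazel-bin/needs_json/_build/needs/
   # -> needs.json  metrics.json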

Inputs for Linkage Metrics
--------------------------

To get meaningful dashboard and gate values, consumer repositories typically
need three inputs:

1. Requirement and architecture needs in the documentation itself.
2. Source code references via :doc:`source_to_doc_links`.
3. Test metadata via :doc:`test_to_doc_links`.

If one of those inputs is missing, the related chart or gate metric will remain
empty or low.

Choosing Local vs Aggregated Views
----------------------------------

There are two common modes:

**Module repository**

- Set ``score_metamodel_include_external_needs = False``.
- Gate only on the needs owned by the repository itself.
- Use this for per-module implementation progress and traceability.

**Integration repository**

- Set ``score_metamodel_include_external_needs = True``.
- Aggregate requirements across module dependencies when that is the intended
repository purpose.
- Use this for system or integration-level dashboards.

CI Quality Gate
---------------

After building ``//:needs_json``, run the gate on the exported metrics:

.. code-block:: bash

   bazel run //:docs && \
   bazel run //:traceability_gate -- \
     --metrics-json bazel-bin/needs_json/_build/needs/metrics.json \
     --min-req-code 70 \
     --min-req-test 70 \
     --min-req-fully-linked 60 \
     --min-tests-linked 70

In CI, wire targets through Bazel dependencies so test execution and
``needs_json`` generation happen before the gate target runs.

In larger repositories, define a dedicated wrapper target for your standard
gate thresholds so CI calls a single Bazel target.
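
Pending a dedicated Bazel target, one minimal approximation is a plain CI
shell script that reuses the thresholds from the example above; the path
``ci/run_traceability_gate.sh`` is an assumption:

.. code-block:: bash

   #!/usr/bin/env bash
   # Hypothetical CI wrapper; file name and location are assumptions.
   set -euo pipefail

   # Export metrics before gating.
   bazel build //:needs_json

   # Gate with the repository's standard thresholds.
   bazel run //:traceability_gate -- \
     --metrics-json bazel-bin/needs_json/_build/needs/metrics.json \
     --min-req-code 70 \
     --min-req-test 70 \
     --min-req-fully-linked 60 \
     --min-tests-linked 70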

Useful flags:

- ``--require-all-links`` for strict 100 percent gating (see the sketch below)
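
For example, a strict gate layered on the documented invocation might look
like this sketch (only the documented flags are used):

.. code-block:: bash

   bazel run //:traceability_gate -- \
     --metrics-json bazel-bin/needs_json/_build/needs/metrics.json \
     --require-all-links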

Recommended Rollout
-------------------

For a new consumer repository:

1. Start with local-only metrics.
2. Enable ``scan_code`` and verify ``source_code_link`` coverage first.
3. Add test metadata and verify ``testlink`` coverage.
4. Introduce modest thresholds in CI, as sketched after this list.
5. Raise thresholds over time as the repository matures.
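
For step 4, a modest starting gate could look like the following sketch; the
threshold values are illustrative only:

.. code-block:: bash

   bazel run //:traceability_gate -- \
     --metrics-json bazel-bin/needs_json/_build/needs/metrics.json \
     --min-req-code 40 \
     --min-req-test 40 \
     --min-req-fully-linked 30 \
     --min-tests-linked 40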

Related Guides
--------------

- :ref:`setup`
- :doc:`other_modules`
- :doc:`source_to_doc_links`
- :doc:`test_to_doc_links`
3 changes: 3 additions & 0 deletions docs/how-to/get_started.rst
@@ -24,3 +24,6 @@ In an existing S-CORE repository, you can build the documentation using Bazel:
Open the generated site at ``_build/index.html`` in your browser.

In a new S-CORE repository, see :ref:`setup`.

After the initial setup, continue with :doc:`dashboards_and_quality_gates` to
build a repository dashboard and enforce CI quality gates.
1 change: 1 addition & 0 deletions docs/how-to/index.rst
@@ -27,6 +27,7 @@ Here you find practical guides on how to use docs-as-code.
write_docs
faq
other_modules
dashboards_and_quality_gates
source_to_doc_links
test_to_doc_links
add_extensions
6 changes: 6 additions & 0 deletions docs/how-to/setup.md
@@ -88,3 +88,9 @@ bazel run //:docs
#### 6. Access your documentation at

`/_build/index.html`

## Next Step

After basic setup, see {doc}`dashboards_and_quality_gates` to configure
traceability dashboards, export `metrics.json`, and enforce CI quality gates in
consumer repositories.
8 changes: 8 additions & 0 deletions docs/how-to/test_to_doc_links.rst
@@ -12,6 +12,8 @@
# SPDX-License-Identifier: Apache-2.0
# *******************************************************************************
# Assisted-by: GitHub Copilot

Reference Docs in Tests
=======================

@@ -53,3 +55,9 @@ Limitations
- Partial properties will lead to no Testlink creation.
If you want a test to be linked, please ensure all requirement properties are provided.
- Tests must be executed by Bazel first so `test.xml` files exist.

Related
-------

For end-to-end dashboard and CI threshold setup, see
:doc:`dashboards_and_quality_gates`.
136 changes: 69 additions & 67 deletions docs/internals/requirements/implementation_state.rst
@@ -11,21 +11,58 @@
#
# SPDX-License-Identifier: Apache-2.0
# *******************************************************************************

# Assisted-by: GitHub Copilot
.. _docs_statistics:

Implementation State Statistics
================================
Tooling Coverage
================

This page shows how the docs-as-code tooling covers process and tool
requirements. It focuses on tooling capabilities offered to downstream
repositories rather than on product-specific traceability inside those
repositories.

Overview
--------

.. needpie:: Requirements Status
:labels: not implemented, implemented but not tested, implemented and tested
.. needpie:: Tool Requirements Status
:labels: not implemented, implemented but incomplete traceability, fully linked
:colors: red,yellow, green
:filter-func: src.extensions.score_metamodel.checks.traceability_dashboard.pie_requirements_status(tool_req)

Jump to evidence tables:

- :ref:`Tool Requirement Implementation and Links table <tooling_coverage_table_impl_links>`
- :ref:`Process Requirement to Tool Requirement mapping table <tooling_coverage_table_process_mapping>`

How To Read These Levels
------------------------

The overview pie combines implementation state and traceability evidence:

- ``not implemented``:
requirement has ``implemented == NO``.
- ``implemented but incomplete traceability``:
requirement has ``implemented == YES`` or ``implemented == PARTIAL``,
but is missing at least one traceability link (code link and/or test link).
- ``fully linked``:
requirement has both ``source_code_link`` and ``testlink``.

type == 'tool_req' and implemented == 'NO'
type == 'tool_req' and testlink == '' and (implemented == 'YES' or implemented == 'PARTIAL')
type == 'tool_req' and testlink != '' and (implemented == 'YES' or implemented == 'PARTIAL')
Implementation labels used on this page:

- ``NO``: requirement is not implemented.
- ``PARTIAL``: requirement is partly implemented.
- ``YES``: requirement is implemented.

Why multiple pies are shown:

- ``Requirements with Codelinks`` shows requirement-to-implementation traceability.
- ``Requirements with linked tests`` shows requirement-to-verification traceability.
- ``Requirements fully linked`` is the strict roll-up (both links present).

These are intentionally separate because they answer different diagnostic
questions: missing code links, missing test links, or both.

In Detail
---------
@@ -48,78 +85,43 @@ In Detail
.. needpie:: Requirements with Codelinks
:labels: no codelink, with codelink
:colors: red, green

type == 'tool_req' and source_code_link == ''
type == 'tool_req' and source_code_link != ''
:filter-func: src.extensions.score_metamodel.checks.traceability_dashboard.pie_requirements_with_code_links(tool_req)

.. grid-item-card::

.. needpie:: Test Results
:labels: passed, failed, skipped
:colors: green, red, orange

type == 'testcase' and result == 'passed'
type == 'testcase' and result == 'failed'
type == 'testcase' and result == 'skipped'

.. grid:: 2
.. needpie:: Requirements with linked tests
:labels: no test link, with test link
:colors: red, green
:filter-func: src.extensions.score_metamodel.checks.traceability_dashboard.pie_requirements_with_test_links(tool_req)

.. grid-item-card::

Failed Tests

*Hint: this table is empty by definition, as PRs with failing tests are not allowed to be merged in docs-as-code repo.*

.. needtable:: FAILED TESTS
:filter: result == "failed"
:tags: TEST
:columns: name as "testcase";result;fully_verifies;partially_verifies;test_type;derivation_technique;id as "link"
.. needpie:: Requirements fully linked (code + tests)
:labels: not fully linked, fully linked
:colors: orange, green
:filter-func: src.extensions.score_metamodel.checks.traceability_dashboard.pie_requirements_fully_linked(tool_req)

.. grid-item-card::

Skipped / Disabled Tests

*Hint: this table is empty by definition, as we do not allow skipped or disabled tests in docs-as-code repo.*

.. needtable:: SKIPPED/DISABLED TESTS
:filter: result != "failed" and result != "passed"
:tags: TEST
:columns: name as "testcase";result;fully_verifies;partially_verifies;test_type;derivation_technique;id as "link"




All passed Tests
-----------------

.. needtable:: SUCCESSFUL TESTS
:filter: result == "passed"
:tags: TEST
:columns: name as "testcase";result;fully_verifies;partially_verifies;test_type;derivation_technique;id as "link"

.. needpie:: Process requirements linked by tool requirements
:labels: not linked, linked
:colors: red, green
:filter-func: src.extensions.score_metamodel.checks.traceability_dashboard.pie_process_requirements_linked(tool_req,true)

Details About Testcases
------------------------
*Data is not filled out yet within the test cases.*

.. needpie:: Test Types Used In Testcases
:labels: fault-injection, interface-test, requirements-based, resource-usage
:legend:
Process-to-Tool Mapping
-----------------------

type == 'testcase' and test_type == 'fault-injection'
type == 'testcase' and test_type == 'interface-test'
type == 'testcase' and test_type == 'requirements-based'
type == 'testcase' and test_type == 'resource-usage'
.. _tooling_coverage_table_process_mapping:

.. needtable:: Process requirement -> tool requirement mapping
:types: tool_req
:columns: satisfies as "Process Requirement";id as "Tool Requirement"
:style: table

.. needpie:: Derivation Techniques Used In Testcases
:labels: requirements-analysis, design-analysis, boundary-values, equivalence-classes, fuzz-testing, error-guessing, explorative-testing
:legend:
.. _tooling_coverage_table_impl_links:

type == 'testcase' and derivation_technique == 'requirements-analysis'
type == 'testcase' and derivation_technique == 'design-analysis'
type == 'testcase' and derivation_technique == 'boundary-values'
type == 'testcase' and derivation_technique == 'equivalence-classes'
type == 'testcase' and derivation_technique == 'fuzz-testing'
type == 'testcase' and derivation_technique == 'error-guessing'
type == 'testcase' and derivation_technique == 'explorative-testing'
.. needtable:: Tool requirement implementation and links
:types: tool_req
:columns: id as "Tool Requirement";implemented;source_code_link;testlink
:style: table