
Conversation

giuseppe (Member) commented Dec 5, 2025

Add a new "make coverage" target to generate an lcov coverage report for the test suite.

Most of the changes were done with Claude Code.

It is a first step toward increasing the coverage of our test suite.

Summary by Sourcery

Add configurable code coverage support to the build and introduce richer, structured logging and skip reasons in the Python test suite.

Build:

  • Introduce --enable-coverage configure option with gcov/lcov/gcovr integration and corresponding Makefile coverage targets and flags.

Tests:

  • Enhance Python tests to return structured skip reasons and improve diagnostics via a shared logger and an additional /tmp mount.
  • Adjust many tests to hide container stderr by default to reduce noise while preserving detailed error logging.
  • Add a tmpfs /tmp mount to the base test configuration and tweak some test setups/cleanup for robustness.

@giuseppe giuseppe requested a review from kolyshkin December 5, 2025 20:47
sourcery-ai bot commented Dec 5, 2025

Reviewer's Guide

Adds optional code coverage support (configure flag, compiler/linker flags, and make coverage targets) and refactors the Python test suite to use a shared logger, structured skip reasons, quieter run_and_get_output calls, and more robust TAP-style diagnostics and cleanup behavior.

File-Level Changes

Change | Details | Files
Introduce configurable code coverage support and make targets driven by gcov/lcov/gcovr.
  • Add --enable-coverage configure flag that wires up GCOV/LCOV/GCOVR detection and exposes COVERAGE_CFLAGS/COVERAGE_LDFLAGS via autotools
  • Apply coverage CFLAGS/LDFLAGS to libcrun, the crun binary, and the testing library when ENABLE_COVERAGE is set
  • Add non-parallel coverage-* make targets (clean/reset/check/html/xml/summary) that run the test suite serially and post-process coverage data, plus hook coverage-clean into clean-local
  • Adjust maint.mk coverage output directory and gen-coverage behavior, and mark coverage-related phony targets as non-parallel
configure.ac
Makefile.am
maint.mk
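The flag and targets described above can be exercised with a short workflow; the `coverage` and `coverage-html` target names follow the PR description, while the rest is a sketch:

```shell
./configure --enable-coverage   # detects gcov/lcov/gcovr, sets COVERAGE_CFLAGS/LDFLAGS
make
make coverage        # runs the test suite serially and post-processes the lcov data
make coverage-html   # renders an HTML report from the collected data
```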
Standardize Python test logging, error reporting, and TAP output handling across the test suite.
  • Introduce a shared logger in tests_utils configured to emit TAP-style diagnostics to stderr, and export it for use in test modules
  • Replace direct sys.stderr writes and some prints in tests with logger.info/logger.error calls to keep diagnostics consistent and parseable
  • Extend the TAP runner in tests_utils to support timing-based slow-test warnings and richer exception diagnostics (including subprocess return codes, commands, and traces)
tests/tests_utils.py
tests/test_devices.py
tests/test_mounts.py
tests/test_exec.py
tests/test_delete.py
tests/test_resources.py
tests/test_uid_gid.py
tests/test_capabilities.py
tests/test_oci_features.py
tests/test_hostname.py
tests/test_cwd.py
tests/test_limits.py
tests/test_seccomp.py
tests/test_bpf_devices.py
tests/test_domainname.py
tests/test_pid.py
tests/test_rlimits.py
tests/test_time.py
tests/test_tty.py
tests/test_hooks.py
tests/test_start.py
tests/test_mempolicy.py
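The shared TAP-style logger can be sketched as follows; `make_tap_logger` and its signature are illustrative, not the exact helper from `tests_utils.py`:

```python
import logging
import sys

def make_tap_logger(name="tests", stream=None):
    """Return a logger whose output lines start with '# ', so TAP
    harnesses parse them as diagnostics rather than test results."""
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    logger.handlers.clear()  # avoid duplicate handlers on re-import
    handler = logging.StreamHandler(stream if stream is not None else sys.stderr)
    handler.setFormatter(logging.Formatter("# %(message)s"))
    logger.addHandler(handler)
    return logger

logger = make_tap_logger()
# %-style arguments are interpolated lazily, when the record is emitted:
logger.error("wrong file owner, found %s instead of %s", "0:0", "1000:1000")
```

Routing all diagnostics through one logger is what keeps the output consistent and machine-parseable, which is the point of the refactor described above.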
Make tests more automation-friendly by returning structured skip reasons and reducing noisy stderr from helpers.
  • Change many tests to return (77, "reason") instead of bare 77 and adjust the TAP runner to print the reason in the #SKIP annotation
  • Pass hide_stderr=True into run_and_get_output in many tests to prevent command stderr from polluting TAP output while still logging via the shared logger
  • Improve robustness in several tests (e.g., more careful cleanup, safer handling when prerequisites like NUMA hardware, systemd, BPF, or /dev/fuse are missing)
tests/tests_utils.py
tests/test_mempolicy.py
tests/test_start.py
tests/test_devices.py
tests/test_mounts.py
tests/test_exec.py
tests/test_delete.py
tests/test_resources.py
tests/test_uid_gid.py
tests/test_capabilities.py
tests/test_limits.py
tests/test_seccomp.py
tests/test_bpf_devices.py
tests/test_time.py
tests/test_tty.py
tests/test_update.py
tests/test_pid_file.py
tests/test_init.c (via tests_init_CFLAGS in Makefile.am)
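The structured skip convention can be sketched like this; `run_test` and the exact SKIP formatting are assumptions about the runner, with 77 being the Automake skip exit code:

```python
def run_test(fn, name, index=1):
    """Minimal TAP-style dispatcher: tests may return a bare status
    code or a (status, reason) tuple; 77 means 'skipped'."""
    ret = fn()
    if isinstance(ret, tuple):
        status, reason = ret
    else:
        status, reason = ret, None
    if status == 77:
        note = " # SKIP" + (" %s" % reason if reason else "")
        return "ok %d - %s%s" % (index, name, note)
    if status == 0:
        return "ok %d - %s" % (index, name)
    return "not ok %d - %s" % (index, name)

def test_mempolicy():
    # Structured skip: the reason travels with the status code.
    return (77, "NUMA hardware not available")
```

With this shape, the TAP output shows why a test was skipped instead of a bare SKIP marker.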
Tighten and extend container test behavior (tmpfs /tmp, mounts, idmapped mounts, cgroups, etc.) to be more deterministic and debuggable.
  • Add a tmpfs /tmp mount with fixed mode/size to the default base_config to ensure predictable tmp directory behavior inside containers
  • Refine various mount-, idmapped mount-, and device-related tests to use clearer expectations, better cleanup, and more structured logging of mismatches
  • Add a small CFLAGS tweak for tests/init to enable debug info and optimization level suitable for coverage and debugging
tests/tests_utils.py
tests/test_mounts.py
tests/test_devices.py
tests/test_update.py
Makefile.am
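A possible shape for the tmpfs /tmp entry added to the base config; the exact mode and size values here are illustrative, not necessarily what the PR uses:

```python
# Illustrative OCI runtime-spec mount entry: a tmpfs on /tmp so tests
# get a writable, predictable /tmp inside the container.
TMP_MOUNT = {
    "destination": "/tmp",
    "type": "tmpfs",
    "source": "tmpfs",
    "options": ["rw", "nosuid", "nodev", "mode=1777", "size=64m"],
}

def add_tmp_mount(base_config):
    """Return a copy of base_config with the tmpfs /tmp mount appended."""
    config = dict(base_config)
    config["mounts"] = list(base_config.get("mounts", [])) + [TMP_MOUNT]
    return config
```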


This fixes CRIU checkpoint/restore failures that occur when CRIU tries
to create temporary directories for mount namespace reconstruction but
encounters a read-only filesystem.

The error was:
  Error (criu/mount.c:2955): mnt: Can't create a temporary directory: Read-only file system
  Error (criu/mount.c:3700): mnt: Can't remove the directory /tmp/.criu.mntns.Y96TTI: Device or resource busy

Signed-off-by: Giuseppe Scrivano <[email protected]>
sourcery-ai bot left a comment

Hey there - I've reviewed your changes and found some issues that need to be addressed.

  • Several tests now return tuples like (77, "reason") instead of plain integers; please double-check that the test runner expects and correctly handles this new return type, since previously these functions appeared to return only numeric codes.
  • There are multiple logging calls using undefined or mismatched variables (e.g. userns instead of have_userns, expected_path instead of target, found_owner/expected_owner instead of out/expected, key/soft_limit/hard_limit in rlimits, prop_value/prog_file in BPF tests, error_output vs e.output); these will raise NameError or log incorrect values and should be corrected.
  • Some error paths still mix print and logger (e.g. in test_simple_delete and test_multiple_containers_delete you log via logger.error but still use print(output) in certain branches); consider standardizing on the logger to keep output consistent and easier to parse.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- Several tests now return tuples like `(77, "reason")` instead of plain integers; please double-check that the test runner expects and correctly handles this new return type, since previously these functions appeared to return only numeric codes.
- There are multiple logging calls using undefined or mismatched variables (e.g. `userns` instead of `have_userns`, `expected_path` instead of `target`, `found_owner`/`expected_owner` instead of `out`/`expected`, `key`/`soft_limit`/`hard_limit` in rlimits, `prop_value`/`prog_file` in BPF tests, `error_output` vs `e.output`); these will raise `NameError` or log incorrect values and should be corrected.
- Some error paths still mix `print` and `logger` (e.g. in `test_simple_delete` and `test_multiple_containers_delete` you log via `logger.error` but still use `print(output)` in certain branches); consider standardizing on the logger to keep output consistent and easier to parse.

## Individual Comments

### Comment 1
<location> `tests/test_mounts.py:59` </location>
<code_context>
-                sys.stderr.write("# helper_mount failed: mount target '%s' not found in mountinfo\n" % target)
-                sys.stderr.write("# mount options: %s, tmpfs=%s, userns=%s, is_file=%s\n" % (options, tmpfs, userns, is_file))
-                sys.stderr.write("# mountinfo output: %s\n" % out[:300])
+                logger.info("helper_mount failed: mount target '%s' not found in mountinfo", expected_path)
+                logger.info("mount options: %s, tmpfs=%s, userns=%s, is_file=%s", options, tmpfs, userns, is_file)
+                logger.info("mountinfo output: %s", out)
</code_context>

<issue_to_address>
**issue (bug_risk):** Undefined variable `expected_path` in helper_mount logging will raise a NameError and break the test

In `helper_mount`, this log line references `expected_path`, but only `target` is in scope. When this error path is hit, it will raise a `NameError` and prevent returning `[None, None]`. Please log `target` (or another in-scope variable) instead to preserve the existing behavior.
</issue_to_address>
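A corrected sketch of the diagnostics, using only names that are in scope; the surrounding helper and its arguments are simplified here:

```python
import logging

logger = logging.getLogger("tests")

def helper_mount_diagnostics(target, options, out):
    # Reference only in-scope names: using the undefined `expected_path`
    # here would raise NameError on exactly the failure path, masking
    # the real test failure.
    logger.info("helper_mount failed: mount target '%s' not found in mountinfo", target)
    logger.info("mount options: %s", options)
    logger.info("mountinfo output: %s", out[:300])
```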

### Comment 2
<location> `tests/test_mounts.py:569` </location>
<code_context>
             out = run_and_get_output(conf, chown_rootfs_to=1)
             if expected not in out[0]:
-                sys.stderr.write("# wrong file owner, found %s instead of %s\n" % (out[0], expected))
+                logger.info("wrong file owner, found %s instead of %s", found_owner, expected_owner)
                 return True
             return False
</code_context>

<issue_to_address>
**issue (bug_risk):** Logging in idmapped mounts check uses undefined variables and will crash the test on failure

In the `check` helper in `test_idmapped_mounts`, the new log statement uses `found_owner` and `expected_owner`, which are not defined in this scope. When the condition is true, this will raise a `NameError` instead of returning `True`. Please change the log line to use variables that exist here (e.g. `out[0]` and `expected`), or compute owner strings locally before logging so the original behavior is preserved.
</issue_to_address>

### Comment 3
<location> `tests/test_devices.py:50` </location>
<code_context>
-                sys.stderr.write("# device mode test failed with userns=%s: expected '%s' in output\n" % (have_userns, expected))
-                sys.stderr.write("# actual output: %s\n" % out[0])
-                sys.stderr.write("# device config: %s\n" % conf['linux']['devices'][0])
+                logger.info("device mode test failed with userns=%s: expected '%s' in output", userns, expected)
+                logger.info("actual output: %s", out[0])
+                logger.info("device config: %s", conf['linux']['devices'])
</code_context>

<issue_to_address>
**issue (bug_risk):** Tests in test_devices.py log with undefined `userns` variable instead of `have_userns`

In `test_mode_device` (and `test_owner_device`), the log message uses `userns`, but the loop variable is `have_userns`. This will raise a `NameError` in the failure path instead of cleanly failing the assertion. Please use `have_userns` in the log call (and fix any similar uses) so failures are reported correctly.
</issue_to_address>

### Comment 4
<location> `tests/test_resources.py:61` </location>
<code_context>
+    out, _ = run_and_get_output(conf, hide_stderr=True)
     if "1024" not in out:
-        sys.stderr.write("# found %s instead of 1024\n" % out)
+        logger.info("found %s instead of 1024", found_value)
         return -1
     return 0
</code_context>

<issue_to_address>
**issue (testing):** Log messages in PID limit tests reference undefined `found_value` variable

In both `test_resources_pid_limit` and `test_resources_pid_limit_userns`, the new log line uses `found_value`, but only `out` is defined in that scope. If the condition fails, this will raise a `NameError` instead of logging the unexpected value. Please update these logger calls to use `out` (or another defined variable) so the tests fail cleanly without runtime errors.
</issue_to_address>

### Comment 5
<location> `tests/test_bpf_devices.py:98-100` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Avoid complex code, like conditionals, in test functions.

Google's software engineering guidelines says:
"Clear tests are trivially correct upon inspection"
To reach that avoid complex code in tests:
* loops
* conditionals

Some ways to fix this:

* Use parametrized tests to get rid of the loop.
* Move the complex logic into helpers.
* Move the complex part into pytest fixtures.

> Complexity is most often introduced in the form of logic. Logic is defined via the imperative parts of programming languages such as operators, loops, and conditionals. When a piece of code contains logic, you need to do a bit of mental computation to determine its result instead of just reading it off of the screen. It doesn't take much logic to make a test more difficult to reason about.

Software Engineering at Google / [Don't Put Logic in Tests](https://abseil.io/resources/swe-book/html/ch12.html#donapostrophet_put_logic_in_tests)
</details>
</issue_to_address>
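One way to act on this suggestion is to move the branching into a table of cases plus a helper, shown here with a plain function rather than pytest (since this suite uses its own TAP runner); the cases and the predicate are illustrative:

```python
# Table-driven form: the test body contains no loop-with-branches logic,
# only a flat pass/fail over the case table.
CASES = [
    ("c 1:3 rwm", True),
    ("b 8:0 r", False),
]

def check_device_access(rule, expected_allowed):
    """Placeholder predicate for the sketch: treat 'rwm' rules as allowed."""
    allowed = rule.endswith("rwm")
    return allowed == expected_allowed

def test_device_rules():
    return 0 if all(check_device_access(r, e) for r, e in CASES) else -1
```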

### Comment 6
<location> `tests/test_bpf_devices.py:103-105` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

</issue_to_address>

### Comment 7
<location> `tests/test_bpf_devices.py:111-113` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

</issue_to_address>

### Comment 8
<location> `tests/test_bpf_devices.py:120-122` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

</issue_to_address>

### Comment 9
<location> `tests/test_capabilities.py:34-38` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests))

<details><summary>Explanation</summary>Avoid complex code, like loops, in test functions.

Google's software engineering guidelines says:
"Clear tests are trivially correct upon inspection"
To reach that avoid complex code in tests:
* loops
* conditionals

Some ways to fix this:

* Use parametrized tests to get rid of the loop.
* Move the complex logic into helpers.
* Move the complex part into pytest fixtures.

> Complexity is most often introduced in the form of logic. Logic is defined via the imperative parts of programming languages such as operators, loops, and conditionals. When a piece of code contains logic, you need to do a bit of mental computation to determine its result instead of just reading it off of the screen. It doesn't take much logic to make a test more difficult to reason about.

Software Engineering at Google / [Don't Put Logic in Tests](https://abseil.io/resources/swe-book/html/ch12.html#donapostrophet_put_logic_in_tests)
</details>
</issue_to_address>

### Comment 10
<location> `tests/test_capabilities.py:35-38` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

</issue_to_address>

### Comment 11
<location> `tests/test_capabilities.py:51-55` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests))

</issue_to_address>

### Comment 12
<location> `tests/test_capabilities.py:52-55` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

</issue_to_address>

### Comment 13
<location> `tests/test_capabilities.py:69-73` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests))

</issue_to_address>

### Comment 14
<location> `tests/test_capabilities.py:70-73` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

</issue_to_address>

### Comment 15
<location> `tests/test_capabilities.py:85-87` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

</issue_to_address>

### Comment 16
<location> `tests/test_capabilities.py:94-95` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

</issue_to_address>

### Comment 17
<location> `tests/test_capabilities.py:101-103` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

</issue_to_address>

### Comment 18
<location> `tests/test_capabilities.py:139-140` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

</issue_to_address>

### Comment 19
<location> `tests/test_capabilities.py:144-145` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Same explanation as in Comment 18.</details>
</issue_to_address>

### Comment 20
<location> `tests/test_capabilities.py:149-150` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Same explanation as in Comment 18.</details>
</issue_to_address>

### Comment 21
<location> `tests/test_capabilities.py:154-155` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Same explanation as in Comment 18.</details>
</issue_to_address>

### Comment 22
<location> `tests/test_capabilities.py:159-160` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Same explanation as in Comment 18.</details>
</issue_to_address>

### Comment 23
<location> `tests/test_delete.py:43-70` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Same explanation as in Comment 18.</details>
</issue_to_address>

### Comment 24
<location> `tests/test_delete.py:44-53` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Same explanation as in Comment 18.</details>
</issue_to_address>

### Comment 25
<location> `tests/test_delete.py:47-49` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Same explanation as in Comment 18.</details>
</issue_to_address>

### Comment 26
<location> `tests/test_delete.py:61-67` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Same explanation as in Comment 18.</details>
</issue_to_address>

### Comment 27
<location> `tests/test_delete.py:69-70` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Same explanation as in Comment 18.</details>
</issue_to_address>

### Comment 28
<location> `tests/test_devices.py:24-25` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Same explanation as in Comment 18.</details>
</issue_to_address>

### Comment 29
<location> `tests/test_devices.py:45-56` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid loops in tests. ([`no-loop-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-loop-in-tests))

<details><summary>Explanation</summary>Same explanation as in Comment 18, applied to loops.</details>
</issue_to_address>

### Comment 30
<location> `tests/test_devices.py:49-53` </location>
<code_context>

</code_context>

<issue_to_address>
**issue (code-quality):** Avoid conditionals in tests. ([`no-conditionals-in-tests`](https://docs.sourcery.ai/Reference/Rules-and-In-Line-Suggestions/Python/Default-Rules/no-conditionals-in-tests))

<details><summary>Explanation</summary>Same explanation as in Comment 18.</details>
</issue_to_address>


```python
sys.stderr.write("# helper_mount failed: mount target '%s' not found in mountinfo\n" % target)
sys.stderr.write("# mount options: %s, tmpfs=%s, userns=%s, is_file=%s\n" % (options, tmpfs, userns, is_file))
sys.stderr.write("# mountinfo output: %s\n" % out[:300])
logger.info("helper_mount failed: mount target '%s' not found in mountinfo", expected_path)
```

**issue (bug_risk):** Undefined variable `expected_path` in `helper_mount` logging will raise a NameError and break the test

In `helper_mount`, this log line references `expected_path`, but only `target` is in scope. When this error path is hit, it will raise a NameError and prevent returning `[None, None]`. Please log `target` (or another in-scope variable) instead to preserve the existing behavior.
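The fix is a one-liner: log the in-scope `target`. A minimal sketch of the corrected error path (the function name and signature here are illustrative, not the actual `helper_mount`):

```python
import logging

logger = logging.getLogger(__name__)

def report_mount_not_found(target, options, mountinfo):
    # 'expected_path' does not exist in this scope; logging 'target'
    # instead keeps the diagnostic path from raising NameError.
    logger.info("helper_mount failed: mount target '%s' not found in mountinfo", target)
    logger.info("mount options: %s, mountinfo output: %s", options, mountinfo[:300])
    return [None, None]
```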

```python
out = run_and_get_output(conf, chown_rootfs_to=1)
if expected not in out[0]:
    sys.stderr.write("# wrong file owner, found %s instead of %s\n" % (out[0], expected))
    logger.info("wrong file owner, found %s instead of %s", found_owner, expected_owner)
```

**issue (bug_risk):** Logging in idmapped mounts check uses undefined variables and will crash the test on failure

In the check helper in `test_idmapped_mounts`, the new log statement uses `found_owner` and `expected_owner`, which are not defined in this scope. When the condition is true, this will raise a NameError instead of returning True. Please change the log line to use variables that exist here (e.g. `out[0]` and `expected`), or compute owner strings locally before logging so the original behavior is preserved.

```python
sys.stderr.write("# device mode test failed with userns=%s: expected '%s' in output\n" % (have_userns, expected))
sys.stderr.write("# actual output: %s\n" % out[0])
sys.stderr.write("# device config: %s\n" % conf['linux']['devices'][0])
logger.info("device mode test failed with userns=%s: expected '%s' in output", userns, expected)
```

**issue (bug_risk):** Tests in `test_devices.py` log with undefined `userns` variable instead of `have_userns`

In `test_mode_device` (and `test_owner_device`), the log message uses `userns`, but the loop variable is `have_userns`. This will raise a NameError in the failure path instead of cleanly failing the assertion. Please use `have_userns` in the log call (and fix any similar uses) so failures are reported correctly.

```python
out, _ = run_and_get_output(conf, hide_stderr=True)
if "1024" not in out:
    sys.stderr.write("# found %s instead of 1024\n" % out)
    logger.info("found %s instead of 1024", found_value)
```

**issue (testing):** Log messages in PID limit tests reference undefined `found_value` variable

In both `test_resources_pid_limit` and `test_resources_pid_limit_userns`, the new log line uses `found_value`, but only `out` is defined in that scope. If the condition fails, this will raise a NameError instead of logging the unexpected value. Please update these logger calls to use `out` (or another defined variable) so the tests fail cleanly without runtime errors.

@packit-as-a-service

Ephemeral COPR build failed. @containers/packit-build please check.

@packit-as-a-service

TMT tests failed. @containers/packit-build please check.

giuseppe and others added 7 commits December 5, 2025 21:16
Add enhanced diagnostic output for test failures including:
- Exception type and detailed messages
- Process return codes and failed commands
- Process output and stderr
- Working directory and test environment info

This improves debugging of test failures by providing more context
about what went wrong during test execution.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Signed-off-by: Giuseppe Scrivano <[email protected]>
Add timing thresholds to detect slow-running tests:
- Warn for tests taking >30 seconds (slow threshold)
- Warn for tests taking >60 seconds (very slow threshold)

This helps identify performance regressions and tests that may
need optimization or may be hanging.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Signed-off-by: Giuseppe Scrivano <[email protected]>
Enhance the TAP test framework to support and display specific skip reasons:
- Modify run_all_tests() to handle (return_code, reason) tuples
- Update tests to return (77, reason) instead of just 77
- Add descriptive skip reasons like "requires root privileges"
- Show skip reasons in TAP output as "#SKIP reason"

This makes test output more informative by explaining why tests
were skipped rather than showing generic skip messages.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Signed-off-by: Giuseppe Scrivano <[email protected]>
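The `(return_code, reason)` protocol described in this commit can be sketched as follows (the runner and test names are illustrative, not the actual crun harness):

```python
def tap_line(number, name, result):
    """Render one TAP line; 'result' is an exit code or a (code, reason) tuple."""
    if isinstance(result, tuple):
        code, reason = result
    else:
        code, reason = result, None
    if code == 77:  # automake's conventional "skipped" exit status
        suffix = (" #SKIP %s" % reason) if reason else " #SKIP"
        return "ok %d - %s%s" % (number, name, suffix)
    status = "ok" if code == 0 else "not ok"
    return "%s %d - %s" % (status, number, name)

def test_needs_root():
    # a test can now explain *why* it was skipped
    return (77, "requires root privileges")
```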
Implement clean logging infrastructure using Python's standard library:

Infrastructure Changes:
- Add simple logging setup in tests_utils.py using logging.basicConfig()
- Configure logger with TAP diagnostic format ('# %(message)s')
- Export logger through __all__ for use in test files
- Set default level to WARNING for production use

Comprehensive Replacement:
- Replace all sys.stderr.write() calls in tests_utils.py with logger calls
- Replace all sys.stderr.write() calls across 15+ test files
- Use appropriate log levels (warning, error, info) based on message type
- Clean up format strings for proper logger parameter passing

Benefits:
- Consistent diagnostic output with TAP '#' prefix
- Standard library only - no external dependencies
- Configurable log levels via logging module
- Proper format string handling with logger parameters
- Cleaner code without manual string formatting

All test files automatically import logger via 'from tests_utils import *'
maintaining backward compatibility while improving logging infrastructure.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Signed-off-by: Giuseppe Scrivano <[email protected]>
Signed-off-by: Giuseppe Scrivano <[email protected]>
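The `'# %(message)s'` format mentioned above is what keeps logger output TAP-safe: every record comes out as a `#` diagnostic line. A minimal sketch (handler wiring simplified relative to the actual tests_utils.py):

```python
import io
import logging

def make_tap_logger(stream):
    # Format every record as a TAP diagnostic ('# ...') so logger output
    # can never be mistaken for a TAP result line.
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter("# %(message)s"))
    logger = logging.getLogger("tap-demo")
    logger.handlers = [handler]
    logger.propagate = False
    logger.setLevel(logging.WARNING)
    return logger

buf = io.StringIO()
log = make_tap_logger(buf)
# format arguments are passed to the logger, not %-formatted by hand
log.warning("wrong file owner, found %s instead of %s", "0:0", "1000:1000")
```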
When running tests with --enable-coverage, gcov writes diagnostic
messages to stderr which get mixed with program output due to
stderr=subprocess.STDOUT in run_and_get_output(). This causes
test failures as the TAP parser encounters unexpected output.

Add hide_stderr=True to most run_and_get_output() calls to discard
coverage diagnostics while preserving the actual program output
needed for test validation.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Signed-off-by: Giuseppe Scrivano <[email protected]>
Signed-off-by: Giuseppe Scrivano <[email protected]>
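The `hide_stderr` behavior can be sketched like this (the real `run_and_get_output` takes an OCI config rather than an argv list; this only illustrates the stderr routing):

```python
import subprocess

def run_and_get_output(argv, hide_stderr=False):
    # With hide_stderr=True, stderr (e.g. gcov diagnostics) is discarded
    # instead of being merged into stdout, so the TAP stream stays parseable.
    stderr = subprocess.DEVNULL if hide_stderr else subprocess.STDOUT
    proc = subprocess.run(argv, stdout=subprocess.PIPE, stderr=stderr)
    return proc.stdout.decode(), proc.returncode

out, rc = run_and_get_output(
    ["sh", "-c", "echo payload; echo gcov-noise >&2"], hide_stderr=True)
```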