diff --git a/README.md b/README.md
index 236e807e..634065a5 100644
--- a/README.md
+++ b/README.md
@@ -24,7 +24,6 @@ Initiatives.
## Contributing statement
-
## How to setup
This repository contains the source files for the [pyOpenSci Python packaging guide](https://pyopensci.org/python-package-guide).
@@ -44,12 +43,13 @@ To build, follow these steps:
1. Install `nox`
```console
- $ python -m pip install nox
+ python -m pip install nox
```
+
2. Build the documentation:
```console
- $ nox -s docs
+ nox -s docs
```
This should create a local environment in a `.nox` folder, build the documentation (as specified in the `noxfile.py` configuration), and the output will be in `_build/html`.
@@ -57,9 +57,13 @@ This should create a local environment in a `.nox` folder, build the documentati
To build live documentation that updates when you update local files, run the following command:
```console
-$ nox -s docs-live
+nox -s docs-live
```
+If you are a uv user, you can also skip installing `nox` and use `uvx` instead:
+
+`uvx nox -s docs-live`
+
### Building for release
When building for release, the docs are built multiple times for each translation,
@@ -67,21 +71,21 @@ but translations are only included in the production version of the guide after
The sphinx build environment is controlled by an environment variable `SPHINX_ENV`
-- when `SPHINX_ENV=development` (default), sphinx assumes all languages are built,
+* when `SPHINX_ENV=development` (default), sphinx assumes all languages are built,
and includes them in the language selector
-- when `SPHINX_ENV=production`, only those languages in `release_languages` (set in `conf.py`)
+* when `SPHINX_ENV=production`, only those languages in `release_languages` (set in `conf.py`)
are built and included in the language selector.
Most of the time you should not need to set `SPHINX_ENV`,
as it is forced by the primary nox sessions intended to be used for release or development:
`SPHINX_ENV=development`
-- `docs-live` - autobuild english
-- `docs-live-lang` - autobuild a single language
-- `docs-live-langs` - autobuild all languages
+* `docs-live` - autobuild English
+* `docs-live-lang` - autobuild a single language
+* `docs-live-langs` - autobuild all languages
`SPHINX_ENV=production`
-- `build-test` - build all languages for production
+* `build-test` - build all languages for production
## Contributing to this guide
diff --git a/_static/pyos.css b/_static/pyos.css
index 7a9334b1..37bb5a8e 100644
--- a/_static/pyos.css
+++ b/_static/pyos.css
@@ -340,21 +340,21 @@ See https://github.com/pydata/pydata-sphinx-theme/pull/1784
src: url("./fonts/poppins-v20-latin-600.woff2") format("woff2"); /* Chrome 36+, Opera 23+, Firefox 39+, Safari 12+, iOS 10+ */
}
-/* nunitosans-regular - latin */
+/* nunitosans-variable - latin */
@font-face {
font-display: swap;
font-family: "NunitoSans";
font-style: normal;
- font-weight: 400;
+ font-weight: 200 800;
src: url("./fonts/NunitoSans-VariableFont.woff2") format("woff2");
}
-/* nunitosans-italic - latin */
+/* nunitosans-italic-variable - latin */
@font-face {
font-display: swap;
font-family: "NunitoSans";
font-style: italic;
- font-weight: 400;
+ font-weight: 200 800;
src: url("./fonts/NunitoSans-Italic-VariableFont.woff2") format("woff2");
}
diff --git a/tests/code-cov.md b/tests/code-cov.md
index 3e4d2b97..175f64ac 100644
--- a/tests/code-cov.md
+++ b/tests/code-cov.md
@@ -10,31 +10,44 @@ using code coverage effectively.
A good practice is to ensure that every line of your code runs at least once
during your test suite. This helps you:
-- Identify untested parts of your codebase.
-- Catch bugs that might otherwise go unnoticed.
-- Build confidence in your software's stability.
+- **Identify untested parts:** Find areas of your codebase that your
+  tests never exercise.
+- **Catch bugs:** Surface bugs that might otherwise go unnoticed.
+- **Build confidence:** Gain confidence in your software's stability.
## Limitations of code coverage
While high code coverage is valuable, it has its limits:
-- **Difficult-to-test code:** Some parts of your code might be challenging to
- test, either due to complexity or limited resources.
-- **Missed edge cases:** Running all lines of code doesn’t guarantee that edge
- cases are handled correctly.
+- **Difficult-to-test code:** Some parts of your code might be
+ challenging to test, either due to complexity or limited resources.
+- **Missed edge cases:** Running all lines of code doesn't guarantee
+ that edge cases are handled correctly.
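+
+For example, the hypothetical function below reaches 100% line coverage
+with a single test, yet the `b == 0` edge case is never exercised:
+
+```python
+# Sketch: full line coverage can still miss an edge case.
+def ratio(a, b):
+    return a / b  # this line runs during the test below...
+
+
+def test_ratio():
+    # ...so coverage reports 100%, but ratio(1, 0), which raises
+    # ZeroDivisionError, is never tested
+    assert ratio(6, 3) == 2
+```
+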
Ultimately, you should focus on how your package will be used and ensure your
tests cover those scenarios adequately.
## Tools for analyzing Python package code coverage
-Some common services for analyzing code coverage are [codecov.io](https://about.codecov.io/) and [coveralls.io](https://coveralls.io/). These projects are free for open source tools and will provide dashboards that tell you how much of your codebase is covered during your tests. We recommend setting up an account (on either CodeCov or Coveralls) and using it to keep track of your code coverage.
+Some common services for analyzing code coverage are
+[codecov.io](https://about.codecov.io/) and [coveralls.io](https://coveralls.io/).
+These services are free for open source projects and provide dashboards that
+show how much of your codebase is covered during your tests. We recommend
+setting up an account (on either CodeCov or Coveralls) and using it to
+keep track of your code coverage.
:::{figure} ../images/code-cov-stravalib.png
:height: 450px
:alt: Screenshot of the code cov service - showing test coverage for the stravalib package. This image shows a list of package modules and the associated number of lines and % lines covered by tests. At the top of the image, you can see what branch is being evaluated and the path to the repository.
-The CodeCov platform is a useful tool if you wish to track code coverage visually. Using it, you can not only get the same summary information that you can get with the **pytest-cov** extension. You can also see what lines are covered by your tests and which are not. Code coverage is useful for evaluating unit tests and/or how much of your package code is "covered". It, however, will not evaluate things like integration tests and end-to-end workflows.
+The CodeCov platform is a useful tool if you wish to track code coverage
+visually. Using it, you can get the same summary information that you can get
+with the **pytest-cov** extension, and you can also see which lines are
+covered by your tests and which are not. Code coverage is useful for
+evaluating [unit tests](test-types.md#unit-tests) and how much of your
+package code is "covered". However, it will not evaluate things like
+[integration tests](test-types.md#integration-tests) and
+[end-to-end workflows](test-types.md).
:::
@@ -46,21 +59,37 @@ You can also create and upload typing reports to CodeCov.
## Exporting Local Coverage Reports
-In addition to using services like CodeCov or Coveralls, you can generate local coverage reports directly using the **coverage.py** tool. This can be especially useful if you want to create reports in Markdown or HTML format for offline use or documentation.
+In addition to using services like CodeCov or Coveralls, you can generate
+local coverage reports directly using the **coverage.py** tool. This can be
+especially useful if you want to create reports in Markdown or HTML format for
+offline use or documentation.
To generate a coverage report in **Markdown** format, run:
```bash
-$ python -m coverage report --format=markdown
+python -m coverage report --format=markdown
```
-This command will produce a Markdown-formatted coverage summary that you can easily include in project documentation or share with your team.
-To generate an HTML report that provides a detailed, interactive view of which lines are covered, use:
+This command will produce a Markdown-formatted coverage summary that you can
+include in project documentation or share with your team.
+
+To generate an HTML report that provides a detailed, interactive view of which
+lines are covered, use:
```bash
python -m coverage html
```
-The generated HTML report will be saved in a directory named htmlcov by default. Open the index.html file in your browser to explore your coverage results.
+The generated HTML report will be saved in a directory named `htmlcov` by
+default. Open the `index.html` file in your browser to explore your coverage
+results.
+
+These local reports are an excellent way to quickly review coverage without
+setting up an external service.
+
+## Next steps
-These local reports are an excellent way to quickly review coverage without setting up an external service.
+Writing meaningful tests is the foundation of useful coverage.
+See [Write tests](write-tests.md) and [Test types](test-types.md)
+to learn more about developing better test suites. Learn how to run your tests both
+[locally](run-tests.md) and in [continuous integration](tests-ci.md).
diff --git a/tests/index.md b/tests/index.md
index dc4666d5..254e4b18 100644
--- a/tests/index.md
+++ b/tests/index.md
@@ -9,7 +9,6 @@ In this section, you will learn more about the importance of writing
tests for your Python package and how you can set up infrastructure
to run your tests both locally and on GitHub.
-
:::::{grid} 1 1 3 2
:class-container: text-center
:gutter: 3
@@ -20,9 +19,8 @@ to run your tests both locally and on GitHub.
:link-type: doc
:class-card: left-aligned
-Learn more about the art of writing tests for your Python package.
-Learn about why you should write tests and how they can help you and
-potential contributors to your project.
+Learn about the importance of writing tests for your Python
+package and how they help you and potential contributors.
:::
::::
@@ -32,8 +30,8 @@ potential contributors to your project.
:link-type: doc
:class-card: left-aligned
-There are three general types of tests that you can write for your Python
-package: unit tests, integration tests and end-to-end (or functional) tests. Learn about all three.
+Get to know the three test types: unit, integration, and
+end-to-end tests. Learn when and how to use each.
:::
::::
@@ -43,8 +41,8 @@ package: unit tests, integration tests and end-to-end (or functional) tests. Lea
:link-type: doc
:class-card: left-aligned
-If you expect your users to use your package across different versions
-of Python, then using an automation tool such as nox to run your tests is useful. Learn about the various tools that you can use to run your tests across python versions here.
+Learn about testing frameworks like pytest, and automation tools such as
+nox and tox that you can use to run tests across different Python versions
+on your computer.
:::
::::
@@ -54,21 +52,23 @@ of Python, then using an automation tool such as nox to run your tests is useful
:link-type: doc
:class-card: left-aligned
-Continuous integration platforms such as GitHub Actions can be
-useful for running your tests across both different Python versions
-and different operating systems. Learn about setting up tests to run in Continuous Integration here.
+Set up continuous integration with GitHub Actions to run
+tests across Python versions and operating systems.
:::
::::
-:::::
-
-
-:::{figure-md} fig-target
-
-
+::::{grid-item}
+:::{card} ✨ Code coverage ✨
+:link: code-cov
+:link-type: doc
+:class-card: left-aligned
-Graphic showing the elements of the packaging process.
+Measure how much of your package code runs during tests.
+Learn to generate local reports and visualize coverage online.
:::
+::::
+
+:::::
```{toctree}
:hidden:
@@ -79,6 +79,7 @@ Intro
Write tests
Test types
Run tests locally
+Run tests with Nox
Run tests online (using CI)
Code coverage
```
diff --git a/tests/run-tests-nox.md b/tests/run-tests-nox.md
new file mode 100644
index 00000000..045bd3b8
--- /dev/null
+++ b/tests/run-tests-nox.md
@@ -0,0 +1,172 @@
+```{eval-rst}
+:og:description: Learn how to use Nox to run tests for your Python package
+ locally across multiple Python versions and operating systems.
+:og:title: Run tests for your Python package with Nox
+```
+
+# Run tests with Nox
+
+**Nox** is a Python-based automation tool for running tests across multiple
+Python versions and managing isolated test environments. If you prefer
+Python-driven configuration over TOML, or need complex automation workflows,
+Nox is an excellent choice.
+
+For more information about Nox, see the
+[official Nox documentation](https://nox.thea.codes/) or the
+[Scientific Python guide to testing](https://scientific-python.org/tools/testing).
+
+## Why Nox?
+
+**Nox** is a great automation tool because it:
+
+* Is Python-based, making it accessible if you already know Python
+* Will create isolated environments to run workflows
+* Supports complex, custom automation beyond standard testing
+* Is flexible and powerful for intricate build and test scenarios
+
+`nox` simplifies creating and managing testing environments. With `nox`, you
+can set up virtual environments and run tests across Python versions using the
+environment manager of your choice with a single command.
+
+## Set up Nox
+
+To get started with Nox, you create a `noxfile.py` file at the root of your
+project directory. You then define commands using Python functions.
+
+:::{note}
+Nox installations
+
+When you install and use Nox to run tests across different Python versions,
+Nox will create and manage an individual `venv` environment for each Python
+version that you specify in the Nox session function.
+:::
+
+Nox can also be used for other development tasks such as building
+documentation, creating your package distribution, and testing installations
+across both PyPI-related environments (e.g., venv, virtualenv) and `conda`
+(e.g., `conda-forge`).
+
+## Test environments
+
+By default, `nox` uses Python's built-in `venv` environment manager. A virtual
+environment (`venv`) is a self-contained Python environment that allows you to
+isolate and manage dependencies for different Python projects. It helps ensure
+that project-specific libraries and packages do not interfere with each other,
+promoting a clean and organized development environment.
+
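+You can check whether the interpreter you are running belongs to a virtual
+environment from within Python itself (a small sketch; nothing here is
+specific to Nox):
+
+```python
+# Inside an activated venv, sys.prefix points at the environment directory,
+# while sys.base_prefix still points at the base Python installation.
+import sys
+
+in_venv = sys.prefix != sys.base_prefix
+print("Running inside a virtual environment:", in_venv)
+```
+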
+### Nox with venv environments
+
+Below is an example of setting up Nox to run tests using `venv`, which is the
+built-in environment manager that comes with base Python.
+
+Note that the example below assumes that you have set up your `pyproject.toml`
+to declare test dependencies using `project.optional-dependencies`:
+
+```toml
+[build-system]
+requires = ["hatchling"]
+build-backend = "hatchling.build"
+
+[project]
+name = "pyosPackage"
+version = "0.1.0"
+dependencies = [
+ "geopandas",
+ "xarray",
+]
+
+[project.optional-dependencies]
+tests = ["pytest", "pytest-cov"]
+```
+
+With this setup, you can use `session.install(".[tests]")` to install your
+test dependencies. Notice that below, a single Nox session runs your tests
+on four different Python versions (3.9, 3.10, 3.11, and 3.12).
+
+:::{note}
+For this to run, you will need Python 3.9, 3.10, 3.11, and 3.12 installed
+on your computer. Otherwise, Nox will skip running tests for whichever
+versions are missing.
+:::
+
+```python
+# This code would live in a noxfile.py file located at the root of your
+# project directory
+import nox
+
+@nox.session(python=["3.9", "3.10", "3.11", "3.12"])
+def test(session):
+ # Install dependencies
+ session.install(".[tests]")
+ # Run tests
+ session.run("pytest")
+```
+
+Above, you create a Nox session in the form of a function with a
+`@nox.session` decorator. Notice that within the decorator you declare the
+versions of Python that you wish to run your tests on.
+
+To run the above, you'd execute the following command, specifying which
+session with `--session` (sometimes shortened to `-s`). Your function above is
+called `test`, therefore the session name is `test`:
+
+```bash
+nox --session test
+```
+
+### Nox with conda / mamba
+
+Below is an example of setting up Nox to use mamba (or conda) as your
+environment manager. Unlike venv, conda can automatically install the
+various versions of Python that you need, so you won't need to install all
+four Python versions yourself as you do with `venv`.
+
+:::{note}
+For `conda` to work with `nox`, you will need to ensure that either `conda` or
+`mamba` is installed on your computer.
+:::
+
+```python
+# This code should live in your noxfile.py file
+import nox
+
+# The syntax below allows you to use mamba / conda as your environment
+# manager. If you use this approach, you don't have to worry about installing
+# different versions of Python
+
+@nox.session(venv_backend='mamba', python=["3.9", "3.10", "3.11", "3.12"])
+def test_mamba(session):
+ """Nox function that installs dev requirements and runs tests on Python
+ 3.9 through 3.12.
+ """
+    # Install the package plus its test dependencies; pip (session.install)
+    # is used here because conda cannot install a local package from a path
+    session.install(".[tests]")
+ # Run tests using any parameters that you need
+ session.run("pytest")
+```
+
+To run the above session you'd use:
+
+```bash
+nox --session test_mamba
+```
+
+## Hatch vs Nox
+
+If you're trying to decide between Hatch and Nox, see the
+[comparison and recommendations on the main testing page](run-tests.md#nox-vs-hatch-choosing-the-right-tool).
+
+## In summary
+
+* **Choose Hatch** if you're already using Hatch for packaging and want
+ everything in one place
+* **Choose Nox** if you need maximum flexibility, prefer Python-driven
+ configuration, or need complex automation workflows
+
+## Next steps
+
+Now that you understand how to run tests locally with Nox, you can learn about
+[running tests automatically with continuous integration](tests-ci) or
+[running tests with Hatch](run-tests.md).
diff --git a/tests/run-tests.md b/tests/run-tests.md
index 275c2813..b2affb4b 100644
--- a/tests/run-tests.md
+++ b/tests/run-tests.md
@@ -1,74 +1,101 @@
```{eval-rst}
-:og:description: Learn how to setup and run tests for your Python package locally on your computer using automation tools such as Nox. Also learn about other tools that scientific Python community members use to run tests.
-:og:title: Run tests for your Python package across Python versions
+:og:description: Learn how to run tests for your Python package locally
+ across multiple Python versions and operating systems using Hatch or Nox.
+:og:title: Run tests for your Python package across environments
```
-# Run Python package tests
-
-Running your tests is important to ensure that your package
-is working as expected. It's good practice to consider that tests will run on your computer and your users' computers that may be running a different Python version and operating systems. Think about the following when running your tests:
-
-1. Run your test suite in a matrix of environments that represent the Python versions and operating systems your users are likely to have.
-2. Running your tests in an isolated environment provides confidence in the tests and their reproducibility. This ensures that tests do not pass randomly due to your computer's specific setup. For instance, you might have unexpectedly installed dependencies on your local system that are not declared in your package's dependency list. This oversight could lead to issues when others try to install or run your package on their computers.
-
-On this page, you will learn about the tools that you can use to both run tests in isolated environments and across
-Python versions.
-
-
+# Run tests for your Python package
+
+Running your tests across different Python versions and operating systems is
+critical to ensuring your package works for your users, who may be running
+different Python versions and operating systems than you are.
+
+This page teaches you how to run tests locally in isolated environments and
+across multiple Python versions. You'll learn about two main automation tools:
+[**Hatch**](https://hatch.pypa.io/) and
+[**Nox**](https://nox.thea.codes/en/stable/index.html). In the next lesson,
+you will learn about running your tests online in
+[continuous integration (CI)](tests-ci).
+
+## Why test across multiple environments?
+
+When you develop a package on your computer, it works in one specific
+environment: your Python version, your operating system, and your installed
+dependencies. Your users, however, will run your code in many different
+environments. By running your tests across multiple Python versions and
+operating systems, you catch compatibility issues before users do.
+
+Additionally, running tests in isolated environments ensures that your tests
+pass because of your code, not because of unexpected dependencies installed
+on your computer. This gives you confidence that your package will work when
+others install it.
+
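+As a concrete example of a compatibility issue, the hypothetical function
+below works on Python 3.9 and later but breaks for a user on Python 3.8,
+because `str.removeprefix()` was only added in Python 3.9:
+
+```python
+def strip_test_prefix(name):
+    # str.removeprefix() was added in Python 3.9; on Python 3.8 this call
+    # raises AttributeError even though your own local tests may pass
+    return name.removeprefix("test_")
+
+
+print(strip_test_prefix("test_login"))  # login (on Python 3.9 and later)
+```
+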
+On this page, you will learn about the tools that you can use to both
+run tests in isolated environments and across Python versions.
+
+:::{seealso}
+**Related pages:**
+
+* [Write tests](write-tests.md) for best practices on writing test
+ suites
+* [Test types](test-types.md) to understand unit, integration, and
+ end-to-end tests
+* [Run tests online with CI](tests-ci.md) for GitHub Actions setup
+* [Code coverage](code-cov.md) to measure how much code your tests
+ cover
+:::
## Tools to run your tests
-There are three categories of tools that will make is easier to setup
+There are three categories of tools that will make it easier to set up
and run your tests in various environments:
-1. A **test framework**, is a package that provides a particular syntax and set of tools for _both writing and running your tests_. Some test frameworks also have plugins that add additional features such as evaluating how much of your code the tests cover. Below you will learn about the **pytest** framework which is one of the most commonly used Python testing frameworks in the scientific ecosystem. Testing frameworks are essential but they only serve to run your tests. These frameworks don't provide a way to easily run tests across Python versions without the aid of additional automation tools.
-2. **Automation tools** allow you to automate running workflows such as tests in specific ways using user-defined commands. For instance it's useful to be able to run tests across different Python versions with a single command. Tools such as [**nox**](https://nox.thea.codes/en/stable/index.html) and [**tox**](https://tox.wiki/en/latest/index.html) also allow you to run tests across Python versions. However, it will be difficult to test your build on different operating systems using only nox and tox - this is where continuous integration (CI) comes into play.
-3. **Continuous Integration (CI):** is the last tool that you'll need to run your tests. CI will not only allow you to replicate any automated builds you create using nox or tox to run your package in different Python environments. It will also allow you to run your tests on different operating systems (Windows, Mac and Linux). [We discuss using CI to run tests here](tests-ci).
-
-:::{list-table} Table: Testing & Automation Tool
-:widths: 40 15 15 15 15
-:header-rows: 1
-:align: center
-:stub-columns: 1
-:class: pyos-table
-
-* - Features
- - Testing Framework (pytest)
- - Test Runner (Tox)
- - Automation Tools (Nox)
- - Continuous Integration (GitHub Actions)
-* - Run Tests Locally
- -
- -
- -
- -
-* - Run Tests Online
- -
- -
- -
- -
-* - Run Tests Across Python Versions
- -
- -
- -
- -
-* - Run Tests In Isolated Environments
- -
- -
- -
- -
-* - Run Tests Across Operating Systems (Windows, MacOS, Linux)
- -
- -
- -
- -
-* - Use for other automation tasks (e.g. building docs)
- -
- -
- -
- -
-:::
-
+1. **Testing framework (pytest):** Provides the syntax and tools for
+ writing and running your tests. Learn more from the
+ [pytest documentation](https://docs.pytest.org/). Below you will learn
+ about pytest, the most commonly used testing framework in the scientific
+ Python ecosystem. Testing frameworks are essential for running tests, but
+ they don't provide an easy way to run tests across Python versions
+ or in isolated environments—that's where automation tools come in.
+
+2. **Automation tools (Nox, Tox, Hatch):** Allow you to run tests in
+ isolated environments and across multiple Python versions with a
+ single command. We focus on
+ [**Hatch**](https://hatch.pypa.io/) and
+ [**Nox**](https://nox.thea.codes/) below. These tools create virtual
+ environments automatically and ensure your tests run consistently.
+ However, they typically only test on your local operating system.
+
+3. **Continuous Integration (CI):** Runs your tests online across
+ different operating systems (Windows, Mac, and Linux) and Python
+ versions. CI integrates with platforms like GitHub Actions to
+ automatically test every pull request and code change.
+ [Learn about CI here](tests-ci), or see our
+ [continuous integration tutorial](../../maintain-automate/ci.html)
+ for more context.
+
+### Quick comparison: what each tool does
+
+**Testing Framework (pytest):**
+
+* Runs your tests locally in your current Python environment
+* Provides the core syntax for writing tests (assertions, fixtures,
+ etc.)
+* Can be extended with plugins (like pytest-cov for coverage)
+
+**Automation Tools (Nox, Tox, Hatch):**
+
+* Run tests locally across multiple Python versions
+* Create and manage isolated virtual environments automatically
+* Can automate other tasks like building documentation
+* Make it easy to reproduce test environments
+
+**Continuous Integration (GitHub Actions):**
+
+* Runs tests online automatically for every pull request
+* Tests across different operating systems (Windows, Mac, Linux)
+* Tests across multiple Python versions in parallel
+* Can automate deployments, releases, and other workflows
## What testing framework / package should I use to run tests?
@@ -77,7 +104,7 @@ We recommend using `Pytest` to build and run your package tests. Pytest is the m
[The Pytest package](https://docs.pytest.org/en/latest/) also has a number of
extensions that can be used to add functionality such as:
-- [pytest-cov](https://pytest-cov.readthedocs.io/en/latest/) allows you to analyze the code coverage of your package during your tests, and generates a report that you can [upload to codecov](https://about.codecov.io/).
+* [pytest-cov](https://pytest-cov.readthedocs.io/en/latest/) allows you to analyze the code coverage of your package during your tests, and generates a report that you can [upload to codecov](https://about.codecov.io/).
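+
+A minimal pytest test is just a Python function whose name starts with
+`test_`, using plain `assert` statements (the file and function names below
+are hypothetical):
+
+```python
+# contents of tests/test_math.py (a hypothetical test module)
+def add(a, b):
+    return a + b
+
+
+def test_add():
+    # pytest discovers this function automatically because its name
+    # starts with "test_"
+    assert add(2, 3) == 5
+```
+
+Running `pytest` from your project root discovers and runs this test.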
:::{todo}
Learn more about code coverage here. (add link)
@@ -122,147 +149,183 @@ We will focus on [Nox](https://nox.thea.codes/) in this guide. `nox` is a Python
```{admonition} Other automation tools you'll see in the wild
:class: note
-- **[Tox](https://tox.wiki/en/latest/index.html#useful-links)** is an automation tool that supports common steps such as building documentation, running tests across various versions of Python, and more.
-
-- **[Hatch](https://github.com/pypa/hatch)** is a modern end-to-end packaging tool that works with the popular build backend called hatchling. `hatch` offers a `tox`-like setup where you can run tests locally using different Python versions. If you are using `hatch` to support your packaging workflow, you may want to also use its testing capabilities rather than using `nox`.
+- **[Tox](https://tox.wiki/en/latest/index.html#useful-links)** is an
+ automation tool that supports common steps such as building
+ documentation, running tests across various versions of Python, and
+ more.
-* [**make:**](https://www.gnu.org/software/make/manual/make.html) Some developers use Make, which is a build automation tool, for running tests
-due to its versatility; it's not tied to a specific language and can be used
-to run various build processes. However, Make's unique syntax and approach can
-make it more challenging to learn, particularly if you're not already familiar
-with it. Make also won't manage environments for you like **nox** will do.
+- **[Make](https://www.gnu.org/software/make/manual/make.html)** is a
+ build automation tool that some developers use for running tests due
+ to its versatility. However, Make's unique syntax can be challenging
+ to learn, and it won't manage environments for you like Hatch and Nox
+ do.
```
-## Run tests across Python versions with nox
+## Run tests with Hatch
-**Nox** is a great automation tool to learn because it:
+**Hatch** is a modern Python packaging and environment manager that
+integrates test running capabilities directly into your `pyproject.toml`.
+Unlike Nox (which uses a separate `noxfile.py`), Hatch keeps all your
+project configuration in one place, making it ideal if you're already
+using Hatch for packaging workflows.
-- Is Python-based making it accessible if you already know Python and
-- Will create isolated environments to run workflows.
+### Why Hatch for testing?
-`nox` simplifies creating and managing testing environments. With `nox`, you can
-set up virtual environments, and run tests across Python versions using the environment manager of your choice with a
-single command.
+* Configuration lives in `pyproject.toml` alongside your project
+ metadata
+* Integrates seamlessly with Hatch's packaging and build workflows
+* No separate Python file needed (unlike Nox)
+* Easy to share standardized test environments across your team
-:::{note} Nox Installations
+### Setting up Hatch environments
-When you install and use nox to run tests across different Python versions, nox will create and manage individual `venv` environments for each Python version that you specify in the nox function.
+Hatch environments are defined in your `pyproject.toml`. Rather than
+duplicating dependencies, use `dependency-groups` to reference your test
+dependencies:
-Nox will manage each environment on its own.
-:::
+```toml
+[dependency-groups]
+tests = [
+ "pytest>=7.0",
+ "pytest-cov",
+]
-Nox can also be used for other development tasks such as building
-documentation, creating your package distribution, and testing installations
-across both PyPI related environments (e.g. venv, virtualenv) and `conda` (e.g. `conda-forge`).
+[tool.hatch.envs.test]
+dependency-groups = [
+ "tests",
+]
-To get started with nox, you create a `noxfile.py` file at the root of your
-project directory. You then define commands using Python functions.
-Some examples of that are below.
+[tool.hatch.envs.test.scripts]
+run = "pytest {args:--cov=test --cov-report=term-missing --cov-report=xml}"
-## Test Environments
+```
-By default, `nox` uses the Python built in `venv` environment manager. A virtual environment (`venv`) is a self-contained Python environment that allows you to isolate and manage dependencies for different Python projects. It helps ensure that project-specific libraries and packages do not interfere with each other, promoting a clean and organized development environment.
+This approach keeps your test dependencies in one place and avoids
+duplication. For a complete example, see our
+[packaging template tutorial](https://www.pyopensci.org/tutorials/create-python-package.html)
+which shows a full `pyproject.toml` configuration.
-An example of using nox to run tests in `venv` environments for Python versions 3.9, 3.10, 3.11 and 3.12 is below.
+### Running tests with Hatch
-```{warning}
-Note that for the code below to work, you need to have all 4 versions of Python installed on your computer for `nox` to find.
+Once you've defined your test environment, you can run tests with simple
+commands:
+
+**List available environments:**
+
+```bash
+hatch env show
```
-### Nox with venv environments
+**Run pytest in the test environment:**
-```{todo}
-TODO: add some tests above and show what the output would look like in the examples below...
+```bash
+hatch run test:run
```
-Below is an example of setting up nox to run tests using `venv` which is the built in environment manager that comes with base Python.
+### Testing across Python versions
-Note that the example below assumes that you have [setup your `pyproject.toml` to declare test dependencies in a way that pip
-can understand](../package-structure-code/declare-dependencies.md). An example
-of that setup is below.
+To test across multiple Python versions, define a matrix in your
+`pyproject.toml`:
```toml
-[build-system]
-requires = ["hatchling"]
-build-backend = "hatchling.build"
-
-[project]
-name = "pyosPackage"
-version = "0.1.0"
-dependencies = [
- "geopandas",
- "xarray",
+[dependency-groups]
+tests = [
+ "pytest>=7.0",
+ "pytest-cov",
]
-[project.optional-dependencies]
-tests = ["pytest", "pytest-cov"]
+[tool.hatch.envs.test]
+dependency-groups = [
+ "tests",
+]
+
+[[tool.hatch.envs.test.matrix]]
+python = ["3.10", "3.11", "3.12"]
```
-If you have the above setup, then you can use `session.install(".[tests]")` to install your test dependencies.
-Notice that below one single nox session allows you to run
-your tests on 4 different Python environments (Python 3.9, 3.10, 3.11, and 3.12).
+Then run all versions with a single command:
-```python
-# This code would live in a noxfile.py file located at the root of your project directory
-import nox
+```bash
+hatch run test:run
+```
-# For this to run you will need to have python3.9, python3.10 and python3.11 installed on your computer. Otherwise nox will skip running tests for whatever versions are missing
+Hatch will automatically run your tests on Python 3.10, 3.11, and 3.12.
+If you only want to test a specific Python version:
-@nox.session(python=["3.9", "3.10", "3.11", "3.12"])
-def test(session):
+```bash
+hatch run test.py3.11:run
+```
+
+### Using Hatch in GitHub Actions
- # install
- session.install(".[tests]")
+Hatch integrates well with CI/CD. Here's a minimal GitHub Actions
+setup:
- # Run tests
- session.run("pytest")
+```yaml
+name: Run tests
+on:
+ pull_request:
+ push:
+ branches:
+ - main
+
+jobs:
+ test:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+      - uses: actions/setup-python@v5
+      - run: python -m pip install hatch
+      - run: hatch run test:run
```
-Above you create a nox session in the form of a function
-with a `@nox.session` decorator. Notice that within the decorator you declare the versions of python that you
-wish to run.
+Since all test dependencies are declared in `pyproject.toml`, your CI
+environment is reproducible and consistent with local testing.
-To run the above you'd execute the following command, specifying which session
-with `--session` (sometimes shortened to `-s`). Your function above
-is called test, therefore the session name is test.
+## Nox vs Hatch: choosing the right tool
-```
-nox --session test
-```
+Both Hatch and Nox are excellent automation tools for running tests
+across Python versions. Here's how they compare to help you decide which
+fits your workflow:
-### Nox with conda / mamba
+### Hatch
-Below is an example for setting up nox to use mamba (or conda) for your
-environment manager.
-Note that unlike venv, conda can automatically install
-the various versions of Python that you need. You won't need to install all four Python versions if you use conda/mamba, like you do with `venv`.
+* **Configuration:** All settings live in `pyproject.toml` alongside your
+ project metadata
+* **Integration:** Seamlessly integrates with Hatch's packaging and build
+ workflows—use the same tool for everything
+* **Learning curve:** Easier if you prefer configuration over code
+* **Best for:** Teams using Hatch for packaging, or those who want
+ standardized configuration in one place
-```{note}
-For `conda` to work with `nox`, you will need to
-ensure that either `conda` or `mamba` is installed on your computer.
-```
+### Nox
-```python
-# This code should live in your noxfile.py file
-import nox
+* **Configuration:** Python-driven via `noxfile.py` for maximum flexibility
+* **Customization:** Great for complex workflows that need custom logic
+* **Learning curve:** Easier if you already know Python and want flexible
+ session control
+* **Best for:** Complex automation needs, building docs alongside tests,
+ or workflows that don't fit the standard model
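
For comparison, a minimal `noxfile.py` that mirrors the Hatch matrix shown earlier might look like the sketch below. It assumes your test dependencies are declared as a `tests` extra in `pyproject.toml`; because this file is only meaningful when driven by the `nox` command, it is a configuration sketch rather than a standalone script:

```python
# noxfile.py -- minimal sketch; assumes a "tests" extra in pyproject.toml
import nox


# Each listed Python version must be discoverable by nox on your machine.
@nox.session(python=["3.10", "3.11", "3.12"])
def tests(session):
    """Install the package with its test dependencies and run pytest."""
    session.install(".[tests]")
    session.run("pytest")
```

You would then run `nox --session tests` to test against every listed version.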
-# The syntax below allows you to use mamba / conda as your environment manager, if you use this approach you don’t have to worry about installing different versions of Python
+### What we recommend
-@nox.session(venv_backend='mamba', python=["3.9", "3.10", "3.11", "3.12"])
-def test_mamba(session):
- """Nox function that installs dev requirements and runs
- tests on Python 3.9 through 3.12
- """
+**If you're using Hatch for packaging:** Use Hatch for testing too. You get
+everything in one place and one consistent tool.
- # Install dev requirements
- session.conda_install(".[tests]")
- # Run tests using any parameters that you need
- session.run("pytest")
-```
+**If you need maximum flexibility:** Choose Nox. Its Python-driven approach
+lets you implement almost any workflow.
-To run the above session you'd use:
+**If you're just starting out:** Start with Hatch. It's simpler to set up
+and understand, and you can always switch to Nox later if you need to.
-```bash
-nox --session test_mamba
-```
+**Both tools are solid choices.** The Python scientific community uses both
+extensively. For a complete guide to Nox, see [Run tests with
+Nox](run-tests-nox.md) and the [Scientific Python testing
+guide](https://scientific-python.org/tools/testing).
+
+## Next steps
+
+Now that you understand how to run tests locally across Python versions, you
+can learn about [running tests automatically in GitHub Actions with
+continuous integration](tests-ci). You can also review [test types](test-types)
+and [write tests](write-tests) for your package.
diff --git a/tests/test-types.md b/tests/test-types.md
index dea163fc..5431cccc 100644
--- a/tests/test-types.md
+++ b/tests/test-types.md
@@ -1,67 +1,89 @@
# Test Types for Python packages
-## Three types of tests: Unit, Integration & Functional Tests
+## Three types of tests: unit, integration, and functional tests
-There are different types of tests that you want to consider when creating your
-test suite:
+There are different types of tests that you want to consider when
+creating your test suite:
1. Unit tests
-2. Integration
-3. End-to-end (also known as Functional) tests
+2. Integration tests
+3. End-to-end (also known as functional) tests
-Each type of test has a different purpose. Here, you will learn about all three types of tests.
+Each type of test has a different purpose. Here, you will learn
+about all three types of tests by working through simple, runnable
+examples that you can use in your own package.
-```{todo}
-I think this page would be stronger if we did have some
-examples from our package here: https://github.com/pyOpenSci/pyosPackage
+## Unit tests
-```
-
-## Unit Tests
+A unit test involves testing individual components or units of code
+in isolation to ensure that they work correctly. The goal of unit
+testing is to verify that each part of the software, typically at the
+function or method level, performs its intended task correctly.
-A unit test involves testing individual components or units of code in isolation to ensure that they work correctly. The goal of unit testing is to verify that each part of the software, typically at the function or method level, performs its intended task correctly.
+Unit tests can be compared to examining each piece of your puzzle to
+ensure parts of it are not broken. If all of the pieces of your puzzle
+don't fit together, you will never complete it. Similarly, when working
+with code, tests ensure that each function, attribute, class, and
+method works properly when isolated.
-Unit tests can be compared to examining each piece of your puzzle to ensure parts of it are not broken. If all of the pieces of your puzzle don’t fit together, you will never complete it. Similarly, when working with code, tests ensure that each function, attribute, class, method works properly when isolated.
-
-**Unit test example:** Pretend that you have a function that converts a temperature value from Celsius to Fahrenheit. A test for that function might ensure that when provided with a value in Celsius, the function returns the correct value in degrees Fahrenheit. That function is a unit test. It checks a single unit (function) in your code.
+**Unit test example:** Suppose you have a function that adds two
+numbers together. A unit test for that function ensures that when
+provided with two numbers, it returns the correct sum. This is a unit
+test because it checks a single unit (function) in isolation.
```python
-# Example package function
-def celsius_to_fahrenheit(celsius):
+# src/mypackage/math_utils.py
+def add_numbers(a, b):
"""
- Convert temperature from Celsius to Fahrenheit.
-
- Parameters:
- celsius (float): Temperature in Celsius.
-
- Returns:
- float: Temperature in Fahrenheit.
+ Add two numbers together.
+
+ Parameters
+ ----------
+ a : float
+ First number.
+ b : float
+ Second number.
+
+ Returns
+ -------
+ float
+ Sum of a and b.
"""
- fahrenheit = (celsius * 9/5) + 32
- return fahrenheit
+ return a + b
```
-Example unit test for the above function. You'd run this test using the `pytest` command in your **tests/** directory.
+Example unit test for the above function. You'd run this test using
+the `pytest` command in your **tests/** directory.
```python
-import pytest
-from temperature_converter import celsius_to_fahrenheit
+# tests/test_math_utils.py
+from mypackage.math_utils import add_numbers
+
-def test_celsius_to_fahrenheit():
+def test_add_numbers():
"""
- Test the celsius_to_fahrenheit function.
+ Test the add_numbers function.
"""
- # Test with freezing point of water
- assert pytest.approx(celsius_to_fahrenheit(0), abs=0.01) == 32.0
+ # Test with positive numbers
+ assert add_numbers(2, 3) == 5
- # Test with boiling point of water
- assert pytest.approx(celsius_to_fahrenheit(100), abs=0.01) == 212.0
-
- # Test with a negative temperature
- assert pytest.approx(celsius_to_fahrenheit(-40), abs=0.01) == -40.0
+ # Test with negative numbers
+ assert add_numbers(-1, 4) == 3
+ # Test with zero
+ assert add_numbers(0, 5) == 5
```
+Notice that the tests above don't just test one case where numbers are
+added together. Instead, they test multiple scenarios: adding positive
+numbers, adding a negative number, and adding zero. This helps ensure
+that the `add_numbers` function behaves correctly in different
+situations and is the beginning of thinking about programming
+defensively.
+
+You can run this test from your terminal using
+`pytest tests/test_math_utils.py`.
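
As your list of scenarios grows, the same checks can be written more compactly with `pytest.mark.parametrize`. This is a sketch, not part of the guide's example package; `add_numbers` is redefined here so the block is self-contained:

```python
import pytest


def add_numbers(a, b):
    """Add two numbers together (redefined so this example runs standalone)."""
    return a + b


# One test function covers every scenario; pytest reports each case separately.
@pytest.mark.parametrize(
    "a, b, expected",
    [
        (2, 3, 5),   # positive numbers
        (-1, 4, 3),  # a negative number
        (0, 5, 5),   # zero
    ],
)
def test_add_numbers(a, b, expected):
    assert add_numbers(a, b) == expected
```

Adding a new scenario then only requires adding one tuple to the list.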
+
```{figure} ../images/pyopensci-puzzle-pieces-tests.png
:height: 300px
:alt: image of puzzle pieces that all fit together nicely. The puzzle pieces are colorful - purple, green and teal.
@@ -71,39 +93,118 @@ Your unit tests should ensure each part of your code works as expected on its ow
## Integration tests
-Integration tests involve testing how parts of your package work together or integrate. Integration tests can be compared to connecting a bunch of puzzle pieces together to form a whole picture. Integration tests focus on how different pieces of your code fit and work together.
+Integration tests involve testing how parts of your package work
+together or integrate. Integration tests can be compared to connecting
+a bunch of puzzle pieces together to form a whole picture. Integration
+tests focus on how different pieces of your code fit and work together.
-For example, if you had a series of steps that collected temperature data in a spreadsheet, converted it from degrees celsius to Fahrenheit and then provided an average temperature for a particular time period. An integration test would ensure that all parts of that workflow behaved as expected.
+For example, suppose you have functions that convert temperatures and
+calculate statistics. An integration test would ensure that these
+functions work together correctly in a workflow where you convert
+temperatures and then analyze them.
```python
+# src/mypackage/temperature_utils.py
+def celsius_to_fahrenheit(celsius):
+ """
+ Convert temperature from Celsius to Fahrenheit.
+
+ Parameters
+ ----------
+ celsius : float
+ Temperature in Celsius.
-def fahr_to_celsius(fahrenheit):
+ Returns
+ -------
+ float
+ Temperature in Fahrenheit.
+ """
+ return (celsius * 9 / 5) + 32
+
+
+def fahrenheit_to_celsius(fahrenheit):
"""
Convert temperature from Fahrenheit to Celsius.
- Parameters:
- fahrenheit (float): Temperature in Fahrenheit.
+ Parameters
+ ----------
+ fahrenheit : float
+ Temperature in Fahrenheit.
+
+ Returns
+ -------
+ float
+ Temperature in Celsius.
+ """
+ return (fahrenheit - 32) * 5 / 9
+
+
+def average_temperature(temps):
+ """
+ Calculate average temperature from a list.
+
+ Parameters
+ ----------
+ temps : list
+ List of temperatures.
- Returns:
- float: Temperature in Celsius.
+ Returns
+ -------
+ float
+ Average temperature.
"""
- celsius = (fahrenheit - 32) * 5/9
- return celsius
+ return sum(temps) / len(temps)
+
+
+def convert_and_average(temps_celsius):
+ """
+ Convert list of Celsius temps to Fahrenheit and
+ calculate the average.
+
+ Parameters
+ ----------
+ temps_celsius : list
+ List of Celsius temperatures.
+
+ Returns
+ -------
+ float
+ Average temperature in Fahrenheit.
+ """
+ temps_fahrenheit = [celsius_to_fahrenheit(t)
+ for t in temps_celsius]
+ return average_temperature(temps_fahrenheit)
+```
-# Function to calculate the mean temperature for each year and the final mean
-def calc_annual_mean(df):
- # TODO: make this a bit more robust so we can write integration test examples??
- # Calculate the mean temperature for each year
- yearly_means = df.groupby('Year').mean()
+Here's an integration test that checks how the conversion and
+statistics functions work together:
- # Calculate the final mean temperature across all years
- final_mean = yearly_means.mean()
+```python
+# tests/test_temperature_integration.py
+from mypackage.temperature_utils import convert_and_average
- # Return a converted value
- return fahr_to_celsius(yearly_means), fahr_to_celsius(final_mean)
+def test_convert_and_average():
+ """
+ Test that convert_and_average correctly combines conversion
+ and averaging.
+ """
+ # Test with known values: [0, 10, 20] Celsius
+ # Should average to 10 Celsius = 50 Fahrenheit
+ temps_celsius = [0, 10, 20]
+ result = convert_and_average(temps_celsius)
+ assert abs(result - 50.0) < 0.01
+
+ # Test with different values
+ temps_celsius = [0, 100]
+ result = convert_and_average(temps_celsius)
+ # Average of 32 and 212 Fahrenheit = 122
+ assert abs(result - 122.0) < 0.01
```
+This integration test verifies that the conversion and averaging
+functions work together as expected in a real workflow.
+
```{figure} ../images/python-tests-puzzle.png
:height: 350px
:alt: image of two puzzle pieces with some missing parts. The puzzle pieces are purple teal yellow and blue. The shapes of each piece don’t fit together.
@@ -123,34 +224,126 @@ together, do so as expected.
## End-to-end (functional) tests
-End-to-end tests (also referred to as functional tests) in Python are like comprehensive checklists for your software. They simulate real user end-to-end workflows to make sure the code base supports real life applications and use-cases from start to finish. These tests help catch issues that might not show up in smaller tests and ensure your entire application or program behaves correctly. Think of them as a way to give your software a final check before it's put into action, making sure it's ready to deliver a smooth experience to its users.
+End-to-end tests (also referred to as functional tests) in Python are
+like comprehensive checklists for your software. They simulate real
+user workflows to make sure the code base supports real-life
+applications and use-cases from start to finish. These tests help catch
+issues that might not show up in smaller tests and ensure your entire
+application behaves correctly. Think of them as a way to give your
+software a final check before it's put into action, making sure it's
+ready to deliver a smooth user experience.
```{figure} ../images/flower-puzzle-pyopensci.jpg
:height: 450px
:alt: Image of a completed puzzle showing a daisy
-End-to-end or functional tests represent an entire workflow that you
-expect your package to support.
+End-to-end or functional tests represent an entire workflow that
+your package supports.
```
-End-to-end test also test how a program runs from start to finish. A tutorial that you add to your documentation that runs in CI in an isolated environment is another example of an end-to-end test.
+**End-to-end test example:** Let's say your package processes
+temperature data: it converts values from Celsius to Fahrenheit and
+then calculates the average temperature. An end-to-end test would
+simulate that entire workflow: provide sample input data, run every
+step from start to finish, and verify that the final summary value is
+correct.
-```{note}
-For scientific packages, creating short tutorials that highlight core workflows that your package supports, that are run when your documentation is built could also serve as end-to-end tests.
+```python
+# tests/test_temperature_e2e.py
+from mypackage.temperature_utils import convert_and_average
+
+
+def test_temperature_workflow():
+ """
+ Test the complete temperature processing workflow.
+
+ This end-to-end test provides sample temperature data in
+ Celsius, processes it through the full workflow
+ (conversion and averaging), and verifies the output is
+ correct.
+ """
+ # Sample temperature data in Celsius
+ temps_celsius = [0, 10, 20]
+
+ # Run the complete workflow
+ result = convert_and_average(temps_celsius)
+
+ # Verify the output
+ # Average of 32, 50, and 68 Fahrenheit = 50 Fahrenheit
+ assert abs(result - 50.0) < 0.01
```
-## Comparing unit, integration and end-to-end tests
+This end-to-end test exercises the entire user workflow: providing
+sample data, converting and averaging it, and verifying the output
+is correct.
+
+End-to-end tests also verify how a program runs from start to finish.
+A tutorial that you add to your documentation and run in CI is another
+example of an end-to-end test: for instance, a Jupyter notebook
+(`.ipynb`) or a Markdown file with embedded code that demonstrates a
+complete user workflow.
+
+:::{note}
+For scientific packages, creating short tutorials that highlight core
+workflows that your package supports, that are run when your
+documentation is built, could also serve as end-to-end tests.
+:::
+
+## When to use which test type
+
+Choosing the right test type depends on what you are trying to verify.
+Use this guide to decide which test type is most appropriate for
+different situations:
+
+### Decision tree
+
+**Are you testing a single function, method, or class in isolation?**
+
+→ **Yes:** Use a [unit test](test-types.md#unit-tests).
+
+- Example: Testing that `add_numbers(2, 3)` returns `5`.
+- Unit tests are fast and help you pinpoint exactly where errors occur.
+- Write unit tests for all the core building blocks of your package.
+
+**Are you testing how multiple components work together?**
+
+→ **Yes:** Use an [integration test](test-types.md#integration-tests).
+
+- Example: Testing that temperature conversion and averaging functions
+ work together correctly in a single workflow.
+- Integration tests verify that your components communicate properly.
+- Use these after you have tested individual components with unit tests.
+
+**Are you testing a complete, realistic user workflow from start to finish?**
+
+→ **Yes:** Use an [end-to-end test](test-types.md#end-to-end-functional-tests).
+
+- Example: Simulating a user loading data, processing it, and getting a
+ summary result.
+- End-to-end tests catch issues that do not show up in smaller tests.
+- For scientific packages, tutorials run during documentation builds can
+ serve as end-to-end tests.
+
+## Comparing unit, integration, and end-to-end tests
+
+Unit tests, integration tests, and end-to-end tests have complementary
+advantages and disadvantages. The fine-grained nature of unit tests
+makes them well-suited for isolating where errors are occurring.
+However, unit tests are not useful for verifying that different
+sections of code work together.
+
+Integration and end-to-end tests verify that different portions of the
+program work together, but are less well-suited for isolating exactly
+where errors occur. For example, when you refactor your code, your
+end-to-end tests may break. But if the refactor didn't introduce new
+behavior, your unit tests should continue to pass, verifying your
+code's original functionality.
-Unit tests, integration tests, and end-to-end tests have complementary advantages and disadvantages. The fine-grained nature of unit tests make them well-suited for isolating where errors are occurring. However, unit tests are not useful for verifying that different sections of code work together.
+## Tests don't have to be perfect
+It is important to note that you don't need to spend energy worrying
+about the specifics of test types. When you begin to work on your test
+suite, consider what your package does and how you may need to test
+parts of it. Being familiar with different test types provides a
+framework to help you think about writing tests and how they can
+complement each other.
-Integration and end-to-end tests verify that the different portions of the program work together, but are less well-suited for isolating where errors are occurring. For example, when you refactor your code, it is possible that that your end-to-end tests will
-break. But if the refactor didn't introduce new behavior to your existing
-code, then you can rely on your unit tests to continue to pass, testing the
-original functionality of your code.
+## Next steps
-It is important to note that you don't need to spend energy worrying about
-the specifics surrounding the different types of tests. When you begin to
-work on your test suite, consider what your package does and how you
-may need to test parts of your package. Bring familiar with the different types of tests can provides a framework to
-help you think about writing tests and how different types of tests can complement each other.
+Now that you understand test types, learn how to [write effective tests](write-tests) for your package. Then explore how to [run tests locally](run-tests) and in [continuous integration](tests-ci). You can also learn about tracking your test coverage using tools like [Codecov](code-cov).
diff --git a/tests/write-tests.md b/tests/write-tests.md
index 760f3c1c..b56748a4 100644
--- a/tests/write-tests.md
+++ b/tests/write-tests.md
@@ -1,32 +1,54 @@
# Write tests for your Python package
-Writing code that tests your package code, also known as test suites, is important for you as a maintainer, your users, and package contributors. Test suites consist of sets of functions, methods, and classes
-that are written with the intention of making sure a specific part of your code
-works as you expected it to.
+Writing code that tests your package code, also known as a test suite,
+is important for you as a maintainer, your users, and package
+contributors. A test suite consists of sets of functions, methods, and
+classes that are written with the intention of making sure a specific
+part of your code works as you expected it to.
## Why write tests for your package?
-Tests act as a safety net for code changes. They help you spot and rectify bugs
-before they affect users. Tests also instill confidence that code alterations from
-contributors won't breaking existing functionality.
+Tests act as a safety net for code changes. They help you identify and fix bugs
+before they affect users. Tests also instill confidence that code changes from
+contributors won't break existing functionality.
Writing tests for your Python package is important because:
-- **Catch Mistakes:** Tests are a safety net. When you make changes or add new features to your package, tests can quickly tell you if you accidentally broke something that was working fine before.
-- **Save Time:** Imagine you have a magic button that can automatically check if your package is still working properly. Tests are like that magic button! They can run all those checks for you saving you time.
-- **Easier Collaboration:** If you're working with others, or have outside contributors, tests help everyone stay on the same page. Your tests explain how your package is supposed to work, making it easier for others to understand and contribute to your project.
-- **Fearless Refactoring:** Refactoring means making improvements to your code structure without changing its behavior. Tests empower you to make these changes as if you break something, test failures will let you know.
-- **Documentation:** Tests serve as technical examples of how to use your package. This can be helpful for a new technical contributor that wants to contribute code to your package. They can look at your tests to understand how parts of your code functionality fits together.
-- **Long-Term ease of maintenance:** As your package evolves, tests ensure that your code continues to behave as expected, even as you make changes over time. Thus you are helping your future self when writing tests.
-- **Easier pull request reviews:** By running your tests in a CI framework such as GitHub Actions, each time you or a contributor makes a change to your code-base, you can catch issues and things that may have changed in your code base. This ensures that your software behaves the way you expect it to.
+- **Catch mistakes:** Tests are a safety net. When you make changes or
+ add new features to your package, tests can quickly tell you if you
+ accidentally broke something that was working fine before.
+- **Save time:** Imagine you have a magic button that can automatically
+ check if your package is still working properly. Tests are like that
+ magic button! They can run all those checks for you, saving you time.
+- **Easier collaboration:** If you're working with others or have outside
+ contributors, tests help everyone stay on the same page. Your tests
+ explain how your package is supposed to work, making it easier for
+ others to understand and contribute to your project.
+- **Fearless refactoring:** Refactoring means making improvements to your
+ code structure without changing its behavior. Tests empower you to make
+ these changes; if you break something, test failures will let you know.
+- **Documentation:** Tests serve as technical examples of how to use your
+ package. This can be helpful for new technical contributors who want to
+ contribute code to your package. They can look at your tests to
+ understand how parts of your code functionality fits together.
+- **Long-term ease of maintenance:** As your package evolves, tests
+ ensure that your code continues to behave as expected, even as you make
+ changes over time. Thus you are helping your future self when writing
+ tests.
+- **Easier pull request reviews:** By running your tests in a CI framework
+ such as GitHub Actions, each time you or a contributor makes a change
+ to your code-base, you can catch issues and things that may have changed
+ in your code base. This ensures that your software behaves the way you
+ expect it to.
### Tests for user edge cases
-Edge cases refer to unexpected or "outlier" ways that some users may use your package. Tests enable you to address various edge cases that could impair
-your package's functionality. For example, what occurs if a function expects a
-pandas `dataframe` but a user supplies a numpy `array`? Does your code gracefully
-handle this situation, providing clear feedback, or does it leave users
-frustrated by an unexplained failure?
+Edge cases refer to unexpected or "outlier" ways that some users may use
+your package. Tests enable you to address various edge cases that could
+impair your package's functionality. For example, what occurs if a function
+expects a pandas `DataFrame` but a user supplies a NumPy `array`? Does your
+code gracefully handle this situation, providing clear feedback, or does it
+leave users frustrated by an unexplained failure?
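
One way to handle an edge case like that is to check the input and raise a clear error early. A minimal sketch (the function name and error message here are hypothetical, not part of the guide's example package):

```python
def column_mean(df, column):
    """Return the mean of a named column in a pandas DataFrame.

    Fails early with a clear message if ``df`` is not DataFrame-like,
    instead of letting an unexpected type cause a confusing error later.
    """
    if not hasattr(df, "columns"):
        raise TypeError(
            "column_mean expects a pandas DataFrame, got "
            f"{type(df).__name__}; try pandas.DataFrame(your_data) first."
        )
    return df[column].mean()
```

A test for this edge case would then call `column_mean` with a plain list or a NumPy array and assert that a `TypeError` with the helpful message is raised.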
:::{note}
@@ -44,14 +66,30 @@ Imagine you're working on a puzzle where each puzzle piece represents a function
````{admonition} Test examples
:class: note
-Let’s say you have a Python function that adds two numbers a and b together.
+Let's say you have a Python function that adds two numbers together.
```python
-def add_numbers(a, b):
+def add_numbers(a: float, b: float) -> float:
+ """
+ Add two numbers together and return the result.
+
+ Parameters
+ ----------
+ a : float
+ The first number to add.
+ b : float
+ The second number to add.
+
+ Returns
+ -------
+ float
+ The sum of the two numbers.
+ """
return a + b
```
-A test to ensure that function runs as you might expect when provided with different numbers might look like this:
+A test to ensure that function runs as you might expect when provided with
+different numbers might look like this:
```python
def test_add_numbers():
@@ -69,20 +107,45 @@ test_add_numbers()
```
````
-🧩🐍
+### 🧩🐍 How do you know what type of tests to write?
-### How do I know what type of tests to write?
+As you begin to write tests for your package, you should consider:
+
+1. There are [three types of tests](test-types.md) that can help guide your development.
+2. Your tests should consider how a user might use (and misuse!) your package.
:::{note}
This section has been adapted from [a presentation by Nick Murphy](https://zenodo.org/records/8185113).
+
:::
-At this point, you may be wondering - what should you be testing in your package? Below are a few examples:
+So what should you be testing in your
+package? Below are a few examples:
+
+- **Test some typical cases:** Test that the package functions as you
+ expect it to when users use it. For instance, if your package is supposed
+ to add two numbers, test that the outcome value of adding those two
+ numbers is correct.
+
+- **Test special cases:** Sometimes there are special or outlier cases. For
+ instance, if a function performs a specific calculation that may become
+ problematic closer to the value of 0, test it with the input of both 0
+ and nearby values.
-- **Test some typical cases:** Test that the package functions as you expect it to when users use it. For instance, if your package is supposed to add two numbers, test that the outcome value of adding those two numbers is correct.
+- **Test at and near expected boundaries:** If a function requires a value
+  that is greater than or equal to 1, make sure that the function still
+  works with the value 1 itself and with values just above it, such as
+  1.001 (values close to the constraint).
-- **Test special cases:** Sometimes there are special or outlier cases. For instance, if a function performs a specific calculation that may become problematic closer to the value = 0, test it with the input of both 0 and
+- **Test that code fails correctly:** If a function requires a value greater
+  than or equal to 1, test it at 0.999 to ensure failure. Make sure that
+  the function fails gracefully when given unexpected values and that the
+  user can easily understand why it failed by providing a useful error
+  message.
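
A failure-mode test like the last bullet can be written with `pytest.raises`. This is a sketch; `scale` is a hypothetical function, redefined here so the example is self-contained:

```python
import pytest


def scale(value):
    """Hypothetical function that requires value >= 1."""
    if value < 1:
        raise ValueError(f"value must be >= 1, got {value}")
    return value * 10


def test_scale_rejects_values_below_one():
    # The test passes only if ValueError is raised, and `match` checks
    # that the error message is the clear, useful one we expect.
    with pytest.raises(ValueError, match="must be >= 1"):
        scale(0.999)
```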
-* **Test at and near the expected boundaries:** If a function requires a value that is greater than or equal to 1, make sure that the function still works with both the values 1 and less than one and 1.001 as well (something close to the constraint value)..
+## Next steps
-* **Test that code fails correctly:** If a function requires a value greater than or equal to 1, then test at 0.999. Make sure that the function fails gracefully when given unexpected values and help and that the user can easily understand why if failed (provides a useful error message).
+Now that you understand what and why to test, explore the [three types of
+tests](test-types.md) (unit, integration, and end-to-end) to determine
+which style of tests best fits your package. Then, learn how to [run your
+tests locally](run-tests.md) and [in continuous integration](tests-ci.md).
+Finally, track your progress with [code coverage](code-cov.md) metrics.