
Contributing#

python_boilerplate is an actively maintained and utilised project.

How to contribute#

To report issues, request features, or exchange ideas with our community, follow the links below.

  • Is something not working? Report a bug
  • Missing information in our docs? Report a docs issue
  • Want to submit an idea? Request a change
  • Have a question or need help? Ask a question

Developing python_boilerplate#

To find beginner-friendly bugs and feature requests to start out with, take a look at our good first issues.

Setting up a development environment#

To create a development environment for python_boilerplate, with all libraries required for development and quality assurance installed, it is easiest to install bryn_python_boilerplate using the mamba package manager, as follows:

  1. Install mamba with the Mambaforge executable for your operating system.
  2. Open the command line (or the "miniforge prompt" in Windows).
  3. Download (a.k.a., clone) the python_boilerplate repository: git clone git@github.com:brynpickering/python_boilerplate.git
  4. Change into the python_boilerplate directory: cd python_boilerplate
  5. Create the python_boilerplate mamba environment: mamba create -n python_boilerplate -c conda-forge --file requirements/base.txt --file requirements/dev.txt
  6. Activate the python_boilerplate mamba environment: mamba activate python_boilerplate
  7. Install the bryn_python_boilerplate package into the environment, in editable mode and ignoring dependencies (we have dealt with those when creating the mamba environment): pip install --no-deps -e .

All together:

git clone git@github.com:brynpickering/python_boilerplate.git
cd python_boilerplate
mamba create -n python_boilerplate -c conda-forge --file requirements/base.txt --file requirements/dev.txt
mamba activate python_boilerplate
pip install --no-deps -e .

If installing directly with pip, you can install these libraries using the dev option, i.e., pip install -e '.[dev]'. Either way, you should add your environment as a jupyter kernel so that the example notebooks can run in the tests: ipython kernel install --user --name=python_boilerplate.
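
All together, for the pip route (a sketch, assuming you have already cloned the repository and are working inside a fresh virtual environment):

pip install -e '.[dev]'
ipython kernel install --user --name=python_boilerplate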

If you plan to make changes to the code then please make regular use of the following tools to verify the codebase while you work:

  • pre-commit: run pre-commit install in your command line to load inbuilt checks that will run every time you commit your changes. The checks are: (1) ensure no large files have been staged, (2) lint python files for major errors, and (3) format python files to conform with the PEP8 standard. You can also run these checks yourself at any time, to ensure staged changes are clean, by calling pre-commit (see the example after this list).
  • pytest: run the unit test suite and check test coverage.
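
For example, a typical local check before committing might look like this (a sketch; pre-commit with no arguments runs the hooks against currently staged files):

pre-commit install   # set up the git hooks (once per clone)
pre-commit           # check currently staged changes
pytest               # run the unit tests with coverage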

Note

If you already have an environment called python_boilerplate on your system (e.g., for a stable installation of the package), you will need to choose a different environment name. You will then need to pass this name as a pytest argument when running the tests: pytest --nbmake-kernel=[my-env-name].
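
For example, if your development environment is called something other than python_boilerplate (a sketch; [my-env-name] stands for whatever name you chose):

ipython kernel install --user --name=[my-env-name]
pytest --nbmake-kernel=[my-env-name]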

Rapid-fire testing#

The following options allow you to strip down the test suite to the bare essentials:

  1. The test suite includes unit tests and integration tests (in the form of jupyter notebooks found in the examples directory). The integration tests can be slow, so if you want to avoid them during development, run pytest tests/.
  2. You can avoid generating coverage reports by adding the --no-cov argument: pytest --no-cov.
  3. By default, the tests run with up to two parallel threads; to increase this to e.g. 4 threads, run pytest -n4.

All together:

pytest tests/ --no-cov -n4

Note

You cannot debug failing tests while running them in parallel; you will need to set -n0 if using the --pdb flag.
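
For example, to drop into the debugger on a failing unit test (a sketch combining the flags mentioned above):

pytest tests/ --no-cov -n0 --pdb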

Memory profiling#

Note

When you open a pull request (PR), one of the GitHub actions will run memory profiling for you. This means you don't have to do any profiling locally. However, if you can, it is still good practice to do so as you will catch issues earlier.

python_boilerplate can be memory intensive; we like to ensure that any development to the core code does not exacerbate this. If you are running on a UNIX device (i.e., not on Windows), you can test whether any changes you have made adversely impact memory and time performance as follows:

  1. Install memray in your python_boilerplate mamba environment: mamba install memray pytest-memray.
  2. Run the memory profiling integration test: pytest -p memray -m "high_mem" --no-cov.
  3. Optionally, to visualise the memory allocation, run pytest -p memray -m "high_mem" --no-cov --memray-bin-path=[my_path] --memray-bin-prefix=[my_prefix] - where you must define [my_path] and [my_prefix] - followed by memray flamegraph [my_path]/[my_prefix]-tests-test_100_memory_profiling.py-test_mem.bin. You will then find the HTML report at [my_path]/memray-flamegraph-[my_prefix]-tests-test_100_memory_profiling.py-test_mem.html.

All together:

mamba install memray pytest-memray
pytest -p memray -m "high_mem" --no-cov --memray-bin-path=[my_path] --memray-bin-prefix=[my_prefix]
memray flamegraph [my_path]/[my_prefix]-tests-test_100_memory_profiling.py-test_mem.bin

For more information on using memray, refer to their documentation.

Submitting changes#

To contribute changes:

  1. Fork the project on GitHub.
  2. Create a feature branch to work on in your fork (git checkout -b new-fix-or-feature).
  3. Test your changes using pytest.
  4. Commit your changes to the feature branch (you should have pre-commit installed to ensure your code is correctly formatted when you commit changes).
  5. Push the branch to GitHub (git push origin new-fix-or-feature).
  6. On GitHub, create a new pull request from the feature branch.
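
All together, once your fork is cloned locally (a sketch; the branch name and commit message are placeholders):

git checkout -b new-fix-or-feature
# ...make and stage your changes...
pytest
git commit -m "Describe your fix or feature"
git push origin new-fix-or-feature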

Pull requests#

Before submitting a pull request, check whether you have:

  • Added your changes to CHANGELOG.md.
  • Added or updated documentation for your changes.
  • Added tests if you implemented new functionality.

When opening a pull request, please provide a clear summary of your changes!

Commit messages#

Please try to write clear commit messages. One-line messages are fine for small changes, but bigger changes should look like this:

A brief summary of the commit (max 50 characters)

A paragraph or bullet-point list describing what changed and its impact,
covering as many lines as needed.

Code conventions#

Start reading our code and you'll get the hang of it.

We mostly follow the official Style Guide for Python Code (PEP8).

We have chosen to use the uncompromising code formatter black and the linter ruff. When run from the root directory of this repo, pyproject.toml should ensure that formatting and linting fixes are in line with our custom preferences (e.g., a 100 character maximum line length).

The philosophy behind using black is to have a uniform style throughout the project, dictated by code. Since black is designed to minimise diffs and make patches more human readable, this also makes code reviews more efficient.

To make this a smooth experience, you should run pre-commit install after setting up your development environment, so that black makes all the necessary fixes to your code each time you commit, and so that ruff highlights any errors in your code. If you prefer, you can also set up your IDE to run these two tools whenever you save your files and to have ruff highlight erroneous code directly as you type. Take a look at their documentation for more information on configuring this.
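
For example, to apply formatting and linting manually from the root directory of the repo (a sketch; both tools pick up their configuration from pyproject.toml):

black .
ruff check .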

We require all new contributions to have docstrings for all modules, classes and methods. When adding docstrings, we request you use the Google docstring style.

Release checklist#

Pre-release#

  • Make sure all unit and integration tests pass (This is best done by creating a pre-release pull request).
  • Re-run tutorial Jupyter notebooks (pytest examples/ --overwrite).
  • Make sure documentation builds without errors (mike deploy [version], where [version] is the current minor release of the form X.Y).
  • Make sure the changelog is up-to-date, especially that new features and backward incompatible changes are clearly marked.
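
All together, for the command-line steps above (a sketch; [version] is the current minor release of the form X.Y):

pytest
pytest examples/ --overwrite
mike deploy [version]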

Create release#

  • Bump the version number in src/python_boilerplate/__init__.py
  • Update the changelog with the final version number of the form vX.Y.Z, the release date, and the GitHub compare link (at the bottom of the page).
  • Commit with message Release vX.Y.Z, then add a vX.Y.Z tag.
  • Create a release pull request to verify that the conda package builds successfully.
  • Once the PR is approved and merged, create a release through the GitHub web interface, using the same tag, titling it Release vX.Y.Z, and including all the changelog elements that are not flagged as internal.
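
The commit-and-tag step might look like this (a sketch; vX.Y.Z is a placeholder, and the tag still needs to reach GitHub, either by pushing it or by creating it when drafting the release):

git commit -m "Release vX.Y.Z"
git tag vX.Y.Z
git push origin vX.Y.Z   # push the tag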

Post-release#

  • Update the changelog, adding a new [Unreleased] heading.
  • Update src/python_boilerplate/__init__.py to the next version appended with .dev0, in preparation for the next main commit.