During busy periods at work, most of us don’t have the time or energy to bother with the little details of coding standards. But the things we avoid sometimes come back to haunt us: when we finally end up with spaghetti code, deployment problems, or environment management issues, only then do we take action. These issues can be reduced drastically from the very beginning of a project. In this post I will share some basic, easily applicable tips for project structure and boilerplates.

A boilerplate (a.k.a. skeleton code, base template, scaffolding, starter kit) is simply a code repository used to generate or start new projects. You define everything you use in a project up front, e.g. ignore files (.gitignore, .dockerignore, …), configuration files and structures (lint configurations, testing configurations, environment configurations, …), code style, project structure and so on. When you want to start a new project, you just fork the boilerplate repo and build on it.

Benefits:

  • Consistency in code structure and design
  • Follow best practices from the beginning
  • Quick start to development on a new project
  • Good documentation

TL;DR (Show me the code!)

Build your standards once, and then build your projects on top of them.

You can see the complete template code in my GitHub account.

When to Use a Boilerplate

Boilerplates have cool benefits, yet in some cases you are better off starting from scratch. You may want to produce a boilerplate if you are going to…

  • … share it as a learning resource
  • … use it as a code example in a presentation
  • … create a proof of concept
  • … create multiple production projects with similar infrastructure, dependencies, etc.
  • … share coding standards with other developers

In situations other than the above, you might not really need a boilerplate. On the contrary, a boilerplate might limit your flexibility during development in various ways.

An Example Boilerplate

We are going to create a boilerplate for FastAPI (a Python 3 web framework), which features:

  • Split development environments & configs
  • Easy to spin up & develop
  • Automated tests in docker
  • Multi-stage docker builds

from the beginning. So you don’t have to deal with any of this unless you need something custom. Just focus on development!

Our project structure will be:

fastapi-boilerplate
├── configs
│   ├── .env
│   ├── local.env
│   └── prod.env
├── requirements
│   ├── base.txt
│   ├── local.txt
│   └── prod.txt
├── src
│   ├── helpers
│   │   ├── __init__.py
│   │   └── config_helper.py
│   ├── models
│   │   ├── __init__.py
│   │   └── models.py
│   ├── routers
│   │   ├── __init__.py
│   │   └── router.py
│   ├── __init__.py
│   └── main.py
├── tests
│   └── test_api_endpoints.py
├── .dockerignore
├── .gitignore
├── .pylintrc
├── Dockerfile
├── README.md
├── docker-compose.yaml
└── run.py

Explanation of the Project Structure

Only the production and local environments are included, to reduce the complexity of the example.

  • All the source code is held under /src (src layout) and the code is separated into 4 pieces:

    • /src/helpers/: Helper modules, utility functions
    • /src/models/: Request and response models, custom error models
    • /src/routers/: Separated and grouped API endpoints
    • /src/main.py: The main module, which defines the final API application
  • Both requirements and configs have a base file which contains the common subset of configuration shared by all environments; at runtime its values are overridden by the environment-specific superset. See the ‘Managing the Config’ section for a detailed example.

  • .pylintrc is the configuration file for Pylint. You can set your linting standards here.

  • .gitignore and .dockerignore are not optional in my humble opinion; they must exist in all of your projects. Be meticulous about what you push to your remote repository and what you include in your production image. That is how cruft comes into existence.
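To illustrate, a minimal .dockerignore might contain entries like these (illustrative assumptions, not the boilerplate’s actual file; note that tests/ is deliberately not excluded, since the test stage of the docker build copies it into the image):

# .dockerignore (illustrative entries only)
.git
__pycache__/
*.pyc
.pytest_cache/
venv/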

  • Dockerfile holds our multi-stage docker build manifest. And we also have a docker-compose.yaml file. “But why?” you ask. Because docker-compose up is shorthand for:

docker build -t container_name .
docker run --env-file=".env" -e SECRET_API_KEY="5UP3R_53CR3T_@P1_K3Y" -p 8080:8080 container_name
  • We have main.py under ./src/ and run.py under the root directory. You may wonder why we bother with both. You’ll see the reason behind run.py in the ‘Running Tests’ section.
  • Do you ever use the README.md file? I believe every developer has notes to pass on to other developers. So why not jot down your notes, details and need-to-knows in the place reserved for them?

Managing the Config

One thing to get right from the birth of a project is the config. Most of the projects I’ve seen so far keep the application settings in the application state, which then has to be carried to every corner of the app (for example through the event loop in asynchronously working microservices). If not, the config file is re-read every time a setting is needed.

A simple yet comprehensive solution might be something like:

# src/helpers/config_helper.py
import os
import logging
from typing import Literal, Optional
from functools import lru_cache

from pydantic import BaseSettings, HttpUrl, SecretStr, validator, StrictInt
from dotenv import dotenv_values

class Settings(BaseSettings):
    # Values match the config file names: local.env / prod.env
    environment: Literal['local', 'prod']
    host: str
    port: StrictInt
    api_base_url: HttpUrl
    api_secret: SecretStr
    debug: bool = False
    auto_reload: bool = False
    sentry_integration: bool = False
    sentry_env: Optional[Literal['development', 'production']]
    sentry_dsn: Optional[HttpUrl]

    @validator('*', pre=True)
    def convert_string_to_bool(cls, v):
        # dotenv delivers every value as a string; map 'True'/'False' to real booleans
        if v in ['True', 'False']:
            return v == 'True'
        return v

    @validator('port', pre=True)
    def convert_port_to_integer(cls, v):
        try:
            v = int(v)
        except ValueError:
            raise ValueError(f'Port must be an integer. Yours is {v}')
        if 1 <= v <= 64000:
            return v
        raise ValueError(f'Port must be in the range 1-64000. Yours is {v}')


@lru_cache
def get_settings(
        environment: Literal['local', 'prod', 'testing']
    ) -> Settings:
    # Load the base config first, then override it with the environment-specific values
    config = {
        **dotenv_values('./configs/.env'),
        **dotenv_values(f'./configs/{environment}.env')
    }
    logging.info(f"Config: <{environment}>")
    return Settings(**config)

APP_CONFIG = get_settings(os.getenv('ENVIRONMENT', 'local'))

# >>> print(APP_CONFIG.api_secret.get_secret_value())
# 'NoSecretsInLocalhost'

See Pydantic Validators, Pydantic Types and Python Dotenv Library for more detail.

When you run the program on your local machine with no environment variables set, it will run with the local variables. As we will see in later sections, when you run it with docker or docker-compose, it will run with the production variables by default.
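Since get_settings is cached with lru_cache and exposed as the module-level APP_CONFIG, any module can simply import it instead of passing settings around through application state. A hypothetical router, mirroring src/routers/router.py from the tree above (the endpoint itself is made up for illustration):

# src/routers/router.py (illustrative sketch, not the boilerplate's actual router)
from fastapi import APIRouter

from src.helpers.config_helper import APP_CONFIG

router = APIRouter()

@router.get('/info')
def info():
    # SecretStr fields stay masked unless .get_secret_value() is called explicitly
    return {
        'environment': APP_CONFIG.environment,
        'api_base_url': APP_CONFIG.api_base_url,
    }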

Spinning Up the Project

It should be easy for a developer to spin up a project with different environments and settings. Adjusting settings by hand just to switch development environments can be time consuming.

Using separate requirements files, environment variable files, multi-stage docker builds and docker-compose can overcome this issue.
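The per-environment requirements files typically include the base file through pip’s -r directive. A sketch of what they might contain (the package choices are assumptions based on the tools mentioned in this post, not the boilerplate’s actual pins):

# requirements/base.txt (shared by every environment)
fastapi
uvicorn
pydantic
python-dotenv

# requirements/local.txt (base plus development tooling)
-r base.txt
pytest
pylint

# requirements/prod.txt (base plus production-only extras)
-r base.txt
sentry-sdk

And the multi-stage Dockerfile: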

FROM python:3.9.5-alpine AS base

FROM base AS builder

# The project tree has a requirements/ directory, not a single requirements.txt;
# prod.txt is assumed to pull in base.txt via pip's `-r base.txt` directive
COPY requirements/ /requirements/

RUN apk add --update gcc musl-dev linux-headers build-base \
    && pip install -r /requirements/prod.txt \
    && apk del musl-dev gcc linux-headers build-base

FROM base AS test

COPY --from=builder /usr/local/lib/python3.9 /usr/local/lib/python3.9
COPY . .

CMD python3 run.py

FROM base AS deployment

COPY --from=builder /usr/local/lib/python3.9 /usr/local/lib/python3.9
COPY . .

# $PORT must be defined for EXPOSE; 8080 matches the example `docker run -p 8080:8080`
ARG PORT=8080
ENV PORT=$PORT
EXPOSE $PORT

CMD python3 run.py

Check out my other blog post (external link to my blog) for more details on multi-stage builds.

# `./docker-compose.yaml`

# `TARGET_ENVIRONMENT` is an enum: [local, prod]
# This variable is also checked in the code
version: '3.7'
services:
  container_name:
    build:
      context: .
      dockerfile: Dockerfile # (Optional) To be explicit
      target: ${TARGET_ENVIRONMENT:-prod} # Target build stage in the Dockerfile
    # `environment` and `env_file` belong to the service, not to `build`
    environment:
      - ENVIRONMENT=${TARGET_ENVIRONMENT:-prod}
    env_file:
      - configs/.env
      - configs/${TARGET_ENVIRONMENT:-prod}.env # Overrides `.env` with the target ENV variables

With this setup, we can switch between environments and builds with a single command:

# Switch environment with one variable
TARGET_ENVIRONMENT=local docker-compose up

# No variable means prod environment
docker-compose up

# Better for CI/CD:
# builds the `test` stage, which runs the automated tests
docker build -t project-a-test --target=test .

# Local execution defaults to local settings
python run.py

# Running a different environment on the local machine
ENVIRONMENT="prod" python run.py

Running Tests

Test cases are separated into the ./tests/ directory to keep the ./src/ directory plain. You won’t use the tests at runtime anyway, so no need to keep them under ./src/.
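For illustration, a test in ./tests/test_api_endpoints.py might look like this (a hypothetical sketch assuming FastAPI’s TestClient; the endpoint is made up):

# tests/test_api_endpoints.py (illustrative sketch)
from fastapi.testclient import TestClient

# Importing through the `src` package only resolves when pytest runs from the project root
from src.main import app

client = TestClient(app)


def test_root_returns_ok():
    response = client.get('/')
    assert response.status_code == 200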

Running tests on a src layout can be tricky because of Python’s import rules. Both the tests and the project itself have to be started from the root directory; otherwise, you might encounter import errors while trying to run the tests or the program.

[Screenshot: pytest import error]

But we already have a main script at the root: ./run.py. So we can run the program from the root directory and test it from the same place without hitting any import errors.
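run.py itself can stay minimal. A sketch of what it might contain, assuming uvicorn as the ASGI server (the usual choice for FastAPI; the actual boilerplate may differ):

# run.py (illustrative sketch)
import uvicorn

from src.helpers.config_helper import APP_CONFIG

if __name__ == '__main__':
    # Host, port and reload behaviour all come from the environment-specific config
    uvicorn.run(
        'src.main:app',
        host=APP_CONFIG.host,
        port=APP_CONFIG.port,
        reload=APP_CONFIG.auto_reload,
    )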

[Screenshot: pytest success]

We can also run our automated tests in the docker multi-stage build or in a CI/CD pipeline. You may also add linting using pylint and security checks using bandit.

Conclusion

I believe a developer should be able to get right into coding, without spending effort on adjusting environments or creating and changing coding standards. As you can see in the example code I’ve written, my needs in a project are:

  • Environment, dependency management from the beginning
  • Automated Tests
  • Sentry Integration, etc…

So I’ve decided how these needs ought to be met. I’ve set some standards from scratch, and that should save me lots of time over the life of the project.

Further Reading