Best practices
Follow these best practices to help you get the most out of Architect.
Components
These are some best practices to keep in mind when creating components.
Component names
A component name is used in the architect.yml component file that defines services, tasks, interfaces, and secrets. A component name can include lowercase letters, numbers, underscores, and dashes. Component names can't start or end with a dash or underscore. The maximum length is 32 characters.
Examples:
my-component
my-component-2
my_third_component
Example usage:
# architect.yml
---
name: my-component
Component versions
An Architect component version, or tag, is created when a component is registered with Architect. The --tag or -t flag of the architect register command can be used to specify the name of the version. Architect components can have multiple versions, just like packages in any package manager. Component versions can contain alphanumeric characters, periods, dashes, and underscores. The maximum length of a component version is 128 characters.
Examples:
0.0.1
second-version
third_component_version
Example usage:
architect register architect.yml --tag=0.0.1
or
architect register architect.yml -t 0.0.1
Parameterizing services
Your service definitions should parameterize any values that are likely to change between local development and deploying to production.
Secrets should be defined in the architect.yml component file for any values that are configurable between local and remote deployments. This keeps applications portable.
Examples include:
- An environment variable that needs to change between dev and prod
- A value for the desired number of replicas for a service
- An API key for a third-party service that helps to power your application
Secrets help avoid hardcoded values for things that may change between local development and production, such as an API address or a database username and password.
Similarly, services defined in the component configuration file don't need hardcoded values when referencing one another: Architect dynamically detects connections between services and dependencies and generates the addresses at deployment time.
Interpolation
Architect component configuration files are meant to be used to deploy applications to any environment, whether that’s local or remote. Because of this, things that can change between environments such as service addresses and secrets should be referenced by interpolation.
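Below is a minimal sketch of both patterns, assuming the ${{ ... }} expression syntax; the secret name api_key, its default value, and the service layout are illustrative assumptions rather than a real component:
# architect.yml
---
name: my-component
secrets:
  api_key:
    description: Key for a third-party API
    default: dev-placeholder-key # Override with the real key for remote deployments
services:
  api:
    interfaces:
      main:
        port: 8080
    environment:
      API_KEY: ${{ secrets.api_key }} # Resolved per environment at deployment time
  app:
    environment:
      API_ADDR: ${{ services.api.interfaces.main.url }} # Address generated by Architect, never hardcoded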
Volumes
Volumes are important in both local and production deployments to persist data between container lifecycles.
Local volumes
Local deployments can use volumes within the debug block of the architect.yml file to support hot reloading.
Hot reloading
Hot reloading allows users to make changes to their application or component without the need to redeploy. Code is shared between the host machine and the container running the application. This feature is only needed for local deployments, so the volume is configured through a debug block within the architect.yml file.
Below is an example using a volume in the debug block:
# architect.yml
---
services:
  my-service:
    debug:
      volumes:
        src:
          mount_path: /usr/src/app/src # The location of the code within the Docker container
          host_path: ./api/src # The location of the code relative to the architect.yml file on the host machine
Note that the application running in the container must be configured to support hot reloading. This will vary depending on your framework. For example, Node applications can implement hot reloading by utilizing the nodemon package.
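For instance, a Node service could pair the debug volume above with a debug command that runs nodemon. This is a hedged sketch; the entry point src/index.js and the npx invocation are assumptions:
# architect.yml
---
services:
  my-service:
    command: node src/index.js
    debug:
      command: npx nodemon src/index.js # Restarts the server when the mounted code changes
      volumes:
        src:
          mount_path: /usr/src/app/src
          host_path: ./api/src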
Debug volumes
Data persistence should also be considered when running local deployments with architect dev. Using volumes, data can be shared between the container and the host machine so that even as the local stack spins up and down, the data being used to debug won't disappear. This is especially useful in cases where it's important to retain data from a database.
This is an example of mounting a local database volume when running locally in debug mode:
# architect.yml
---
services:
  api-db:
    image: postgres:14
    interfaces:
      postgres:
        port: 5432
        protocol: postgresql
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: api_database
    debug:
      volumes:
        data:
          mount_path: /var/lib/postgresql/data # Default data directory for the postgres image
Setting only the mount_path of a debug volume will automatically provision a Docker volume on the host machine. To store the data directly on the host machine instead of inside a Docker volume, include a host_path as well.
When changing the service name, the volume name, or the architect dev environment name (via architect dev -e <env-name>), a new volume will be provisioned if mount_path is used in the volume config without a host_path.
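For example, adding a host_path alongside the mount_path stores the database files directly on the host, so they survive those renames. The ./db-data directory below is an illustrative assumption:
# architect.yml
---
services:
  api-db:
    debug:
      volumes:
        data:
          mount_path: /var/lib/postgresql/data
          host_path: ./db-data # Stored on the host, relative to the architect.yml file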
Remote volumes
Data stored directly on the filesystem of a Kubernetes pod won’t be persisted. For that reason, important data should be stored in a volume and added to the service similar to the following:
# architect.yml
---
services:
  my-service:
    volumes:
      my-volume:
        mount_path: /usr/app/images
        key: my-persistent-volume-claim # Name of an existing persistent volume claim
Service dependencies
When a service depends on another service that must be fully operational first, a depends_on block can be added to the service definition of the dependent service. An example of this is an API requiring a database service to be running before the API can start up. The depends_on block ensures that services are deployed sequentially.
Take a simple web service named my-service which will fail to start if it cannot connect to the database service. A depends_on block can be added to my-service to ensure that the database has been spun up before trying to start my-service.
Below is a snippet of an architect.yml file to demonstrate what this would look like.
# architect.yml
---
services:
  my-service:
    depends_on:
      - database
  database:
    interfaces:
      postgres:
        port: 5432
Ensuring service health
The Architect service configuration includes a liveness_probe block to configure a check that ensures service health. It lets the container environment know when to restart a service if the container becomes unhealthy.
When configuring the command for the liveness probe, ensure that the command used (e.g., curl) is installed in the Docker image for the service.
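Below is a minimal sketch of a liveness probe, assuming the service exposes a /health endpoint on port 8080; the interval and threshold values are also illustrative:
# architect.yml
---
services:
  my-service:
    liveness_probe:
      command: curl --fail localhost:8080/health # curl must be installed in the service image
      interval: 30s
      failure_threshold: 3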