Microservices have revolutionised the way we build and deploy applications, offering greater resilience, easier scalability, and a more modular structure that suits the needs of modern, rapidly changing businesses. The advent of container orchestration platforms like Kubernetes has opened up a new world of possibilities, providing an ideal platform for running these microservices.
To get the most out of these technologies, though, we need a set of guiding principles that take into account the unique challenges and opportunities they present. That's where the 12-factor methodology comes in.
The 12-factor methodology is a set of best practices for building software-as-a-service applications, which has gained wide acceptance in the world of microservices and containerisation. In this blog, we will explore how these 12 principles can be applied to building microservice applications on Kubernetes.
1. Codebase: One codebase tracked in revision control, many deploys
In Kubernetes, each microservice should have its own codebase and its own deployable Docker image. Use a version control system to track changes and maintain the history of each service. This ensures each microservice can be developed, tested, and deployed independently, enhancing agility and reducing risk.
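As a minimal sketch of this idea (the repository URL, service name, and registry are hypothetical), each service lives in its own repository and produces its own versioned image, so a single codebase can back many deploys:
# Hypothetical sketch: one repository per microservice, one image per build.
git clone https://github.com/example/order-service.git
cd order-service
# Tie the image tag to the commit it was built from, so the same codebase
# can be deployed many times, in many environments, from known versions.
docker build -t registry.example.com/order-service:$(git rev-parse --short HEAD) .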
2. Dependencies: Explicitly declare and isolate dependencies
Each microservice should explicitly declare its dependencies using a language- or framework-specific dependency management tool. Kubernetes supports this philosophy well, with its concept of Pods which encapsulate and isolate each microservice along with its dependencies.
Example: the dependencies section of a package.json file in a Node.js project:
"dependencies": {
"express": "^4.17.1",
"axios": "^0.19.2"
}
3. Config: Store config in the environment
Configuration should be separated from the application code and supplied through the environment. Kubernetes provides built-in objects, ConfigMaps and Secrets, designed for storing non-sensitive and sensitive configuration details, respectively.
Example: a ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config
data:
  my_key: my_data
  another_key: another_data
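For sensitive values, a Secret plays the same role. A minimal sketch, assuming a hypothetical database password (the name, key, and value are placeholders):
apiVersion: v1
kind: Secret
metadata:
  name: my-secret
type: Opaque
stringData:
  db_password: changeme   # placeholder value; stored base64-encoded by Kubernetes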
4. Backing Services: Treat backing services as attached resources
Your application should treat backing services, such as databases or messaging queues, as attached resources, accessed via a URL or other locator stored in the configuration. Kubernetes Services and Persistent Volumes can be used to handle these backing services, decoupling them from the microservices.
Example: The URL to a backing service might be stored in a ConfigMap and then provided to the microservices at runtime:
apiVersion: v1
kind: ConfigMap
metadata:
  name: service-config
data:
  database_url: "jdbc:mysql://my-database:3306/db"
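One way to hand this value to a microservice at runtime is to map the ConfigMap key to an environment variable in the Pod template; a sketch, assuming a container named my-app:
# Container snippet from a Deployment's Pod template (names are assumptions).
containers:
  - name: my-app
    image: my-app:1.0.0
    env:
      - name: DATABASE_URL          # the application reads this at startup
        valueFrom:
          configMapKeyRef:
            name: service-config
            key: database_url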
5. Build, Release, Run: Strictly separate build and run stages
Kubernetes supports this principle through its image-based deployment model. You build a Docker image (build), tag it with a version number (release), and deploy it to a Kubernetes cluster (run).
Example: a Dockerfile for building your image:
FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
CMD [ "npm", "start" ]
6. Processes: Execute the app as one or more stateless processes
Microservices should be stateless and share-nothing. Any data that needs to persist must be stored in a stateful backing service. Kubernetes, with its ephemeral Pods, inherently encourages statelessness.
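A sketch of what this looks like in a Pod template: the container keeps nothing on local disk, and anything that must persist goes to an external store whose address is injected at runtime (the Redis URL below is a hypothetical placeholder):
containers:
  - name: my-app
    image: my-app:1.0.0
    env:
      - name: SESSION_STORE_URL               # session state lives outside the Pod
        value: "redis://my-redis:6379"
# Because the Pod holds no state, Kubernetes can freely kill, replace, or scale it.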
7. Port Binding: Export services via port binding
Microservices communicate with each other through well-defined APIs and listen on a specific port for incoming requests. Kubernetes supports this through its Service abstraction, which defines a logical set of Pods and a policy by which to access them.
Example: A Kubernetes Service could be used to expose your microservice on a specific port:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
8. Concurrency: Scale out via the process model
Kubernetes excels at this, providing first-class support for horizontal scaling through Deployments (and the ReplicaSets they manage), which supersede the older ReplicationController.
Horizontal Pod Autoscaler (HPA): a Kubernetes component that automatically scales the number of Pods in a deployment, replication controller, or replica set based on observed CPU utilisation (or other metrics). The following command creates an autoscaler for the 'my-app' deployment, maintaining between 2 and 10 replicas with a target CPU utilisation of 80%:
kubectl autoscale deployment my-app --min=2 --max=10 --cpu-percent=80
Alternatively, you could define it in YAML. This performs the same operation as the command above, but allows more fine-grained control over the autoscaling rules:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
Vertical Pod Autoscaler (VPA): a mechanism that provides Pod resizing decisions for both CPU and memory requests. It can recommend suitable values and, optionally, update the requests automatically. It consists of a Recommender, an Updater, and an Admission Controller.
Here's a simple VPA object that could be created:
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"
9. Disposability: Maximise robustness with fast startup and graceful shutdown
Kubernetes is designed around fast startup and graceful shutdown of Pods. It provides liveness probes to detect and restart containers that have become unhealthy, and readiness probes to avoid sending traffic to instances that aren't ready to handle it.
Example: liveness and readiness probes might look like this in a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-app:1.0.0
          ports:
            - containerPort: 8080
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
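Graceful shutdown is tuned on the same Pod template. A sketch, in which the preStop sleep and the 30-second grace period are illustrative assumptions rather than required values:
spec:
  terminationGracePeriodSeconds: 30            # upper bound before the Pod is killed
  containers:
    - name: my-app
      image: my-app:1.0.0
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "sleep 5"]   # give in-flight requests time to drain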
10. Dev/prod parity: Keep development, staging, and production as similar as possible
Using Kubernetes and containerisation, it's easier to maintain consistency across different environments. With the same Docker images running in every environment and Kubernetes providing the same API for deployment and management, parity is significantly improved.
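As a sketch (the k8s/ manifest directory and namespace names are assumptions), the very same versioned manifests can be applied to every environment, with only environment-specific configuration such as a ConfigMap differing:
# The same image tag and manifests in every environment.
kubectl apply -f k8s/ -n dev
kubectl apply -f k8s/ -n staging
kubectl apply -f k8s/ -n production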
11. Logs: Treat logs as event streams
An application should not concern itself with routing or storing its log output; it should simply write to its output streams (stdout and stderr) as an event stream. Kubernetes supports this through its logging architecture: each container's streams are captured at the node level and can be forwarded to a central logging solution.
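A sketch of what this looks like in practice (the deployment name and label are assumptions): the application just prints, and the captured streams can be inspected directly from the cluster:
# Tail the captured stdout/stderr of the Pods behind a Deployment.
kubectl logs deployment/my-app --tail=100
# Or select Pods by label.
kubectl logs -l app=my-app --tail=20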
12. Admin Processes: Run admin/management tasks as one-off processes
Kubernetes supports running administrative tasks as one-off processes. This can be done using Kubernetes Jobs, which ensure that a specified task is completed at least once, or using a Kubernetes Pod directly for tasks which do not need to be restarted if they fail.
Example: a Job that runs a one-time task using the my-job:1.0.0 Docker image. The Job will not be restarted after it has completed successfully:
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  template:
    spec:
      containers:
        - name: my-job
          image: my-job:1.0.0
      restartPolicy: OnFailure
In conclusion, Kubernetes and the 12-factor methodology go hand in hand. The 12-factor methodology provides a valuable set of principles that can guide you in developing highly scalable, maintainable, and resilient microservices applications, and Kubernetes provides the features and abstractions to implement these principles effectively. Together, they can help you take your microservices strategy to the next level.