Best practices for running Kubernetes on GKE Standard

Kubernetes has become the de facto standard for container orchestration, enabling businesses to deploy and manage complex distributed applications with ease. However, as the number of containers and workloads on a Kubernetes cluster grows, so does the associated cost. This is where Google Kubernetes Engine (GKE) comes in, providing a managed Kubernetes service that simplifies the deployment and operation of containerized applications while also offering a range of cost-optimization features.

To help you optimize your Kubernetes costs on GKE, we’ve compiled a list of best practices:

  1. Understand your application capacity

Before you start deploying your application, it’s important to understand its resource requirements. Know how many concurrent requests your application can handle, how much CPU and memory it requires, and how it responds under heavy load.
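
One simple way to probe capacity is to run a throwaway load generator inside the cluster while you watch CPU, memory, and latency. Here is a minimal sketch, assuming a hypothetical Service named wordpress as the target:

apiVersion: v1
kind: Pod
metadata:
  name: load-generator      # hypothetical throwaway load generator
spec:
  restartPolicy: Never
  containers:
  - name: load
    image: busybox
    # Hammer the target Service in a loop; delete this Pod when you are done.
    command: ["sh", "-c", "while true; do wget -q -O- http://wordpress; done"]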

  2. Make sure your application can grow vertically and horizontally

Ensure that your application can grow and shrink in both directions: you can handle traffic increases either by adding more CPU and memory to each replica (vertical scaling) or by adding more Pod replicas (horizontal scaling). This gives you the flexibility to experiment with what fits your application better, whether that’s a different autoscaler setup or a different node size.
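
For the horizontal case, a HorizontalPodAutoscaler can add or remove replicas based on observed CPU utilization. Here is a minimal sketch, assuming the wordpress Deployment from the resource example later in this post; the replica bounds and utilization target are illustrative values, not recommendations:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: wordpress
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: wordpress
  minReplicas: 1             # illustrative lower bound
  maxReplicas: 5             # illustrative upper bound
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add replicas when average CPU passes 70% of requests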

  3. Set appropriate resource requests and limits

By understanding your application capacity, you can determine what to configure in your container resources. You set the amount of CPU or memory required to run your application with the request spec.containers[].resources.requests.<cpu|memory>, and you set the cap with the limit spec.containers[].resources.limits.<cpu|memory>.

Here is an example of setting container resources:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wp
  template:
    metadata:
      labels:
        app: wp
    spec:
      containers:
      - name: wp
        image: wordpress
        resources:
          requests:
            memory: "128Mi"   # the scheduler reserves this much for the container
            cpu: "250m"
          limits:
            memory: "128Mi"   # the container is killed if it exceeds this cap

Pod resource allocation

  4. Observe your GKE clusters and watch for recommendations

You can check resource utilization in a Kubernetes cluster by examining the containers, Pods, and Services, as well as the characteristics of the overall cluster. GKE also surfaces right-sizing recommendations for workloads and node pools in the Google Cloud console; review them regularly and act on the ones that fit your workloads.

  5. Enable GKE usage metering

GKE usage metering tracks resource requests and actual consumption, broken down by namespace and label, and exports the data to BigQuery. This detailed view of your cluster’s resource usage can help you identify which teams or workloads drive cost, and where you can optimize.

  6. Understand how the Metrics Server works and monitor it

The Metrics Server is a cluster-wide aggregator of resource usage data, and it is the source of the container metrics that the horizontal and vertical Pod autoscalers use to make scaling decisions. Understand how the Metrics Server works, and monitor its health, so that autoscaling keeps functioning and you can properly interpret the resource usage data it collects.

  7. Use Kubernetes Resource Quotas

Kubernetes Resource Quotas let you cap the aggregate amount of CPU, memory, and object counts that a namespace can consume. This prevents any one team or workload from claiming more of the cluster than it should, which can save you money.
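
As a minimal sketch, the following ResourceQuota caps the total requests and limits of all Pods in a hypothetical team-a namespace; the figures are illustrative:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota
  namespace: team-a         # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"       # total CPU requests across all Pods in the namespace
    requests.memory: 8Gi    # total memory requests
    limits.cpu: "8"         # total CPU limits
    limits.memory: 16Gi     # total memory limits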

  8. Consider using Anthos Policy Controller

Anthos Policy Controller, built on Open Policy Agent Gatekeeper, is a policy enforcement tool that can help you enforce cost-saving policies across your Kubernetes clusters, for example requiring every container to declare resource limits.
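
As a hedged sketch, assuming the K8sContainerLimits constraint template from Policy Controller’s template library is installed, the following constraint rejects containers that do not declare CPU and memory limits within the stated maximums; the maximums are illustrative:

apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sContainerLimits
metadata:
  name: containers-must-declare-limits
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
  parameters:
    cpu: "2"          # illustrative maximum CPU limit per container
    memory: 2Gi       # illustrative maximum memory limit per container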

  1. Design your CI/CD pipeline to enforce cost-saving practices

Your CI/CD pipeline can be used to enforce cost-saving practices, such as building smaller images and using spot VMs.
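
For example, GKE labels nodes in Spot VM node pools with cloud.google.com/gke-spot=true, so a pipeline check can verify that fault-tolerant workloads carry a matching nodeSelector. A minimal sketch with a hypothetical batch worker:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: batch-worker         # hypothetical fault-tolerant workload
spec:
  replicas: 3
  selector:
    matchLabels:
      app: batch-worker
  template:
    metadata:
      labels:
        app: batch-worker
    spec:
      nodeSelector:
        cloud.google.com/gke-spot: "true"   # schedule onto cheaper Spot VM nodes only
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "sleep 3600"]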

By following these best practices, you can significantly reduce your Kubernetes costs on GKE.

In addition to the above, here are some other tips for optimizing your Kubernetes costs on GKE:

  • Use autoscaling to automatically scale your cluster up and down based on demand.
  • Use managed services, such as Cloud SQL and Cloud Spanner, to offload the management of your databases.
  • Use Kubernetes Horizontal Pod Autoscaler (HPA) to automatically scale the number of pods in a deployment up or down based on CPU or memory usage.
  • Use Kubernetes Ingress to manage external traffic to your cluster.
  • Use Kubernetes Services to expose your pods to other workloads and to the outside world; a sketch of a Service and Ingress pair follows this list.
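
As a minimal sketch tying the last two bullets together, assuming the wordpress Deployment from the earlier example, a Service exposes its Pods inside the cluster and an Ingress routes external HTTP traffic to that Service; the hostname is illustrative:

apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wp                  # matches the Deployment's Pod labels
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: wordpress
spec:
  rules:
  - host: blog.example.com   # illustrative hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: wordpress
            port:
              number: 80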

  10. Choose the right machine type for your Nodes

GKE nodes are the workhorses of your Kubernetes cluster, responsible for running your containerized applications. These nodes are Compute Engine virtual machines (VMs) provisioned and managed by GKE, while the control plane, the brains of the cluster, continuously monitors and gathers health information from each node. To optimize your GKE cluster’s cost-effectiveness, consider E2 machine types (E2 VMs). These cost-optimized VMs offer 31% savings compared to N1 machine types, and they are versatile and well suited to a wide range of workloads, including web servers, microservices, and development environments.
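
If your cluster mixes node pools, you can steer a workload onto the cost-optimized machines with the well-known instance-type node label. A minimal sketch, assuming a hypothetical node pool of e2-standard-4 machines:

apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  nodeSelector:
    node.kubernetes.io/instance-type: e2-standard-4   # assumed node pool machine type
  containers:
  - name: web
    image: nginx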