We use Grafana + Prometheus + Loki as the monitoring stack for our Kubernetes clusters, providing dashboards, metrics, alerting, and logging.
Initially, Loki stored its data (logs and index) on a persistent volume (PV) inside the Kubernetes cluster, backed by a 1 TB EBS volume (in the beta environment) attached to the cluster and bound to the PV.
After running Loki in our clusters for a while, we decided to move its storage from the Kubernetes PV (EBS) to S3, for several reasons.
During the days I worked with Loki, the full disk…
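As a rough sketch of the target setup, Loki's storage config can point the boltdb-shipper index and the chunks at S3 (the bucket name, region, and local paths here are hypothetical):

```yaml
# Hypothetical Loki storage config: index shipped to S3 via boltdb-shipper,
# chunks stored in the same bucket (bucket and region are placeholders).
storage_config:
  boltdb_shipper:
    active_index_directory: /loki/index
    cache_location: /loki/index_cache
    shared_store: s3
  aws:
    s3: s3://us-east-1/loki-chunks
```

Exact keys vary across Loki versions, so check the storage documentation for the version in use.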
In a fast-growing company like Compass, things can become challenging for cloud infrastructure teams. As we served more and more customers, many backend services scaled up in our Kubernetes clusters, while a variety of new backend services went online to satisfy new requirements.
Recently, a big challenge for our Cloud Engineering team at Compass has been the shortage of IP addresses in some of our Kubernetes clusters managed by AWS EKS. I would like to share our experience of troubleshooting, investigating, exploring solutions, and mitigating the issue.
The problem was first noticed when some teams reported transient failures during deployments in…
I recently set up Loki locally on a Minikube cluster for testing and verification purposes. I found that some guides online were out of date and no longer work as written.
Therefore, in this post I would like to share my detailed steps for installing Loki and Grafana and connecting them on a Minikube cluster.
Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus. It is designed to be very cost-effective and easy to operate. It does not index the contents of the logs, but rather a set of labels for each log stream.
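For instance, a LogQL query selects a stream by its labels, and any matching on log contents happens as a filter at query time (the label values here are hypothetical):

```logql
{namespace="default", app="nginx"} |= "error"
```

The `{...}` selector uses only the indexed labels; the `|= "error"` filter then scans the selected streams' contents.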
The Ingress addon…
My previous post, Setup Kubernetes Cluster Multi-tenancy with AWS EKS, discussed how to set up namespace-based multi-tenancy on Kubernetes clusters.
After completing that, tenant team members have access to their own namespace in the Kubernetes cluster through the kubectl command. The next step for a tenant team might be to deploy their apps, whether services or cron jobs, to the tenant namespace.
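For example, a tenant member could verify access and deploy with commands along these lines (the namespace and manifest names are hypothetical):

```shell
# Confirm access to the tenant namespace
kubectl get pods -n team-a

# Deploy an app manifest into the tenant namespace
kubectl apply -f app.yaml -n team-a
```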
In this post, I would like to discuss what can be done for tenant apps once the multi-tenancy environment is ready. …
Kubernetes networking is critical for managing Kubernetes clusters because, in most clusters, applications running inside need to communicate with entities outside.
For services running in a Kubernetes cluster, we don’t want all of them exposed directly to the public internet, for security and management reasons.
Typically, we expose a single entry point dedicated to accepting public traffic and routing requests to the right target services hosted in the cluster. That is what Ingress and Ingress controllers do.
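As a minimal sketch, an Ingress rule maps a public host and path to a Service inside the cluster (the host and service names are hypothetical):

```yaml
# Minimal Ingress sketch: route app.example.com/ to an in-cluster Service
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80
```

An Ingress controller (e.g. NGINX or the AWS Load Balancer Controller) must be running in the cluster for this resource to take effect.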
In this post, the following points are included:
Istio is a completely open source service mesh that layers transparently onto existing distributed applications. It is also a platform, with APIs that let it integrate into any logging, telemetry, or policy system. Istio’s diverse feature set lets you successfully and efficiently run a distributed microservice architecture, and provides a uniform way to secure, connect, and monitor microservices.
Istio is increasingly used with Kubernetes for network management, including service discovery, load balancing, and traffic routing. This is very useful for implementing A/B testing, canary rollouts, rate limiting, access control, etc.
Istio traffic management is good…
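As a sketch of weighted routing, an Istio VirtualService can split traffic between two subsets, e.g. for a canary rollout (the names and weights are hypothetical, and the subsets would be defined in a matching DestinationRule):

```yaml
# Hypothetical canary split: 90% of traffic to v1, 10% to v2
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
  - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10
```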
Can you imagine your Kubernetes cluster on AWS EKS one day running out of IP addresses? Even if you assigned a CIDR block large enough to host all of your Pods, the usable address range of that block might not be as large as you thought. That is the situation I ran into in one of our Kubernetes clusters recently.
After doing some research online, I found I am not alone; this can be considered a common issue for AWS EKS Kubernetes clusters. …
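To illustrate why a subnet holds fewer usable addresses than its raw size suggests, here is a small sketch with Python's `ipaddress` module (the subnet is hypothetical; AWS reserves the first four and the last address in every subnet):

```python
import ipaddress

# A /24 subnet has 256 raw addresses...
subnet = ipaddress.ip_network("10.0.1.0/24")
total = subnet.num_addresses

# ...but AWS reserves 5 per subnet (network, VPC router, DNS,
# future use, and broadcast). On top of that, the VPC CNI
# pre-allocates IPs on each node's ENIs, so real headroom shrinks
# faster than pod count alone would suggest.
aws_reserved = 5
usable = total - aws_reserved

print(total, usable)  # 256 251
```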
A multi-tenant cluster is shared by multiple users and/or workloads which are referred to as “tenants”. The operators of multi-tenant clusters must isolate tenants from each other to minimize the damage that a compromised or malicious tenant can do to the cluster and other tenants. Also, cluster resources must be fairly allocated among tenants.
There are many articles discussing multi-tenancy on Kubernetes clusters. Typically, Kubernetes Namespace is used for setting up multi-tenancy in Kubernetes clusters. …
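For the fair resource allocation mentioned above, a per-namespace ResourceQuota is the usual building block; a sketch (the namespace name and limits are hypothetical):

```yaml
# Hypothetical quota capping a tenant namespace's aggregate resource requests
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    pods: "20"
```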
VirtualBox is a free and open-source hosted hypervisor for x86 virtualization, developed by Oracle Corporation. I used it to run a Windows VM on my macOS machine, but Windows kept eating disk space, and 50 GB recently became inadequate.
Therefore, I had to enlarge the disk of the VM, which is not straightforward since the size is not configurable in the VirtualBox settings.
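With the VM powered off, the virtual disk can instead be grown from the command line (the disk path and new size are examples; `--resize` takes megabytes):

```shell
# Grow the VDI from 50 GB to 80 GB (80 * 1024 = 81920 MB); the VM must be powered off
VBoxManage modifymedium disk "~/VirtualBox VMs/win10/win10.vdi" --resize 81920
```

Note that this only enlarges the virtual disk; the partition inside Windows still needs to be extended (e.g. with Disk Management) before the new space is usable.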
Kubernetes Patterns, like Design Patterns, abstract Kubernetes primitives into repeatable solutions to common problems.
This post introduces the Declarative Deployment pattern, which focuses mostly on Kubernetes’ Deployment resource. The following points will be discussed:
The Declarative Deployment pattern encapsulates the upgrade and rollback of a group of containers and makes their execution a repeatable, automated activity.
A cloud-native application or service is commonly deployed across multiple Pods for high availability. …
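As a sketch of the declarative approach, a Deployment manifest describes the desired state, including the rolling-update strategy, and the controller drives upgrades and rollbacks from it (the names, image, and replica count are hypothetical):

```yaml
# Hypothetical Deployment: 3 replicas upgraded one Pod at a time
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra Pod during the rollout
      maxUnavailable: 0  # never drop below the desired replica count
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: app
        image: example/app:1.0
```

Changing the image tag and re-applying the manifest triggers the rolling update; `kubectl rollout undo deployment/example-app` reverts it.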