Deploying Fluent Bit on AWS EKS with Persistent EFS Storage

A comprehensive, step-by-step guide to configuring a resilient and stateful logging pipeline, ensuring data integrity and preventing log loss


 

What is Amazon EKS?

image-20250826-082833.png

Amazon Elastic Kubernetes Service (EKS) is AWS's managed service that makes it easy to run, manage, and scale containerized applications using Kubernetes on the AWS cloud.

Think of it like this: Kubernetes is a powerful but complex engine for orchestrating containers. Instead of building, securing, and maintaining the most complicated parts of that engine yourself, EKS provides a fully managed, highly available, and secure Kubernetes control plane as a service.

Key Benefits of EKS

  • Managed Control Plane: This is the core advantage. AWS automatically manages the availability, scalability, and patching of the Kubernetes control plane components (like etcd and the API server). This frees you from significant operational overhead and lets you focus on your applications.

  • High Availability: The EKS control plane is distributed across multiple AWS Availability Zones (AZs), eliminating any single point of failure and ensuring your cluster's brain is always running.

  • Seamless AWS Integration: EKS is deeply integrated with the AWS ecosystem. It works effortlessly with services like:

    • IAM for secure authentication and authorization.

    • VPC for isolated and secure networking.

    • Elastic Load Balancers (ALB/NLB) for exposing your services.

    • EFS & EBS for persistent storage solutions.

  • Pure Kubernetes Experience: EKS runs upstream, certified Kubernetes. This means you get a standard, community-tested experience, and any tools or add-ons that work with Kubernetes will work with EKS.

How It Works: Control Plane vs. Worker Nodes

An EKS cluster is primarily composed of two parts:

  1. The EKS Control Plane: Managed entirely by AWS. You don't see the underlying instances, but you interact with them through the Kubernetes API (e.g., using kubectl).

  2. Worker Nodes: These are the EC2 instances where your application containers (Pods) actually run. You provision and manage these nodes within your VPC and are responsible for them. They register themselves with the control plane to form the complete cluster.

Why EKS Matters for This Guide

EKS provides the robust, production-grade Kubernetes environment where our applications will run and generate logs. We will deploy Fluent Bit as a DaemonSet across all the worker nodes in our EKS cluster. Fluent Bit's task is to reliably collect logs from every application on every node and forward them to a central location. By using EKS, we start with a secure and scalable foundation for our entire logging pipeline.


Create New Cluster

Open your AWS GUI

Search for EKS:

image-20250826-081905.png
image-20250716-075448.png

Press “Create cluster”

image-20250716-075654.png

The EKS Cluster IAM Role: Your Cluster's AWS Passport

When you create an EKS cluster, you're asked to select a "Cluster IAM role." This is one of the most important configuration steps as it defines the permissions your cluster has to interact with other AWS services.

In simple terms, this IAM role is what the Kubernetes control plane uses to make AWS API calls on your behalf.

Why is it Necessary?

Think of the EKS control plane as a manager hired by you but living in a separate AWS-managed building. This manager (the control plane) needs a set of keys (the IAM role) to access and manage resources within your building (your AWS account and VPC).

Without this role, the control plane would be isolated and unable to perform essential tasks, such as:

  • Networking: Creating and managing Elastic Network Interfaces (ENIs) in your VPC subnets for pod networking.

  • Load Balancing: Provisioning and configuring Application or Network Load Balancers when you create a Kubernetes Service of type LoadBalancer.

  • Storage: Interacting with services like EBS when creating PersistentVolumes.

This role acts as a secure "passport," granting the EKS service just enough permission to manage these resources without giving it full access to your entire AWS account.

What Permissions Does It Need?

You don't have to figure out the permissions yourself. AWS provides a managed policy specifically for this purpose called AmazonEKSClusterPolicy. This policy contains all the necessary permissions (ec2:CreateNetworkInterface, elasticloadbalancing:RegisterTargets, etc.) that the control plane requires to function correctly.

When you create the cluster using the AWS Management Console, it will often guide you to create a new role and will automatically attach this policy for you.

Key Takeaway

The Cluster IAM Role is the security link between the AWS-managed Kubernetes control plane and the resources running in your own AWS account. You are granting the EKS service explicit permission to manage cluster-related resources on your behalf.


Creating the EKS Cluster IAM Role

You will create a new IAM role that the EKS service can assume. The AWS console simplifies this process by pre-selecting the correct trust relationship and permissions policy for you.

Here are the step-by-step instructions:

  1. Navigate to the IAM Console (or press “Create recommended role”) in your AWS account.

  2. On the left-hand navigation pane, click on Roles, then click the "Create role" button.

  3. Step 1: Select Trusted Entity

image-20250826-115348.png

  • For "Trusted entity type," choose AWS service.

  • Under "Use case," select EKS from the dropdown menu.

  • This will reveal another option below. Choose EKS - Auto Cluster.

  • Click Next.

  4. Step 2: Add Permissions

image-20250826-115500.png

  • The console will automatically select the required permissions policy: AmazonEKSClusterPolicy.

  • You don't need to do anything else on this screen. Simply click Next.

  5. Step 3: Name, Review, and Create

image-20250826-100236.png

 

  • Role name: Give your role a descriptive name that you will easily recognize. For example: my-eks-cluster-role or EKSClusterRoleForGuide.

  • Review the details to ensure the trusted entity is eks.amazonaws.com and the attached policy is AmazonEKSClusterPolicy.

  • Click the "Create role" button at the bottom.

The Role to Select

image-20250826-100456.png

Now, when you are creating your EKS cluster and you get to the "Cluster IAM role" dropdown menu, you will select the role you just created (e.g., my-eks-cluster-role).

This explicitly grants the EKS control plane the permissions defined in the AmazonEKSClusterPolicy to manage resources within your account.
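As an alternative to the console, the same role can be created from the AWS CLI. This is a minimal sketch: the role name my-eks-cluster-role is an example, the trust policy mirrors what the console's "EKS" use case generates, and the AWS calls are guarded so the script is harmless on a machine without the CLI configured.

```shell
# Trust policy matching the console's EKS use case:
# the EKS service is allowed to assume this role.
cat > eks-cluster-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role and attach the managed policy
# (example role name; runs only where the AWS CLI is available).
if command -v aws >/dev/null 2>&1; then
  aws iam create-role \
    --role-name my-eks-cluster-role \
    --assume-role-policy-document file://eks-cluster-trust.json
  aws iam attach-role-policy \
    --role-name my-eks-cluster-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
fi
```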


After the Cluster Role, the next critical component is the Node IAM Role.


 

The Node IAM Role: The Worker's Toolkit

While the Cluster Role is for the EKS control plane (the manager), the Node IAM Role is attached to each of your EC2 worker nodes. This role grants the necessary permissions for the nodes themselves to function correctly within the cluster and interact with other AWS services.

Think of this as the toolkit you give to each individual worker on your team. Each worker node needs this toolkit to perform its core job.

Why is it Necessary?

Your worker nodes are not just passive machines; they are active participants in the Kubernetes cluster. The kubelet (the primary "node agent") running on each node, and the pods scheduled on them, need permissions to:

  • Join the Cluster: A node needs permission to communicate with the EKS control plane to register itself and receive workloads.

  • Pull Container Images: To run your applications, the nodes must have permission to pull container images from Amazon ECR (Elastic Container Registry).

  • Manage Networking: The AWS VPC CNI plugin, which handles pod networking, runs on each node and needs permissions to manage network interfaces.

  • Access Other AWS Services: If a pod on a node needs to access an S3 bucket or a DynamoDB table, it will (by default) inherit permissions from this role.

How to Create the Node IAM Role

The creation process is similar to the Cluster Role, but with a different trusted entity and different policies.

  1. Navigate to the IAM Console (or press “Create recommended role”), go to Roles, and click "Create role".

  2. Step 1: Select Trusted Entity

image-20250826-101117.png

 

  • For "Trusted entity type," choose AWS service.

  • Under "Use case," select EC2. This is because your worker nodes are EC2 instances.

  • Click Next.

  3. Step 2: Add Permissions

image-20250826-101336.png
  • In the search bar, find and attach the following three AWS managed policies. You must attach all of them:

    • AmazonEKSWorkerNodePolicy — provides the worker nodes with the minimum permissions needed to communicate with the EKS control plane.

    • AmazonEKS_CNI_Policy — lets the Amazon VPC CNI plugin (the aws-node DaemonSet) assign pod IP addresses from your VPC subnets so pods can communicate with other resources (pods, services, and AWS infrastructure).

    • AmazonEC2ContainerRegistryReadOnly — allows nodes to pull container images from ECR.

  • Click Next.

  4. Step 3: Name, Review, and Create

image-20250826-101433.png

 

  • Role name: Give it a clear name, such as my-eks-node-role.

  • Review the configuration to ensure the trusted entity is ec2.amazonaws.com and the three required policies are attached.

  • Click "Create role".

Where This Role is Used

You will select this role (my-eks-node-role) later in the EKS setup process, specifically when you create a Node Group for your cluster. Assigning this role to the node group ensures that every EC2 instance launched within it has the correct permissions to operate as a functional EKS worker node.
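The console steps above can also be scripted with the AWS CLI. A sketch under the same assumptions (my-eks-node-role is an example name; the trusted entity is ec2.amazonaws.com; the calls are guarded so the script is safe to run where the CLI isn't configured):

```shell
# Trust policy: EC2 instances may assume this role.
cat > eks-node-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

if command -v aws >/dev/null 2>&1; then
  aws iam create-role \
    --role-name my-eks-node-role \
    --assume-role-policy-document file://eks-node-trust.json

  # Attach the three managed policies the worker nodes need
  for POLICY in AmazonEKSWorkerNodePolicy AmazonEKS_CNI_Policy AmazonEC2ContainerRegistryReadOnly; do
    aws iam attach-role-policy \
      --role-name my-eks-node-role \
      --policy-arn "arn:aws:iam::aws:policy/$POLICY"
  done
fi
```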


Choosing Your Cluster's Network: The VPC

At this step, you're defining the private network space where your entire EKS cluster will live. VPC stands for Virtual Private Cloud, and you can think of it as your own logically isolated, fenced-off area within the vast AWS cloud. All your cluster's resources—the worker nodes, the pods, and the internal load balancers—will be launched inside this VPC.

Key Requirements for an EKS VPC

For an EKS cluster to be resilient and function correctly, the VPC you select must meet a few critical requirements:

  • Multiple Subnets: The VPC must have at least two subnets.

  • Multiple Availability Zones (AZs): Crucially, these subnets must be in different Availability Zones. An AZ is a distinct data center within an AWS region. Spanning your cluster across multiple AZs ensures high availability, so if one data center has an issue, your cluster can continue running in another.

  • Public and Private Subnets: A production-ready setup includes both public and private subnets:

    • Public Subnets: These are for internet-facing resources, primarily your public-facing load balancers. They have a direct route to an AWS Internet Gateway.

    • Private Subnets: This is where your worker nodes should live for security. They don't have public IP addresses and can access the internet securely through a NAT Gateway that resides in a public subnet.

 

Your Options

You have two main choices on the EKS creation screen:

  1. Use an Existing VPC: If you already have a VPC configured that meets the requirements above, you can select it. This is common in established AWS environments.

  2. Let AWS Create a New VPC: For this guide, and for anyone new to EKS, this is the highly recommended option. AWS provides a CloudFormation template that automatically creates a new VPC perfectly configured for EKS. It will set up the public and private subnets across multiple AZs, create the necessary route tables, and provision an Internet Gateway and NAT Gateways for you.

For this guide, select the default VPC or follow the prompts to have AWS create a new VPC for you. This will prevent common networking issues and ensure your cluster is built on a solid, secure, and highly available foundation.

Subnets:

image-20250826-102716.png

Leave all the subnets selected just as they are.

For EKS to function correctly, it needs to be aware of all the available subnets in its VPC. Deselecting any of them could lead to issues with networking, load balancing, or node placement. Simply accept the default selection and proceed to the next step.

Press “Create” and wait for the status “Active”.

image-20250826-104222.png

Connect your terminal to the cloud and new cluster:

Open your terminal:

aws configure
AWS Access Key ID:
AWS Secret Access Key:

AWS Management Console

  1. Sign in to the AWS Console.

  2. Navigate to IAM (under “Security, Identity, & Compliance”).

  3. In the left sidebar, click Users, then select your user name.

  4. Go to the Security credentials tab.

  5. Under Access keys, you’ll see your existing Access Key IDs (but you can only view the ID, not the secret).

    • If you need a new key, click Create access key, give it a name/description, and you’ll be shown both the Access Key ID and the Secret Access Key one time.

image-20250716-081244.png
image-20250716-081318.png
image-20250716-081401.png
image-20250716-081545.png

Connect kubectl to EKS:

aws eks update-kubeconfig --region <cluster_region> --name <cluster_name>
image-20250828-051000.png
aws eks update-kubeconfig --region eu-north-1 --name andrey-test-fb
image-20250716-082342.png

Optional: Set environment variables:

export CLUSTER="andrey-test-fb"
export REGION="eu-north-1"
export ACCOUNT_ID="655536767854"

Deploy test App (Apache)

nano apache-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: apache
          image: httpd:2.4
          ports:
            - containerPort: 80
kubectl apply -f apache-deployment.yaml

 

Deploy XpoLog

nano xpolog-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xpolog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xpolog
  template:
    metadata:
      labels:
        app: xpolog
    spec:
      containers:
        - name: xpolog
          image: 1200km/xplg:fixed
          ports:
            - containerPort: 30303
---
apiVersion: v1
kind: Service
metadata:
  name: xpolog-service
spec:
  type: LoadBalancer
  ports:
    - port: 30303
      targetPort: 30303
  selector:
    app: xpolog
kubectl apply -f xpolog-deployment.yaml

Expose the XpoLog Receiver and Configure Cluster DNS

  • This YAML creates an internal NLB on port 30303 fronting Pods with app: xpolog.

  • Inside the cluster: use xpolog-service.default.svc.cluster.local:30303.

  • Inside the VPC (outside K8s): create a Route 53 private A-alias (e.g., logs.internal.company) → NLB hostname, then use that name on EC2/Lambda/etc.
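The Route 53 step above can be scripted. This is a hedged sketch: logs.internal.company, both hosted-zone IDs, and the NLB DNS name are placeholders you must replace with your own values (aws elbv2 describe-load-balancers shows the NLB's DNS name and its canonical hosted zone ID).

```shell
# All IDs and names below are placeholders -- substitute your own.
cat > alias-record.json <<'EOF'
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "logs.internal.company",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "<NLB_CANONICAL_ZONE_ID>",
        "DNSName": "<internal-nlb-dns-name>.elb.eu-north-1.amazonaws.com",
        "EvaluateTargetHealth": false
      }
    }
  }]
}
EOF

# UPSERT the alias record into your private hosted zone
aws route53 change-resource-record-sets \
  --hosted-zone-id <PRIVATE_ZONE_ID> \
  --change-batch file://alias-record.json
```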

nano xpolog-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: xpolog-service
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: xpolog
  ports:
    - name: http
      port: 30303
      targetPort: 30303
      protocol: TCP
kubectl apply -f xpolog-service.yaml

Get pods list:

kubectl get pods
image-20250716-091357.png

Optional: Create the CoreDNS add-on if cluster DNS is not working

eksctl create addon --name coredns --cluster andreyXpologTest --region eu-north-1 --force

Temporarily Port‑Forward XpoLog to Localhost

kubectl port-forward pod/xpolog-66cc88bc4-tc4wl 30303:30303
image-20250716-091507.png

Create Namespace

kubectl create namespace logging
image-20250716-082749.png

RBAC & ServiceAccount for Fluent Bit

nano fluentbit-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
  namespace: logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-read
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluent-bit-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluent-bit-read
subjects:
  - kind: ServiceAccount
    name: fluent-bit
    namespace: logging
kubectl apply -f fluentbit-rbac.yaml
image-20250716-083043.png
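Optionally, you can verify the binding took effect with kubectl auth can-i, impersonating the new ServiceAccount (this assumes a working cluster context):

```shell
# Both commands should print "yes" if the ClusterRoleBinding is in place
kubectl auth can-i list pods --all-namespaces \
  --as=system:serviceaccount:logging:fluent-bit
kubectl auth can-i watch namespaces \
  --as=system:serviceaccount:logging:fluent-bit
```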

Stand up a shared, ReadWriteMany Amazon EFS volume in EKS, and use it to store each Fluent Bit pod's offset database.


EKS: create a nodegroup on a non-eksctl-managed cluster

1. Prerequisites

  • Installed: awscli, kubectl, eksctl

  • IAM user/role with EKS + EC2 + CloudFormation rights

  • Existing EKS cluster: andrey-test-fb in eu-north-1

  • Subnet IDs you plan to use:

    subnet-00ab400cacdce9d40
    subnet-0a2f471dec4cb6d70
    subnet-0be575b8b3138c576

2. Verify cluster access

aws eks update-kubeconfig --name andrey-test-fb --region eu-north-1
kubectl get svc

If you see the Kubernetes services, kubectl context is correct.

3. Collect VPC info

You need VPC ID + control-plane security group ID.

# VPC ID (from one of your subnets)
aws ec2 describe-subnets \
  --subnet-ids subnet-00ab400cacdce9d40 \
  --query "Subnets[0].VpcId" --output text

# Cluster SG
aws eks describe-cluster \
  --name andrey-test-fb --region eu-north-1 \
  --query "cluster.resourcesVpcConfig.clusterSecurityGroupId" --output text

4. Enable OIDC provider for IRSA (one-time)

eksctl utils associate-iam-oidc-provider \
  --cluster andrey-test-fb --region eu-north-1 --approve

5. Create a nodegroup config file

Save as nodegroup.yaml:

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: andrey-test-fb
  region: eu-north-1

vpc:
  id: vpc-xxxxxxxx           # from step 3
  securityGroup: sg-xxxxxxx  # from step 3
  subnets:
    private:                 # or public, depending on your design
      eu-north-1a:
        id: subnet-00ab400cacdce9d40
      eu-north-1b:
        id: subnet-0a2f471dec4cb6d70
      eu-north-1c:
        id: subnet-0be575b8b3138c576

managedNodeGroups:
  - name: my-first-nodegroup
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 2
    maxSize: 4
    privateNetworking: true     # set false if using public subnets
    amiFamily: AmazonLinux2023  # recommended
    labels:
      role: worker

6. Create the nodegroup

eksctl create nodegroup -f nodegroup.yaml

This provisions a CloudFormation stack and launches EC2 nodes.

7. Install EKS core add-ons

Even on non-eksctl clusters, you need the three standard add-ons:

eksctl create addon --cluster andrey-test-fb --region eu-north-1 --name vpc-cni --version latest --force
eksctl create addon --cluster andrey-test-fb --region eu-north-1 --name kube-proxy --version latest --force
eksctl create addon --cluster andrey-test-fb --region eu-north-1 --name coredns --version latest --force

8. Verify nodes

kubectl get nodes -o wide
kubectl get pods -n kube-system -o wide
image-20250828-055501.png

Create FileSystem

aws efs create-file-system \
  --region eu-north-1 \
  --creation-token fluentbit-efs-new \
  --performance-mode generalPurpose \
  --throughput-mode bursting \
  --tags Key=Name,Value=fluentbit-efs-new \
  --query FileSystemId --output text
image-20250716-100710.png

Gather your cluster’s VPC ID and subnets

CLUSTER=andrey-test-fb
REGION=eu-north-1

VPC_ID=$(aws eks describe-cluster \
  --name $CLUSTER --region $REGION \
  --query "cluster.resourcesVpcConfig.vpcId" \
  --output text)
echo "VPC ID: $VPC_ID"

read -r -a SUBNETS <<< "$(
  aws ec2 describe-subnets \
    --filters Name=vpc-id,Values=$VPC_ID \
    --region $REGION \
    --query 'Subnets[].SubnetId' \
    --output text
)"
echo "Subnets: ${SUBNETS[*]}"
image-20250716-100936.png

Pick a Security Group (from your node group)

NG=$(aws eks list-nodegroups \
  --cluster-name andrey-test-fb \
  --region eu-north-1 \
  --query "nodegroups[0]" --output text)

SGS=$(aws eks describe-nodegroup \
  --cluster-name andrey-test-fb \
  --nodegroup-name $NG --region eu-north-1 \
  --query "nodegroup.resources.remoteAccess.securityGroupIds[]" \
  --output text)
echo "Using SGs: $SGS"

Create mount targets for your new EFS

for SN in subnet-00ab400cacdce9d40 subnet-0a2f471dec4cb6d70 subnet-0be575b8b3138c576; do
  aws efs create-mount-target \
    --file-system-id fs-080e942f89b0ba54f \
    --subnet-id $SN \
    --security-groups sg-0b2d8ea6688a2f958 \
    --region eu-north-1 \
    && echo "✔ Created mount target in $SN" \
    || echo "⚠ Skip or already exists in $SN"
done
image-20250716-101242.png
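Before moving on, it's worth confirming the mount targets have finished provisioning: they start in a "creating" state and are only usable once "available". A quick check (assumes the same file-system ID as above):

```shell
aws efs describe-mount-targets \
  --file-system-id fs-080e942f89b0ba54f \
  --region eu-north-1 \
  --query "MountTargets[].{Subnet:SubnetId,State:LifeCycleState}" \
  --output table
```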

Create a static NFS PV

nano efs-pv-static.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fluentbit-state-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""   # static binding, no dynamic provisioner
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-080e942f89b0ba54f   # <<< your real EFS FS ID

Apply it:

kubectl apply -f efs-pv-static.yaml

Verify it’s Available:

kubectl get pv fluentbit-state-pv
image-20250828-081849.png

Create your PVC

nano efs-pvc.yaml
# efs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fluentbit-state-pvc
  namespace: logging
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: ""            # must match PV
  volumeName: fluentbit-state-pv  # bind to your static PV

Apply it:

kubectl apply -f efs-pvc.yaml
kubectl get pvc fluentbit-state-pvc -n logging
image-20250716-112532.png

Let eksctl create the IRSA role for you (using the AWS-managed policy ARN)

ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

eksctl utils associate-iam-oidc-provider \
  --cluster andreyXpologTest --region eu-north-1 --approve

eksctl create iamserviceaccount \
  --cluster andreyXpologTest \
  --region eu-north-1 \
  --namespace kube-system \
  --name efs-csi-controller-sa \
  --role-name eks-efs-csi-controller-andrey-test-fb \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy \
  --approve --override-existing-serviceaccounts

eksctl create addon \
  --cluster andreyXpologTest \
  --region eu-north-1 \
  --name aws-efs-csi-driver \
  --version latest \
  --service-account-role-arn arn:aws:iam::$ACCOUNT_ID:role/eks-efs-csi-controller-andrey-test-fb \
  --force

Create Fluent Bit ConfigMap
Info Gathering Cheat Sheet

| 🔍 Info Needed | ✅ Purpose | 🧪 Command / Example |
|---|---|---|
| Cluster access | Ensure kubectl works | kubectl config current-context |
| Node count | Know how many DaemonSet pods | kubectl get nodes |
| Namespace | Decide where to deploy | kubectl get namespaces |
| Pod labels | Needed for Service.selector | kubectl get pods --show-labels |
| Container port | For Port in the Output | kubectl describe pod <name>, or check the manifest |
| App logs path | For the Fluent Bit input path | Usually /var/log/containers/*.log |
| Log format | To choose the correct parser | Look at sample logs |
| RBAC resources/actions | For metadata enrichment (Fluent Bit) | Usually get, list, watch on pods, namespaces |
| ServiceAccount name | Needed for RBAC bindings | Define in your YAML (fluent-bit) |
| PersistentVolume info | Check PV/PVC state | kubectl get pv; kubectl describe pv <pv-name>; kubectl get pvc -n <namespace> |
| Internal DNS name | For “Host” in the Output (you can use a hardcoded IP, but DNS is preferred) | <SERVICE_NAME>.<NAMESPACE>.svc.cluster.local, e.g. service.default.svc.cluster.local |
| DNS test inside pod | Validate Service DNS works | kubectl run test --image=busybox -it --rm -- sh, then nslookup xpolog-service.default.svc.cluster.local |
| URI and HTTP listener token | For the URI field in the Output | Run XpoLog and take the token from the HTTP listener |

image-20250702-105813.png
image-20250630-182235.png

 

Cluster-wide log collection using Fluent Bit on every node

ConfigMap (absolute paths + CRI parser)

nano fluentbit-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush              5
        Daemon             Off
        Log_Level          info
        Parsers_File       /fluent-bit/etc/parsers.conf
        # Persist state (your PVC is mounted at /var/fluent-bit/state)
        storage.path       /var/fluent-bit/state/${NODE_NAME}
        storage.type       filesystem
        storage.checkpoint true
        HTTP_Server        On
        HTTP_Listen        0.0.0.0
        HTTP_Port          2020

    @INCLUDE /fluent-bit/etc/inputs.conf
    @INCLUDE /fluent-bit/etc/filters.conf
    @INCLUDE /fluent-bit/etc/outputs.conf

  inputs.conf: |
    [INPUT]
        Name              tail
        Tag               kube.${NODE_NAME}
        Path              /var/log/containers/*.log
        Parser            cri
        Mem_Buf_Limit     10MB
        Skip_Long_Lines   On
        Refresh_Interval  10
        DB                /var/fluent-bit/state/${NODE_NAME}/flb_${NODE_NAME}.db
        storage.type      filesystem

  filters.conf: |
    [FILTER]
        Name             kubernetes
        Match            kube.*
        Kube_Tag_Prefix  kube.var.log.containers.
        Merge_Log        On
        Merge_Log_Key    log
        Keep_Log         Off

  outputs.conf: |
    [OUTPUT]
        Name              http
        Match             *
        Host              xpolog-service.default.svc.cluster.local
        Port              30303
        URI               /logeye/api/logger.jsp?token=99037382-4e34-464c-8442-ebcc7687ce15
        Json_date_key     time
        Json_date_format  iso8601
        Header            X-Xpolog-Sender local-k8s
        Retry_Limit       5

  # Parser compatible with containerd (CRI) logs
  parsers.conf: |
    [PARSER]
        Name        cri
        Format      regex
        Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[FP]) (?<log>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
kubectl apply -f fluentbit-config.yaml
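As a quick local sanity check, the cri parser's regex can be exercised against a sample containerd-style log line before deploying anything (the sample line below is made up for illustration):

```shell
# Sample line in the containerd (CRI) format: <time> <stream> <logtag> <message>
sample='2025-08-28T07:51:28.000000000+00:00 stdout F hello from apache'

# Same pattern as the cri parser in the ConfigMap, minus the named captures
if printf '%s\n' "$sample" | grep -Eq '^[^ ]+ (stdout|stderr) [FP] .*$'; then
  echo "cri regex: match"
else
  echo "cri regex: no match"
fi
```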

DaemonSet (no Docker mount; keep your PVC)

nano fluentbit-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
  labels:
    app.kubernetes.io/name: fluent-bit
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: fluent-bit
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: fluent-bit
    spec:
      serviceAccountName: fluent-bit
      dnsPolicy: ClusterFirst
      tolerations:
        - operator: Exists
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:2.2.2
          imagePullPolicy: IfNotPresent
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          resources:
            requests:
              cpu: "50m"
              memory: "100Mi"
            limits:
              cpu: "200m"
              memory: "256Mi"
          ports:
            - name: http
              containerPort: 2020
          livenessProbe:
            httpGet:
              path: /api/v1/health
              port: 2020
            initialDelaySeconds: 15
            periodSeconds: 60
          volumeMounts:
            - name: config
              mountPath: /fluent-bit/etc
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: state
              mountPath: /var/fluent-bit/state
      volumes:
        - name: config
          configMap:
            name: fluent-bit-config
        - name: varlog
          hostPath:
            path: /var/log
            type: Directory
        - name: state
          persistentVolumeClaim:
            claimName: fluentbit-state-pvc
kubectl apply -f fluentbit-daemonset.yaml

Check:

kubectl -n logging get pods -o wide
image-20250828-075128.png
image-20250828-075102.png
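Two more checks that assume the DaemonSet deployed cleanly: rollout status confirms a pod is running on every node, and the container logs surface delivery problems (for example, repeated retries against the XpoLog endpoint):

```shell
kubectl -n logging rollout status ds/fluent-bit

# Tails one pod of the DaemonSet; use -l app.kubernetes.io/name=fluent-bit to see all pods
kubectl -n logging logs ds/fluent-bit --tail=20
```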

Direct shell into the PVC/EFS storage:

kubectl -n logging run pvc-shell --rm -it \
  --image=alpine:3.20 \
  --overrides='
{
  "spec": {
    "containers": [{
      "name": "sh",
      "image": "alpine:3.20",
      "command": ["sh"],
      "stdin": true,
      "tty": true,
      "volumeMounts": [{
        "mountPath": "/mnt/storage",
        "name": "data"
      }]
    }],
    "volumes": [{
      "name": "data",
      "persistentVolumeClaim": {
        "claimName": "fluentbit-state-pvc"
      }
    }]
  }
}'

 

image-20250828-080509.png


ConfigMap explanation

kind: ConfigMap              # Resource type: holds configuration data
metadata:
  name: fluent-bit-config    # Name of the ConfigMap

In Kubernetes manifests, every object you create follows a common structure. These two fields are part of that structure:

  • kind: ConfigMap

    • The kind field tells Kubernetes what type of object you’re defining.

    • A ConfigMap is an API object for storing non-confidential configuration data as key-value pairs, which pods can consume as environment variables, command-line arguments, or mounted files.
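As a minimal illustration of the "mounted files" option (the pod below is hypothetical, but the ConfigMap name matches the one from this guide), each key in the ConfigMap appears as a file under the mount path — the same mechanism the Fluent Bit DaemonSet uses to mount fluent-bit-config at /fluent-bit/etc:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-demo        # hypothetical pod, for illustration only
  namespace: logging
spec:
  containers:
    - name: app
      image: busybox:1.36
      command: ["sh", "-c", "ls /etc/demo && sleep 3600"]
      volumeMounts:
        - name: cfg
          mountPath: /etc/demo       # fluent-bit.conf, inputs.conf, ... appear here as files
  volumes:
    - name: cfg
      configMap:
        name: fluent-bit-config      # the ConfigMap defined earlier in this guide
```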