Deploying Fluent Bit on AWS EKS with Persistent EFS Storage
A comprehensive, step-by-step guide to configuring a resilient and stateful logging pipeline, ensuring data integrity and preventing log loss
What is Amazon EKS?
Amazon Elastic Kubernetes Service (EKS) is AWS's managed service that makes it easy to run, manage, and scale containerized applications using Kubernetes on the AWS cloud.
Think of it like this: Kubernetes is a powerful but complex engine for orchestrating containers. Instead of building, securing, and maintaining the most complicated parts of that engine yourself, EKS provides a fully managed, highly available, and secure Kubernetes control plane as a service.
Key Benefits of EKS
Managed Control Plane: This is the core advantage. AWS automatically manages the availability, scalability, and patching of the Kubernetes control plane components (like etcd and the API server). This frees you from significant operational overhead and lets you focus on your applications.
High Availability: The EKS control plane is distributed across multiple AWS Availability Zones (AZs), eliminating any single point of failure and ensuring your cluster's brain is always running.
Seamless AWS Integration: EKS is deeply integrated with the AWS ecosystem. It works effortlessly with services like:
IAM for secure authentication and authorization.
VPC for isolated and secure networking.
Elastic Load Balancers (ALB/NLB) for exposing your services.
EFS & EBS for persistent storage solutions.
Pure Kubernetes Experience: EKS runs upstream, certified Kubernetes. This means you get a standard, community-tested experience, and any tools or add-ons that work with Kubernetes will work with EKS.
How It Works: Control Plane vs. Worker Nodes
An EKS cluster is primarily composed of two parts:
The EKS Control Plane: Managed entirely by AWS. You don't see the underlying instances, but you interact with them through the Kubernetes API (e.g., using kubectl).
Worker Nodes: These are the EC2 instances where your application containers (Pods) actually run. You provision and manage these nodes within your VPC and are responsible for them. They register themselves with the control plane to form the complete cluster.
Why EKS Matters for This Guide
EKS provides the robust, production-grade Kubernetes environment where our applications will run and generate logs. We will deploy Fluent Bit as a DaemonSet across all the worker nodes in our EKS cluster. Fluent Bit's task is to reliably collect logs from every application on every node and forward them to a central location. By using EKS, we start with a secure and scalable foundation for our entire logging pipeline.
Create New Cluster
Open your AWS GUI
Search for EKS:
Press “Create cluster”
The EKS Cluster IAM Role: Your Cluster's AWS Passport
When you create an EKS cluster, you're asked to select a "Cluster IAM role." This is one of the most important configuration steps as it defines the permissions your cluster has to interact with other AWS services.
In simple terms, this IAM role is what the Kubernetes control plane uses to make AWS API calls on your behalf.
Why is it Necessary?
Think of the EKS control plane as a manager hired by you but living in a separate AWS-managed building. This manager (the control plane) needs a set of keys (the IAM role) to access and manage resources within your building (your AWS account and VPC).
Without this role, the control plane would be isolated and unable to perform essential tasks, such as:
Networking: Creating and managing Elastic Network Interfaces (ENIs) in your VPC subnets for pod networking.
Load Balancing: Provisioning and configuring Application or Network Load Balancers when you create a Kubernetes Service of type LoadBalancer.
Storage: Interacting with services like EBS when creating PersistentVolumes.
This role acts as a secure "passport," granting the EKS service just enough permission to manage these resources without giving it full access to your entire AWS account.
What Permissions Does It Need?
You don't have to figure out the permissions yourself. AWS provides a managed policy specifically for this purpose called AmazonEKSClusterPolicy. This policy contains all the necessary permissions (ec2:CreateNetworkInterface, elasticloadbalancing:RegisterTargets, etc.) that the control plane requires to function correctly.
When you create the cluster using the AWS Management Console, it will often guide you to create a new role and will automatically attach this policy for you.
Key Takeaway
The Cluster IAM Role is the security link between the AWS-managed Kubernetes control plane and the resources running in your own AWS account. You are granting the EKS service explicit permission to manage cluster-related resources on your behalf.
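If you prefer the terminal, the console steps that follow can also be done with the AWS CLI. A sketch, assuming the role name `my-eks-cluster-role` (just an example); the trust policy is the standard one allowing `eks.amazonaws.com` to assume the role:

```shell
ROLE_NAME=my-eks-cluster-role   # example name, pick your own

# Trust policy: only the EKS service may assume this role.
cat > eks-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role, then attach the AWS-managed cluster policy.
aws iam create-role \
  --role-name "$ROLE_NAME" \
  --assume-role-policy-document file://eks-trust.json

aws iam attach-role-policy \
  --role-name "$ROLE_NAME" \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
```

Either path (console or CLI) produces the same role; the console simply generates this trust policy for you.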
Creating the EKS Cluster IAM Role
You will create a new IAM role that the EKS service can assume. The AWS console simplifies this process by pre-selecting the correct trust relationship and permissions policy for you.
Here are the step-by-step instructions:
Navigate to the IAM Console (or press “Create recommended role”) in your AWS account.
On the left-hand navigation pane, click on Roles, then click the "Create role" button.
Step 1: Select Trusted Entity
For "Trusted entity type," choose AWS service.
Under "Use case," select EKS from the dropdown menu.
This will reveal another option below. Choose EKS - Cluster.
Click Next.
Step 2: Add Permissions
The console will automatically select the required permissions policy: AmazonEKSClusterPolicy.
You don't need to do anything else on this screen. Simply click Next.
Step 3: Name, Review, and Create
Role name: Give your role a descriptive name that you will easily recognize. For example: my-eks-cluster-role or EKSClusterRoleForGuide.
Review the details to ensure the trusted entity is eks.amazonaws.com and the attached policy is AmazonEKSClusterPolicy.
Click the "Create role" button at the bottom.
The Role to Select
Now, when you are creating your EKS cluster and you get to the "Cluster IAM role" dropdown menu, you will select the role you just created (e.g., my-eks-cluster-role).
This explicitly grants the EKS control plane the permissions defined in the AmazonEKSClusterPolicy to manage resources within your account.
After the Cluster Role, the next critical component is the Node IAM Role.
The Node IAM Role: The Worker's Toolkit
While the Cluster Role is for the EKS control plane (the manager), the Node IAM Role is attached to each of your EC2 worker nodes. This role grants the necessary permissions for the nodes themselves to function correctly within the cluster and interact with other AWS services.
Think of this as the toolkit you give to each individual worker on your team. Each worker node needs this toolkit to perform its core job.
Why is it Necessary?
Your worker nodes are not just passive machines; they are active participants in the Kubernetes cluster. The kubelet (the primary "node agent") running on each node, and the pods scheduled on them, need permissions to:
Join the Cluster: A node needs permission to communicate with the EKS control plane to register itself and receive workloads.
Pull Container Images: To run your applications, the nodes must have permission to pull container images from Amazon ECR (Elastic Container Registry).
Manage Networking: The AWS VPC CNI plugin, which handles pod networking, runs on each node and needs permissions to manage network interfaces.
Access Other AWS Services: If a pod on a node needs to access an S3 bucket or a DynamoDB table, it will (by default) inherit permissions from this role.
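As with the cluster role, the console steps below can be scripted with the AWS CLI. A sketch, assuming the example role name `my-eks-node-role`; this time the trust policy names `ec2.amazonaws.com`, because worker nodes are EC2 instances:

```shell
ROLE_NAME=my-eks-node-role   # example name

# Trust policy: EC2 instances (the worker nodes) assume this role.
cat > ec2-trust.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name "$ROLE_NAME" \
  --assume-role-policy-document file://ec2-trust.json

# Attach all three managed policies an EKS worker node requires.
for POLICY in \
  arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy \
  arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy \
  arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly; do
  aws iam attach-role-policy --role-name "$ROLE_NAME" --policy-arn "$POLICY"
done
```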
How to Create the Node IAM Role
The creation process is similar to the Cluster Role, but with a different trusted entity and different policies.
Navigate to the IAM Console(or press “Create recommended role”), go to Roles, and click "Create role".
Step 1: Select Trusted Entity
For "Trusted entity type," choose AWS service.
Under "Use case," select EC2. This is because your worker nodes are EC2 instances.
Click Next.
Step 2: Add Permissions
In the search bar, find and attach the following three AWS managed policies. You must attach all of them:
AmazonEKSWorkerNodePolicy: provides the worker nodes with the minimum permissions needed to communicate with the EKS control plane.
AmazonEKS_CNI_Policy: lets the Amazon VPC CNI plugin (the aws-node DaemonSet) give Kubernetes pods in EKS IP addresses from your VPC subnets so they can communicate with other resources (pods, services, and AWS infrastructure).
AmazonEC2ContainerRegistryReadOnly: allows nodes to pull container images from ECR.
Click Next.
Step 3: Name, Review, and Create
Role name: Give it a clear name, such as my-eks-node-role.
Review the configuration to ensure the trusted entity is ec2.amazonaws.com and the three required policies are attached.
Click "Create role".
Where This Role is Used
You will select this role (my-eks-node-role) later in the EKS setup process, specifically when you create a Node Group for your cluster. Assigning this role to the node group ensures that every EC2 instance launched within it has the correct permissions to operate as a functional EKS worker node.
Choosing Your Cluster's Network: The VPC
At this step, you're defining the private network space where your entire EKS cluster will live. VPC stands for Virtual Private Cloud, and you can think of it as your own logically isolated, fenced-off area within the vast AWS cloud. All your cluster's resources—the worker nodes, the pods, and the internal load balancers—will be launched inside this VPC.
Key Requirements for an EKS VPC
For an EKS cluster to be resilient and function correctly, the VPC you select must meet a few critical requirements:
Multiple Subnets: The VPC must have at least two subnets.
Multiple Availability Zones (AZs): Crucially, these subnets must be in different Availability Zones. An AZ is a distinct data center within an AWS region. Spanning your cluster across multiple AZs ensures high availability, so if one data center has an issue, your cluster can continue running in another.
Public and Private Subnets: A production-ready setup includes both public and private subnets:
Public Subnets: These are for internet-facing resources, primarily your public-facing load balancers. They have a direct route to an AWS Internet Gateway.
Private Subnets: This is where your worker nodes should live for security. They don't have public IP addresses and can access the internet securely through a NAT Gateway that resides in a public subnet.
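If you are unsure whether an existing subnet is public or private, its route table tells you: a route whose target is an Internet Gateway (`igw-…`) makes the subnet public. A sketch, using a subnet ID that appears later in this guide (substitute your own; a subnet with no explicit route-table association uses the VPC's main route table, which this query won't see):

```shell
SUBNET_ID=subnet-00ab400cacdce9d40   # example from this guide

# List the gateway targets of the subnet's associated route table and
# look for an Internet Gateway route.
aws ec2 describe-route-tables \
  --filters Name=association.subnet-id,Values=$SUBNET_ID \
  --query "RouteTables[].Routes[].GatewayId" \
  --output text | grep -q 'igw-' \
  && echo "$SUBNET_ID is public" \
  || echo "$SUBNET_ID is private (or uses the main route table)"
```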
Your Options
You have two main choices on the EKS creation screen:
Use an Existing VPC: If you already have a VPC configured that meets the requirements above, you can select it. This is common in established AWS environments.
Let AWS Create a New VPC: For this guide, and for anyone new to EKS, this is the highly recommended option. AWS provides a CloudFormation template that automatically creates a new VPC perfectly configured for EKS. It will set up the public and private subnets across multiple AZs, create the necessary route tables, and provision an Internet Gateway and NAT Gateways for you.
For this guide, select the default VPC or follow the prompts to have AWS create a new VPC for you. This will prevent common networking issues and ensure your cluster is built on a solid, secure, and highly available foundation.
Subnets:
Leave all the subnets selected just as they are.
For EKS to function correctly, it needs to be aware of all the available subnets in its VPC. Deselecting any of them could lead to issues with networking, load balancing, or node placement. Simply accept the default selection and proceed to the next step.
Press “Create” and wait for the cluster status to become “Active”.
Connect your terminal to the cloud and new cluster:
Open your terminal:
aws configure
You will be prompted for your AWS Access Key ID and AWS Secret Access Key. To find or create these keys in the AWS Management Console:
Sign in to the AWS Console.
Navigate to IAM (under “Security, Identity, & Compliance”).
In the left sidebar, click Users, then select your user name.
Go to the Security credentials tab.
Under Access keys, you’ll see your existing Access Key IDs (but you can only view the ID, not the secret).
If you need a new key, click Create access key, give it a name/description, and you’ll be shown both the Access Key ID and the Secret Access Key one time.
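Before moving on, it is worth verifying that the credentials you just configured actually work:

```shell
# Whoami: confirms the keys are valid and shows the account and ARN
# the CLI will act as.
aws sts get-caller-identity

# Confirm this identity can reach EKS in your region:
aws eks list-clusters --region eu-north-1
```

If either command fails with an authentication error, re-run `aws configure` and check the keys.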
Connect kubectl to EKS:
aws eks update-kubeconfig --region <cluster_region> --name <cluster_name>
For example:
aws eks update-kubeconfig --region eu-north-1 --name andrey-test-fb
Optional: Set environment variables:
export CLUSTER="andrey-test-fb"
export REGION="eu-north-1"
export ACCOUNT_ID="655536767854"
Deploy test App (Apache)
nano apache-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
        - name: apache
          image: httpd:2.4
          ports:
            - containerPort: 80
kubectl apply -f apache-deployment.yaml
Deploy XpoLog
nano xpolog-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xpolog
spec:
  replicas: 1
  selector:
    matchLabels:
      app: xpolog
  template:
    metadata:
      labels:
        app: xpolog
    spec:
      containers:
        - name: xpolog
          image: 1200km/xplg:fixed
          ports:
            - containerPort: 30303
---
apiVersion: v1
kind: Service
metadata:
  name: xpolog-service
spec:
  type: LoadBalancer
  ports:
    - port: 30303
      targetPort: 30303
  selector:
    app: xpolog
kubectl apply -f xpolog-deployment.yaml
Expose the XpoLog Receiver and Configure Cluster DNS
This YAML creates an internal AWS load balancer on port 30303 fronting Pods labeled app: xpolog.
Inside the cluster: use xpolog-service.default.svc.cluster.local:30303.
Inside the VPC (outside Kubernetes): create a Route 53 private A-alias record (e.g., logs.internal.company) pointing to the load balancer hostname, then use that name from EC2, Lambda, etc.
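The cluster-internal name follows a fixed pattern, <SERVICE_NAME>.<NAMESPACE>.svc.cluster.local, so it is purely mechanical to construct:

```shell
# Build the in-cluster DNS name for any Service.
SERVICE=xpolog-service
NAMESPACE=default
PORT=30303

echo "${SERVICE}.${NAMESPACE}.svc.cluster.local:${PORT}"
# -> xpolog-service.default.svc.cluster.local:30303
```

This is the value we will later put in Fluent Bit's output Host field, instead of a hardcoded IP.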
nano xpolog-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: xpolog-service
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: xpolog
  ports:
    - name: http
      port: 30303
      targetPort: 30303
      protocol: TCP
kubectl apply -f xpolog-service.yaml
Get the pods list:
kubectl get pods
Optional: Create the CoreDNS add-on if cluster DNS is not working:
eksctl create addon --name coredns --cluster andrey-test-fb --region eu-north-1 --force
Temporarily Port-Forward XpoLog to Localhost
kubectl port-forward pod/xpolog-66cc88bc4-tc4wl 30303:30303
(Replace the pod name with the name of your actual XpoLog pod from kubectl get pods.)
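With the port-forward running, you can smoke-test the listener from a second terminal. The URI and token format below mirrors the http output configured later in this guide; `<YOUR_LISTENER_TOKEN>` is a placeholder for your own XpoLog HTTP listener token, and the exact response body depends on XpoLog:

```shell
# Send one test event through the port-forward to the XpoLog listener.
curl -s -X POST \
  "http://localhost:30303/logeye/api/logger.jsp?token=<YOUR_LISTENER_TOKEN>" \
  -H "Content-Type: application/json" \
  -d '{"message": "port-forward smoke test"}'
```

If the connection is refused, check that the port-forward is still active and that the pod is Running.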
Create Namespace
kubectl create namespace logging
RBAC & ServiceAccount for Fluent Bit
nano fluentbit-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
  namespace: logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-read
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluent-bit-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluent-bit-read
subjects:
  - kind: ServiceAccount
    name: fluent-bit
    namespace: logging
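Once this manifest has been applied (next command), `kubectl auth can-i` can confirm the ServiceAccount holds exactly the read-only permissions we granted and nothing more:

```shell
SA=system:serviceaccount:logging:fluent-bit

# These should print "yes":
kubectl auth can-i get pods --as "$SA"
kubectl auth can-i watch namespaces --as "$SA"

# This should print "no" -- the ClusterRole is read-only:
kubectl auth can-i delete pods --as "$SA"
```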
kubectl apply -f fluentbit-rbac.yaml
Stand Up a Shared ReadWriteMany EFS Volume
Next, stand up a shared, ReadWriteMany Amazon EFS volume in EKS and use it for each Fluent Bit pod’s offset DB.
EKS: create a nodegroup on a non-eksctl-managed cluster
1. Prerequisites
Installed tools: awscli, kubectl, eksctl
An IAM user/role with EKS + EC2 + CloudFormation rights
An existing EKS cluster: andrey-test-fb in eu-north-1
Subnet IDs you plan to use:
subnet-00ab400cacdce9d40 subnet-0a2f471dec4cb6d70 subnet-0be575b8b3138c576
2. Verify cluster access
aws eks update-kubeconfig --name andrey-test-fb --region eu-north-1
kubectl get svc
If you see the Kubernetes service, the kubectl context is correct.
3. Collect VPC info
You need VPC ID + control-plane security group ID.
# VPC ID (from one of your subnets)
aws ec2 describe-subnets \
--subnet-ids subnet-00ab400cacdce9d40 \
--query "Subnets[0].VpcId" --output text
# Cluster SG
aws eks describe-cluster \
--name andrey-test-fb --region eu-north-1 \
--query "cluster.resourcesVpcConfig.clusterSecurityGroupId" --output text
4. Enable OIDC provider for IRSA (one-time)
eksctl utils associate-iam-oidc-provider \
--cluster andrey-test-fb --region eu-north-1 --approve
5. Create a nodegroup config file
Save as nodegroup.yaml:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: andrey-test-fb
  region: eu-north-1
vpc:
  id: vpc-xxxxxxxx            # from step 3
  securityGroup: sg-xxxxxxx   # from step 3
  subnets:
    private:                  # or public, depending on your design
      eu-north-1a:
        id: subnet-00ab400cacdce9d40
      eu-north-1b:
        id: subnet-0a2f471dec4cb6d70
      eu-north-1c:
        id: subnet-0be575b8b3138c576
managedNodeGroups:
  - name: my-first-nodegroup
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 2
    maxSize: 4
    privateNetworking: true      # set false if using public subnets
    amiFamily: AmazonLinux2023   # recommended
    labels:
      role: worker
6. Create the nodegroup
eksctl create nodegroup -f nodegroup.yaml
This provisions a CloudFormation stack and launches EC2 nodes.
7. Install EKS core add-ons
Even on non-eksctl clusters, you need the three standard add-ons:
eksctl create addon --cluster andrey-test-fb --region eu-north-1 --name vpc-cni --version latest --force
eksctl create addon --cluster andrey-test-fb --region eu-north-1 --name kube-proxy --version latest --force
eksctl create addon --cluster andrey-test-fb --region eu-north-1 --name coredns --version latest --force
8. Verify nodes
kubectl get nodes -o wide
kubectl get pods -n kube-system -o wide
Create the EFS File System
aws efs create-file-system \
--region eu-north-1 \
--creation-token fluentbit-efs-new \
--performance-mode generalPurpose \
--throughput-mode bursting \
--tags Key=Name,Value=fluentbit-efs-new \
--query FileSystemId --output text
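File-system creation is asynchronous, and mount targets can only be created once it reaches the "available" state. A small wait loop (the file system ID mirrors the one used later in this guide; substitute the ID printed by the command above):

```shell
FS_ID=fs-080e942f89b0ba54f   # the FileSystemId printed above

# Poll until the file system leaves the "creating" state.
until [ "$(aws efs describe-file-systems \
             --file-system-id "$FS_ID" --region eu-north-1 \
             --query 'FileSystems[0].LifeCycleState' --output text)" = "available" ]; do
  echo "waiting for $FS_ID ..."
  sleep 5
done
echo "$FS_ID is available"
```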
Gather your cluster’s VPC ID and subnets
CLUSTER=andrey-test-fb
REGION=eu-north-1
VPC_ID=$(aws eks describe-cluster \
--name $CLUSTER --region $REGION \
--query "cluster.resourcesVpcConfig.vpcId" \
--output text)
echo "VPC ID: $VPC_ID"
read -r -a SUBNETS <<< "$(
aws ec2 describe-subnets \
--filters Name=vpc-id,Values=$VPC_ID \
--region $REGION \
--query 'Subnets[].SubnetId' \
--output text
)"
echo "Subnets: ${SUBNETS[*]}"
Pick a Security Group (from your node group)
NG=$(aws eks list-nodegroups \
--cluster-name andrey-test-fb \
--region eu-north-1 \
--query "nodegroups[0]" --output text)
SGS=$(aws eks describe-nodegroup \
--cluster-name andrey-test-fb \
--nodegroup-name $NG --region eu-north-1 \
--query "nodegroup.resources.remoteAccess.securityGroupIds[]" \
--output text)
echo "Using SGs: $SGS"
Create mount targets for your new EFS
for SN in subnet-00ab400cacdce9d40 subnet-0a2f471dec4cb6d70 subnet-0be575b8b3138c576; do
aws efs create-mount-target \
--file-system-id fs-080e942f89b0ba54f \
--subnet-id $SN \
--security-groups sg-0b2d8ea6688a2f958 \
--region eu-north-1 \
&& echo "✔ Created mount target in $SN" \
|| echo "⚠ Skip or already exists in $SN"
done
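Mount targets only accept traffic their security group allows, so if pods later hang while mounting the volume, the usual cause is a missing inbound NFS rule. A sketch; the first SG ID mirrors the one used in the loop above, and the node security group is an assumption you must replace with your own:

```shell
SG=sg-0b2d8ea6688a2f958       # SG attached to the EFS mount targets
NODE_SG=sg-xxxxxxxxxxxxxxxxx  # SG of your worker nodes (placeholder)

# Allow the worker nodes to reach EFS over NFS (TCP 2049).
aws ec2 authorize-security-group-ingress \
  --group-id "$SG" \
  --protocol tcp --port 2049 \
  --source-group "$NODE_SG" \
  --region eu-north-1
```

If the rule already exists, the call fails with a duplicate-rule error, which is safe to ignore.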
Create a static NFS PV
nano efs-pv-static.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fluentbit-state-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""   # static binding, no dynamic provisioner
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-080e942f89b0ba54f   # <<< your real EFS FS ID
Apply it:
kubectl apply -f efs-pv-static.yaml
Verify it’s Available:
kubectl get pv fluentbit-state-pv
Create your PVC
nano efs-pvc.yaml
# efs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fluentbit-state-pvc
  namespace: logging
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: ""            # must match PV
  volumeName: fluentbit-state-pv  # bind to your static PV
Apply it:
kubectl apply -f efs-pvc.yaml
kubectl get pvc fluentbit-state-pvc -n logging
Install the EFS CSI driver (eksctl creates the IRSA role for you, fixed policy ARN)
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
eksctl utils associate-iam-oidc-provider \
  --cluster andrey-test-fb --region eu-north-1 --approve
eksctl create iamserviceaccount \
  --cluster andrey-test-fb \
  --region eu-north-1 \
  --namespace kube-system \
  --name efs-csi-controller-sa \
  --role-name eks-efs-csi-controller-andrey-test-fb \
  --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy \
  --approve --override-existing-serviceaccounts
eksctl create addon \
  --cluster andrey-test-fb \
  --region eu-north-1 \
  --name aws-efs-csi-driver \
  --version latest \
  --service-account-role-arn arn:aws:iam::$ACCOUNT_ID:role/eks-efs-csi-controller-andrey-test-fb \
  --force
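Before relying on the driver, confirm its pods are actually running. The `app.kubernetes.io/name` label below is the one the EKS add-on normally applies; adjust it if your installation labels the pods differently:

```shell
# Controller and per-node pods of the EFS CSI driver should be Running.
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-efs-csi-driver

# The CSIDriver object itself should also be registered:
kubectl get csidriver efs.csi.aws.com
```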
Create Fluent Bit ConfigMap
Info Gathering Cheat Sheet
| 🔍 Info Needed | ✅ Purpose | 🧪 Command / Example |
|---|---|---|
| Cluster access | Ensure kubectl points at the right cluster | kubectl config current-context |
| Node count | Know how many DaemonSet pods will run | kubectl get nodes |
| Namespace | Decide where to deploy | kubectl get ns (this guide uses logging) |
| Pod labels | Needed for selectors and the kubernetes filter | kubectl get pods --show-labels |
| Container port | For the Output Host/Port | 30303 (the XpoLog listener) |
| App logs path | For the Fluent Bit input path | Usually /var/log/containers/*.log |
| Log format | To choose the correct parser | Look at the sample logs (CRI format here) |
| RBAC resources/actions | For metadata enrichment (FB) | Usually pods, namespaces: get, list, watch |
| ServiceAccount name | Needed for RBAC bindings | Define in your YAML (fluent-bit) |
| PVC name | Persistent Volume for the offset DB | fluentbit-state-pvc |
| Internal DNS name | Use instead of hardcoded IPs; the “Host” in the Output | <SERVICE_NAME>.<NAMESPACE>.svc.cluster.local |
| DNS test inside a pod | Validate Service DNS works | nslookup xpolog-service.default.svc.cluster.local |
| URI and HTTP listener token | URI field of the http output | Run XpoLog and take the token from the HTTP listener |
Cluster-wide log collection using Fluent Bit on every node
ConfigMap (absolute paths + CRI parser)
nano fluentbit-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush         5
        Daemon        Off
        Log_Level     info
        Parsers_File  /fluent-bit/etc/parsers.conf
        # Persist buffer state (your PVC is mounted at /var/fluent-bit/state);
        # storage.type is set per-input below.
        storage.path      /var/fluent-bit/state/${NODE_NAME}
        storage.checksum  on
        HTTP_Server   On
        HTTP_Listen   0.0.0.0
        HTTP_Port     2020

    @INCLUDE /fluent-bit/etc/inputs.conf
    @INCLUDE /fluent-bit/etc/filters.conf
    @INCLUDE /fluent-bit/etc/outputs.conf
  inputs.conf: |
    [INPUT]
        Name              tail
        Tag               kube.${NODE_NAME}
        Path              /var/log/containers/*.log
        Parser            cri
        Mem_Buf_Limit     10MB
        Skip_Long_Lines   On
        Refresh_Interval  10
        DB                /var/fluent-bit/state/${NODE_NAME}/flb_${NODE_NAME}.db
        storage.type      filesystem
  filters.conf: |
    [FILTER]
        Name             kubernetes
        Match            kube.*
        Kube_Tag_Prefix  kube.var.log.containers.
        Merge_Log        On
        Merge_Log_Key    log
        Keep_Log         Off
  outputs.conf: |
    [OUTPUT]
        Name              http
        Match             *
        Host              xpolog-service.default.svc.cluster.local
        Port              30303
        URI               /logeye/api/logger.jsp?token=99037382-4e34-464c-8442-ebcc7687ce15
        Json_date_key     time
        Json_date_format  iso8601
        Header            X-Xpolog-Sender local-k8s
        Retry_Limit       5
  # Parser compatible with containerd (CRI) logs
  parsers.conf: |
    [PARSER]
        Name        cri
        Format      regex
        Regex       ^(?<time>[^ ]+) (?<stream>stdout|stderr) (?<logtag>[FP]) (?<log>.*)$
        Time_Key    time
        Time_Format %Y-%m-%dT%H:%M:%S.%L%z
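You can sanity-check the shape of the cri parser's regex locally before shipping it to the cluster. Note that `grep -E` does not support Fluent Bit's named capture groups (`(?<name>...)`), so this sketch drops the names but keeps the same structure:

```shell
# A sample containerd (CRI) log line, as found in /var/log/containers/*.log:
LINE='2024-05-01T12:00:00.000000000+00:00 stdout F hello from apache'

# Same pattern as the cri parser, minus the named capture groups.
echo "$LINE" | grep -Eq '^[^ ]+ (stdout|stderr) [FP] .*$' \
  && echo "matches CRI format" \
  || echo "does not match"
# -> matches CRI format
```

If a sample line from your own nodes does not match, your runtime may be emitting a different log format (e.g., Docker JSON), and the parser would need to change accordingly.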
kubectl apply -f fluentbit-config.yaml
DaemonSet (no Docker mount; keep your PVC)
nano fluentbit-daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
  labels:
    app.kubernetes.io/name: fluent-bit
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: fluent-bit
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app.kubernetes.io/name: fluent-bit
    spec:
      serviceAccountName: fluent-bit
      dnsPolicy: ClusterFirst
      tolerations:
        - operator: Exists
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:2.2.2
          imagePullPolicy: IfNotPresent
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          resources:
            requests:
              cpu: "50m"
              memory: "100Mi"
            limits:
              cpu: "200m"
              memory: "256Mi"
          ports:
            - name: http
              containerPort: 2020
          livenessProbe:
            httpGet:
              path: /api/v1/health
              port: 2020
            initialDelaySeconds: 15
            periodSeconds: 60
          volumeMounts:
            - name: config
              mountPath: /fluent-bit/etc
            - name: varlog
              mountPath: /var/log
              readOnly: true
            - name: state
              mountPath: /var/fluent-bit/state
      volumes:
        - name: config
          configMap:
            name: fluent-bit-config
        - name: varlog
          hostPath:
            path: /var/log
            type: Directory
        - name: state
          persistentVolumeClaim:
            claimName: fluentbit-state-pvc
kubectl apply -f fluentbit-daemonset.yaml
Check:
kubectl -n logging get pods -o wide
Direct shell into the PVC/EFS storage:
kubectl -n logging run pvc-shell --rm -it \
--image=alpine:3.20 \
--overrides='
{
"spec": {
"containers": [{
"name": "sh",
"image": "alpine:3.20",
"command": ["sh"],
"stdin": true,
"tty": true,
"volumeMounts": [{
"mountPath": "/mnt/storage",
"name": "data"
}]
}],
"volumes": [{
"name": "data",
"persistentVolumeClaim": {
"claimName": "fluentbit-state-pvc"
}
}]
}
}'
ConfigMap explanation
kind: ConfigMap              # Resource type: holds configuration data
metadata:
  name: fluent-bit-config    # Name of the ConfigMap
In Kubernetes manifests, every object you create follows a common structure. These two fields are part of that structure:
kind: ConfigMap
The kind field tells Kubernetes what type of object you’re defining. A ConfigMap is an API object for storing non‑confidential configuration data as key‑value pairs, which pods can consume as environment variables, command‑line arguments, or mounted files.