Background
When maintaining a k8s cluster, a common requirement is to persist logs: on the one hand for future traceability, and on the other hand to aggregate the logs from all replicas of the same service.
For my current needs, I only need to persist the logs and do not need any additional analysis, so I did not deploy a service like ElasticSearch to index them.
Implementation
The implementation mainly follows the official repository: https://github.com/fluent/fluentd-kubernetes-daemonset. It packages some common plugins into a Docker image and provides default settings, such as collecting k8s and pod logs. To meet my requirements, I want:
- a fluentd instance on each node to collect logs and forward them to a separate fluentd on the log server
- the fluentd on the log server to save the received logs to files
Since the log server is not managed by k8s, I installed td-agent manually as described on the official website.
Then, edit the configuration file /etc/td-agent/td-agent.conf:
```
<source>
  @type forward
  @id input_forward
  bind x.x.x.x
</source>

<match **>
  @type file
  path /var/log/fluentd/k8s
  compress gzip
  <buffer>
    timekey 1d
    timekey_use_utc true
    timekey_wait 10m
  </buffer>
</match>
```
This sets the input to listen for the fluentd forward protocol, and the output to write received logs to files, with a buffer that rotates daily (timekey 1d) and gzip compression. If needed, you can also enable authentication on the forward input.
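As a sketch of that authentication option, the forward input supports shared-key authentication via a security section; the hostname and key below are placeholders, and the sending side would need a matching configuration:

```
<source>
  @type forward
  @id input_forward
  bind x.x.x.x
  <security>
    # both values are placeholders; shared_key must match the sending side
    self_hostname log-server
    shared_key change_me
  </security>
</source>
```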
Then, starting from https://github.com/fluent/fluentd-kubernetes-daemonset/blob/master/fluentd-daemonset-forward.yaml, I made some changes and ended up with the following configuration.
```yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
  namespace: kube-system
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - namespaces
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: fluentd
  namespace: kube-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
    version: v1
spec:
  selector:
    matchLabels:
      k8s-app: fluentd-logging
      version: v1
  template:
    metadata:
      labels:
        k8s-app: fluentd-logging
        version: v1
    spec:
      serviceAccount: fluentd
      serviceAccountName: fluentd
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-forward
        env:
        - name: FLUENT_FOWARD_HOST
          value: "x.x.x.x"
        - name: FLUENT_FOWARD_PORT
          value: "24224"
        - name: FLUENTD_SYSTEMD_CONF
          value: "disable"
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: config-volume
          mountPath: /fluentd/etc/tail_container_parse.conf
          subPath: tail_container_parse.conf
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: config-volume
        configMap:
          name: fluentd-config
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: kube-system
data:
  tail_container_parse.conf: |-
    <parse>
      @type cri
    </parse>
```
There are a few differences from the original version:
- k8s has RBAC enabled, so the corresponding ServiceAccount, ClusterRole, and ClusterRoleBinding are needed; I just copied them from the variants in the repository that include RBAC configuration.
- SYSTEMD log capture is disabled, because I am using k3s rather than kubeadm, so there is naturally no systemd service for kubelet to collect.
- The parsing of container logs is overridden, because the container runtime's log format (CRI) differs from the default (Docker's json-file format); this is also mentioned in the README of the repository.
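The difference between the two log formats can be illustrated with a short Python sketch (the sample lines below are invented, but follow Docker's json-file format and the CRI format respectively):

```python
import json

# Docker's json-file log driver wraps every line in a JSON object:
docker_line = '{"log":"hello\\n","stream":"stdout","time":"2021-01-01T00:00:00.000000000Z"}'
record = json.loads(docker_line)
print(record["log"])  # the actual message

# containerd / CRI-O (k3s uses containerd) write plain text instead:
#   <timestamp> <stream> <P|F> <message>
cri_line = '2021-01-01T00:00:00.000000000Z stdout F hello'
timestamp, stream, tag, message = cri_line.split(" ", 3)
print(message)  # the actual message
```

Since the default tail configuration assumes the JSON format, parsing CRI logs requires overriding the parser with @type cri, which is what the ConfigMap above does.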
Then just deploy it to k8s. To keep log timestamps accurate, it is recommended to keep every node's clock synchronized with NTP.