kubectl is a command-line tool (CLI) for Kubernetes, mainly used to manage Kubernetes clusters, deploy applications, and view and manage resources across clusters. When we want to create resources, we often reach for kubectl create or kubectl apply. Do these commands really do what their names literally say (create/apply)? In this article, we will explore this question.
Imperative vs. Declarative
When building Kubernetes resources, there are essentially two strategies. Understanding the difference between them is critical to implementing GitOps or IaC later on, so it is worth understanding thoroughly.
- Imperative
When creating multiple resources for a system, you build them one by one through a sequence of commands. Since resources are created and started in order, you must understand precisely what the dependencies are in the resource creation process. You need to know both “what” to build and “how” to build it, just as if we were building the resources “by hand”, except that this sequence of commands can be written as a shell script to run automatically.
- Declarative
When you create multiple resources for a system, you can declare a resource definition file (YAML/JSON), hand that file directly to the platform or cluster, and let the cluster or platform decide how to deploy the resources; the deployment order is analyzed and determined by the system itself.
So you just need to know what you want, and the system will do the rest for you.
Both strategies can be used to deploy services. After understanding the differences, which one do you prefer?
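To make the contrast concrete, here is a minimal sketch of the two styles. The commands are only illustrative and are not taken from the article’s later examples; nginx.yaml is assumed to contain the desired Pod definition.

```sh
# Imperative: tell the cluster exactly what to do, step by step
kubectl run nginx --image=nginx          # create a Pod
kubectl expose pod nginx --port=80       # then create a Service for it

# Declarative: describe the desired state in a file and let the cluster reconcile it
kubectl apply -f nginx.yaml
```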
In fact, the Imperative approach is the most common one in practice. Why? Because when people build resources, they often lack resource planning, or they change whatever comes to mind after the resources are built, and the system must follow the IT staff’s “orders” to change its state. For example, if we need a VM, we may create the VM manually, then install the software, and then configure the system. This step-by-step behavior is the Imperative deployment strategy.
But doesn’t the “declarative” approach sound great? I just say what I want, much like writing a program and then compiling it; in theory, if it works now, it will work next time. All you need to do is define the resource specification and leave the rest to the system! The ideal is indeed perfect, but the reality is hard, because there is a certain learning curve to deploying resources in a “declarative” way, and sometimes you need to learn a “resource definition language” before you can even start defining them.
Can you only choose one of the two strategies? No. In practice, I often see a mix of the two approaches. You can define the basic service in a Declarative way, build the resources, and then fine-tune them in an Imperative way. For example, when computing resources are insufficient, we may temporarily “order” the cluster to increase the number of replicas, and this manual adjustment of the cluster by “order” is Imperative behavior. Or, if the service is behaving abnormally, we may “order” the cluster to kill a Pod immediately so the service restarts, which is also Imperative.
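As a sketch of that kind of Imperative fine-tuning (the deployment and Pod names below are hypothetical):

```sh
# Temporarily "order" the cluster to scale out
kubectl scale deployment nginx --replicas=5

# "Order" the cluster to kill a misbehaving Pod so its controller recreates it
kubectl delete pod nginx-6d4cf56db6-abcde
```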
However, although this hybrid strategy is feasible and the behavior is reasonable, it has a serious drawback: the configuration drift problem.
Configuration Drift
We want to manage services in a Declarative way; that is, we don’t want to spend our effort and time on “what to do” and “how to do it”. Clearly defining what we “want” helps us clarify the structure and think more clearly, without getting lost in the complexity of the details, so that we have a better grasp of the infrastructure architecture. This is also the goal of IaC (Infrastructure as Code): maintaining the “infrastructure” the same way we maintain “code”, which significantly reduces the cognitive load of managing it.
Configuration drift usually means that your definition of the service is different from its current state, which makes it easy to lose control of the service and hard to rebuild exactly the same environment. When your cluster’s current state is very different from its definition, you can no longer manage resources in a Declarative way, which means you lose control of your infrastructure and the IaC mechanism will fail.
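One way to check whether the live state has drifted from your definition is kubectl diff; a minimal sketch, assuming the nginx.yaml file used in the examples further below:

```sh
# Show how the live object in the cluster differs from the local definition file
kubectl diff -f nginx.yaml
```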
kubectl create and kubectl apply
The kubectl create command that we often use is actually the Imperative approach, because you explicitly tell Kubernetes to “create” a resource; it does not record the configuration you used to create the resource, it really just creates it for you.

The kubectl apply command is actually the Declarative approach, because you don’t have to tell Kubernetes how to create the resource, and you don’t have to care whether the resource already exists in the cluster; it simply makes the cluster match what you want. If the resource does not exist yet, Kubernetes creates it and also stores a snapshot of the applied definition, recorded in the resource’s .metadata.annotations.kubectl.kubernetes.io/last-applied-configuration. When you update the YAML file in the future, it compares the new version with the most recently applied version, calculates the differences, and applies only those differences as the update.
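You can inspect that stored snapshot directly; a minimal sketch, assuming a Pod named nginx as in the examples below:

```sh
# Print the last-applied-configuration snapshot that kubectl apply stored on the resource
kubectl apply view-last-applied pod nginx
```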
I will use two examples to illustrate the slight differences between the two commands.
- The case of mixing kubectl create and kubectl apply
Create the resource (nginx.yaml).
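The original definition file and create command are not reproduced here; judging from the last-applied snapshot shown further down, nginx.yaml presumably looks roughly like this and is created without --save-config:

```yaml
# nginx.yaml (reconstructed from the last-applied snapshot below; an approximation)
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          name: http
      resources:
        requests:
          cpu: 100m
          memory: 200Mi
        limits:
          cpu: 200m
          memory: 500Mi
  restartPolicy: Always
```

```sh
kubectl create -f nginx.yaml
```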
The contents of the metadata for this new resource are as follows:

```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: 0234111f7347e3ebdf5737dc2c81ccd376b12e552047b1d8899f692dd5f5fee5
    cni.projectcalico.org/podIP: 10.1.254.71/32
    cni.projectcalico.org/podIPs: 10.1.254.71/32
  creationTimestamp: "2022-10-20T16:00:28Z"
  labels:
    app: nginx
  name: nginx
  namespace: default
  resourceVersion: "162118"
  uid: 670a2341-d8fb-43c2-b991-768a3781f1b4
```
Applying the resource:

```sh
$ kubectl apply -f nginx.yaml
Warning: resource pods/nginx is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
pod/nginx configured
```
A warning message appears here:

```sh
Warning: resource pods/nginx is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
```
We can use the kubectl get pod nginx -o yaml command to see the metadata content of this resource:

```yaml
metadata:
  annotations:
    cni.projectcalico.org/containerID: 6f8ca18ef90deb9974e0675d26d659cc3846125828b21ef0e0dbaeae33b40b64
    cni.projectcalico.org/podIP: 10.1.254.72/32
    cni.projectcalico.org/podIPs: 10.1.254.72/32
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"containers":[{"image":"nginx","name":"nginx","ports":[{"containerPort":80,"name":"http"}],"resources":{"limits":{"cpu":"200m","memory":"500Mi"},"requests":{"cpu":"100m","memory":"200Mi"}}}],"restartPolicy":"Always"}}
  creationTimestamp: "2022-10-20T16:02:47Z"
  labels:
    app: nginx
  name: nginx
  namespace: default
  resourceVersion: "162367"
  uid: 396b1d88-79f2-45a0-9036-d5073ffb8982
```
It does indeed have an additional kubectl.kubernetes.io/last-applied-configuration annotation. Pretty-printed, its content looks like this:

```json
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "annotations": {},
    "labels": { "app": "nginx" },
    "name": "nginx",
    "namespace": "default"
  },
  "spec": {
    "containers": [
      {
        "image": "nginx",
        "name": "nginx",
        "ports": [{ "containerPort": 80, "name": "http" }],
        "resources": {
          "limits": { "cpu": "200m", "memory": "500Mi" },
          "requests": { "cpu": "100m", "memory": "200Mi" }
        }
      }
    ],
    "restartPolicy": "Always"
  }
}
```
To put it bluntly, this data is just a snapshot of the last applied resource definition!
Since kubectl.kubernetes.io/last-applied-configuration has been automatically patched, if you run the same command again, there will be no warning!

Delete the nginx resource.
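The delete command itself is not shown in the original; something like the following would do:

```sh
kubectl delete pod nginx
```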
- Using kubectl create --save-config or kubectl apply

Create resources.
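The command block did not survive here; given the note below, the creation step is presumably one of the following:

```sh
kubectl create --save-config -f nginx.yaml
# or, equivalently for creation:
kubectl apply -f nginx.yaml
```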
The --save-config flag here means that the kubectl.kubernetes.io/last-applied-configuration annotation will be written as well.
Note: When creating resources, kubectl create --save-config -f nginx.yaml and kubectl apply -f nginx.yaml are equivalent, but I am used to the kubectl apply command because it is shorter and easier to type.
If you run the kubectl create command a second time, you will get the following error message.
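The original output is not included here; repeating the create typically fails with an AlreadyExists error along these lines (the exact wording may vary by kubectl version):

```sh
$ kubectl create --save-config -f nginx.yaml
Error from server (AlreadyExists): error when creating "nginx.yaml": pods "nginx" already exists
```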
The more correct approach is to use kubectl apply to update the resource from now on.
Delete the nginx resource.
So should I use kubectl apply?
Actually, you can use kubectl apply to create resources as long as you have the YAML files on hand. After all, anyone using Kubernetes should have their YAML files defined in advance, and if they want to update resources later, they should modify the YAML and then apply the update, so kubectl create should be used rather infrequently. If you do use it, remember to add the --save-config parameter.
Is kubectl create really useless, then? Not really! When we create a Kubernetes resource, many “default values” in the resource are generated automatically by Kubernetes at creation time, and some fields cannot be changed by applying an update with kubectl apply. Simply put, you can use kubectl apply to create resources, but not every field can be updated through kubectl apply, as I’ll illustrate with a short example.
Note: Kubernetes may also modify the resource through Admission Controllers during creation, so the final resource may not be the same as your original YAML definition file.
- Create Resources

```sh
kubectl create --save-config -f nginx.yaml
```
- Back up the resource to nginx-dump.yaml (with the full resource definition)

```sh
kubectl get pod nginx -o yaml > nginx-dump.yaml
```
- Rebuild Resources
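The original commands for this step are not shown; a sketch of what the rebuild presumably involves, namely deleting the Pod and creating it again from the original definition:

```sh
kubectl delete pod nginx
kubectl create --save-config -f nginx.yaml
```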
- Update Resources

```sh
kubectl apply -f nginx-dump.yaml
```
An error occurs at this step because kubectl apply cannot be used to update all fields.
Having read this far, you should now clearly know when to use kubectl create and when to use kubectl apply, and you shouldn’t misuse them in the future! 👍