1. What is K8S?
2. What is the difference between container and host deployment applications?
3. What is the composition of the K8S architecture?
4. The health monitoring mechanism of Kubernetes for pod resource objects
5. How to control the rolling update process?
6. What is the image download policy?
7. What are the states of images?
8. What is the restart policy of the pod?
9. The command to roll back a deployed application version in K8S
10. What are labels and label selectors?
11. What are the commonly used label classifications?
12. How to view labels?
13. Commands to add, modify, and delete labels
14. Features of the DaemonSet resource object
15. What are the states of the pod life cycle?
16. What is the process of creating a pod?
17. What is the process of deleting a pod?
18. What is the service of K8S?
19. How does K8S perform service registration?
20. What are the ways to persist K8S data?
1. What is K8S?
Kubernetes is an open-source system for automatically deploying, scaling, and managing containerized applications. Its primary function in production is container orchestration.
There are many introductions to K8S on the Internet; you can phrase this according to your own understanding.
2. What is the difference between container and host deployment applications?
The central ideas of containers are second-level startup and "encapsulate once, run everywhere" — effects that host-deployed applications cannot achieve. However, containers require more attention to data persistence. In addition, container deployment isolates services from one another so they do not affect each other, which is another core concept of containers.
3. What is the composition of K8S architecture?
The master node is mainly used to expose the API, schedule deployments, and manage nodes.
Compute nodes run a container runtime environment, usually Docker (rkt is a similar alternative), together with a K8s agent (kubelet) that communicates with the master. Compute nodes also run additional components for logging, node monitoring, service discovery, and so on. Compute nodes are the nodes that do the actual work in a K8s cluster.
Master node:

- Kubectl: the client command-line tool, serving as the operation entry point for the entire K8s cluster;
- API Server: plays the role of a "bridge" in the K8s architecture and is the only entry point for resource operations; it provides authentication, authorization, access control, and API registration and discovery. All communication between clients and the K8s cluster, and among the internal components of K8s, must go through the API Server;
- Controller-manager: responsible for maintaining the state of the cluster, such as failure detection, automatic scaling, and rolling updates;
- Scheduler: responsible for resource scheduling, assigning pods to the appropriate nodes according to the predefined scheduling policy;
- etcd: serves as the data center, storing the state of the entire cluster.

Node (worker) node:

- Kubelet: responsible for maintaining the life cycle of containers as well as managing volumes and networking; it generally runs on every node as the node's agent. When the Scheduler decides to run a pod on a node, it sends the pod's specifics (image, volumes, etc.) to that node's kubelet, which creates and runs the containers accordingly and reports the running status back to the master. (Self-healing: if a container on the node goes down, kubelet tries to restart it; if the restart fails, it kills the pod and then recreates the container.)
- Kube-proxy: a service logically represents multiple backend pods; kube-proxy is responsible for providing service discovery and load balancing inside the cluster for the service (when external clients access the services provided by pods through a service, the requests received by the service are forwarded to the pods by kube-proxy);
- container-runtime: the software responsible for running and managing containers, such as Docker;
- pods: the smallest unit in a K8S cluster. Each pod can run one or more containers. If a pod contains two containers, the containers' USR (user), MNT (mount point), and PID (process ID) namespaces are isolated from each other, while UTS (hostname and domain name), IPC (message queue), and NET (network stack) namespaces are shared.
4. The health monitoring mechanism of Kubernetes for pod resource objects
Kubernetes provides three types of probes to perform health monitoring of pods:
1) livenessProbe
If the livenessProbe detects that the container is unhealthy, kubelet decides whether to restart it according to its restart policy. The probe's initial state is healthy and remains so until a probe fails. If a container does not define a livenessProbe, kubelet treats the probe's return value as a permanent success.
2) readinessProbe
The readinessProbe determines whether the pod is ready according to user-defined rules. If the probe fails, the controller removes the pod from the endpoint list of the corresponding service and stops scheduling requests to it until the next successful probe. The initial state is failed; once a probe succeeds, the pod is added to the service's endpoint list.
3) startupProbe
A startup check mechanism applied to slow-starting services, to prevent them from being killed by the two probes above before they finish starting up. The same problem can also be solved another way: when defining the two probe mechanisms above, set a longer initial delay.
The probe checks support the following parameter settings:

- periodSeconds: the probe interval, i.e. how often the check runs; the default is 10s;
- timeoutSeconds: the probe timeout; a probe that takes longer than this is counted as failed;
- successThreshold: how many consecutive successful probes are required for the check to be considered healthy; the default is 1;
- initialDelaySeconds: the delay before the first probe, allowing time for the application to start so the health check does not fail before the application is up (a combined snippet follows below).
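As a hedged illustration, here is one probe combining all four parameters; the endpoint and values are assumptions, not from the original article:

livenessProbe:
  httpGet:
    path: /healthz           # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 10    # wait 10s after startup before the first probe
  periodSeconds: 10          # probe every 10s (the default)
  timeoutSeconds: 2          # a probe taking longer than 2s counts as failed
  successThreshold: 1        # one success marks the check healthy (the default)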
The probes support three detection methods:
1) Exec: checks whether the service is normal by executing a command. For example, use the cat command to check whether an important configuration file in the pod exists: if it exists, the pod is healthy; otherwise it is abnormal.
The syntax of the YAML file for the Exec probe mode is as follows:

spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/busybox
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -rf /tmp/healthy; sleep 600
    livenessProbe:              # use the livenessProbe mechanism
      exec:                     # execute the following command
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5    # start probing 5 seconds after the container starts
      periodSeconds: 5          # probe every 5 seconds
In the configuration file above, the probe runs every 5 seconds, starting 5 seconds after the container starts. If the cat command exits with code 0, the container is healthy; a non-zero exit code indicates an abnormal state.
2) httpGet: checks whether the service is normal by sending an HTTP/HTTPS request; a returned status code in the 200-399 range indicates that the container is healthy (note that an HTTP GET probe is similar to the command curl -I).
The syntax of the YAML file for the httpGet probe mode is as follows:
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness
    livenessProbe:              # use the livenessProbe mechanism
      httpGet:                  # use the httpGet method
        scheme: HTTP            # specify the protocol; https is also supported
        path: /healthz          # check whether the healthz page under the web root is reachable
        port: 8080              # the listening port is 8080
      initialDelaySeconds: 3    # start probing 3 seconds after the container starts
      periodSeconds: 3          # probe every 3 seconds
In the configuration file above, the probe sends an HTTP GET request to the container, requesting the healthz file on port 8080. Any status code greater than or equal to 200 and less than 400 indicates success; any other code indicates an abnormal state.
3) tcpSocket: performs a TCP check against the container's IP and port; if a TCP connection can be established, the container is healthy. This is somewhat similar to the httpGet detection mechanism; the tcpSocket health check is suitable for TCP services.
The syntax of the YAML file for the tcpSocket probe mode is as follows:
spec:
  containers:
  - name: goproxy
    image: k8s.gcr.io/goproxy:0.1
    ports:
    - containerPort: 8080
    # Both probe mechanisms are used here; both establish a TCP connection to port 8080 of the container
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
In the YAML configuration above, both types of probes are used. Five seconds after the container starts, kubelet sends the first readinessProbe, which attempts to connect to port 8080 of the container; if the probe succeeds, the pod is ready, and kubelet makes the second connection ten seconds later.
In addition, 15 seconds after the container starts, kubelet sends the first livenessProbe, again trying to connect to port 8080 of the container, and restarts the container if the connection fails.
A probe check has three possible results:

- Success: the container passed the check;
- Failure: the container failed the check;
- Unknown: the check was not performed, so no action is taken (this usually means no probe was defined, which defaults to Success).
5. How to control the rolling update process?
You can view the parameters that can be controlled at the time of the update with the following command:
kubectl explain deploy.spec.strategy.rollingUpdate
- maxSurge: this parameter controls how far the total number of replicas may exceed the desired number of pods during a rolling update. It can be a percentage or an absolute number; the default is 1. For example, if the value is 3, three new pods are started first to replace old ones, and so on;
- maxUnavailable: this parameter controls the number of pods that may be unavailable during the rolling update. It is independent of maxSurge. For example, with ten pods, if at most three of them may be unavailable during the update, set this parameter to 3; the update keeps going as long as the number of unavailable pods is no more than 3. A sketch combining both fields follows below.
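As a hedged illustration, a minimal Deployment strategy block using both fields; the replica count and values are assumptions, not from the original article:

spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3          # up to 3 pods above the desired count during the update
      maxUnavailable: 3    # up to 3 pods may be unavailable during the update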
6. What is the image download policy?
The explanation of the imagePullPolicy field can be viewed with the command "kubectl explain pod.spec.containers".
K8s has three image download policies: Always, Never, and IfNotPresent.

- Always: always pull the image from the specified repository;
- Never: pulling images from repositories is prohibited, which means only local images can be used;
- IfNotPresent: pull from the target repository only if there is no matching image locally.

The default image download policy depends on the tag: when the image tag is latest, the default policy is Always; when the image tag is custom (i.e. not latest), the default policy is IfNotPresent.
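A minimal container spec showing where the field sits; the container name and image are placeholders, not from the article:

spec:
  containers:
  - name: web                # hypothetical container name
    image: nginx:1.25        # custom tag, so the default would be IfNotPresent
    imagePullPolicy: Always  # override the default explicitly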
7. What are the states of images?

- Running: the containers required by the pod have been successfully scheduled to a node and are running;
- Pending: the APIServer has created the pod resource object and stored it in etcd, but the pod has not been scheduled yet, or it is still downloading images from the repository;
- Unknown: the APIServer cannot obtain the pod object's state normally, usually because it cannot communicate with the kubelet of the worker node.
8. What is the restart policy of the pod?
You can view the restart policy of a pod with the command kubectl explain pod.spec (the restartPolicy field).

- Always: restart whenever the pod object terminates; this is the default policy;
- OnFailure: restart only when the pod object terminates with an error;
- Never: never restart the pod.

9. The command to roll back a deployed application version in K8S

kubectl apply -f httpd2-deploy1.yaml --record       # run the yaml file and record the version information
kubectl rollout history deployment httpd-devploy1   # view the deployment's history versions
kubectl rollout undo deployment httpd-devploy1 --to-revision=1   # perform the rollback, specifying version 1
10. What are labels and label selectors?
Label: when there are more and more resource objects of the same type, they can be grouped by labels for better management, improving the efficiency of managing resource objects.
Label selector: the query and filter condition for labels. Currently the API supports two kinds of label selectors:

- Set-based, such as in, notin, exists (the matchExpressions field in a yaml file);
- Equality-based, such as =, ==, != (note: == also means equal; the matchLabels field in a yaml file). A selector sketch follows below.
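As a hedged illustration, both selector styles in one Deployment-style selector; the label keys and values are assumptions:

selector:
  matchLabels:               # equality-based selector
    app: web
  matchExpressions:          # set-based selector
  - key: env
    operator: In
    values: ["dev", "qa"]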
11. What are the commonly used label classifications?
Label classifications can be customized, but to keep them clear to others, the following classifications are generally used:

- Version labels: stable, canary (the canary version; you can call it the beta of the beta), beta;
- Environment labels: dev (development), qa (testing), production, op (operation and maintenance);
- Application labels (app): ui, as, pc, sc;
- Architecture labels: frontend, backend, cache;
- Partition labels: customerA (customer A), customerB (customer B);
- Quality control labels (track): daily, weekly.
12. How to view labels?

kubectl get pod --show-labels    # view pods and display their label content
kubectl get pod -L env,tier      # display the values of the resource objects' env and tier labels
kubectl get pod -l env,tier      # show only pods matching the given keys ("-L" shows all pods)

13. Commands to add, modify, and delete labels

# Label operations on pods
kubectl label pod label-pod abc=123               # add the label abc=123 to the pod named label-pod
kubectl label pod label-pod abc=456 --overwrite   # modify the label of the pod named label-pod
kubectl label pod label-pod abc-                  # delete the abc label of the pod named label-pod
kubectl get pod --show-labels

# Label operations on nodes
kubectl label nodes node01 disk=ssd               # add the disk=ssd label to node node01
kubectl label nodes node01 disk=sss --overwrite   # modify the label of node node01
kubectl label nodes node01 disk-                  # delete the disk label of node node01
14. Features of the DaemonSet resource object
A DaemonSet resource object runs a pod on every node in the K8s cluster, and each node can run only one such pod; this is the biggest and only difference between it and the Deployment resource object. A minimal sketch follows below.
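A minimal DaemonSet sketch; the name and image are placeholders, not from the article:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent             # hypothetical name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      containers:
      - name: agent
        image: fluentd:v1.16   # placeholder image; one such pod runs on every node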
15. What are the states of the pod life cycle?

- Pending: the pod's creation has been accepted and it is waiting for kube-scheduler to select a suitable node, or the image is still being prepared;
- Running: all containers in the pod have been created, and at least one container is running, starting, or restarting;
- Succeeded: all containers terminated successfully and will not be started again;
- Failed: all containers in the pod have exited, in a non-zero (unhealthy) state;
- Unknown: the pod's status cannot be read, usually because kube-controller-manager cannot communicate with the pod.
16. What is the process of creating a pod?

- The client submits the pod's configuration information (which can be defined in a yaml file) to kube-apiserver;
- After the apiserver receives the instruction, it notifies controller-manager to create the resource object;
- Controller-manager stores the pod's configuration information in the etcd data center through the apiserver;
- Kube-scheduler detects the pod information and starts scheduling: the pre-selection step first filters out the nodes that do not meet the pod's resource requirements, then the optimization step selects the nodes most suitable for running the pod, and the pod's resource configuration is sent to the kubelet component on the chosen node;
- Kubelet runs the pod according to the resource configuration sent by the scheduler; after the pod runs successfully, its running information is returned to the scheduler, which stores the returned pod health information in the etcd data center. The sketch below shows a manifest entering this flow.
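As a hedged illustration, a minimal pod manifest whose submission kicks off the flow above; the name and image are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod             # hypothetical name
spec:
  containers:
  - name: app
    image: busybox:1.36      # placeholder image
    command: ["sleep", "3600"]

# Submitting it starts the flow above:
#   kubectl apply -f demo-pod.yaml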
17. What is the process of deleting a pod?
Kube-apiserver receives the user's delete instruction. By default there is a 30-second wait for a graceful exit; after 30 seconds the pod is marked as dead and its state becomes Terminating. When kubelet sees the pod marked as Terminating, it starts the work of shutting the pod down.

The shutdown process is as follows:

- The pod is removed from the service's endpoint list;
- If the pod defines a pre-stop hook, it is called inside the pod; the stop hook generally defines how to end the process gracefully (see the sketch after this list);
- The processes are sent the TERM signal (kill -14);
- When the graceful-exit time is exceeded, all processes in the pod are sent the SIGKILL signal (kill -9).
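A hedged sketch of a preStop hook and the grace period in a pod spec; the image and command are illustrative assumptions:

spec:
  terminationGracePeriodSeconds: 30   # the default 30-second graceful-exit wait
  containers:
  - name: web
    image: nginx:1.25                 # placeholder image
    lifecycle:
      preStop:
        exec:
          command: ["/bin/sh", "-c", "nginx -s quit"]   # end the process gracefully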
18. What is the service of K8S?
Every time a pod is restarted or redeployed, its IP address changes, which makes communication between pods, and between pods and external clients, difficult. A service is needed to provide a fixed entry point for pods.
The endpoint list of a service is usually bound to a group of pods with the same configuration, and external requests are distributed to multiple pods through load balancing, as sketched below.
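A minimal Service sketch; the name, label, and ports are assumptions, not from the article:

apiVersion: v1
kind: Service
metadata:
  name: web-svc              # hypothetical name
spec:
  selector:
    app: web                 # binds the service to pods carrying this label
  ports:
  - port: 80                 # the fixed entry point
    targetPort: 8080         # the port the pods listen on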
19. How does K8S perform service registration?
After a pod starts, it loads all the service information of the current environment, so that different pods can communicate with each other by service name.
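For illustration, two hedged ways a pod can reach a service by name, assuming a service named web-svc in the default namespace (the name is a placeholder):

# DNS-based discovery via the cluster DNS:
curl http://web-svc.default.svc.cluster.local

# Environment variables injected when the pod starts (the service must exist before the pod does):
echo $WEB_SVC_SERVICE_HOST $WEB_SVC_SERVICE_PORT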
20. What are the ways to persist K8S data?
emptyDir: the most basic Volume type, a simple empty directory for storing temporary data. If a pod defines an emptyDir Volume, the emptyDir is created when the pod is assigned to a Node, and it exists as long as the pod runs on that Node (a container crash does not cause emptyDir to lose its data). However, if the pod is removed from the node (the pod is deleted or migrated), the emptyDir is also deleted and its data is permanently lost.
hostPath: mounts a directory or file that already exists on the host into the container, similar to the bind mount method in Docker. This data persistence method is used in few scenarios because it increases the coupling between pods and nodes.
PersistentVolume (PV) and PersistentVolumeClaim (PVC): these give the K8s cluster the ability to abstract storage logically, so that the configuration of the actual backend storage technology can be ignored when configuring pods, and the configuration work is handed over to the PV configurator, i.e. the cluster administrator.
The relationship between PV and PVC in storage is very similar to the relationship between Node and Pod in compute: PV and Node are resource providers, changing with the cluster's infrastructure and configured by the K8s cluster administrator; PVC and Pod are resource consumers, changing with the needs of business services and configured by the users of the K8s cluster, i.e. the administrators of the services. A PV/PVC sketch follows below.
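As a hedged illustration, a minimal PV/PVC pair showing the provider/consumer split; the names, size, and hostPath backing are assumptions:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo              # provided by the cluster administrator
spec:
  capacity:
    storage: 1Gi
  accessModes: ["ReadWriteOnce"]
  hostPath:
    path: /data/pv-demo      # illustrative backing storage
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-demo             # requested by the service user; binds to a matching PV
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi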