Author: Wu Bo, an operations and maintenance engineer from Beijing who focuses on cloud native technologies and is a KubeSphere enthusiast.
KubeEdge is the industry’s first cloud-native edge computing framework designed for edge computing scenarios and cloud-edge collaboration. On top of K8s’ native container orchestration and scheduling capabilities, it enables application, resource, data, and device collaboration between cloud and edge, fully covering the cloud, edge, and device collaboration scenarios of edge computing. The KubeEdge architecture consists of three parts: cloud, edge, and device.
KubeEdge is an extension of K8s for edge scenarios; its goal is to extend K8s’ container orchestration capabilities to the edge. KubeEdge consists of two main components, CloudCore in the cloud and EdgeCore on edge nodes, plus a Device module for managing massive numbers of edge devices.
To better support KubeEdge and provide a visual interface for managing edge nodes, this document uses the KubeSphere platform to manage edge nodes (see the KubeSphere official documentation [8]).
Log in to the KubeSphere console as admin, go to Cluster Management, click Custom Resource Definitions, find ClusterConfiguration, and edit ks-installer.
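The change can also be made from the command line; below is a minimal sketch assuming KubeSphere 3.x field names (the edgeruntime/kubeedge keys and the advertiseAddress path vary between versions, so follow the structure of your own ClusterConfiguration):

```bash
# Open the ks-installer ClusterConfiguration for editing (same effect as the console editor)
kubectl -n kubesphere-system edit clusterconfiguration ks-installer

# Under spec, enable KubeEdge and set the address edge nodes will use to reach CloudCore:
#   edgeruntime:
#     enabled: true
#     kubeedge:
#       enabled: true
#       cloudCore:
#         cloudHub:
#           advertiseAddress:
#             - "<public IP or domain reachable from the edge>"
```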
When you are done, click OK in the lower right corner and check the ks-installer logs to watch the deployment status.
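One way to follow the installer logs, assuming the standard ks-installer labels used by KubeSphere:

```bash
# Tail the ks-installer Pod logs until the installation reports success
kubectl logs -n kubesphere-system \
  $(kubectl get pod -n kubesphere-system -l 'app in (ks-install, ks-installer)' -o jsonpath='{.items[0].metadata.name}') -f
```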
After the startup is complete, use the following command to check the NodePort assignments of the cloudcore Service.
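For example, assuming CloudCore was installed into the kubeedge namespace (the KubeSphere default):

```bash
# Show the cloudcore Service and its port -> NodePort mappings
kubectl -n kubeedge get svc cloudcore
```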
You need to configure public port forwarding so that external ports 10000-10004 are forwarded to NodePorts 30000-30004, as shown in the following table:

| External port | NodePort |
| ------------- | -------- |
| 10000         | 30000    |
| 10001         | 30001    |
| 10002         | 30002    |
| 10003         | 30003    |
| 10004         | 30004    |
If you are on a cloud provider, create a load balancer that forwards traffic according to the rules in the preceding table. If not, you can configure iptables rules for port forwarding, for example with the following commands:
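A minimal sketch of such rules, assuming a gateway host that forwards its public ports 10000-10004 to NodePorts 30000-30004 on a cluster node whose address is NODE_IP; adjust addresses, chains, and protocols to your environment (the QUIC channel on 10001, if enabled, uses UDP):

```bash
NODE_IP=192.168.0.10   # replace with the cluster node that exposes the NodePorts

# DNAT external ports 10000-10004 (TCP) to NodePorts 30000-30004 on the node
for i in 0 1 2 3 4; do
  iptables -t nat -A PREROUTING -p tcp --dport 1000${i} -j DNAT --to-destination ${NODE_IP}:3000${i}
done

# Allow the gateway to forward and masquerade the translated traffic
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -p tcp -d ${NODE_IP} --dport 30000:30004 -j MASQUERADE
```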
After the deployment is complete, you may find that the iptables DaemonSet is not scheduled to the k8s-master node; you need to configure it to tolerate the master taint.
Go to “Application Workloads” → “Workloads” → “DaemonSets”, edit “cloud-iptables-manager”, and add the following configuration:
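The configuration to add is a set of tolerations for the master taints. The same change can be made with kubectl; a sketch, assuming the DaemonSet lives in the kubeedge namespace (older clusters taint masters with node-role.kubernetes.io/master, newer ones with node-role.kubernetes.io/control-plane, so both are listed):

```bash
kubectl -n kubeedge patch daemonset cloud-iptables-manager --type merge -p '
spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
'
```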
Note: If you do not make the above change, you will not be able to view logs or execute commands in Pods on edge nodes from KubeSphere.
After the configuration is complete, check again whether the iptables DaemonSet has been scheduled to all nodes.
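For example (namespace as above):

```bash
# One cloud-iptables-manager Pod should now be running on every node, including k8s-master
kubectl -n kubeedge get pods -o wide | grep iptables
```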
Documentation for adding an edge node: https://kubesphere.com.cn/docs/installing-on-linux/cluster-operation/add-edge-nodes/
KubeEdge supports multiple container runtimes, including Docker, containerd, CRI-O, and Virtlet. For more information, see the KubeEdge documentation [9]. To ensure that KubeSphere can collect Pod metrics, Docker v19.3.0 or later must be installed on the edge node.
Run the commands copied from KubeSphere on the edge node.
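The copied command is generated per cluster by the console; it is essentially a keadm join invocation along the lines of the sketch below (all values are placeholders; always use the exact command KubeSphere generated for you):

```bash
# Run on the edge node
./keadm join \
  --cloudcore-ipport=<advertise address>:10000 \
  --edgenode-name=<edge node name> \
  --kubeedge-version=<version> \
  --token=<token copied from the console>
```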
Check whether the edge node was added successfully.
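On the cloud side, for example:

```bash
# The new edge node should show up and eventually become Ready
kubectl get nodes -o wide
```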
After an edge node joins the cluster, some Pods scheduled to it may remain in the Pending state. Because some cloud-side daemons (for example, Calico) carry broad tolerations, you need to manually run the following script to keep them from being scheduled to edge nodes.
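A sketch of such a script, modeled on the one in the KubeSphere documentation: it adds a node affinity that requires the node-role.kubernetes.io/edge label to be absent, so the patched DaemonSets stay off edge nodes (the DaemonSet list below is an example; adjust it to the add-ons actually running in kube-system):

```bash
#!/bin/bash
# Node affinity patch: only schedule where the edge label does NOT exist
patch_json='{"spec":{"template":{"spec":{"affinity":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}}}}'

ns="kube-system"
daemonsets=("nodelocaldns" "kube-proxy" "calico-node")   # adjust to your cluster

for ds in "${daemonsets[@]}"; do
  echo "Patching DaemonSet ${ns}/${ds}"
  kubectl -n "${ns}" patch daemonset "${ds}" --type merge --patch "${patch_json}"
done
```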
1. In the ks-installer ClusterConfiguration, set metrics_server.enabled to true.
2. On the edge node, edit the /etc/kubeedge/config/edgecore.yaml configuration file and set edgeStream.enable to true (see the sketch after this list).
3. Restart EdgeCore: systemctl restart edgecore.service.
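A sketch of steps 2 and 3 on the edge node (the file path is the KubeEdge default; the surrounding YAML is abbreviated):

```bash
# Step 2: enable edgeStream in the EdgeCore configuration
vim /etc/kubeedge/config/edgecore.yaml
# modules:
#   ...
#   edgeStream:
#     enable: true        # change false -> true

# Step 3: restart EdgeCore so the change takes effect
systemctl restart edgecore.service
```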
Pods deployed to edge nodes need to be configured with a toleration for the edge node taint.
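For example, assuming the edge nodes carry a taint such as node-role.kubernetes.io/edge:NoSchedule (check the actual taint first, since it depends on how the node was joined):

```bash
# Inspect the taints actually present on the edge node
kubectl describe node <edge-node-name> | grep -i taints

# Add a matching toleration to a workload that should run on the edge
kubectl -n <namespace> patch deployment <deployment-name> --type merge -p '
spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/edge
          operator: Exists
          effect: NoSchedule
'
```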
EdgeMesh is positioned as a lightweight communication component on the data plane of KubeEdge. It meshes the network between nodes by establishing P2P channels over the complex network topology of the edge, manages and forwards the edge cluster’s traffic over these channels, and ultimately gives container applications in a KubeEdge cluster a service discovery and traffic forwarding experience consistent with K8s Services.
Official website: https://edgemesh.netlify.app/zh/
The diagram above shows a brief architecture of EdgeMesh, which contains two microservices: edgemesh-server and edgemesh-agent.
EdgeMesh-Server: runs in the cloud, assists the agents with hole punching, and relays traffic when a direct P2P connection cannot be established.
EdgeMesh-Agent: runs on every node and handles service discovery (DNS resolution), traffic proxying, and the P2P tunnels to other agents.
The cloud is a standard K8s cluster: it can use any CNI network plugin (such as Flannel or Calico) and run any K8s native components (such as kubelet and kube-proxy). The KubeEdge cloud component CloudCore is also deployed in the cloud, while the KubeEdge edge component EdgeCore runs on the edge nodes and registers them with the cluster in the cloud.
Core Benefits:
Log in to KubeSphere as admin, open the workbench and enter the “system-workspace” workspace, then find and enter the kubeedge project under the kubesphere-master cluster.
In the project’s application workloads, create a template-based app: search for “edgemesh” in the App Store, click Install, and confirm that the installation location is correct before installing.
Edit the following values in the app settings, then click Install:
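A hedged sketch of the values usually edited at this step (the key names depend on the edgemesh chart version in the App Store, so check the chart’s values.yaml): edgemesh-server has to be pinned to a cloud node and told which public address to advertise to the agents.

```bash
# Illustrative values only; edit the equivalent keys in the app settings:
#
# server:
#   nodeName: "<cloud node that will run edgemesh-server>"
#   advertiseAddress:
#     - "<public IP of that node, reachable from the edge>"
```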
After the deployment is complete, set tolerations on the edgemesh-agent so that it can also be scheduled to the master and edge nodes.
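For example (the namespace, DaemonSet name, and taint keys are assumptions; match them to your installation):

```bash
kubectl -n kubeedge patch daemonset edgemesh-agent --type merge -p '
spec:
  template:
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
        - key: node-role.kubernetes.io/edge
          operator: Exists
          effect: NoSchedule
'
```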
Finally, review the deployment result (make sure an edgemesh-agent Pod is running on every node):
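For example (assuming edgemesh was installed into the kubeedge project as above):

```bash
# Expect one Running edgemesh-agent Pod per node, cloud and edge alike
kubectl -n kubeedge get pods -o wide | grep edgemesh
```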
Prerequisites
Note: Because node IPs may be duplicated, only connecting by node name is supported.
In v3.3.0, logging in to the node terminal from the KubeSphere console is supported.
Both the KubeEdge and EdgeMesh services are healthy and their logs show no errors, yet the cloud and the edge cannot access each other.
Cloud configuration:
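Based on the prerequisites in the EdgeMesh documentation, the cloud-side change is to enable the dynamicController module in CloudCore; a sketch, assuming a KubeSphere-managed CloudCore whose configuration lives in the cloudcore ConfigMap in the kubeedge namespace:

```bash
# Enable dynamicController in the CloudCore configuration
kubectl -n kubeedge edit cm cloudcore
# modules:
#   ...
#   dynamicController:
#     enable: true

# Restart CloudCore so the change takes effect
kubectl -n kubeedge rollout restart deployment cloudcore
```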
Edge configuration:
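On the edge side, the EdgeMesh prerequisites are to enable metaServer and point clusterDNS at EdgeMesh’s DNS address in edgecore.yaml; a sketch (values taken from the EdgeMesh documentation, so verify them against your EdgeMesh and KubeEdge versions):

```bash
vim /etc/kubeedge/config/edgecore.yaml
# modules:
#   edged:
#     clusterDNS: 169.254.96.16
#     clusterDomain: cluster.local
#   ...
#   metaManager:
#     metaServer:
#       enable: true

# Restart EdgeCore after saving
systemctl restart edgecore.service
```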
Verify:
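A typical check from the EdgeMesh documentation is to query the local metaServer on the edge node; if it returns the Service list, the cloud-edge channel and the metaServer are working:

```bash
# Run on the edge node; the metaServer listens on 127.0.0.1:10550 by default
curl 127.0.0.1:10550/api/v1/services
```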
Q: How is communication security ensured? A: EdgeMesh-Server and EdgeMesh-Agent use certificates to encrypt the traffic between them.
Q: How efficient is the communication between services? A: At up to 500 QPS, performance is close to a direct network connection and the loss is very low; when hole punching succeeds, there is roughly 10% relay overhead.
Q: What is the resource consumption? A: Each EdgeMesh-Agent uses less than 40 MB of memory, and CPU usage is only 1%-5%.
[1] Edged: https://kubeedge.io/zh/docs/architecture/edge/edged
[2] EdgeHub: https://kubeedge.io/zh/docs/architecture/edge/edgehub
[3] CloudHub: https://kubeedge.io/zh/docs/architecture/cloud/cloudhub
[4] EdgeController: https://kubeedge.io/zh/docs/architecture/cloud/edge_controller
[5] EventBus: https://kubeedge.io/zh/docs/architecture/edge/eventbus
[6] DeviceTwin: https://kubeedge.io/zh/docs/architecture/edge/devicetwin
[7] MetaManager: https://kubeedge.io/zh/docs/architecture/edge/metamanager
[8] KubeSphere Official Documentation: https://kubesphere.com.cn/docs/v3.3/
[9] KubeEdge Documentation: https://docs.kubeedge.io/zh/docs/advanced/cri/