Note: The background section is a bit wordy; it recounts how our local debugging setup changed over the years. If you find it tedious, you can skip straight past it without missing anything.
At the time, my company had only two Java applications, both running in a Tomcat Servlet container.
How did we debug locally? Each developer installed MySQL and Tomcat on their own computer and ran and debugged everything there. The advantage was that back-end developers did not interfere with each other and could change whatever they liked, while the app developers simply connected to a back-end developer's laptop for debugging. For deployment, a developer manually compiled a Jar package and copied it to the cloud server. It was basically a ragtag setup that happened to work, but that was about it.
In 2020 the company bought a server running CentOS, on which we installed MySQL and Tomcat, added Redis for caching and RabbitMQ as the message queue, set up an independent test environment, and used Jenkins to automatically package and deploy the applications. That counted as trading a shotgun for a cannon: at least we no longer had to package builds by hand.
How did we debug locally at this point? At least we no longer had to install MySQL on our own computers. The back-end frameworks had been migrated from SpringMVC and Struts2 to Spring Boot, so the external Tomcat could be dropped as well. A back-end developer running Spring Boot locally connected directly to the MySQL on the server for debugging, and the app developers no longer needed to connect to a back-end developer's laptop, giving them a relatively stable debugging environment. The cost was that every database schema change made by a back-end developer had to stay compatible to avoid affecting others.
As the business grew, the back-end framework evolved from Spring Boot to the full Spring Cloud stack, the runtime moved from running directly on Linux to Docker image deployment, and the various middleware components also ran as Docker images. The product lines multiplied, and a single development branch could no longer keep up, so a second back-end code branch was opened, along with a matching development and test environment.
Local debugging at this stage did not change much for the app side; the only difference was that each environment used a different domain name for the back end. For back-end developers it was another story: every local debugging session required a resident Eureka and Config Server on their own machine, and if the microservice being debugged had many dependencies, a computer without plenty of memory simply could not cope.
The business volume kept growing, the product team expanded, and requirements really piled up. Two branches were no longer enough, so a third was opened, and even that was insufficient. Every time a new branch environment was added, the back-end developers suffered too, because a pile of environment settings and third-party platform callbacks had to be configured. To support dynamic scaling, the Spring Cloud stack continued to evolve: we abandoned the Zuul gateway and Eureka in favor of Spring Cloud Kubernetes, and the runtime moved fully toward K8S. During this period the company bought another server for development and testing, and its memory, CPU, and disk were all maxed out.
Entering the K8S era, back-end developers' local computers could no longer connect to the various middleware running on the Linux servers. Every pod in every new branch environment gets a new IP, so it was no longer possible to simply expose a few fixed middleware ports for the back end to connect to as before. With so many environments to set up, the O&M colleagues had little time for anything else.
This is what led to today's topic, the kt-connect tool. With it, a back-end developer's local computer can proxy access to all the services of any branch environment, that is, any namespace in K8S, while only the services actually being debugged need to run locally, which greatly reduces the CPU and memory load on the developer's machine.
There are several approaches available for proxying access into a K8S environment to make local debugging easier.
Using Ingress, NodePort, LoadBalancer, and the like to forward traffic to specified ports. As mentioned above, this creates a fairly heavy workload for the O&M colleagues and is inconvenient for automatically creating and recycling branch environments, so it only suits scenarios where few ports need to be exposed.
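As a minimal illustration of this approach (the namespace and deployment names are hypothetical), exposing a single middleware port through a NodePort service could look like this:
# expose MySQL in the feature-1 namespace on a node port reachable from the office network
kubectl -n feature-1 expose deployment mysql --type=NodePort --port=3306 --name=mysql-external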
Running a pod with a VPN service in each K8S namespace; the back-end developer's laptop connects through a VPN client and is proxied into the specified namespace, where it can access and resolve the various services in the cluster as normal. This basically meets day-to-day needs; the drawback is that every namespace permanently carries the resource cost of a running VPN service.
While searching, I found the proxy tool Telepresence; almost 90% of the Chinese and English technical articles recommend it. It is very powerful: besides the VPN-like proxy capability that gives access to all services in a namespace, it can also apply various rules to intercept the traffic of a specified service and route it to the local machine, effectively letting the local machine serve external requests as if it were an ordinary pod. The general design works as follows:
Run the following commands on the developer's local computer:
telepresence helm install --kubeconfig .\kubeconfig
telepresence connect --kubeconfig .\kubeconfig
A namespace named ambassador is automatically created in the K8S cluster and a traffic-manager pod is deployed into it for traffic management. Meanwhile two daemon processes are started on the developer's laptop: one, called the Root Daemon, establishes the two-way proxy channel and manages traffic between the local computer and the K8S cluster; the other, the User Daemon, communicates with the Traffic Manager and sets up interception rules, and, if you are logged in, also communicates with Ambassador Cloud.
When an interception rule is configured, a traffic-agent is installed into the intercepted pod. The official documentation describes it as a sidecar, in the usual K8S sense, that hijacks the traffic of the injected pod so that all inbound and outbound traffic is rerouted through the traffic-manager.
The Traffic Agent is a sidecar container that facilitates intercepts. When an intercept is first started, the Traffic Agent container is injected into the workload’s pod(s).
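For reference, once connected, an interception is started with a command along these lines (the workload name and port mapping are illustrative, not from the original article):
# route traffic destined for serviceA's http port to local port 8080
telepresence intercept serviceA --port 8080:http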
Although it is very powerful, in the 2.5 version I tried, using its interception and Preview URL features requires registering with its commercial cloud platform, Ambassador Cloud (note: I have no idea why the technical articles online never mention this; in my tests, logging in to the cloud platform was mandatory). The interception rules are also configured through the cloud platform's web pages, which means a network dependency plus potential security and data leakage concerns. I found that unacceptable and had to give up on the tool.
Another drawback worth mentioning: older versions had a telepresence uninstall command that cleaned up the automatically created namespace, pods, and interception agents after use, but it has completely disappeared from the command options in version 2.5. As a result, every time you want to keep the environment clean afterwards, you have to bother the O&M colleagues to clean things up manually, which is a real nuisance and almost unbearable for anyone who likes a tidy cluster.
Fortunately, I found another Telepresence-like tool in the open source community called kt-connect. I am using version v0.3.6 (for the record, our K8S version is 1.24). It requires no network login to any account, and by default it cleans up automatically when the command exits. It is made by Alibaba; I am not sure whether it is yet another KPI-driven open source project, but at least for now I am very satisfied with it.
It is similar to Telepresence, with the difference that kt-connect only creates a new pod, running a kt-connect-shadow image, in the namespace you connect to. Compared with Telepresence, its functionality is subdivided and extended into four main modes: Connect, Exchange, Mesh, and Preview.
In Connect mode, kt-connect plays a role similar to a VPN: the local computer can access all the services in the connected namespace, but it is not added to the cluster as a service itself, and traffic from other services is not forwarded to the local machine.
Note 1: As with telepresence, every kt-connect command should be run with --kubeconfig to make sure it has sufficient permissions and can correctly reach the K8S cluster's API server. Few articles mention this, but if the K8S cluster restricts permissions or is not on the same network as the developers, you must connect using a kubeconfig authorization file with sufficient permissions provided by the O&M colleagues.
Note 2:
Failed to setup port forward local:28344 -> pod kt-connect-shadow-gseak:53 error="error upgrading connection: error sending request: Post \"https://10.0.8.101:8443/api/v1/namespaces/feature-N/pods/kt-connect-shadow-gseak/portforward\": dial tcp 10.0.8.101:8443: connectex: A socket operation was attempted to an unreachable host."
If the above error occurs, it may be a kt-connect routing bug: a route on the local computer may conflict with the newly added route to the API server. Add the parameter --excludeIps 10.0.8.101/32; if more network segments conflict, you can widen the range, for example --excludeIps 10.0.8.0/24. See issue-302.
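Putting the two notes above together, a typical Connect mode invocation might look roughly like this (the namespace name and excluded segment are illustrative):
# proxy the local machine into the feature-N namespace, skipping the conflicting segment
ktctl connect --kubeconfig .\kubeconfig --namespace feature-N --excludeIps 10.0.8.0/24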
Exchange mode is similar to Telepresence's interception mode: it intercepts all traffic of a specified service and forwards it to a port on the developer's local computer, so requests coming from the environment can be debugged directly against local code.
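A sketch of the command, assuming the workload to intercept is named serviceA and the local application listens on port 8080 (both are illustrative):
# forward all traffic of serviceA in the connected namespace to local port 8080
ktctl exchange serviceA --expose 8080 --kubeconfig .\kubeconfig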
Under the hood, the pod behind the service is replaced with a serviceA-kt-exchange pod.
Note 1: The traffic direction in Exchange mode is one-way; it does not proxy requests initiated by the local computer. If the K8S cluster is not on the same network segment as the local machine, you need to open another terminal and run Connect mode so that the local service can still reach the other services in the K8S cluster. See issue-216.
Note 2: Exchange mode intercepts traffic at the service level. If requests inside the cluster do not go through the service, for example they resolve directly to the pod, the interception may fail (the same applies to Mesh mode). So if something does not work, remember to confirm the routing inside the K8S cluster with the O&M colleagues.
After executing the command, you can see that the output log contains text along these lines:
In Mesh mode, the local computer's service and the same service in the K8S cluster respond to requests at the same time, but only requests carrying the specified HTTP header VERSION: xxxx are forwarded to the local machine. Compared with Exchange mode, this keeps the service working normally for everyone else while the developer debugs locally. The value of the generated VERSION header is dynamic on each run; if you want to pin it, you can fix it with the --versionMark parameter, for example fixing the value to test-version, as shown in the sketch below.
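A sketch of the Mesh command, assuming the target workload is serviceA and the local service listens on port 8080 (both illustrative):
# only requests carrying the header VERSION: test-version are routed to the local machine
ktctl mesh serviceA --expose 8080 --versionMark test-version --kubeconfig .\kubeconfig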
Under the hood, the pod behind serviceA is replaced with a serviceA-kt-router routing image that proxies and forwards traffic according to the request header; a serviceA-kt-stuntman service is generated, which is the serviceA that keeps running normally online, along with a serviceA-kt-mesh-xxxxx service that is responsible for proxying traffic to the local computer.
Unlike Exchange and Mesh modes, which require a service already running in the K8S cluster, Preview mode can publish a program running on the local computer into the K8S cluster as a brand-new service, which is very convenient for developing, debugging, and previewing new services.
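A sketch of Preview mode, assuming the new service should appear in the cluster as serviceB and the local program listens on port 8080 (both names are illustrative):
# register the locally running program as a new in-cluster service named serviceB
ktctl preview serviceB --expose 8080 --kubeconfig .\kubeconfig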
This article is reproduced from the "Blog Garden" blog; original: https://url.hi-linux.com/9JmW5. Copyright belongs to the original author.