In Part 2, we deployed a Jenkins pod into our Kubernetes cluster, and we set up a pipeline for a full CI/CD build for our Hello-Kenzan application.
Now, in Part 3, we are going to set aside the Hello-Kenzan application and get to the main event: running our Kr8sswordz Puzzle application. We will showcase various components of the app, such as etcd caching and MongoDB persistence. We will also highlight built-in UI functionality for scaling backend service pods up and down using the Kubernetes API, and then simulate a load test.
Before we start the install, it’s helpful to take a look at the pods we’ll run as part of the Kr8sswordz Puzzle app:
kr8sswordz – A React container with our Node.js UI frontend.
services – The primary backend service that handles submitting and retrieving answers to the crossword puzzle, storing them in MongoDB and etcd.
mongo – A MongoDB container for persisting crossword answers.
etcd – An etcd client for caching crossword answers.
monitor-scale – A backend service that handles scaling the services pods up and down.
We will go into the different service endpoints and architecture in more detail after running the application. Let’s get it going!
Exercise 1: Running the Kr8sswordz Puzzle App
First, make sure you’ve run through the steps in Part 1 and Part 2, in which we set up our image repository and Jenkins pods—you will need these to proceed with Part 3. If you previously stopped Minikube, you’ll need to start it up again. Enter the following terminal command, and wait for the cluster to start:
minikube start --memory 8000 --cpus 2 --kubernetes-version v1.6.0
If you’d like, you can check the cluster status and view all the pods that are running with the following commands:
kubectl cluster-info
kubectl get pods --all-namespaces
Now let’s start the interactive tutorial for Part 3 with the following terminal commands:
cd ~/kubernetes-ci-cd
npm run part3
Remember, you don’t have to actually type the commands below—just press Enter at each step and the script will enter the command for you!
1. Start the etcd operator and service on the cluster.
The etcd.sh script will install a couple of components that etcd uses to run: an operator that helps manage the etcd cluster, and a cluster service for storing and retrieving key values. Here the cluster service runs as three pod instances for redundancy.
If you’d like, you can see these new pods by entering kubectl get pods in a separate terminal window.
2. Now that we have an etcd service, we need an etcd client. The following will set up a directory within etcd for storing key-value pairs, and then run the etcd client.
kubectl create -f manifests/etcd-job.yml
3. Check the status of the job in step 2 to make sure it deployed.
kubectl describe jobs/etcd-job
4. The crossword application is a multi-tier application, and its services depend on each other. We will create the three services ahead of time, so that the deployments are already aware of them later.
kubectl apply -f manifests/all-services.yml
5. Now we’re going to walk through an initial build of the monitor-scale service.
docker build -t 127.0.0.1:30400/monitor-scale:`git rev-parse --short HEAD` -f applications/monitor/Dockerfile applications/monitor
6. Set up a proxy so we can push the monitor-scale Docker image we just built to our cluster’s registry.
docker stop socat-registry; docker rm socat-registry; docker run -d -e "REGIP=`minikube ip`" --name socat-registry -p 30400:5000 chadmoon/socat:latest bash -c "socat TCP4-LISTEN:5000,fork,reuseaddr TCP4:`minikube ip`:30400"
7. Push the monitor-scale image to the registry.
docker push 127.0.0.1:30400/monitor-scale:`git rev-parse --short HEAD`
8. The proxy’s work is done, so go ahead and stop it.
docker stop socat-registry
9. Open the registry UI and verify that the monitor-scale image is in our local registry.
minikube service registry-ui
10. Create the monitor-scale deployment and service.
sed 's#127.0.0.1:30400/monitor-scale:latest#127.0.0.1:30400/monitor-scale:' `git rev-parse --short HEAD`'#' applications/monitor/k8s/monitor-scale.yaml | kubectl apply -f -
11. Wait for the monitor-scale deployment to finish.
kubectl rollout status deployment/monitor-scale
12. View pods and make sure the monitor-scale pod is running.
kubectl get pods
13. View services and make sure the monitor-scale service is set up.
kubectl get services
14. View ingress rules and make sure the monitor-scale ingress rule is configured.
kubectl get ingress
15. View deployments to see the monitor-scale deployment.
kubectl get deployments
16. Now we will run a script to bootstrap the services and mongo pods, creating Docker images and storing them in the local registry. The server.sh script runs through the same build, proxy, push, and deploy steps we previously ran manually for the monitor-scale service.
17. Check to see if the services and mongo pods have been deployed.
kubectl rollout status deployment/services
18. Bootstrap the kr8sswordz frontend web application. This script follows the same build, proxy, push, and deploy steps that the other services followed.
19. Check to see if the frontend has been deployed.
kubectl rollout status deployment/kr8sswordz
20. Check out all the pods that are running.
kubectl get pods
21. Start the web application in your default browser.
minikube service kr8sswordz
Exercise 2: Giving the Kr8sswordz Puzzle a Spin
Now that it’s up and running, let’s give the Kr8sswordz puzzle a try. We’ll also spin up several backend service instances and hammer it with a load test to see how Kubernetes automatically balances the load.
Try filling out some of the answers to the puzzle. You’ll see that any wrong answers are automatically shown in red as letters are filled in.
Click Submit. When you click Submit, your current answers for the puzzle are stored in MongoDB. Notice the blue arrows showing the submission to the services endpoint (the blue box in the center with a pod name), and the subsequent storage in MongoDB.
Try filling out the puzzle a bit more, then click Reload. Reload performs a GET that retrieves the last submitted puzzle answers from MongoDB and caches those same answers in etcd with a 30-second TTL (time to live). If you immediately press Reload again, the answers are retrieved from etcd until the TTL expires, at which point they are again retrieved from MongoDB and re-cached.
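The Reload behavior is a standard read-through cache pattern: check the cache first, and on a miss or TTL expiry fall back to the database and re-cache the result. Here is a minimal sketch of that pattern in Python (illustrative only, not the app's actual code; the real service uses an etcd client and MongoDB, and an injectable clock stands in here for etcd's TTL handling):

```python
import time

class ReadThroughCache:
    """Minimal read-through cache with a TTL, illustrating the Reload flow:
    hit the cache first; on a miss (or expiry), fall back to the database
    and re-cache the result."""

    def __init__(self, db_get, ttl_seconds=30, clock=time.monotonic):
        self.db_get = db_get          # fallback lookup, e.g. MongoDB
        self.ttl = ttl_seconds
        self.clock = clock            # injectable so TTL expiry can be simulated
        self._store = {}              # key -> (value, expiry_time)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None:
            value, expires = entry
            if self.clock() < expires:
                return value, "cache"         # served from cache (etcd)
        value = self.db_get(key)              # cache miss or TTL expired
        self._store[key] = (value, self.clock() + self.ttl)
        return value, "db"                    # served from MongoDB, re-cached


# Example: a fake "database" and a manual clock to demonstrate TTL expiry
now = [0.0]
cache = ReadThroughCache(db_get=lambda k: f"answers-for-{k}",
                         ttl_seconds=30, clock=lambda: now[0])
print(cache.get("puzzle"))   # first read falls through to the db
print(cache.get("puzzle"))   # immediate re-read is served from cache
now[0] += 31                 # advance past the 30-second TTL
print(cache.get("puzzle"))   # expired, so back to the db and re-cached
```

The same two-tier read path is why rapidly clicking Reload shows cache hits until the TTL lapses.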
Scale the number of instances of the Kr8sswordz puzzle service up to 10 or more by dragging the middle slider to the right, then click Scale. Notice the number of service pods increase. In a terminal, run kubectl get pods to see the new replicas.
Now run a load test. Drag the slider under the puzzle to the right to 30 requests or more. Try clicking the Concurrent Requests and Consecutive Requests buttons to simulate sending both of these types of requests. Note how it very quickly hits several of the service pods in green to manage the numerous requests.
Drag the middle slider back down to 1 and click Scale. In a terminal, run kubectl get pods to see the service pods terminating.
Try deleting the services pod to see Kubernetes restart a new pod using its ability to automatically heal downed pods.
In a terminal enter kubectl get pods to see all pods. Copy the services pod name (similar to the one shown in the picture above).
Enter the following command to delete the services pod:
kubectl delete pod [services podname]
Enter kubectl get pods to see the old pod terminating and the new pod starting. You should see the new services pod appear in the Kr8sswordz Puzzle app.
If you’re done working in Minikube for now, you can go ahead and stop the cluster by entering the following command:
minikube stop
What’s Happening on the Backend
Let’s take a closer look at what’s happening on the backend of our Kr8sswordz Puzzle app.
When the Submit button is pressed, a PUT request is sent from the UI to a pod instance of the puzzle service. The service uses a LoopBack data source to store answers in MongoDB, and an etcd client to store answers in cache. Similarly, when the Reload button is pressed, answers are retrieved with a GET request.
The monitor-scale pod handles scaling and load test functionality for the app. When the Scale button is pressed, the monitor-scale pod uses the Kubernetes API to scale the number of service instances up and down in Kubernetes.
When the Concurrent/Consecutive buttons are pressed, the monitor-scale pod handles the load test by sending several GET requests to the service pods based on the count sent from the front end. Each service pod sends a hit to monitor-scale whenever it receives a request. Monitor-scale then uses WebSockets to broadcast to the UI so that the corresponding pod instances light up green.
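The difference between the two load-test modes can be sketched in a few lines of Python (a sketch only; the hypothetical `send` function stands in for the real GET requests that monitor-scale issues to the service pods):

```python
from concurrent.futures import ThreadPoolExecutor

def consecutive(send, count):
    """Consecutive mode: issue requests one at a time, each waiting
    for the previous one to finish."""
    return [send(i) for i in range(count)]

def concurrent(send, count, workers=10):
    """Concurrent mode: issue requests in parallel across a thread pool,
    so many service pods can be hit at nearly the same time."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(send, range(count)))

# Stand-in for an HTTP GET to a service pod (hypothetical).
def fake_send(i):
    return f"hit-{i}"

print(consecutive(fake_send, 3))
print(concurrent(fake_send, 3))
```

With real requests, the concurrent variant is what spreads load across the scaled-up replicas at once, while the consecutive variant exercises them one request at a time.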
When a service pod instance goes up or down, the service sends this information to the monitor-scale pod. The up and down states are configured as lifecycle hooks in the service pod's k8s deployment, which curl the same endpoint on monitor-scale (see kubernetes-ci-cd/applications/crossword/k8s/deployment.yml to view the hooks). Monitor-scale persists the list of available service pods in etcd with set, delete, and get pod requests.
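In a deployment manifest, such lifecycle hooks look roughly like this (a sketch only, not a verbatim copy of deployment.yml; the endpoint paths and port are assumptions made for illustration):

```yaml
# Illustrative postStart/preStop hooks on the service container.
# See kubernetes-ci-cd/applications/crossword/k8s/deployment.yml for
# the actual hooks -- the URLs below are placeholders.
lifecycle:
  postStart:
    exec:
      command: ["sh", "-c", "curl -X POST http://monitor-scale:3001/up/$HOSTNAME"]
  preStop:
    exec:
      command: ["sh", "-c", "curl -X POST http://monitor-scale:3001/down/$HOSTNAME"]
```

Kubernetes runs postStart right after the container starts and preStop just before it terminates, which is how monitor-scale learns about pods coming and going without polling.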
Now that we’ve run our Kr8sswordz Puzzle app, the next step is to set up CI/CD for it. Similar to what we did for the Hello-Kenzan app, Part 4 will cover creating Jenkins pipelines for the Kr8sswordz Puzzle components so that the entire application builds at the touch of a button. We will also modify a bit of code to enhance the application with a new feature coming down the pipeline.
Kenzan is a software engineering and full service consulting firm that provides customized, end-to-end solutions that drive change through digital transformation. Combining leadership with technical expertise, Kenzan works with partners and clients to craft solutions that leverage cutting-edge technology, from ideation to development and delivery. Specializing in application and platform development, architecture consulting, and digital transformation, Kenzan empowers companies to put technology first.