The Kubernetes platform is fundamentally different from earlier compute infrastructures and IaaS. In Kubernetes, there is no fixed mapping of applications to servers or VMs, so a backup solution needs to understand this Kubernetes-native architectural pattern and be able to deal with continuous change. High-velocity application development and deployment cycles are the norm in Kubernetes environments, so backup solutions must be application-centric rather than infrastructure-focused.
Veeam’s Kasten addresses the cloud-native data protection needs of enterprises by offering backup for containerized workloads. The ability of Kasten’s K10 Data Management Platform to capture the entire application stack, taking a consistent application-to-infrastructure view, is critical for compliance and restore testing.
Get your hands dirty with Kasten K10 in the free lab to learn how Kubernetes-native application backup and recovery works. The Kasten K10 test drive consists of 8 challenges, each with step-by-step instructions. At the end of every task, you can check your work.
Here is the list of tasks that you need to accomplish to complete the test drive.
Adding Kasten Helm Repo
1. Add the Kasten K10 Helm repository. We set up the environment by adding the Kasten K10 Helm repository to the system.
root@k8svm:~# helm repo add kasten https://charts.kasten.io/
"kasten" has been added to your repositories
root@k8svm:~#
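If the repository was added earlier, or you just want to make sure the local chart index is fresh, a standard Helm refresh and search (not part of the lab transcript) will confirm the K10 chart is visible:

helm repo update
helm search repo kasten/k10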
Test Application/DB Deployment
2. Create a new Kubernetes deployment using a Helm chart. We will use a MySQL database to experiment with backup and recovery of a cloud-native application. Install MySQL in a new namespace by running the following commands against the Kubernetes cluster.
root@k8svm:~# helm install mysql bitnami/mysql --create-namespace --namespace=mysql
root@k8svm:~# kubectl get ns mysql
NAME    STATUS   AGE
mysql   Active   49s
root@k8svm:~# kubectl get all -n mysql
NAME          READY   STATUS    RESTARTS   AGE
pod/mysql-0   1/1     Running   0          59s

NAME                     TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
service/mysql            ClusterIP   10.96.38.17   <none>        3306/TCP   59s
service/mysql-headless   ClusterIP   None          <none>        3306/TCP   59s

NAME                     READY   AGE
statefulset.apps/mysql   1/1     59s
root@k8svm:~#
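Instead of re-running kubectl get until the pod is Ready, you can let kubectl block for you. A minimal sketch, assuming the pod name mysql-0 shown above and an arbitrary five-minute timeout:

kubectl wait --namespace=mysql --for=condition=ready pod/mysql-0 --timeout=300s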
3. Once the pod is up and running, use the following commands to create a local test database.
root@k8svm:~# MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace mysql mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)
root@k8svm:~# kubectl exec -it --namespace=mysql $(kubectl --namespace=mysql get pods -o jsonpath='{.items[0].metadata.name}') \
>   -- mysql -u root --password=$MYSQL_ROOT_PASSWORD -e "CREATE DATABASE k10demo"
mysql: [Warning] Using a password on the command line interface can be insecure.
root@k8svm:~#
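Before snapshotting anything, you may want to confirm the database exists. A quick sketch that reuses the MYSQL_ROOT_PASSWORD variable and the mysql-0 pod from the step above; k10demo should appear in the output:

kubectl exec -it --namespace=mysql mysql-0 \
  -- mysql -u root --password=$MYSQL_ROOT_PASSWORD -e "SHOW DATABASES"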
Install Kasten K10
4. Let’s install Kasten K10 and configure storage.
root@k8svm:~# helm install k10 kasten/k10 --namespace=kasten-io --create-namespace
NAME: k10
LAST DEPLOYED: Fri Sep 10 17:16:47 2021
NAMESPACE: kasten-io
STATUS: deployed
REVISION: 1
NOTES:
Thank you for installing Kasten’s K10 Data Management Platform!

Documentation can be found at https://docs.kasten.io/.

How to access the K10 Dashboard:

The K10 dashboard is not exposed externally. To establish a connection to it use the following `kubectl` command:

`kubectl --namespace kasten-io port-forward service/gateway 8080:8000`

The Kasten dashboard will be available at: `http://127.0.0.1:8080/k10/#/`
root@k8svm:~#
List all the resources in the kasten-io namespace.
root@k8svm:~# kubectl get all -n kasten-io
NAME                                      READY   STATUS              RESTARTS   AGE
pod/aggregatedapis-svc-78dbdcbdc4-jvfnl   1/1     Running             0          83s
pod/auth-svc-55b99d848f-2tdxt             1/1     Running             0          84s
pod/catalog-svc-855649ffb5-5m2v7          0/2     Init:Error          0          84s
pod/config-svc-59b4f495f9-pf6l2           1/1     Running             0          83s
pod/crypto-svc-867bb4b974-s9pt2           1/2     Running             0          84s
pod/dashboardbff-svc-56c778dc45-7qwpm     1/1     Running             0          84s
pod/executor-svc-8476c96df6-5pnqh         0/2     ContainerCreating   0          84s
pod/executor-svc-8476c96df6-jxbj9         0/2     ContainerCreating   0          84s
pod/executor-svc-8476c96df6-ws748         0/2     ContainerCreating   0          84s
pod/frontend-svc-b49f9d57f-p8pnx          1/1     Running             0          83s
pod/gateway-77749568c9-gbtnn              0/1     Running             0          82s
pod/jobs-svc-6b7b87cb84-q4wdw             0/1     Init:Error          0          84s
pod/k10-grafana-758d8475f7-zts4g          1/1     Running             0          84s
pod/kanister-svc-664fb69468-xkk5r         1/1     Running             0          83s
pod/logging-svc-6c665f7dd7-ms7fp          0/1     Init:Error          0          84s
pod/metering-svc-68bdff4b58-zlcvd         0/1     PodInitializing     0          83s
pod/prometheus-server-7d84c5477b-qjng2    0/2     ContainerCreating   0          84s
pod/state-svc-fb5bb8567-tfldd             1/1     Running             0          83s

NAME                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                        AGE
service/aggregatedapis-svc      ClusterIP   10.99.108.194    <none>        443/TCP                        85s
service/auth-svc                ClusterIP   10.105.30.183    <none>        8000/TCP                       85s
service/catalog-svc             ClusterIP   10.98.192.179    <none>        8000/TCP                       85s
service/config-svc              ClusterIP   10.107.55.99     <none>        8000/TCP                       85s
service/crypto-svc              ClusterIP   10.111.21.173    <none>        8000/TCP,8001/TCP              85s
service/dashboardbff-svc        ClusterIP   10.97.218.52     <none>        8000/TCP                       85s
service/executor-svc            ClusterIP   10.109.53.250    <none>        8000/TCP                       85s
service/frontend-svc            ClusterIP   10.111.188.64    <none>        8000/TCP                       85s
service/gateway                 ClusterIP   10.97.127.168    <none>        8000/TCP                       85s
service/gateway-admin           ClusterIP   10.108.113.188   <none>        8877/TCP                       85s
service/jobs-svc                ClusterIP   10.102.248.213   <none>        8000/TCP                       85s
service/k10-grafana             ClusterIP   10.106.105.17    <none>        80/TCP                         85s
service/kanister-svc            ClusterIP   10.111.195.119   <none>        8000/TCP                       84s
service/logging-svc             ClusterIP   10.100.247.10    <none>        8000/TCP,24224/TCP,24225/TCP   85s
service/metering-svc            ClusterIP   10.111.102.49    <none>        8000/TCP                       85s
service/prometheus-server       ClusterIP   10.97.111.61     <none>        80/TCP                         85s
service/prometheus-server-exp   ClusterIP   10.105.41.26     <none>        80/TCP                         85s
service/state-svc               ClusterIP   10.109.181.136   <none>        8000/TCP                       85s

NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/aggregatedapis-svc   1/1     1            1           84s
deployment.apps/auth-svc             1/1     1            1           84s
deployment.apps/catalog-svc          0/1     1            0           84s
deployment.apps/config-svc           1/1     1            1           84s
deployment.apps/crypto-svc           0/1     1            0           84s
deployment.apps/dashboardbff-svc     1/1     1            1           84s
deployment.apps/executor-svc         0/3     3            0           84s
deployment.apps/frontend-svc         1/1     1            1           84s
deployment.apps/gateway              0/1     1            0           84s
deployment.apps/jobs-svc             0/1     1            0           84s
deployment.apps/k10-grafana          1/1     1            1           84s
deployment.apps/kanister-svc         1/1     1            1           84s
deployment.apps/logging-svc          0/1     1            0           84s
deployment.apps/metering-svc         0/1     1            0           84s
deployment.apps/prometheus-server    0/1     1            0           84s
deployment.apps/state-svc            1/1     1            1           84s

NAME                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/aggregatedapis-svc-78dbdcbdc4   1         1         1       84s
replicaset.apps/auth-svc-55b99d848f             1         1         1       84s
replicaset.apps/catalog-svc-855649ffb5          1         1         0       84s
replicaset.apps/config-svc-59b4f495f9           1         1         1       83s
replicaset.apps/crypto-svc-867bb4b974           1         1         0       84s
replicaset.apps/dashboardbff-svc-56c778dc45     1         1         1       84s
replicaset.apps/executor-svc-8476c96df6         3         3         0       84s
replicaset.apps/frontend-svc-b49f9d57f          1         1         1       83s
replicaset.apps/gateway-77749568c9              1         1         0       83s
replicaset.apps/jobs-svc-6b7b87cb84             1         1         0       84s
replicaset.apps/k10-grafana-758d8475f7          1         1         1       84s
replicaset.apps/kanister-svc-664fb69468         1         1         1       83s
replicaset.apps/logging-svc-6c665f7dd7          1         1         0       84s
replicaset.apps/metering-svc-68bdff4b58         1         1         0       84s
replicaset.apps/prometheus-server-7d84c5477b    1         1         0       84s
replicaset.apps/state-svc-fb5bb8567             1         1         1       84s
root@k8svm:~#
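Note that several pods in the listing above are still starting up (Init:Error and ContainerCreating are usually transient right after installation). Rather than re-running kubectl get all, you can wait for every K10 deployment to become Available; a minimal sketch, with the ten-minute timeout being an arbitrary choice:

kubectl wait --namespace=kasten-io --for=condition=available deployment --all --timeout=10m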
5. Once all the K10 pods are up and running, configure the local storage.
root@k8svm:~# kubectl annotate volumesnapshotclass csi-hostpath-snapclass k10.kasten.io/is-snapshot-class=true
volumesnapshotclass.snapshot.storage.k8s.io/csi-hostpath-snapclass annotated
root@k8svm:~#
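To confirm the annotation was applied to the VolumeSnapshotClass, you can read the object back; a small check, assuming the same class name as above:

kubectl get volumesnapshotclass csi-hostpath-snapclass -o yaml | grep k10.kasten.io
# expected: k10.kasten.io/is-snapshot-class: "true"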
6. Expose the K10 dashboard by creating a NodePort service.
root@k8svm:~# echo | kubectl apply -f - << EOF
> apiVersion: v1
> kind: Service
> metadata:
>   name: gateway-nodeport
>   namespace: kasten-io
> spec:
>   selector:
>     service: gateway
>   ports:
>   - name: http
>     port: 8000
>     nodePort: 32000
>   type: NodePort
> EOF
service/gateway-nodeport created
root@k8svm:~#
7. View the K10 dashboard.
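With the NodePort service in place, the dashboard should be reachable at http://<node-ip>:32000/k10/#/, where <node-ip> is the address of any cluster node and /k10/#/ is the path given in the Helm install notes. If you would rather not expose a NodePort, the port-forward command from those notes works just as well:

kubectl --namespace kasten-io port-forward service/gateway 8080:8000
# then browse to http://127.0.0.1:8080/k10/#/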
Snapshot your apps
8. Let’s play around with K10’s capabilities. We will take a manual snapshot of the MySQL namespace. From the dashboard, click on the Applications card. You will see the MySQL instance you created earlier. Click on Snapshot, and then click Snapshot Application.
Go back to the main dashboard page and view the activity. The snapshot should complete in a couple of minutes. Click on the completed job to view artifact information generated by the snapshot action in the side panel.
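The snapshot is also visible from the command line as a K10 restore point. The following is a sketch that assumes the default K10 CRDs; if the resource name differs in your version, kubectl api-resources will show the correct one:

# list the K10 application-level API resources (restore points, etc.)
kubectl api-resources --api-group=apps.kio.kasten.io

# list restore points created for the mysql namespace
kubectl get restorepoints.apps.kio.kasten.io -n mysql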
Clone your namespace
9. Let’s clone the MySQL namespace. You can clone the application by clicking the restore icon.
In the pop-up window, just provide the name of the new namespace to create the clone.
You can go back to the dashboard and check the clone status. You could also use the “kubectl” command to list all the resources from the new namespace created by Kasten K10 for the MySQL clone.
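For example, if you named the clone namespace mysql-clone (use whatever name you entered in the pop-up), the cloned resources and their volumes can be listed with:

kubectl get all -n mysql-clone
kubectl get pvc -n mysql-clone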
This test drive also covers accidental application deletion/corruption and restore tasks to demonstrate how quickly applications can be recovered using Kasten K10 by Veeam. I would strongly recommend going through this lab to become more confident with Kasten K10, which is emerging in the market as a way to protect containerized applications.