Kubernetes etcd Backup and Restore Documentation
This document describes the process to back up and restore the Kubernetes etcd cluster safely, update configuration, and restart the API server.
1. Backup etcd
Use the etcdctl snapshot save command to create a snapshot of the current etcd data. Save the snapshot in the /backup directory.
# Create a backup directory if it doesn't exist
sudo mkdir -p /backup
# Set etcdctl API version
export ETCDCTL_API=3
# Create the snapshot (the certificate paths shown are kubeadm defaults)
sudo etcdctl \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key \
snapshot save /backup/etcd-snapshot.db
2. Restore etcd from Backup
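Before restoring, it can help to sanity-check the snapshot file. The sketch below assumes the path from the backup step; check_snapshot is a hypothetical helper name, not part of etcdctl:

```shell
# check_snapshot: fail fast if a snapshot file is missing or empty.
# (Hypothetical helper; the path in the usage note comes from the backup step.)
check_snapshot() {
    snap="$1"
    if [ ! -s "$snap" ]; then
        echo "snapshot missing or empty: $snap" >&2
        return 1
    fi
    echo "snapshot OK: $snap ($(wc -c < "$snap") bytes)"
    # With etcdctl installed, this also prints hash, revision, and key count:
    #   ETCDCTL_API=3 etcdctl snapshot status "$snap" --write-out=table
}

# Usage on the control-plane node:
#   check_snapshot /backup/etcd-snapshot.db
```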
Restore the backup into a new directory /new_etcd_restore.
# Restore the etcd snapshot into a new data directory (root is needed to write under /)
sudo ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db \
--data-dir /new_etcd_restore
3. Update etcd Configuration
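Before editing the manifest, one can confirm the restore produced the expected on-disk layout: etcdctl snapshot restore creates a member/ directory containing snap/ and wal/ under the data directory. check_restore_dir is a hypothetical helper:

```shell
# check_restore_dir: verify that etcdctl snapshot restore produced the
# expected layout (a member/ directory containing snap/ and wal/).
# Hypothetical helper; the path in the usage note comes from the restore step.
check_restore_dir() {
    dir="$1"
    for sub in member/snap member/wal; do
        if [ ! -d "$dir/$sub" ]; then
            echo "missing $dir/$sub" >&2
            return 1
        fi
    done
    echo "restore layout OK: $dir"
}

# Usage:
#   check_restore_dir /new_etcd_restore
```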
Edit the static pod manifest file for etcd, typically located at /etc/kubernetes/manifests/etcd.yaml.
Update the --data-dir flag to point to the restored location:
spec:
  containers:
  - command:
    - etcd
    - --data-dir=/new_etcd_restore
    ...
Update volume mounts if necessary:
    volumeMounts:
    - mountPath: /new_etcd_restore
      name: etcd-data
    ...
  volumes:
  - name: etcd-data
    hostPath:
      path: /new_etcd_restore
4. Restart the API Server
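Before triggering the restart, it is worth confirming the restored data directory has the permissions etcd expects: etcd warns unless its data dir is mode 700. This is a sketch; secure_data_dir is a hypothetical helper, run as root on the control-plane node:

```shell
# secure_data_dir: tighten permissions on the restored etcd data directory.
# etcd warns unless its data dir is mode 700. Hypothetical helper name;
# run as root on the control-plane node.
secure_data_dir() {
    dir="$1"
    chmod 700 "$dir" || return 1
    echo "permissions on $dir: $(stat -c %a "$dir")"
}

# Usage (as root):
#   secure_data_dir /new_etcd_restore
```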
To apply the updated etcd configuration, restart the static pods (etcd and the API server) by temporarily moving their manifests out of the directory the kubelet watches:
# Move the static pod manifest files to /tmp as a temporary measure
sudo mv /etc/kubernetes/manifests/* /tmp/
# Wait for the static pods to stop, then move the manifests back to trigger a restart
sudo mv /tmp/* /etc/kubernetes/manifests/
5. Reload kubelet
Finally, restart the kubelet so that it reconciles the updated pod manifests:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
Verification
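After the restart, the API server can take a little while to come back. A small polling helper can make the checks below less flaky; wait_for is a hypothetical helper, and the readyz endpoint is one option for a readiness probe:

```shell
# wait_for: poll a command until it succeeds or a timeout (in seconds) expires.
# Hypothetical helper for watching the control plane come back after the
# manifest moves above.
wait_for() {
    timeout="$1"; shift
    elapsed=0
    until "$@" >/dev/null 2>&1; do
        elapsed=$((elapsed + 1))
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "timed out waiting for: $*" >&2
            return 1
        fi
        sleep 1
    done
    echo "ready: $*"
}

# Usage, once kubectl can reach the cluster again:
#   wait_for 120 kubectl get --raw=/readyz
```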
Check that the etcd pod is running with the restored data:
kubectl get pods -n kube-system -l component=etcd
Verify that the API server can communicate with etcd:
kubectl get componentstatuses
Note: componentstatuses has been deprecated since Kubernetes v1.19; kubectl get --raw='/readyz?verbose' is a more current health check.