Kubernetes maintenance

I can’t connect to my cluster:

systemctl restart kubelet

kubectl get nodes

NAME   STATUS     ROLES           AGE   VERSION
a6     NotReady   control-plane   28m   v1.25.2

kubectl uncordon a6

(uncordon only clears SchedulingDisabled; it won't bring a NotReady node back, so the kubelet/API server still needs fixing)

phomlish@a6:~/kubernetes$ k get all
The connection to the server 10.11.1.96:6443 was refused - did you specify the right host or port?
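Connection refused on 6443 usually means the kube-apiserver static pod isn't running. A few checks that narrow it down (standard commands; the grep pattern is just illustrative, and this node is still on the Docker runtime per the docker info output below):

systemctl status kubelet                       # is the kubelet itself alive?
journalctl -u kubelet -f                       # why can't it start the static pods?
docker ps | grep -E 'kube-apiserver|etcd'      # are the control-plane containers up?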


Where are the etcd certificates, and how do I update them?
k -n kube-system describe pod/etcd-a6

--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
--peer-client-cert-auth=true
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
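If those certificates have expired, kubeadm can report and renew them. A sketch (after renewing, the control-plane static pods need a restart, e.g. by moving their manifests out of /etc/kubernetes/manifests and back):

kubeadm certs check-expiration     # expiry of every kubeadm-managed cert, etcd included
kubeadm certs renew all            # rewrite them under /etc/kubernetes/pki
openssl x509 -in /etc/kubernetes/pki/etcd/peer.crt -noout -dates   # verify the peer cert directly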

root@a6:~# docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Docker Buildx (Docker Inc., v0.9.1-docker)
  scan: Docker Scan (Docker Inc., v0.17.0)

Server:
 Containers: 0
  Running: 0
  Paused: 0
  Stopped: 0
 Images: 48
 Server Version: 20.10.18
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: systemd
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc io.containerd.runc.v2 io.containerd.runtime.v1.linux
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6
 runc version: v1.1.4-0-g5fd4c4d
 init version: de40ad0
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.19.0-21-amd64
 Operating System: Debian GNU/Linux 10 (buster)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 31.25GiB
 Name: a6
 ID: HEER:PWF2:3EJ3:MFWN:J7UP:JZMO:ATRW:62BE:GN4I:LZQC:UGQC:SGQY
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

- name: METALLB_ML_SECRET_KEY
  valueFrom:
    secretKeyRef:
      key: secretkey
      name: memberlist
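That env var reads from a Secret named memberlist; without it the speaker pods cannot start. If it is missing, MetalLB's docs create it like this (assuming the usual metallb-system namespace):

kubectl create secret generic -n metallb-system memberlist \
  --from-literal=secretkey="$(openssl rand -base64 128)"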

Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: CustomResourceDefinition "addresspools.metallb.io" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "metallb"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"

Error: INSTALLATION FAILED: rendered manifests contain a resource that already exists. Unable to continue with install: ValidatingWebhookConfiguration "metallb-webhook-configuration" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "metallb"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"

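Helm refuses to adopt resources left over from an earlier non-Helm MetalLB install. One way out is to add the ownership metadata it is asking for, so the new release can adopt them (release name metallb, namespace default, exactly as in the errors above); deleting the leftover objects and reinstalling works too:

kubectl label crd addresspools.metallb.io app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate crd addresspools.metallb.io meta.helm.sh/release-name=metallb meta.helm.sh/release-namespace=default --overwrite
kubectl label validatingwebhookconfiguration metallb-webhook-configuration app.kubernetes.io/managed-by=Helm --overwrite
kubectl annotate validatingwebhookconfiguration metallb-webhook-configuration meta.helm.sh/release-name=metallb meta.helm.sh/release-namespace=default --overwrite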

tcpdump: watch ARP traffic on interface enp0s31f6 coming from 10.11.1.172:

tcpdump -n -i enp0s31f6 arp src host 10.11.1.172
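To actively probe which machine answers for that address (useful when checking MetalLB L2 announcements; that 10.11.1.172 is a service IP here is my assumption), iputils arping can send the who-has requests itself:

arping -I enp0s31f6 10.11.1.172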
