Summary of Common Kubernetes Problems
Source: https://www.cnblogs.com/passzhang
Problem: resources that will not delete.

[root@k8s-master ~]# kubectl get -f fluentd-elasticsearch/
NAME                          DESIRED   CURRENT   READY   AGE
rc/elasticsearch-logging-v1   0         2         2       15h
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/kibana-logging   0         1         1            1           15h
Error from server (NotFound): services "elasticsearch-logging" not found
Error from server (NotFound): daemonsets.extensions "fluentd-es-v1.22" not found
Error from server (NotFound): services "kibana-logging" not found

Delete without cascading, ignore already-missing objects, and finally force the deletion:

kubectl delete deployment kibana-logging -n kube-system --cascade=false
kubectl delete deployment kibana-logging -n kube-system --ignore-not-found
kubectl delete rc elasticsearch-logging-v1 -n kube-system --force --grace-period=0
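The escalation pattern above (ignore missing objects, then zero grace period with --force) can be wrapped in a small helper. A minimal sketch: it only prints the kubectl commands rather than running them, so nothing is deleted by accident; the resource names are just the ones from this example.

```shell
#!/bin/sh
# Print the escalating deletion commands for a stuck resource.
# Drop the "echo" to actually run them against the cluster.
force_delete() {
  kind=$1; name=$2; ns=$3
  echo "kubectl delete $kind $name -n $ns --ignore-not-found"
  echo "kubectl delete $kind $name -n $ns --grace-period=0 --force"
}

force_delete deployment kibana-logging kube-system
force_delete rc elasticsearch-logging-v1 kube-system
```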
How to reset etcd when the resources still cannot be deleted: wipe the etcd data directory and recreate the flannel network key.

rm -rf /var/lib/etcd/*
etcdctl mk /atomic.io/network/config '{ "Network": "192.168.0.0/16" }'

Next problem: "start request repeated too quickly for kube-apiserver.service".

May 21 07:56:41 k8s-master kube-apiserver: Flag --port has been deprecated, see --insecure-port instead.
May 21 07:56:41 k8s-master kube-apiserver: F0521 07:56:41.692480 4299 universal_validation.go:104] Validate server run options failed: unable to load client CA file: open /var/run/kubernetes/ca.crt: no such file or directory
May 21 07:56:41 k8s-master systemd: kube-apiserver.service: main process exited, code=exited, status=255/n/a
May 21 07:56:41 k8s-master systemd: Failed to start Kubernetes API Server.
May 21 07:56:41 k8s-master systemd: Unit kube-apiserver.service entered failed state.
May 21 07:56:41 k8s-master systemd: kube-apiserver.service failed.
May 21 07:56:41 k8s-master systemd: kube-apiserver.service holdoff time over, scheduling restart.
May 21 07:56:41 k8s-master systemd: start request repeated too quickly for kube-apiserver.service
May 21 07:56:41 k8s-master systemd: Failed to start Kubernetes API Server.

The root cause is the missing client CA file; regenerate the certificates:

openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=k8s-master" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
echo subjectAltName=IP:10.254.0.1 > extfile.cnf
# The IP comes from the following command:
# kubectl get services --all-namespaces |grep 'default'|grep 'kubernetes'|grep '443'|awk '{print $3}'
openssl req -new -key server.key -subj "/CN=k8s-master" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -extfile extfile.cnf -out server.crt -days 10000

A related failure is a permission problem on the CA file:

Validate server run options failed: unable to load client CA file: open /root/keys/ca.crt: permission denied

Start the apiserver and controller-manager against the key files:

/usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://k8s-master:2379 --address=0.0.0.0 --port=8080 --kubelet-port=10250 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --admission-control=ServiceAccount --insecure-bind-address=0.0.0.0 --client-ca-file=/root/keys/ca.crt --tls-cert-file=/root/keys/server.crt --tls-private-key-file=/root/keys/server.key --basic-auth-file=/root/keys/basic_auth.csv --secure-port=443 &>> /var/log/kubernetes/kube-apiserver.log &

/usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://k8s-master:8080 --root-ca-file=/root/keys/ca.crt --service-account-private-key-file=/root/keys/server.key &>> /var/log/kubernetes/kube-controller-manager.log &

Next problem: etcd fails to come back up after a restart.

May 24 13:39:09 k8s-master systemd: Stopped Flanneld overlay address etcd agent.
May 24 13:39:28 k8s-master systemd: Starting Etcd Server...
May 24 13:39:28 k8s-master etcd: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=http://etcd:2379,http://etcd:4001
May 24 13:39:28 k8s-master etcd: recognized environment variable ETCD_NAME, but unused: shadowed by corresponding flag
May 24 13:39:28 k8s-master etcd: recognized environment variable ETCD_DATA_DIR, but unused: shadowed by corresponding flag
May 24 13:39:28 k8s-master etcd: recognized environment variable ETCD_LISTEN_CLIENT_URLS, but unused: shadowed by corresponding flag
May 24 13:39:28 k8s-master etcd: etcd Version: 3.1.3
May 24 13:39:28 k8s-master etcd: Git SHA: 21fdcc6
May 24 13:39:28 k8s-master etcd: Go Version: go1.7.4
May 24 13:39:28 k8s-master etcd: Go OS/Arch: linux/amd64
May 24 13:39:28 k8s-master etcd: setting maximum number of CPUs to 1, total number of available CPUs is 1
May 24 13:39:28 k8s-master etcd: the server is already initialized as member before, starting as etcd member...
May 24 13:39:28 k8s-master etcd: listening for peers on http://localhost:2380
May 24 13:39:28 k8s-master etcd: listening for client requests on 0.0.0.0:2379
May 24 13:39:28 k8s-master etcd: listening for client requests on 0.0.0.0:4001
May 24 13:39:28 k8s-master etcd: recovered store from snapshot at index 140014
May 24 13:39:28 k8s-master etcd: name = master
May 24 13:39:28 k8s-master etcd: data dir = /var/lib/etcd/default.etcd
May 24 13:39:28 k8s-master etcd: member dir = /var/lib/etcd/default.etcd/member
May 24 13:39:28 k8s-master etcd: heartbeat = 100ms
May 24 13:39:28 k8s-master etcd: election = 1000ms
May 24 13:39:28 k8s-master etcd: snapshot count = 10000
May 24 13:39:28 k8s-master etcd: advertise client URLs = http://etcd:2379,http://etcd:4001
May 24 13:39:28 k8s-master etcd: ignored file 0000000000000001-0000000000012700.wal.broken in wal
May 24 13:39:29 k8s-master etcd: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 148905
May 24 13:39:29 k8s-master etcd: 8e9e05c52164694d became follower at term 12
May 24 13:39:29 k8s-master etcd: newRaft 8e9e05c52164694d [peers: [8e9e05c52164694d], term: 12, commit: 148905, applied: 140014, lastindex: 148905, lastterm: 12]
May 24 13:39:29 k8s-master etcd: enabled capabilities for version 3.1
May 24 13:39:29 k8s-master etcd: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32 from store
May 24 13:39:29 k8s-master etcd: set the cluster version to 3.1 from store
May 24 13:39:29 k8s-master etcd: starting server... [version: 3.1.3, cluster version: 3.1]
May 24 13:39:29 k8s-master etcd: raft save state and entries error: open /var/lib/etcd/default.etcd/member/wal/0.tmp: is a directory
May 24 13:39:29 k8s-master systemd: etcd.service: main process exited, code=exited, status=1/FAILURE
May 24 13:39:29 k8s-master systemd: Failed to start Etcd Server.
May 24 13:39:29 k8s-master systemd: Unit etcd.service entered failed state.
May 24 13:39:29 k8s-master systemd: etcd.service failed.
May 24 13:39:29 k8s-master systemd: etcd.service holdoff time over, scheduling restart.

The key line: raft save state and entries error: open /var/lib/etcd/default.etcd/member/wal/0.tmp: is a directory

A similar etcd startup failure, "member has already been bootstrapped":

Mar 05 14:27:15 k8s-node2 etcd[3248]: etcd Version: 3.3.13
Mar 05 14:27:15 k8s-node2 etcd[3248]: Git SHA: 98d3084
Mar 05 14:27:15 k8s-node2 etcd[3248]: Go Version: go1.10.8
Mar 05 14:27:15 k8s-node2 etcd[3248]: Go OS/Arch: linux/amd64
Mar 05 14:27:15 k8s-node2 etcd[3248]: setting maximum number of CPUs to 4, total number of available CPUs is 4
Mar 05 14:27:15 k8s-node2 etcd[3248]: the server is already initialized as member before, starting as etcd member
...
Mar 05 14:27:15 k8s-node2 etcd[3248]: peerTLS: cert = /opt/etcd/ssl/server.pem, key = /opt/etcd/ssl/server-key.pem, ca = , trusted-ca = /opt/etcd/ssl/ca.pem, client-cert-auth = false, crl-file =
Mar 05 14:27:15 k8s-node2 etcd[3248]: listening for peers on https://192.168.25.226:2380
Mar 05 14:27:15 k8s-node2 etcd[3248]: The scheme of client url http://127.0.0.1:2379 is HTTP while peer key/cert files are presented. Ignored key/cert files.
Mar 05 14:27:15 k8s-node2 etcd[3248]: listening for client requests on 127.0.0.1:2379
Mar 05 14:27:15 k8s-node2 etcd[3248]: listening for client requests on 192.168.25.226:2379
Mar 05 14:27:15 k8s-node2 etcd[3248]: member 9c166b8b7cb6ecb8 has already been bootstrapped
Mar 05 14:27:15 k8s-node2 systemd[1]: etcd.service: main process exited, code=exited, status=1/FAILURE
Mar 05 14:27:15 k8s-node2 systemd[1]: Failed to?start?Etcd Server.
Mar?05?14:27:15?k8s-node2 systemd[1]: Unit etcd.service entered?failed?state.
Mar?05?14:27:15?k8s-node2 systemd[1]: etcd.service failed.
Mar?05?14:27:15?k8s-node2 systemd[1]: etcd.service failed.
Mar?05?14:27:15?k8s-node2 systemd[1]: etcd.service holdoff?time?over, scheduling restart.
Mar?05?14:27:15?k8s-node2 systemd[1]:?Starting?Etcd Server...
Mar?05?14:27:15?k8s-node2 etcd[3258]: recognized environment?variable?ETCD_NAME, but?unused: shadowed?by?correspo
nding flag
Mar?05?14:27:15?k8s-node2 etcd[3258]: recognized environment?variable?ETCD_DATA_DIR, but?unused: shadowed?by?corr
esponding flag
Mar?05?14:27:15?k8s-node2 etcd[3258]: recognized environment?variable?ETCD_LISTEN_PEER_URLS, but?unused: shadowed
?by?corresponding?flag
Mar?05?14:27:15?k8s-node2 etcd[3258]: recognized environment?variable?ETCD_LISTEN_CLIENT_URLS, but?unused: shadow
ed?by?corresponding?flag
Mar?05?14:27:15?k8s-node2 etcd[3258]: recognized environment?variable?ETCD_INITIAL_ADVERTISE_PEER_URLS, but unuse
d: shadowed?by?corresponding?flag
Mar?05?14:27:15?k8s-node2 etcd[3258]: recognized environment?variable?ETCD_ADVERTISE_CLIENT_URLS, but?unused:?sha
dowed?by?corresponding?flag
Mar?05?14:27:15?k8s-node2 etcd[3258]: recognized environment?variable?ETCD_INITIAL_CLUSTER, but?unused: shadowed
by?corresponding?flag
Mar?05?14:27:15?k8s-node2 etcd[3258]: recognized environment?variable?ETCD_INITIAL_CLUSTER_TOKEN, but?unused:?sha
dowed?by?corresponding?flag
Mar?05?14:27:15?k8s-node2 etcd[3258]: recognized environment?variable?ETCD_INITIAL_CLUSTER_STATE, but?unused:?sha
dowed?by?corresponding?flag進(jìn)入 etcd 的數(shù)據(jù)存儲(chǔ)目錄進(jìn)行備份 備份原有數(shù)據(jù): cd /var/lib/etcd/default.etcd/member/ cp * ?/data/bak/ 刪除這個(gè)目錄下的所有數(shù)據(jù)文件 rm -rf /var/lib/etcd/default.etcd/member/* 停止另外兩臺 etcd 節(jié)點(diǎn),因?yàn)?etcd 節(jié)點(diǎn)啟動(dòng)時(shí)需要所有節(jié)點(diǎn)一起啟動(dòng),啟動(dòng)成功后即可使用。
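The cp above overwrites a single backup location each time; a timestamped copy is safer when a wipe may need to be repeated. A small sketch (the commented etcd paths are the defaults used in this article):

```shell
#!/bin/sh
# Copy an etcd member directory into a timestamped backup directory
# before wiping it, and print where the backup landed.
backup_member() {
  src=$1                                   # e.g. /var/lib/etcd/default.etcd/member
  dst=$2/etcd-bak-$(date +%Y%m%d-%H%M%S)   # e.g. under /data/bak
  mkdir -p "$dst" && cp -a "$src"/. "$dst"/ && echo "$dst"
}

# backup_member /var/lib/etcd/default.etcd/member /data/bak
```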
master 節(jié)點(diǎn)
systemctl?stop?etcd
systemctl restart etcd
node1 節(jié)點(diǎn)
systemctl?stop?etcd
systemctl restart etcd
node2 節(jié)點(diǎn)
systemctl?stop?etcd
systemctl restart etcd在每臺服務(wù)器需要建立主機(jī)互信的用戶名執(zhí)行以下命令生成公鑰/密鑰,默認(rèn)回車即可
ssh-keygen -t rsa互傳公鑰,第一次需要輸入密碼,之后就OK了
ssh-copy-id?-i /root/.ssh/id_rsa.pub root@192.168.199.132?(-p?2222)測試看是否能登陸
ssh?192.168.199.132?(-p?2222)hostnamectl?set-hostname?k8s-master1yum?install?update
yum update kernel
yum update kernel-devel
yum install kernel-headers
yum install gcc
yum install gcc make

Force-delete a pod stuck in Terminating:

kubectl delete pod NAME --grace-period=0 --force

A namespace stuck in Terminating can be removed by clearing its finalizers:

[root@k8s-master1 k8s]# cat delete-ns.sh
#!/bin/bash
set -e
usage(){
    echo "usage:"
    echo " delns.sh NAMESPACE"
}
if [ $# -lt 1 ];then
    usage
    exit
fi
NAMESPACE=$1
JSONFILE=${NAMESPACE}.json
kubectl get ns "${NAMESPACE}" -o json > "${JSONFILE}"
# Delete the entries under spec.finalizers in the editor, then save and quit
vi "${JSONFILE}"
# PUT to the finalize subresource; requires "kubectl proxy" listening on 8001
curl -k -H "Content-Type: application/json" -X PUT --data-binary @"${JSONFILE}" \
    http://127.0.0.1:8001/api/v1/namespaces/"${NAMESPACE}"/finalize

Containers should declare resource requests and limits. An example container spec that sets only requests:

- name: busybox-cnt02
  image: busybox
  command: ["/bin/sh"]
  args: ["-c", "while true; do echo hello from cnt02; sleep 10; done"]
  resources:
    requests:
      memory: "100Mi"
      cpu: "100m"

In a normal environment this is fine, but if some containers set no limit, their resources can be taken over by other pods, which may cause the containerized application to fail. A LimitRange policy can make pods pick up limits automatically, provided the LimitRange rules are configured in advance.
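For reference, a LimitRange that injects default requests and limits into containers that omit them could look like this (the namespace and the values here are illustrative, not from the article):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: default
spec:
  limits:
  - type: Container
    default:             # becomes the container's limits if none are set
      cpu: 500m
      memory: 256Mi
    defaultRequest:      # becomes the container's requests if none are set
      cpu: 100m
      memory: 100Mi
```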
- END -