Common Kubernetes Problems and Solutions
How to delete an rc, deployment, or service stuck in an inconsistent state
In some cases the kubectl process appears to hang, and a subsequent get shows that only part of the resources were deleted while the rest cannot be removed:
[root@k8s-master ~]# kubectl get -f fluentd-elasticsearch/
NAME                          DESIRED   CURRENT   READY   AGE
rc/elasticsearch-logging-v1   0         2         2       15h
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/kibana-logging   0         1         1            1           15h
Error from server (NotFound): services "elasticsearch-logging" not found
Error from server (NotFound): daemonsets.extensions "fluentd-es-v1.22" not found
Error from server (NotFound): services "kibana-logging" not found
Delete these deployments, services, or rcs with commands such as:
kubectl delete deployment kibana-logging -n kube-system --cascade=false
kubectl delete deployment kibana-logging -n kube-system --ignore-not-found
kubectl delete rc elasticsearch-logging-v1 -n kube-system --force --grace-period=0
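To confirm that the forced deletes actually took effect, you can re-list the resources; a quick check, using the resource and namespace names from the example above:

```shell
# Re-list the example resources; grep exits non-zero when nothing matches
kubectl get rc,deployment -n kube-system | grep -E 'elasticsearch-logging|kibana-logging' \
  || echo "all example resources are gone"
```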
How to reset etcd when resources cannot be deleted
rm -rf /var/lib/etcd/*
After deleting the data, reboot the master node. After resetting etcd you must reconfigure the network:
etcdctl mk /atomic.io/network/config '{ "Network": "192.168.0.0/16" }'
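After rewriting the key it is worth reading it back; a quick check, assuming the etcd v2 API that `etcdctl mk` implies and the same key path as above:

```shell
# Read back the flannel network config key written above
etcdctl get /atomic.io/network/config
```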
Failed to start the apiserver
Every start attempt fails with:
start request repeated too quickly for kube-apiserver.service
This is not actually a start-frequency problem. Check /var/log/messages; in my case the cause was that, with ServiceAccount enabled, ca.crt and related files could not be found, so startup failed:
May 21 07:56:41 k8s-master kube-apiserver: Flag --port has been deprecated, see --insecure-port instead.
May 21 07:56:41 k8s-master kube-apiserver: F0521 07:56:41.692480 4299 universal_validation.go:104] Validate server run options failed: unable to load client CA file: open /var/run/kubernetes/ca.crt: no such file or directory
May 21 07:56:41 k8s-master systemd: kube-apiserver.service: main process exited, code=exited, status=255/n/a
May 21 07:56:41 k8s-master systemd: Failed to start Kubernetes API Server.
May 21 07:56:41 k8s-master systemd: Unit kube-apiserver.service entered failed state.
May 21 07:56:41 k8s-master systemd: kube-apiserver.service failed.
May 21 07:56:41 k8s-master systemd: kube-apiserver.service holdoff time over, scheduling restart.
May 21 07:56:41 k8s-master systemd: start request repeated too quickly for kube-apiserver.service
May 21 07:56:41 k8s-master systemd: Failed to start Kubernetes API Server.
When deploying logging components such as fluentd, many problems come down to the security configuration required once the ServiceAccount option is enabled, so ultimately it is still a matter of configuring ServiceAccount properly.
Permission denied errors
While configuring fluentd, the error cannot create /var/log/fluentd.log: Permission denied appeared. It is caused by SELinux not being disabled.
Set SELINUX=enforcing to disabled in /etc/selinux/config, then reboot.
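If a reboot is inconvenient, SELinux can also be switched to permissive mode immediately; a sketch (the sed line mirrors the /etc/selinux/config change described above, which only takes effect after the next reboot):

```shell
getenforce    # show the current mode (Enforcing/Permissive/Disabled)
setenforce 0  # switch to permissive right away; reverts on reboot
# make the change persistent, as described above
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
```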
ServiceAccount-based configuration
First generate the required keys; replace k8s-master with your master's hostname.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -subj "/CN=k8s-master" -days 10000 -out ca.crt
openssl genrsa -out server.key 2048
echo subjectAltName=IP:10.254.0.1 > extfile.cnf
# The IP is determined by the following command:
# kubectl get services --all-namespaces |grep 'default'|grep 'kubernetes'|grep '443'|awk '{print $3}'
openssl req -new -key server.key -subj "/CN=k8s-master" -out server.csr
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -extfile extfile.cnf -out server.crt -days 10000
If you instead change the parameters in the /etc/kubernetes/apiserver configuration file, starting via systemctl start kube-apiserver fails with:
Validate server run options failed: unable to load client CA file: open /root/keys/ca.crt: permission denied
But the API Server can be started from the command line:
/usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://k8s-master:2379 --address=0.0.0.0 --port=8080 --kubelet-port=10250 --allow-privileged=true --service-cluster-ip-range=10.254.0.0/16 --admission-control=ServiceAccount --insecure-bind-address=0.0.0.0 --client-ca-file=/root/keys/ca.crt --tls-cert-file=/root/keys/server.crt --tls-private-key-file=/root/keys/server.key --basic-auth-file=/root/keys/basic_auth.csv --secure-port=443 &>> /var/log/kubernetes/kube-apiserver.log &
Start the Controller-manager from the command line:
/usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://k8s-master:8080 --root-ca-file=/root/keys/ca.crt --service-account-private-key-file=/root/keys/server.key &>> /var/log/kubernetes/kube-controller-manager.log &
etcd fails to start - problem (1)
etcd plays the same role for a Kubernetes cluster that ZooKeeper plays elsewhere: almost every service depends on etcd being up first, including flanneld, the apiserver, and docker. Starting etcd produced the following error log:
May 24 13:39:09 k8s-master systemd: Stopped Flanneld overlay address etcd agent.
May 24 13:39:28 k8s-master systemd: Starting Etcd Server...
May 24 13:39:28 k8s-master etcd: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=http://etcd:2379,http://etcd:4001
May 24 13:39:28 k8s-master etcd: recognized environment variable ETCD_NAME, but unused: shadowed by corresponding flag
May 24 13:39:28 k8s-master etcd: recognized environment variable ETCD_DATA_DIR, but unused: shadowed by corresponding flag
May 24 13:39:28 k8s-master etcd: recognized environment variable ETCD_LISTEN_CLIENT_URLS, but unused: shadowed by corresponding flag
May 24 13:39:28 k8s-master etcd: etcd Version: 3.1.3
May 24 13:39:28 k8s-master etcd: Git SHA: 21fdcc6
May 24 13:39:28 k8s-master etcd: Go Version: go1.7.4
May 24 13:39:28 k8s-master etcd: Go OS/Arch: linux/amd64
May 24 13:39:28 k8s-master etcd: setting maximum number of CPUs to 1, total number of available CPUs is 1
May 24 13:39:28 k8s-master etcd: the server is already initialized as member before, starting as etcd member...
May 24 13:39:28 k8s-master etcd: listening for peers on http://localhost:2380
May 24 13:39:28 k8s-master etcd: listening for client requests on 0.0.0.0:2379
May 24 13:39:28 k8s-master etcd: listening for client requests on 0.0.0.0:4001
May 24 13:39:28 k8s-master etcd: recovered store from snapshot at index 140014
May 24 13:39:28 k8s-master etcd: name = master
May 24 13:39:28 k8s-master etcd: data dir = /var/lib/etcd/default.etcd
May 24 13:39:28 k8s-master etcd: member dir = /var/lib/etcd/default.etcd/member
May 24 13:39:28 k8s-master etcd: heartbeat = 100ms
May 24 13:39:28 k8s-master etcd: election = 1000ms
May 24 13:39:28 k8s-master etcd: snapshot count = 10000
May 24 13:39:28 k8s-master etcd: advertise client URLs = http://etcd:2379,http://etcd:4001
May 24 13:39:28 k8s-master etcd: ignored file 0000000000000001-0000000000012700.wal.broken in wal
May 24 13:39:29 k8s-master etcd: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 148905
May 24 13:39:29 k8s-master etcd: 8e9e05c52164694d became follower at term 12
May 24 13:39:29 k8s-master etcd: newRaft 8e9e05c52164694d [peers: [8e9e05c52164694d], term: 12, commit: 148905, applied: 140014, lastindex: 148905, lastterm: 12]
May 24 13:39:29 k8s-master etcd: enabled capabilities for version 3.1
May 24 13:39:29 k8s-master etcd: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32 from store
May 24 13:39:29 k8s-master etcd: set the cluster version to 3.1 from store
May 24 13:39:29 k8s-master etcd: starting server... [version: 3.1.3, cluster version: 3.1]
May 24 13:39:29 k8s-master etcd: raft save state and entries error: open /var/lib/etcd/default.etcd/member/wal/0.tmp: is a directory
May 24 13:39:29 k8s-master systemd: etcd.service: main process exited, code=exited, status=1/FAILURE
May 24 13:39:29 k8s-master systemd: Failed to start Etcd Server.
May 24 13:39:29 k8s-master systemd: Unit etcd.service entered failed state.
May 24 13:39:29 k8s-master systemd: etcd.service failed.
May 24 13:39:29 k8s-master systemd: etcd.service holdoff time over, scheduling restart.
The key line:
raft save state and entries error: open /var/lib/etcd/default.etcd/member/wal/0.tmp: is a directory
Go into that directory, delete 0.tmp, and etcd will start again.
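Spelled out as commands, the fix above is:

```shell
# Remove the stray temp directory named like a WAL file
# (path taken from the error message above), then restart etcd
rm -rf /var/lib/etcd/default.etcd/member/wal/0.tmp
systemctl restart etcd
systemctl status etcd --no-pager   # confirm it is active (running)
```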
etcd fails to start - timeout problem (2)
Background: three etcd nodes were deployed, and one day all three machines lost power at once. After powering back on, the K8S cluster itself worked, but a check of the components showed that etcd on one node would not start. Investigation found the system clock was wrong; it was corrected with ntpdate ntp.aliyun.com, but restarting etcd still failed with:
Mar 05 14:27:15 k8s-node2 etcd[3248]: etcd Version: 3.3.13
Mar 05 14:27:15 k8s-node2 etcd[3248]: Git SHA: 98d3084
Mar 05 14:27:15 k8s-node2 etcd[3248]: Go Version: go1.10.8
Mar 05 14:27:15 k8s-node2 etcd[3248]: Go OS/Arch: linux/amd64
Mar 05 14:27:15 k8s-node2 etcd[3248]: setting maximum number of CPUs to 4, total number of available CPUs is 4
Mar 05 14:27:15 k8s-node2 etcd[3248]: the server is already initialized as member before, starting as etcd member
...
Mar 05 14:27:15 k8s-node2 etcd[3248]: peerTLS: cert = /opt/etcd/ssl/server.pem, key = /opt/etcd/ssl/server-key.pem, ca = , trusted-ca = /opt/etcd/ssl/ca.pem, client-cert-auth = false, crl-file =
Mar 05 14:27:15 k8s-node2 etcd[3248]: listening for peers on https://192.168.25.226:2380
Mar 05 14:27:15 k8s-node2 etcd[3248]: The scheme of client url http://127.0.0.1:2379 is HTTP while peer key/cert files are presented. Ignored key/cert files.
Mar 05 14:27:15 k8s-node2 etcd[3248]: listening for client requests on 127.0.0.1:2379
Mar 05 14:27:15 k8s-node2 etcd[3248]: listening for client requests on 192.168.25.226:2379
Mar 05 14:27:15 k8s-node2 etcd[3248]: member 9c166b8b7cb6ecb8 has already been bootstrapped
Mar 05 14:27:15 k8s-node2 systemd[1]: etcd.service: main process exited, code=exited, status=1/FAILURE
Mar 05 14:27:15 k8s-node2 systemd[1]: Failed to start Etcd Server.
Mar 05 14:27:15 k8s-node2 systemd[1]: Unit etcd.service entered failed state.
Mar 05 14:27:15 k8s-node2 systemd[1]: etcd.service failed.
Mar 05 14:27:15 k8s-node2 systemd[1]: etcd.service failed.
Mar 05 14:27:15 k8s-node2 systemd[1]: etcd.service holdoff time over, scheduling restart.
Mar 05 14:27:15 k8s-node2 systemd[1]: Starting Etcd Server...
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_NAME, but unused: shadowed by corresponding flag
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_DATA_DIR, but unused: shadowed by corresponding flag
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_LISTEN_PEER_URLS, but unused: shadowed by corresponding flag
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_LISTEN_CLIENT_URLS, but unused: shadowed by corresponding flag
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS, but unused: shadowed by corresponding flag
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_ADVERTISE_CLIENT_URLS, but unused: shadowed by corresponding flag
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_INITIAL_CLUSTER, but unused: shadowed by corresponding flag
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_INITIAL_CLUSTER_TOKEN, but unused: shadowed by corresponding flag
Mar 05 14:27:15 k8s-node2 etcd[3258]: recognized environment variable ETCD_INITIAL_CLUSTER_STATE, but unused: shadowed by corresponding flag
Solution: the log shows no obviously fatal error. In practice, losing a single etcd node has little impact on the cluster, and the cluster was already usable; the broken etcd member simply would not start. The fix is as follows.
Enter etcd's data directory and back up the existing data:
cd /var/lib/etcd/default.etcd/member/
cp -r * /data/bak/
Then delete all data files under this directory:
rm -rf /var/lib/etcd/default.etcd/member/*
Also stop etcd on the other two nodes, because the etcd members need to start up together; once they come up successfully the cluster is usable again.
# on the master node
systemctl stop etcd
systemctl restart etcd
# on the node1 node
systemctl stop etcd
systemctl restart etcd
# on the node2 node
systemctl stop etcd
systemctl restart etcd
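Once all three members are back up, it is worth confirming the cluster is healthy; a sketch using the v2-style etcdctl flags matching the TLS paths from the logs above (endpoints and certificate paths are environment-specific):

```shell
etcdctl --ca-file=/opt/etcd/ssl/ca.pem \
        --cert-file=/opt/etcd/ssl/server.pem \
        --key-file=/opt/etcd/ssl/server-key.pem \
        --endpoints="https://192.168.25.226:2379" \
        cluster-health
# expect "cluster is healthy" with all three members listed
```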
CentOS: configuring passwordless SSH trust between hosts
On every server, as the user that needs the trust relationship, run the following to generate a public/private key pair (just press Enter to accept the defaults):
ssh-keygen -t rsa
You will see the public key file that was generated. Exchange public keys between the hosts; the first copy requires a password, after that none is needed.
ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.199.132 (-p 2222)
-p specifies the SSH port; omit it for the default port, but if the port was changed you must add -p. You will see that an authorized_keys file has been created under .ssh/, recording the public keys of the other servers allowed to log in to this one. Test that login now works:
ssh 192.168.199.132 (-p 2222)
CentOS: changing the hostname
hostnamectl?set-hostname?k8s-master1
VirtualBox: enabling copy and paste for CentOS
Install the prerequisites for the guest additions (if a package will not install or produces no output, change update to install, or vice versa, and run the command again):
yum update
yum update kernel
yum update kernel-devel
yum install kernel-headers
yum install gcc
yum install gcc make
When these finish, run the guest additions installer:
sh VBoxLinuxAdditions.run
Deleting a Pod stuck in the Terminating state
It can be force-deleted with the following command:
kubectl delete pod NAME --grace-period=0 --force
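When many pods are stuck, the one-off command above can be looped over every Terminating pod; a sketch (review the awk-selected list before deleting for real):

```shell
# STATUS is the 4th column of "kubectl get pods --all-namespaces"
kubectl get pods --all-namespaces | awk '$4 == "Terminating" {print $1, $2}' |
  while read -r ns pod; do
    kubectl delete pod "$pod" -n "$ns" --grace-period=0 --force
  done
```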
Deleting a namespace stuck in the Terminating state
It can be force-deleted with the following script:
[root@k8s-master1 k8s]# cat delete-ns.sh
#!/bin/bash
# Force-finalize a namespace stuck in Terminating.
# Requires a running "kubectl proxy" (its default address is 127.0.0.1:8001).
set -e

usage(){
    echo "usage:"
    echo " delete-ns.sh NAMESPACE"
}

if [ $# -lt 1 ];then
    usage
    exit 1
fi

NAMESPACE=$1
JSONFILE=${NAMESPACE}.json

kubectl get ns "${NAMESPACE}" -o json > "${JSONFILE}"
# In the editor, remove the entries under spec.finalizers, then save and quit
vi "${JSONFILE}"
curl -k -H "Content-Type: application/json" -X PUT --data-binary @"${JSONFILE}" \
    http://127.0.0.1:8001/api/v1/namespaces/"${NAMESPACE}"/finalize
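The interactive vi step can be skipped entirely; a non-interactive sketch, assuming jq is installed and `kubectl proxy` is listening on its default 127.0.0.1:8001 (the namespace name here is hypothetical):

```shell
NS=stuck-namespace   # hypothetical namespace stuck in Terminating
# Strip the finalizers with jq and PUT the result straight to /finalize
kubectl get ns "$NS" -o json \
  | jq '.spec.finalizers = []' \
  | curl -k -H "Content-Type: application/json" -X PUT --data-binary @- \
      "http://127.0.0.1:8001/api/v1/namespaces/$NS/finalize"
```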
What can go wrong when a container has valid CPU/memory requests but no limits?
Below we create such a container: it has a requests setting but no limits setting.
- name: busybox-cnt02
  image: busybox
  command: ["/bin/sh"]
  args: ["-c", "while true; do echo hello from cnt02; sleep 10;done"]
  resources:
    requests:
      memory: "100Mi"
      cpu: "100m"
What problem does this container cause? In a normal environment, none. But under resource pressure, if some containers have no limit set, other pods can take resources away from them, which may cause the containerized application to fail. A LimitRange policy can be used so that such pods receive limits automatically, provided the LimitRange rules are configured in advance.
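A LimitRange of the kind mentioned above can be sketched as follows (the name, namespace, and default values are assumptions, not from the original article):

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits
  namespace: default
spec:
  limits:
  - type: Container
    default:            # injected as limits for containers that set none
      cpu: "500m"
      memory: "256Mi"
    defaultRequest:     # injected as requests for containers that set none
      cpu: "100m"
      memory: "100Mi"
EOF
```

With this in place, a container like busybox-cnt02 above would receive the default limits automatically at admission time.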
Original article: https://www.cnblogs.com/passzhan

