Compute Layer / k8s / istio
Nodes
Node overview
| Node | OS | Spec | IP | Role |
|---|---|---|---|---|
| mgm | Rocky 9.1 | 2 vCPU, 4 GB RAM, 8 GB disk | 10.2.20.59/192.168.3.x | Management node, passwordless SSH |
| k8s-master1 | Rocky 9.1 | 4 vCPU, 4 GB RAM, 32 GB disk | 10.2.20.110/192.168.3.x | Control plane |
| k8s-node1 | Rocky 9.1 | 4 vCPU, 4 GB RAM, 32 GB disk | 10.2.20.111/192.168.3.x | Worker |
| k8s-node2 | Rocky 9.1 | 4 vCPU, 4 GB RAM, 32 GB disk | 10.2.20.112/192.168.3.x | Worker |
Kubernetes version: v1.27.2
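Later sections push commands to all nodes with "ansible k8s -m shell ..." from mgm, which requires a "k8s" inventory group. A minimal sketch of /etc/ansible/hosts on mgm (group name taken from those later commands; host entries are this lab's addresses):
cat > /etc/ansible/hosts << 'EOF'
[k8s]
k8s-master1 ansible_host=10.2.20.110
k8s-node1 ansible_host=10.2.20.111
k8s-node2 ansible_host=10.2.20.112
EOF
ansible k8s -m ping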
5.1 K8s Node Configuration
5.1.1 Basic Configuration
Apply this section on all k8s nodes.
5.1.1.1 Base packages and kernel parameters
Configure the hosts file
cat >> /etc/hosts << 'EOF'
10.2.20.110 k8s-master1
10.2.20.111 k8s-node1
10.2.20.112 k8s-node2
EOF
Basic settings and packages
swapoff -a
sed -i '/swap/s/^/#/' /etc/fstab
free -m
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
ip_tables
iptable_filter
overlay
EOF
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
yum -y install epel-release
yum -y install bash-completion net-tools gcc wget curl telnet tree lrzsz iproute zip
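Optional check that the kernel modules and sysctl values from k8s.conf are actually in effect:
lsmod | grep -E 'br_netfilter|overlay'
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward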
5.1.1.2 Container runtime configuration
Docker and cri-o are the two commonly used container runtimes; this guide uses cri-o.
Install cri-o
yum -y install curl jq tar
curl https://raw.githubusercontent.com/cri-o/cri-o/main/scripts/get | bash -s -- -a amd64
systemctl enable --now crio.service
systemctl start crio
Configure cri-o
# cat /etc/crictl.yaml
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
timeout: 10
debug: false
# vi /etc/crio/crio.conf
[crio.image]
pause_image = "registry.aliyuncs.com/google_containers/pause:3.9"
# systemctl restart crio
Test
# crictl --runtime-endpoint unix:///run/crio/crio.sock version
Version: 0.1.0
RuntimeName: cri-o
RuntimeVersion: 1.27.0
RuntimeApiVersion: v1
# crio --version
crio version 1.27.0
Version: 1.27.0
GitCommit: 844b43be4337b72a54b53518667451c975515d0b
GitCommitDate: 2023-06-03T07:36:19Z
GitTreeState: dirty
BuildDate: 1980-01-01T00:00:00Z
GoVersion: go1.20.4
Compiler: gc
Platform: linux/amd64
Linkmode: static
BuildTags:
static
netgo
osusergo
exclude_graphdriver_btrfs
exclude_graphdriver_devicemapper
seccomp
apparmor
selinux
LDFlags: unknown
SeccompEnabled: true
AppArmorEnabled: false
5.1.1.3 Install kubectl, kubelet, and kubeadm
Configure the Alibaba Cloud Kubernetes repository
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Check available versions
# yum -y makecache
# yum list kubelet --showduplicates | sort -r
...
kubelet.x86_64 1.27.2-0 kubernetes
kubelet.x86_64 1.27.2-0 @kubernetes
kubelet.x86_64 1.27.1-0 kubernetes
kubelet.x86_64 1.27.0-0 kubernetes
...
Install kubectl, kubelet, and kubeadm (the latest version is installed by default)
# yum -y install kubectl kubelet kubeadm
Note: run "systemctl enable kubelet" on each node only after the node has successfully joined the cluster.
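Since this guide targets v1.27.2, the packages can instead be pinned to that exact release (package naming as shown in the version listing above):
yum -y install kubectl-1.27.2-0 kubelet-1.27.2-0 kubeadm-1.27.2-0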
5.1.1.4 Prepare the k8s system images
When the master and worker nodes are configured, the k8s system images are pulled from the public network. These images can be pulled to each node ahead of time. List the k8s system images:
# kubeadm config images list --kubernetes-version=1.27.2 --image-repository="registry.aliyuncs.com/google_containers"
W0604 23:32:32.215609 11292 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
registry.aliyuncs.com/google_containers/kube-apiserver:v1.27.2
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.27.2
registry.aliyuncs.com/google_containers/kube-scheduler:v1.27.2
registry.aliyuncs.com/google_containers/kube-proxy:v1.27.2
registry.aliyuncs.com/google_containers/pause:3.9
registry.aliyuncs.com/google_containers/etcd:3.5.7-0
registry.aliyuncs.com/google_containers/coredns:v1.10.1
Pull the images
kubeadm config images pull --kubernetes-version=1.27.2 --image-repository="registry.aliyuncs.com/google_containers"
Verify
# crictl images
IMAGE TAG IMAGE ID SIZE
registry.aliyuncs.com/google_containers/coredns v1.10.1 ead0a4a53df89 53.6MB
registry.aliyuncs.com/google_containers/etcd 3.5.7-0 86b6af7dd652c 297MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.27.2 c5b13e4f7806d 122MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.27.2 ac2b7465ebba9 114MB
registry.aliyuncs.com/google_containers/kube-proxy v1.27.2 b8aa50768fd67 72.7MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.27.2 89e70da428d29 59.8MB
registry.aliyuncs.com/google_containers/pause 3.9 e6f1816883972 750kB
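To pre-pull the images on every node at once, the same pull command can be pushed out from mgm; a sketch assuming the ansible inventory described earlier:
ansible k8s -m shell -a "kubeadm config images pull --kubernetes-version=1.27.2 --image-repository=registry.aliyuncs.com/google_containers --cri-socket=unix:///var/run/crio/crio.sock"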
5.1.2 Master node configuration
Initialize the first master node
# kubeadm init \
--kubernetes-version="1.27.2" \
--cri-socket="/var/run/crio/crio.sock" \
--control-plane-endpoint="10.2.20.110" \
--apiserver-advertise-address=10.2.20.110 \
--image-repository="registry.aliyuncs.com/google_containers" \
--service-cidr=10.10.0.0/16 \
--pod-network-cidr="10.244.0.0/16" \
--ignore-preflight-errors=Swap \
--upload-certs
Output
...
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of the control-plane node running the following command on each as root:
Join an additional master:
kubeadm join 10.2.20.110:6443 --token y1dzd6.rmojednvdy1ukevo \
--discovery-token-ca-cert-hash sha256:4fc878964ab80032ee47e17cdf8a67700f1cc58a72af69d7ffa3b7e0ac0b2b09 \
--control-plane --certificate-key 45d54477eeb7228c6728cbc343c1bb59cce539f3f65e83e6136a724a43b45ac9
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
Then you can join any number of worker nodes by running the following on each as root:
Join worker nodes:
kubeadm join 10.2.20.110:6443 --token y1dzd6.rmojednvdy1ukevo \
--discovery-token-ca-cert-hash sha256:4fc878964ab80032ee47e17cdf8a67700f1cc58a72af69d7ffa3b7e0ac0b2b09
Enable kubelet at boot
systemctl enable kubelet.service
Configure kubectl
Set up the kubectl environment variables
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
# source /etc/profile
Run the following to enable kubectl auto-completion:
# echo "source <(kubectl completion bash)" >> ~/.bash_profile
# source .bash_profile
Test
# kubectl version --short
Flag --short has been deprecated, and will be removed in the future. The --short output will become the default.
Client Version: v1.27.2
Kustomize Version: v5.0.1
Server Version: v1.27.2
# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane 8m45s v1.27.2
5.1.3 Worker node configuration
Run the following command on every worker node
# kubeadm join 10.2.20.110:6443 --token y1dzd6.rmojednvdy1ukevo \
--discovery-token-ca-cert-hash sha256:4fc878964ab80032ee47e17cdf8a67700f1cc58a72af69d7ffa3b7e0ac0b2b09
Output
...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Enable kubelet at boot
systemctl enable kubelet.service
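The bootstrap token used in the join commands expires after 24 hours by default. If it has expired, generate a fresh join command on the master:
kubeadm token create --print-join-command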
5.1.4 Configure kubectl on the management node mgm
Install the kubectl client on a machine outside the k8s cluster and set up its environment variables.
scp k8s-master1:/usr/bin/kubectl /usr/bin/
mkdir -p $HOME/.kube
scp k8s-master1:/etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
mkdir /etc/kubernetes
scp k8s-master1:/etc/kubernetes/admin.conf /etc/kubernetes/
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
Enable kubectl auto-completion:
echo "source <(kubectl completion bash)" >> ~/.bash_profile
source .bash_profile
Test
# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true","reason":""}
# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 Ready control-plane 33m v1.27.2
k8s-node1 Ready <none> 20m v1.27.2
k8s-node2 Ready <none> 19m v1.27.2
# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-7bdc4cb885-hcl6t 1/1 Running 0 16m
kube-system coredns-7bdc4cb885-hvmgs 1/1 Running 0 16m
kube-system etcd-k8s-master1 1/1 Running 0 17m
kube-system kube-apiserver-k8s-master1 1/1 Running 0 16m
kube-system kube-controller-manager-k8s-master1 1/1 Running 0 16m
kube-system kube-proxy-464dg 1/1 Running 0 16m
kube-system kube-proxy-7vtxg 1/1 Running 0 2m53s
kube-system kube-proxy-crfkg 1/1 Running 0 3m52s
kube-system kube-scheduler-k8s-master1 1/1 Running 0 16m
5.1.5 Configure access to the private Harbor registry
Perform these steps on the management node, and only after all k8s nodes have been installed successfully.
5.1.5.1 Private registry access for k8s/crictl
Add the private CA root certificate to the trust chain on all k8s nodes
ansible k8s -m shell -a "wget http://10.2.20.59/ssl/ca.pem -O /tmp/ca.pem"
ansible k8s -m shell -a "cat /tmp/ca.pem >> /etc/pki/tls/certs/ca-bundle.crt"
Create config.json to store the private registry username and password.
# cat > config.json << 'EOF'
{
"auths": {
"harbor.demo.com": {
"auth": "YWRtaW46MTIzNDU2NzgK"
}
}
}
EOF
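The auth value is the base64 encoding of "username:password"; it can be generated like this (the credentials shown are placeholders):
echo -n 'admin:12345678' | base64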
# ansible k8s -m copy -a "src=config.json dest=/var/lib/kubelet/"
# ansible k8s -m shell -a "systemctl restart kubelet.service"
Configure cri-o/crictl to use config.json
# vi crio.conf
...
[crio.image]
global_auth_file = "/var/lib/kubelet/config.json"
# ansible k8s -m copy -a "src=crio.conf dest=/etc/crio/"
# ansible k8s -m shell -a "systemctl restart crio"
Note: this approach stores the registry credentials in config.json, making them available to all namespaces.
5.1.5.2 Test
Pull an image with crictl (run on one of the k8s nodes)
# crictl pull harbor.demo.com/web/busybox:v2.1
Image is up to date for harbor.demo.com/web/busybox@sha256:0152995fd9b720acfc49ab88e48bc9f4509974fb17025896740ae02396e37388
Have k8s pull an image from the private registry
# kubectl create namespace test
# cat app-c19-1.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-test
  namespace: test
spec:
  selector:
    matchLabels:
      app: app-test
  replicas: 1
  template:
    metadata:
      name: app-test
      namespace: test
      labels:
        app: app-test
    spec:
      containers:
      - name: http
        image: harbor.demo.com/test/centos:v0.1.1
        imagePullPolicy: IfNotPresent
        ports:
        - name: port-test-01
          containerPort: 8080
          protocol: TCP
# kubectl apply -f app-c19-1.yaml
# kubectl -n test get all
NAME READY STATUS RESTARTS AGE
pod/app-test-55f5b45c96-7fg8g 1/1 Running 0 17s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/app-test 1/1 1 1 17s
NAME DESIRED CURRENT READY AGE
replicaset.apps/app-test-55f5b45c96 1 1 1 17s
5.1.5.3 Store the registry credentials in a Secret
Section 5.1.5.1 stores the registry credentials in config.json for use by all namespaces; alternatively, a Secret can hold the credentials.
Create the Secret in the test namespace (an imagePullSecrets entry is resolved in the pod's own namespace):
# kubectl -n test create secret docker-registry harbor-test \
--docker-server="harbor.demo.com" \
--docker-username="admin" \
--docker-password="12qwaszx+pp"
# cat app-c19-2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-test
  namespace: test
spec:
  selector:
    matchLabels:
      app: app-test
  replicas: 1
  template:
    metadata:
      name: app-test
      namespace: test
      labels:
        app: app-test
    spec:
      imagePullSecrets:
      - name: harbor-test
      containers:
      - name: http
        image: harbor.demo.com/test/centos:v0.1.2
        imagePullPolicy: IfNotPresent
        ports:
        - name: port-test-01
          containerPort: 8080
          protocol: TCP
# kubectl apply -f app-c19-2.yaml
# kubectl -n test get pod
NAME READY STATUS RESTARTS AGE
app-test-6644fb79b-g4njz 1/1 Running 0 18s
The imagePullSecrets field is what references the registry Secret.
5.2 Network Configuration with Calico
Kubernetes supports multiple network models through the CNI interface, such as Calico, Flannel, Open vSwitch, Weave, and Cilium.
This guide uses Calico.
5.2.1 Install Calico
https://github.com/projectcalico/cni-plugin
https://github.com/projectcalico/calico
https://docs.tigera.io/calico/latest/getting-started/kubernetes/quickstart
This guide uses the Calico plugin: a pure layer-3 solution that requires no overlay network and implements policy on top of iptables.
Download the manifest
# wget https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/calico.yaml
# cat calico.yaml | grep "image:"
image: docker.io/calico/cni:v3.26.0
image: docker.io/calico/cni:v3.26.0
image: docker.io/calico/node:v3.26.0
image: docker.io/calico/node:v3.26.0
image: docker.io/calico/kube-controllers:v3.26.0
Copy these images to the private registry, then edit the image references in calico.yaml, replacing docker.io with harbor.demo.com.
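One way to mirror the images and rewrite the manifest, assuming skopeo is available on mgm and a "calico" project already exists in Harbor (credentials are placeholders):
for img in cni:v3.26.0 node:v3.26.0 kube-controllers:v3.26.0; do
  skopeo copy docker://docker.io/calico/$img docker://harbor.demo.com/calico/$img --dest-creds admin:12qwaszx+pp
done
sed -i 's#docker.io/calico#harbor.demo.com/calico#g' calico.yaml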
# cat calico.yaml | grep "image: "
image: harbor.demo.com/calico/cni:v3.26.0
image: harbor.demo.com/calico/cni:v3.26.0
image: harbor.demo.com/calico/node:v3.26.0
image: harbor.demo.com/calico/node:v3.26.0
image: harbor.demo.com/calico/kube-controllers:v3.26.0
Install
# kubectl apply -f calico.yaml
Verify
# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-868d576d4-7jrwh 1/1 Running 0 12m
kube-system calico-node-ld8gv 1/1 Running 0 17m
kube-system calico-node-s5x7q 1/1 Running 0 17m
kube-system calico-node-zfr76 1/1 Running 0 17m
kube-system coredns-7bdc4cb885-hcl6t 1/1 Running 0 4h20m
kube-system coredns-7bdc4cb885-hvmgs 1/1 Running 0 4h20m
kube-system etcd-k8s-master1 1/1 Running 0 4h20m
kube-system kube-apiserver-k8s-master1 1/1 Running 0 4h20m
kube-system kube-controller-manager-k8s-master1 1/1 Running 0 4h20m
kube-system kube-proxy-464dg 1/1 Running 0 4h20m
kube-system kube-proxy-7vtxg 1/1 Running 0 4h6m
kube-system kube-proxy-crfkg 1/1 Running 0 4h7m
kube-system kube-scheduler-k8s-master1 1/1 Running 0 4h20m
Configure cri-o to use the CNI plugins
# tree /etc/cni/net.d/
/etc/cni/net.d/
├── 10-calico.conflist
├── 11-crio-ipv4-bridge.conflist
└── calico-kubeconfig
# tree /opt/cni/bin/
/opt/cni/bin/
├── bandwidth
├── bridge
├── calico
├── calico-ipam
├── dhcp
├── dummy
├── firewall
├── flannel
├── host-device
├── host-local
├── install
├── ipvlan
├── loopback
├── macvlan
├── portmap
├── ptp
├── sbr
├── static
├── tap
├── tuning
├── vlan
└── vrf
Modify the cri-o configuration so that it picks up the Calico network:
# vi /etc/crio/crio.conf
[crio.network]
# The default CNI network name to be selected. If not set or "", then
# CRI-O will pick-up the first one found in network_dir.
# cni_default_network = ""
# Path to the directory where CNI configuration files are located.
network_dir = "/etc/cni/net.d/"
# Paths to directories where CNI plugin binaries are located.
plugin_dirs = [
"/opt/cni/bin/",
]
# ansible k8s -m copy -a "src=crio.conf dest=/etc/crio/"
# ansible k8s -m shell -a "systemctl restart crio"
5.2.2 The calicoctl tool
calicoctl is the Calico client management tool. It makes it easy to manage Calico networking, configuration, and security policy. The calicoctl command line provides many resource management commands for creating, modifying, deleting, and viewing Calico resources, including node, bgpPeer, hostEndpoint, workloadEndpoint, ipPool, policy, and profile.
Notes
- The calicoctl version must match the Calico version
- Install this command on the master node
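A minimal install sketch, assuming calicoctl v3.26.0 to match the images above and the kubeconfig already present on the master:
curl -L https://github.com/projectcalico/calico/releases/download/v3.26.0/calicoctl-linux-amd64 -o /usr/local/bin/calicoctl
chmod +x /usr/local/bin/calicoctl
export DATASTORE_TYPE=kubernetes KUBECONFIG=/etc/kubernetes/admin.conf
calicoctl get nodes
calicoctl node status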