xugaoyi
2023-09-02

k8s Cluster Setup

# Environment

  • Linux: Ubuntu 20.04.6 LTS
  • Three machines: one acting as the master node, two as worker nodes

Target: Kubernetes 1.28.0

# Setup

# 1. Preparation

# 1.1 System Configuration

cat /etc/hosts
10.10.10.10 openstacktest-1
10.10.10.7 openstacktest-2
10.10.10.6 openstacktest-3

Complete the following system configuration on each host.

If a firewall is enabled on the hosts, you need to open the ports required by the various Kubernetes components. See Ports and Protocols (opens new window) for the full list, and either open the relevant ports or disable the host firewall.
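
As an example, a minimal sketch of opening the required ports with ufw on Ubuntu, assuming the upstream default port assignments (adjust to your own topology):

# Control-plane node
ufw allow 6443/tcp        # Kubernetes API server
ufw allow 2379:2380/tcp   # etcd server client API
ufw allow 10250/tcp       # kubelet API
ufw allow 10257/tcp       # kube-controller-manager
ufw allow 10259/tcp       # kube-scheduler

# Worker nodes
ufw allow 10250/tcp         # kubelet API
ufw allow 30000:32767/tcp   # NodePort Services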

Create the /etc/modules-load.d/containerd.conf configuration file to ensure the kernel modules required by the container runtime are loaded automatically at boot:

cat << EOF > /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

Run the following commands to load the modules immediately:

modprobe overlay
modprobe br_netfilter

Create the /etc/sysctl.d/99-kubernetes-cri.conf configuration file:

cat << EOF > /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
user.max_user_namespaces=28633
EOF

Run the following command to apply the configuration:

sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf

In the file name /etc/sysctl.d/99-kubernetes-cri.conf, the "99" prefix determines the file's load order. sysctl is the Linux tool for configuring kernel parameters, which it sets by writing to files under /proc/sys/. A series of configuration files can be placed in /etc/sysctl.d/ to be loaded automatically at boot; they are loaded one by one in alphabetical order of their file names, so the numeric prefix controls the ordering, with smaller numbers loaded first.
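
To confirm the parameters actually took effect, you can read them back; a quick sanity check using the same keys set above:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1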

# 1.2 Prerequisites for Enabling IPVS

Since IPVS is already part of the mainline kernel, enabling IPVS mode for kube-proxy only requires loading the following kernel modules:

ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4

Create the /etc/modules-load.d/ipvs.conf file so the required modules are loaded automatically after a node reboot:

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
EOF

Run the following commands to load the modules immediately:

modprobe ip_vs
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh

Use lsmod | grep -e ip_vs -e nf_conntrack to verify that the required kernel modules have been loaded correctly.

Next, make sure the ipset package is installed on every node. To make it easier to inspect the IPVS proxy rules, it is also worth installing the management tool ipvsadm.

On Ubuntu, run:

apt install -y ipset ipvsadm

If these prerequisites are not met, kube-proxy will fall back to iptables mode even when its configuration enables IPVS mode.
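
Once the cluster is up (after section 2.2), you can confirm which mode kube-proxy actually selected; a quick check, assuming kube-proxy's metrics endpoint is listening on its default address 127.0.0.1:10249:

# Ask kube-proxy which proxy mode it is running in (run on any node)
curl 127.0.0.1:10249/proxyMode
# expected output: ipvs

# List the IPVS virtual servers kube-proxy has programmed
ipvsadm -Ln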

# 1.3 Deploying the Container Runtime Containerd

Install the Containerd container runtime on every server node.

Download the Containerd binary release. Note that the cri-containerd-(cni-)-VERSION-OS-ARCH.tar.gz release packages have been deprecated since containerd 1.6, do not work correctly on some Linux distributions, and will be removed in containerd 2.0. Here we download the containerd-<VERSION>-<OS>-<ARCH>.tar.gz release instead, and download and install runc and the CNI plugins separately later:

wget https://github.com/containerd/containerd/releases/download/v1.7.3/containerd-1.7.3-linux-amd64.tar.gz

Extract it into /usr/local:

tar Cxzvf /usr/local containerd-1.7.3-linux-amd64.tar.gz

bin/
bin/containerd-shim-runc-v1
bin/containerd-shim-runc-v2
bin/containerd-stress
bin/containerd
bin/containerd-shim
bin/ctr

Next, download and install runc separately from its GitHub releases. The binary is statically built and should work on any Linux distribution.

wget https://github.com/opencontainers/runc/releases/download/v1.1.9/runc.amd64
install -m 755 runc.amd64 /usr/local/sbin/runc

Then generate containerd's default configuration file:

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml

According to the Container runtimes (opens new window) documentation, on Linux distributions that use systemd as the init system, using systemd as the container cgroup driver keeps nodes more stable under resource pressure. We therefore configure containerd on each node to use the systemd cgroup driver.

Edit the /etc/containerd/config.toml file generated above:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

Also change the sandbox image in /etc/containerd/config.toml:

[plugins."io.containerd.grpc.v1.cri"]
  ...
  # sandbox_image = "registry.k8s.io/pause:3.8"
  sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
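
Both edits can also be applied non-interactively. A minimal sketch with sed, assuming the defaults generated by containerd 1.7.3 (verify the file afterwards):

# Switch the runc runtime to the systemd cgroup driver
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml

# Point the sandbox (pause) image at the Aliyun mirror
sed -i 's#sandbox_image = "registry.k8s.io/pause:3.8"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml

# Double-check both values
grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml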

To start containerd via systemd, you also need the containerd.service unit file from https://raw.githubusercontent.com/containerd/containerd/main/containerd.service, placed at /etc/systemd/system/containerd.service (its contents are reproduced below):

cat << EOF > /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target
EOF

Enable containerd at boot and start it now:

systemctl daemon-reload
systemctl enable containerd --now 
systemctl status containerd

Download and install the crictl tool:

wget https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.28.0/crictl-v1.28.0-linux-amd64.tar.gz
tar -zxvf crictl-v1.28.0-linux-amd64.tar.gz
install -m 755 crictl /usr/local/bin/crictl

Test with crictl and make sure it prints version information without any errors:

crictl --runtime-endpoint=unix:///run/containerd/containerd.sock  version

Version:  0.1.0
RuntimeName:  containerd
RuntimeVersion:  v1.7.3
RuntimeApiVersion:  v1
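
To avoid passing --runtime-endpoint on every invocation, crictl can read its endpoint from /etc/crictl.yaml; a small convenience sketch:

cat << EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF

# Plain "crictl version" now works without extra flags
crictl version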

# 2. Deploying Kubernetes with kubeadm

# 2.1 Installing kubeadm, kubelet, and kubectl

Next, install kubeadm, kubelet, and kubectl on each node.

On Ubuntu, run the following commands:

apt-get update
apt-get install -y apt-transport-https ca-certificates curl

curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -


tee /etc/apt/sources.list.d/kubernetes.list <<-'EOF'
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

apt-get update

apt install  kubectl=1.28.0-00 kubelet=1.28.0-00 kubeadm=1.28.0-00

apt-mark hold kubelet kubeadm kubectl
  • Regarding apt install kubectl=1.28.0-00 kubelet=1.28.0-00 kubeadm=1.28.0-00:

This tutorial targets k8s 1.28.0 specifically. If you need a different version, use a command such as apt-cache madison kubeadm to list the available versions and install the one you want.

When installing kubeadm, kubectl, and kubelet, the commands above automatically pull in the dependencies conntrack, cri-tools, ebtables, kubernetes-cni, and socat.

Running kubelet --help shows that most of kubelet's command-line flags are DEPRECATED. The official recommendation is to use --config to specify a configuration file and set those options there; see Set Kubelet parameters via a config file (opens new window). Kubernetes originally did this to support Dynamic Kubelet Configuration, but that feature was deprecated in k8s 1.22 and removed in 1.24. If you need to adjust the kubelet configuration on all nodes of a cluster, the recommended approach is still to distribute the configuration with a tool such as ansible.

The kubelet configuration file must be in JSON or YAML format; see here (opens new window) for details.

Since Kubernetes 1.8, swap must be disabled; with the default configuration, kubelet will not start otherwise. Disable swap as follows:

swapoff -a

Edit the /etc/fstab file to comment out the automatic swap mount, then confirm with free -m that swap is off.

Also adjust the swappiness parameter by adding the following line to /etc/sysctl.d/99-kubernetes-cri.conf:

vm.swappiness=0

Run sysctl -p /etc/sysctl.d/99-kubernetes-cri.conf to apply the change.
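
The whole swap-off step can be scripted; a minimal sketch, assuming a standard fstab swap entry (review /etc/fstab after running it):

# Turn swap off now and comment out any swap line in /etc/fstab
swapoff -a
sed -ri 's/^([^#].*[ \t]swap[ \t].*)$/#\1/' /etc/fstab

# Swap should now show 0 total
free -m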

# 2.2 Initializing the Cluster with kubeadm init

Enable the kubelet service at boot on every node:

systemctl enable kubelet.service

Run kubeadm config print init-defaults --component-configs KubeletConfiguration to print the default configuration used for cluster initialization:

apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.k8s.io
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerRuntimeEndpoint: ""
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging:
  flushFrequency: 0
  options:
    json:
      infoBufferSize: "0"
  verbosity: 0
memorySwap: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

As the defaults show, imageRepository controls where the images k8s needs are pulled from during cluster initialization. Based on the defaults, we customize the kubeadm.yaml configuration file used to initialize this cluster:

apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.10.10
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  taints:
  - effect: PreferNoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: 1.28.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
failSwapOn: false
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

Here imageRepository is set to the Aliyun registry, because gcr is blocked and its images cannot be pulled directly; criSocket selects containerd as the container runtime; kubelet's cgroupDriver is set to systemd; and kube-proxy's proxy mode is set to ipvs.

Before initializing the cluster, you can pre-pull the container images k8s needs on each server node with kubeadm config images pull --config kubeadm.yaml:

kubeadm config images pull --config kubeadm.yaml

[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1

Next, initialize the cluster with kubeadm. Here openstacktest-1 serves as the master node; run the following command on it:

root@openstackTest-1:~/k8s1# kubeadm init --config kubeadm.yaml
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local openstacktest-1] and IPs [10.96.0.1 10.10.10.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost openstacktest-1] and IPs [10.10.10.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost openstacktest-1] and IPs [10.10.10.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.504733 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node openstacktest-1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node openstacktest-1 as control-plane by adding the taints [node-role.kubernetes.io/master:PreferNoSchedule]
[bootstrap-token] Using token: 03z823.c5ksc0e3h25nvdx7
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.10.10:6443 --token 03z823.c5ksc0e3h25nvdx7 \
        --discovery-token-ca-cert-hash sha256:12223137f9d3ed6939a050ab1d452c29a364ca31c2742cee7db7cdaf5ee72c65

This is where bugs tend to concentrate. I have documented my own troubleshooting of them, which will hopefully help readers.

The complete initialization output is recorded above, and from it you can see the key steps required to manually initialize and install a Kubernetes cluster. The highlights:

  • [certs] generates the various certificates
  • [kubeconfig] generates the kubeconfig files
  • [kubelet-start] generates the kubelet configuration file "/var/lib/kubelet/config.yaml"
  • [control-plane] creates the static pods for the apiserver, controller-manager, and scheduler from the yaml files in the /etc/kubernetes/manifests directory
  • [bootstraptoken] generates the bootstrap token; record it, since it is needed later when adding nodes with kubeadm join
  • [addons] installs the essential addons: CoreDNS and kube-proxy

The following commands configure kubectl access to the cluster for a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config


# 2.3 Joining the Other Nodes to the Cluster

Simply use the join command printed at the end of the master's init output:

kubeadm join 10.10.10.10:6443 --token 03z823.c5ksc0e3h25nvdx7 \
        --discovery-token-ca-cert-hash sha256:12223137f9d3ed6939a050ab1d452c29a364ca31c2742cee7db7cdaf5ee72c65
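
Bootstrap tokens expire after 24 hours by default. If the token from the init output is no longer valid, print a fresh join command on the master:

kubeadm token create --print-join-command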

Check the cluster status and confirm that all components are Healthy:

kubectl get cs

Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   ok

If anything went wrong, you can clean up with the kubeadm reset command.

# 2.4 Installing the Helm Package Manager

Helm is the package manager for Kubernetes, and the rest of this guide uses Helm to install common Kubernetes components. Install helm on the master node openstacktest-1 first.

wget https://get.helm.sh/helm-v3.12.3-linux-amd64.tar.gz
tar -zxvf helm-v3.12.3-linux-amd64.tar.gz
install -m 755 linux-amd64/helm  /usr/local/bin/helm

Run helm list and confirm there is no error output.

# 2.5 Deploying the Pod Network Component Calico

# Method 1:

We choose calico as the k8s pod network component; below, helm is used to install calico into the k8s cluster.

Download the tigera-operator helm chart:

wget https://github.com/projectcalico/calico/releases/download/v3.26.1/tigera-operator-v3.26.1.tgz

Inspect the chart's customizable values:

helm show values tigera-operator-v3.26.1.tgz

imagePullSecrets: {}

installation:
  enabled: true
  kubernetesProvider: ''

apiServer:
  enabled: true

certs:
  node:
    key:
    cert:
    commonName:
  typha:
    key:
    cert:
    commonName:
    caBundle:

# Resource requests and limits for the tigera/operator pod.
resources: {}

# Tolerations for the tigera/operator pod.
tolerations:
- effect: NoExecute
  operator: Exists
- effect: NoSchedule
  operator: Exists

# NodeSelector for the tigera/operator pod.
nodeSelector:
  kubernetes.io/os: linux

# Custom annotations for the tigera/operator pod.
podAnnotations: {}

# Custom labels for the tigera/operator pod.
podLabels: {}

# Image and registry configuration for the tigera/operator pod.
tigeraOperator:
  image: tigera/operator
  version: v1.30.4
  registry: quay.io
calicoctl:
  image: docker.io/calico/ctl
  tag: v3.26.1

The customized values.yaml is as follows:

# The values above can be customized as needed, e.g. to pull the calico images from a private registry.
# This is only a local test of a new k8s version, so just the few lines below are set
apiServer:
  enabled: false
installation:
  kubeletVolumePluginPath: None

Install calico with helm:

helm install calico tigera-operator-v3.26.1.tgz -n kube-system  --create-namespace -f values.yaml

Wait for and confirm that all pods reach the Running state:

kubectl get pod -n kube-system | grep tigera-operator
tigera-operator-5fb55776df-wxbph   1/1     Running   0             5m10s

kubectl get pods -n calico-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-68884f975d-5d7p9   1/1     Running   0          5m24s
calico-node-twbdh                          1/1     Running   0          5m24s
calico-typha-7b4bdd99c5-ssdn2              1/1     Running   0          5m24s

In practice, however, the pods above may fail to pull the remote images for network reasons, so an alternative installation method follows.

# Method 2:

Create the resources directly from yaml manifests.

Check the API resources calico adds to k8s:

kubectl api-resources | grep calico
bgpconfigurations                              crd.projectcalico.org/v1               false        BGPConfiguration
bgpfilters                                     crd.projectcalico.org/v1               false        BGPFilter
bgppeers                                       crd.projectcalico.org/v1               false        BGPPeer
blockaffinities                                crd.projectcalico.org/v1               false        BlockAffinity
caliconodestatuses                             crd.projectcalico.org/v1               false        CalicoNodeStatus
clusterinformations                            crd.projectcalico.org/v1               false        ClusterInformation
felixconfigurations                            crd.projectcalico.org/v1               false        FelixConfiguration
globalnetworkpolicies                          crd.projectcalico.org/v1               false        GlobalNetworkPolicy
globalnetworksets                              crd.projectcalico.org/v1               false        GlobalNetworkSet
hostendpoints                                  crd.projectcalico.org/v1               false        HostEndpoint
ipamblocks                                     crd.projectcalico.org/v1               false        IPAMBlock
ipamconfigs                                    crd.projectcalico.org/v1               false        IPAMConfig
ipamhandles                                    crd.projectcalico.org/v1               false        IPAMHandle
ippools                                        crd.projectcalico.org/v1               false        IPPool
ipreservations                                 crd.projectcalico.org/v1               false        IPReservation
kubecontrollersconfigurations                  crd.projectcalico.org/v1               false        KubeControllersConfiguration
networkpolicies                                crd.projectcalico.org/v1               true         NetworkPolicy
networksets                                    crd.projectcalico.org/v1  

These API resources belong to calico, so managing them with kubectl directly is not recommended; instead, install calicoctl to manage them. Install calicoctl as a kubectl plugin:

cd /usr/local/bin
curl -o kubectl-calico -O -L  "https://github.com/projectcalico/calicoctl/releases/download/v3.21.5/calicoctl-linux-amd64" 
chmod +x kubectl-calico

Verify that the plugin works:

kubectl calico -h

# 2.6 Verifying that k8s DNS Works

kubectl run curl --image=radial/busyboxplus:curl -it
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$

Inside the pod, run nslookup kubernetes.default and confirm that resolution works:

nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
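
When you are done testing, exit the shell and remove the test pod (the kubectl run above created it with the name curl):

kubectl delete pod curl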
  • If you want the other nodes to be able to use kubectl as well, see:

kubernetes: fixing kubectl not working on worker nodes (opens new window)

# 3. Deploying Common Kubernetes Components

# 3.1 Deploying ingress-nginx with Helm

To expose services in the cluster to the outside, we need Ingress. Next, deploy ingress-nginx to Kubernetes with Helm; the Nginx Ingress Controller is deployed on the cluster's edge node.

Here openstacktest-1 (10.10.10.10) serves as the edge node; apply the label:

kubectl label node openstacktest-1 node-role.kubernetes.io/edge=

Download the ingress-nginx helm chart:

wget https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-4.7.0/ingress-nginx-4.7.0.tgz


Inspect the customizable values of the ingress-nginx-4.7.0.tgz chart:

helm show values ingress-nginx-4.7.0.tgz

Customize values.yaml as follows:

controller:
  ingressClassResource:
    name: nginx
    enabled: true
    default: true
    controllerValue: "k8s.io/ingress-nginx"
  admissionWebhooks:
    enabled: false
  replicaCount: 1
  image:
    # registry: registry.k8s.io
    # image: ingress-nginx/controller
    # tag: "v1.8.0"
    registry: docker.io
    image: unreachableg/registry.k8s.io_ingress-nginx_controller
    tag: "v1.8.0"
    digest: sha256:626fc8847e967dc06049c0eda9e093d77a08feff80179ae97538ba8b118570f3
  hostNetwork: true
  nodeSelector:
    node-role.kubernetes.io/edge: ''
  affinity:
    podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - nginx-ingress
            - key: component
              operator: In
              values:
              - controller
          topologyKey: kubernetes.io/hostname
  tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: PreferNoSchedule

The nginx ingress controller's replicaCount is 1, and it will be scheduled onto the edge node openstacktest-1. Instead of specifying externalIPs for the nginx ingress controller service, hostNetwork: true makes the controller use the host network. Because registry.k8s.io is blocked, the image is replaced with unreachableg/registry.k8s.io_ingress-nginx_controller; pull it ahead of time:

crictl --runtime-endpoint=unix:///run/containerd/containerd.sock pull unreachableg/registry.k8s.io_ingress-nginx_controller:v1.8.0

helm install ingress-nginx ingress-nginx-4.7.0.tgz --create-namespace -n ingress-nginx -f values.yaml

kubectl get po -n ingress-nginx
NAME                                        READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-86878885cd-m9xc4   1/1     Running   0          45s


Test by accessing http://10.10.10.10; if it returns the default nginx 404 page, the deployment is complete.
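
For example, from any machine that can reach the edge node:

curl -i http://10.10.10.10
# Expect an HTTP/1.1 404 Not Found response served by nginx,
# since no Ingress rules have been created yet.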

# 3.2 Deploying the Dashboard

First deploy metrics-server:

wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.4/components.yaml

In components.yaml, change the image to docker.io/unreachableg/k8s.gcr.io_metrics-server_metrics-server:v0.6.4, and add --kubelet-insecure-tls to the container's startup arguments.
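
Both edits can be scripted; a sketch with sed, assuming the stock v0.6.4 manifest (the image name and argument list can differ between releases, so inspect the file before applying):

# Swap the blocked image for the mirror
sed -i 's#registry.k8s.io/metrics-server/metrics-server:v0.6.4#docker.io/unreachableg/k8s.gcr.io_metrics-server_metrics-server:v0.6.4#' components.yaml

# Append --kubelet-insecure-tls to the container args
# (the indentation must match the surrounding args entries)
sed -i '/--metric-resolution=15s/a\        - --kubelet-insecure-tls' components.yaml

grep -nE 'image:|kubelet-insecure-tls' components.yaml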

kubectl apply -f components.yaml

Once the metrics-server pod is running, after a short while you can use kubectl top to view cluster and pod metrics:

kubectl top node
NAME    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
node4   246m         6%     2296Mi          29%
node5   145m         3%     810Mi           10%
node6   97m          2%     933Mi           12%

kubectl top pod -n kube-system
NAME                              CPU(cores)   MEMORY(bytes)
coredns-66f779496c-7mlsm          3m           12Mi
coredns-66f779496c-9m4cv          3m           13Mi
etcd-node4                        36m          44Mi
kube-apiserver-node4              147m         353Mi
kube-controller-manager-node4     24m          49Mi
kube-proxy-bt64z                  32m          26Mi
kube-proxy-k4sft                  94m          25Mi
kube-proxy-x28q9                  49m          17Mi
kube-scheduler-node4              9m           18Mi
metrics-server-7d686f4d9d-pgk6c   6m           17Mi
tigera-operator-94d7f7696-nl5l7   3m           25Mi

Next, deploy the k8s dashboard. The k8s dashboard has been updated to v3.0.0-alpha0, so here we try out the v3 release.

Starting with dashboard v3, the underlying architecture has changed and a clean installation is required. If you are upgrading an existing dashboard, remove the previous installation first; for a fresh install like this one, you can skip that.

Dashboard v3 now uses cert-manager and nginx-ingress-controller by default. If you choose the yaml-manifest installation, make sure both are already installed in the cluster.

We already installed nginx-ingress-controller above, so install cert-manager first:

wget https://github.com/cert-manager/cert-manager/releases/download/v1.12.3/cert-manager.yaml

kubectl apply -f cert-manager.yaml

Make sure all cert-manager pods start correctly:

kubectl get po -n cert-manager
NAME                                       READY   STATUS    RESTARTS   AGE
cert-manager-6774cd657f-q9qpf              1/1     Running   0          102s
cert-manager-cainjector-55c8b7b49b-vf8r4   1/1     Running   0          102s
cert-manager-webhook-57797c469d-cgw4n      1/1     Running   0          102s

Download the dashboard yaml manifest:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v3.0.0-alpha0/charts/kubernetes-dashboard.yaml

Edit the kubernetes-dashboard.yaml manifest and replace the host in its Ingress with your own domain:

kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  labels:
    app.kubernetes.io/name: nginx-ingress
    app.kubernetes.io/part-of: kubernetes-dashboard
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    cert-manager.io/issuer: selfsigned
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - localhost
      secretName: kubernetes-dashboard-certs
  rules:
    - host: k8s.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard-web
                port:
                  name: web
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: kubernetes-dashboard-api
                port:
                  name: api

Apply the dashboard yaml manifest:

kubectl apply -f kubernetes-dashboard.yaml

Confirm that the dashboard pods start correctly:

kubectl get po -n kubernetes-dashboard
NAME                                                    READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-api-8586787f7-vtszr                1/1     Running   0          60s
kubernetes-dashboard-metrics-scraper-6959b784dc-c98tz   1/1     Running   0          59s
kubernetes-dashboard-web-6b6d549b4-qsrsn                1/1     Running   0          60s

kubectl get ingress -n kubernetes-dashboard
NAME                   CLASS   HOSTS             ADDRESS   PORTS     AGE
kubernetes-dashboard   nginx   k8s.example.com             80, 443   6m47s

Create an admin ServiceAccount:

kubectl create serviceaccount kube-dashboard-admin-sa -n kube-system

kubectl create clusterrolebinding kube-dashboard-admin-sa \
--clusterrole=cluster-admin --serviceaccount=kube-system:kube-dashboard-admin-sa

Create the token the cluster admin uses to log in to the dashboard:

kubectl create token kube-dashboard-admin-sa -n kube-system --duration=87600h

eyJhbGciOiJSUzI1NiIsImtpZCI6IlU1SlpSTS1YekNuVzE0T1k5TUdTOFFqN25URWxKckt6OUJBT0xzblBsTncifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxOTY4OTA4MjgyLCJpYXQiOjE2NTM1NDgyODIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJrdWJlLWRhc2hib2FyZC1hZG1pbi1zYSIsInVpZCI6IjY0MmMwMmExLWY1YzktNDFjNy04Mjc5LWQ1ZmI3MGRjYTQ3ZSJ9fSwibmJmIjoxNjUzNTQ4MjgyLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZS1kYXNoYm9hcmQtYWRtaW4tc2EifQ.Xqxlo2vJ9Hb6UUVIqwvc8I5bahdxKzSRSaQI_67Yt7_YEHmkkHApxUGlwJYTKF9ufww3btlCmM8PtRn5_Q1yv-HAFyTOYKo8WHZ9UCm1bT3X8V8g4GQwZIl2dwmlUmKb1unBz2-em2uThQ015bMPDE8a42DV_bOwWjljVXat0nwV14nGorC8vKLjXbohrIJ3G1pgCJvlBn99F1RelmSUSQLlolUFoxpN6MamYTElwR6FfD-AGmFXvZSbcFaqVW0oxJHV70Gjs2igOtpqHFxxPlHT8aQzlRiybPtFyBf9Ll87TmVJimT89z8wv2si2Nee8bB2jhsApLn8TJyUSlbTXA

Use the token above to log in to the k8s dashboard.


Add a local domain mapping by editing /etc/hosts (vim /etc/hosts) and appending the following line:

122.228.207.18 k8s.dashboard.com

After saving, enter k8s.dashboard.com in your browser. When the page appears, log in with the token:

eyJhbGciOiJSUzI1NiIsImtpZCI6IlFvdUY4WUs5dzFFM01VTUYtYl9QUGhSWXFMNFBLQzlaOS1tOUlqaTRJMDgifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoyMDA5MTU1NDAyLCJpYXQiOjE2OTM3OTU0MDIsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJrdWJlLWRhc2hib2FyZC1hZG1pbi1zYSIsInVpZCI6IjNiNGMzNDYyLWRmYzMtNDI2Mi1iYzM4LWVlMWQ5ZjQyMDU1YSJ9fSwibmJmIjoxNjkzNzk1NDAyLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06a3ViZS1kYXNoYm9hcmQtYWRtaW4tc2EifQ.jwPtkt454CuGXWPbiCEzrbNHcmRR0RCMrPBcoSlUwi_fAoIIy1heQMpQZnN7VoF9bEe40SJw7FA9ScoViOxZW3L8t-ZiwjMThvT3zU8Bxg8lMCQEkwVV4R5Q6DhGkJVB6RV2za_KI1NYPwqu5wNRH2492Z7zjWeE81GcPekQFSc99LbQiElWVK8yUh6vpGQaYpI8Yt0QLT-FU58YgQnsD0PlsmVVW4qvN2Y1NqA6A2spWRmGwoFpcTEM3iP_EBIyWLv0QtEpm2FbbVZW392ANFJFOhFSAsTUY2UW8UKK7Zsbsn_i9GPop0POAEsK8MVy-EqQCHmotl_zNTaMkA7Z6A


# References:

Deploying Kubernetes 1.28 with kubeadm (opens new window)
