vim /etc/ssh/sshd_config

Port 44322
Protocol 2
LoginGraceTime 60
PermitRootLogin no
PermitEmptyPasswords no
UseDNS no
TCPKeepAlive yes
MaxAuthTries 4
AllowGroups knner maintainer ec2-user
PasswordAuthentication yes
systemctl restart sshd
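Before relying on the new settings, it is worth confirming that sshd accepts the configuration and is actually listening on the new port. A minimal check, assuming the port 44322 configured above:

# Validate the configuration file; sshd -t prints nothing when the config is OK
sshd -t

# Confirm sshd is listening on the new port after the restart
ss -tlnp | grep 44322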
Disable SELinux
setenforce 0

vim /etc/selinux/config

SELINUX=disabled
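setenforce 0 switches SELinux to permissive mode immediately, while SELINUX=disabled in /etc/selinux/config keeps it off after the next reboot. To confirm the current state:

getenforce    # should print Permissive now, and Disabled after a reboot
sestatus      # more detailed SELinux status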
#!/bin/bash
# Load every IPVS kernel module shipped with the running kernel,
# so that kube-proxy can run in IPVS mode.
ipvs_modules_dir="/usr/lib/modules/$(uname -r)/kernel/net/netfilter/ipvs"

for i in $(ls "$ipvs_modules_dir" | sed -r 's#(.*)\.ko.*#\1#'); do
    # Only load modules that modinfo can resolve for this kernel.
    /sbin/modinfo -F filename "$i" &> /dev/null
    if [ $? -eq 0 ]; then
        /sbin/modprobe "$i"
    fi
done
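After running the script, the IPVS modules should appear in the loaded module list. A quick check:

lsmod | grep ip_vs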
[init] Using Kubernetes version: v1.15.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s01.test.awsbj.cn localhost] and IPs [172.17.0.7 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s01.test.awsbj.cn localhost] and IPs [172.17.0.7 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s01.test.awsbj.cn kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.100.0.1 172.17.0.7]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 13.002203 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s01.test.awsbj.cn as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s01.test.awsbj.cn as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 4m9zlz.0l3waicmvgl6envz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
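kubeadm prints the standard kubeconfig setup at this point; for a regular user the commands are:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config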
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
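The full join command includes a CA certificate hash that is specific to this cluster; with the bootstrap token shown above it takes roughly the following form (the sha256 value is a placeholder):

  kubeadm join 172.17.0.7:6443 --token 4m9zlz.0l3waicmvgl6envz \
      --discovery-token-ca-cert-hash sha256:<hash>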
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Check the Kubernetes cluster
Log in to the master node and check the nodes:
[root@k8s01.test.awsbj.cn kubernetes]# kubectl get node
NAME                  STATUS     ROLES    AGE     VERSION
k8s01.test.awsbj.cn   NotReady   master   6m30s   v1.15.5
k8s02.test.awsbj.cn   NotReady   <none>   117s    v1.15.5
k8s03.test.awsbj.cn   NotReady   <none>   43s     v1.15.5
[root@k8s01.test.awsbj.cn microoak]# kubectl cluster-info
Kubernetes master is running at https://172.17.0.7:6443
KubeDNS is running at https://172.17.0.7:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
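The nodes report NotReady above because no pod network add-on has been deployed yet. This capture does not show which CNI plugin was chosen; as one possible option, Flannel can be applied from the control-plane node (the manifest URL depends on the Flannel release):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Once the network pods are running on every node, the nodes move to Ready: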
[microoak@k8s01.test.awsbj.cn kubernetes]$ kubectl get nodes
NAME                  STATUS   ROLES    AGE     VERSION
k8s01.test.awsbj.cn   Ready    master   4h29m   v1.15.5
k8s02.test.awsbj.cn   Ready    <none>   4h25m   v1.15.5
k8s03.test.awsbj.cn   Ready    <none>   4h23m   v1.15.5
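As a final check, the kube-system pods (CoreDNS, kube-proxy, the static control-plane pods, and the network add-on) should all be Running on their respective nodes:

kubectl get pods -n kube-system -o wide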