Why did my Docker container exit after running it in the background with -d?

采菊篱下 answered the question • 2 followers • 1 reply • 221 views • 2017-03-09 20:35

The Docker container startup process

Something published an article • 0 comments • 209 views • 2017-03-09 15:02

Let's look at what actually happens behind the scenes when a Docker container starts.
docker run -i -t ubuntu /bin/bash
What exactly happens when you enter this command to start an ubuntu container?
 
The overall process can be roughly described by the figure below:
[Figure: dockerflow.png]

First, the system must already have a docker daemon process running in the background. When the command above is entered, the following takes place:
  1. The docker client (i.e., the docker command line) calls the docker daemon and asks it to start a container.
  2. The docker daemon asks the host OS (i.e., Linux) to create the container.
  3. Linux creates an empty container (think of it as a bare machine with no operating system installed, just virtualized CPU, memory, and other hardware resources).
  4. The docker daemon checks whether the docker image (think of it as an OS installation disc) exists locally; if it does, the image is loaded into the container (i.e., the disc is inserted into the bare machine, ready for the OS to be installed).
  5. The image is loaded into the container (i.e., the OS is now installed on the machine; it is no longer bare).

 
Finally, we end up with what is effectively an ubuntu virtual machine (really a container), in which we can carry out whatever operations we need.
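
To make these steps concrete, docker run can be decomposed into roughly equivalent standalone commands (just a sketch using the standard docker CLI; the image and shell are the same as in the example above):
# step 4: fetch the ubuntu image if it is not already present locally
docker pull ubuntu
# steps 1-3 and 5: create a container from the image without starting it
docker create -i -t ubuntu /bin/bash
# start the created container and attach an interactive terminal to it
# (<container-id> is whatever ID the create command printed)
docker start -a -i <container-id>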
 
 
If in step 4 the image is not found locally, it is downloaded over the network from the default docker image registry (i.e., the Docker Hub site) and then loaded into the container, as shown below:
[Figure: dockerload.png]

The official site also has a diagram that illustrates this process nicely:
[Figure: dockerfollow.png]

Original post: http://www.cnblogs.com/yjmyzz/p/docker-container-start-up-analysis.html

References:
https://www.gitbook.com/book/joshhu/docker_theory_install/details  
https://docs.docker.com/engine/introduction/understanding-docker/ 

How can Elasticsearch data be backed up and migrated?

采菊篱下 answered the question • 3 followers • 2 replies • 985 views • 2017-03-03 17:02

Why do scheduled (cron) tasks not run inside a Docker container?

采菊篱下 answered the question • 2 followers • 1 reply • 296 views • 2017-02-24 13:25

How do I run a script inside an already-running Docker container?

采菊篱下 answered the question • 3 followers • 4 replies • 311 views • 2017-03-09 20:17

OVM V1.5 officially released, adding support for ESXi nodes

maoliang published an article • 0 comments • 431 views • 2017-02-21 17:13

OVM V1.5 has been officially released. This version adds support for ESXi nodes, so the OVM hybrid virtualization platform can now manage KVM, ESXi, and Docker as underlying virtualization technologies from a single interface. Usability has also been improved: the management platform and compute node ISO images have been merged into a single image, and USB-drive installation has been added to make deployment faster.


OVM is the first completely free, enterprise-grade hybrid virtualization management platform developed in China. It grew out of the difficulties that small and medium-sized enterprises currently face, was developed around the characteristics of domestic companies, and focuses on the product needs of domestic SMB users.


OVM V1.5 virtualization management platform change list:

- Added support for VMware ESXi nodes
- Merged the management platform and compute node images into a single ISO
- Added support for installing OVM from a USB drive
- Fixed a number of other bugs

 

Details


1. Support for VMware ESXi nodes

OVM V1.5 adds support for ESXi nodes: existing VMware ESXi compute nodes can be added directly to the OVM virtualization management platform, and existing ESXi virtual machines can be imported into the platform for centralized management. The OVM hybrid virtualization platform can now manage all three underlying virtualization technologies (KVM, ESXi, and Docker) from one place.


2. Merged management platform and compute node ISO images

OVM V1.5 merges the previously separate management platform and compute node ISO images into a single ISO, and the installer offers options to install the different services. The merge not only makes downloading more convenient but also improves OVM's usability.


3. USB installation for higher efficiency

The OVM 1.5 image can be installed from a USB drive, which greatly improves the efficiency of front-line operations staff.


Getting help

Downloads are available from the OVM community site: 51ovm.com

If you run into problems, or need the download password, join the official OVM community QQ group: 22265939

 

Garbled Chinese characters when processing text in a Java program


chris asked the question • 1 follower • 0 replies • 333 views • 2017-02-16 23:40

Analysis of Permission denied errors when mounting a host directory into Docker

采菊篱下 published an article • 0 comments • 489 views • 2017-02-16 23:10

Today, while deploying a private Docker environment with a script, mounting a host directory failed with Permission denied, which caused the service to fail to start. The details:
[Figure: errorqx.png]

[Figure: cerror.png]

Cause and solutions:
The cause is that SELinux, the security module in CentOS 7, is blocking access. There are at least three ways to fix the missing permissions on the mounted directory:
1. Run the container with extra privileges, i.e., add the --privileged=true flag:
docker run -i -t -v /data/mysql/data:/data/var-3306 --privileged=true b0387b8279d4 /bin/bash -c "/opt/start_db.sh"
2. Temporarily disable SELinux:
setenforce 0
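
A quick sketch of checking and toggling the mode (setenforce only changes the running state; it does not survive a reboot):
getenforce      # prints Enforcing / Permissive / Disabled
setenforce 0    # switch to permissive mode temporarily
# setenforce 1  # switch back to enforcing once you are done
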
3. Add an SELinux rule by changing the security context of the directory to be mounted:
# The syntax for changing a security context is:
chcon [-R] [-t type] [-u user] [-r role] FILE_OR_DIRECTORY

Options and parameters:
-R : also change all subdirectories under the given directory;
-t : followed by the type field of the security context, e.g. httpd_sys_content_t;
-u : followed by the user identity, e.g. system_u;
-r : followed by the role, e.g. system_r

[root@localhost Desktop]# chcon --help
Usage: chcon [OPTION]... CONTEXT FILE...
or: chcon [OPTION]... [-u USER] [-r ROLE] [-l RANGE] [-t TYPE] FILE...
or: chcon [OPTION]... --reference=RFILE FILE...
Change the SELinux security context of each FILE to CONTEXT.
With --reference, change the security context of each FILE to that of RFILE.

Mandatory arguments to long options are mandatory for short options too.
--dereference affect the referent of each symbolic link (this is
the default), rather than the symbolic link itself
-h, --no-dereference affect symbolic links instead of any referenced file
-u, --user=USER set user USER in the target security context
-r, --role=ROLE set role ROLE in the target security context
-t, --type=TYPE set type TYPE in the target security context
-l, --range=RANGE set range RANGE in the target security context
--no-preserve-root do not treat '/' specially (the default)
--preserve-root fail to operate recursively on '/'
--reference=RFILE use RFILE's security context rather than specifying
a CONTEXT value
-R, --recursive operate on files and directories recursively
-v, --verbose output a diagnostic for every file processed

The following options modify how a hierarchy is traversed when the -R
option is also specified. If more than one is specified, only the final
one takes effect.

-H if a command line argument is a symbolic link
to a directory, traverse it
-L traverse every symbolic link to a directory
encountered
-P do not traverse any symbolic links (default)

--help display this help and exit
--version output version information and exit

GNU coreutils online help: <http://www.gnu.org/software/coreutils/>
For complete documentation, run: info coreutils 'chcon invocation'
On the host, change the security context of the /data/mysql/data directory:
[root@localhost Desktop]# chcon -Rt svirt_sandbox_file_t /data/mysql/data
The resources under that directory can then be accessed normally from inside the container.
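
To confirm that the relabeling worked, the SELinux context of the directory can be inspected before and after the chcon call (a small sketch; the exact context shown before the change will vary):
ls -dZ /data/mysql/data    # before: some other type in the context
chcon -Rt svirt_sandbox_file_t /data/mysql/data
ls -dZ /data/mysql/data    # after: the type field should now read svirt_sandbox_file_t
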
Volume permission reference: https://yq.aliyun.com/articles/53990

Kafka Consumer client reports an error on startup

采菊篱下 answered the question • 2 followers • 1 reply • 304 views • 2017-02-11 18:50

Installing Kubernetes 1.5 with kubeadm

Not see︶ published an article • 2 comments • 1801 views • 2017-01-11 18:52

1. The required packages are all hosted outside the GFW, so downloading them needs a way around it, and a server usually cannot offer that convenience, which is troublesome. There are two approaches. One is to bind hosts entries: since the Kubernetes distribution packages are built by another GitHub project called release, you can clone that project.

See 漠然's article for reference: https://mritd.me/2016/10/29/set-up-kubernetes-cluster-by-kubeadm/
 
2. The method I used was hosts binding, then packing the results into the source so they can be reused directly next time.
Installing Kubernetes 1.5 with kubeadm

1. System version: ubuntu 16.04

root@master:~# docker version

Client:

Version: 1.12.1

API version: 1.24

Go version: go1.6.2

Git commit: 23cf638

Built: Tue, 27 Sep 2016 12:25:38 +1300

OS/Arch: linux/amd64



Server:

Version: 1.12.1

API version: 1.24

Go version: go1.6.2

Git commit: 23cf638

Built: Tue, 27 Sep 2016 12:25:38 +1300

OS/Arch: linux/amd64





1. Deployment prerequisites

Each host needs at least 1 GB of memory.

All hosts must be able to reach each other over the network.



2. Deployment:

Install kubelet and kubeadm on the host

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

On the master host, run the following:

cat <<EOF >/etc/apt/sources.list.d/kubernetes.list

deb http://apt.kubernetes.io/ kubernetes-xenial main

EOF

apt-get update

apt-get install -y docker.io

apt-get install -y kubelet kubeadm kubectl kubernetes-cni

The downloaded kube components do not start automatically. Under /lib/systemd/system we can see kubelet.service:

root@master:~# ls /lib/systemd/system |grep kube

kubelet.service
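
The unit can be enabled and started in the usual systemd way (a sketch; kubeadm init also starts the kubelet itself during its preflight phase):
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet   # it may keep restarting until kubeadm init writes its config, which is expected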

The kubelet version:

root@master:~# kubelet --version

Kubernetes v1.5.1

The core k8s components are all in place; next we bootstrap the kubernetes cluster.

3. Initialize the cluster

In principle, a cluster can be built with just the kubeadm init and join commands; init initializes the cluster on the master node. Unlike deployments before k8s 1.4, the k8s core components installed by kubeadm all run as containers on the master node. Therefore, before running kubeadm init it is best to give the docker engine on the master node a registry mirror or proxy, because kubeadm has to pull many core component images from the gcr.io/google_containers repository.
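
One common way to do that on Ubuntu 16.04 is a systemd drop-in that sets proxy environment variables for docker.service (just a sketch; the proxy address below is a placeholder and must be replaced with a proxy you can actually reach):

mkdir -p /etc/systemd/system/docker.service.d

cat <<EOF >/etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128/" "HTTPS_PROXY=http://proxy.example.com:3128/"
EOF

systemctl daemon-reload

systemctl restart docker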


In the kubeadm documentation, installing the Pod network is a separate step; kubeadm init does not pick and install a default Pod network for you.

We will choose Flannel as our Pod network, not only because our previous cluster used flannel and it proved stable, but also because Flannel is the overlay network add-on that CoreOS built specifically for k8s; the flannel repository's readme.md even says: "flannel is a network fabric for containers, designed for Kubernetes".

If we want to use Flannel, then per the kubeadm documentation we must pass the option --pod-network-cidr=10.244.0.0/16 to the init command.


4. Run kubeadm init

Run the kubeadm init command:

root@master:~# kubeadm init --pod-network-cidr=10.244.0.0/16

[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.

[preflight] Running pre-flight checks

[preflight] Starting the kubelet service

[init] Using Kubernetes version: v1.5.1

[tokens] Generated token: "2909ca.c0b0772a8817f9e3"

[certificates] Generated Certificate Authority key and certificate.

[certificates] Generated API Server key and certificate

[certificates] Generated Service Account signing keys

[certificates] Created keys and certificates in "/etc/kubernetes/pki"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"

[apiclient] Created API client, waiting for the control plane to become ready

[apiclient] All control plane components are healthy after 14.761716 seconds

[apiclient] Waiting for at least one node to register and become ready

[apiclient] First node is ready after 1.003312 seconds

[apiclient] Creating a test deployment

[apiclient] Test deployment succeeded

[token-discovery] Created the kube-discovery deployment, waiting for it to become ready

[token-discovery] kube-discovery is ready after 1.002402 seconds

[addons] Created essential addon: kube-proxy

[addons] Created essential addon: kube-dns



Your Kubernetes master has initialized successfully!



You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

http://kubernetes.io/docs/admin/addons/



You can now join any number of machines by running the following on each node:



kubeadm join --token=2909ca.c0b0772a8817f9e3 xxx.xxx.xxx.xxx   (note down this IP)



What has changed on the master node after a successful init? All of the k8s core components have started normally:

root@master:~# ps -ef |grep kube

root 23817 1 2 14:07 ? 00:00:35 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local

root 23921 23900 0 14:07 ? 00:00:01 kube-scheduler --address=127.0.0.1 --leader-elect --master=127.0.0.1:8080

root 24055 24036 0 14:07 ? 00:00:10 kube-apiserver --insecure-bind-address=127.0.0.1 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota --service-cluster-ip-range=10.96.0.0/12 --service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem --client-ca-file=/etc/kubernetes/pki/ca.pem --tls-cert-file=/etc/kubernetes/pki/apiserver.pem --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --token-auth-file=/etc/kubernetes/pki/tokens.csv --secure-port=6443 --allow-privileged --advertise-address=<master-ip> --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --anonymous-auth=false --etcd-servers=http://127.0.0.1:2379

root 24084 24070 0 14:07 ? 00:00:11 kube-controller-manager --address=127.0.0.1 --leader-elect --master=127.0.0.1:8080 --cluster-name=kubernetes --root-ca-file=/etc/kubernetes/pki/ca.pem --service-account-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem --insecure-experimental-approve-all-kubelet-csrs-for-group=system:kubelet-bootstrap --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16

root 24242 24227 0 14:07 ? 00:00:00 /usr/local/bin/kube-discovery

root 24308 24293 1 14:07 ? 00:00:15 kube-proxy --kubeconfig=/run/kubeconfig

root 29457 29441 0 14:09 ? 00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr

root 29498 29481 0 14:09 ? 00:00:00 /bin/sh -c set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done

root 30372 30357 0 14:10 ? 00:00:01 /exechealthz --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null --url=/healthz-dnsmasq --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null --url=/healthz-kubedns --port=8080 --quiet

root 30682 30667 0 14:10 ? 00:00:01 /kube-dns --domain=cluster.local --dns-port=10053 --config-map=kube-dns --v=2

root 48755 1796 0 14:31 pts/0 00:00:00 grep --color=auto kube



And they are started as multiple containers:

root@master:~# docker ps

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES

c4209b1077d2 gcr.io/google_containers/kubedns-amd64:1.9 "/kube-dns --domain=c" 22 minutes ago Up 22 minutes k8s_kube-dns.61e5a20f_kube-dns-2924299975-txh1v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_fc02f762

0908d6398b0b gcr.io/google_containers/exechealthz-amd64:1.2 "/exechealthz '--cmd=" 22 minutes ago Up 22 minutes k8s_healthz.9d343f54_kube-dns-2924299975-txh1v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_0ee806f6

0e35e96ca4ac gcr.io/google_containers/dnsmasq-metrics-amd64:1.0 "/dnsmasq-metrics --v" 22 minutes ago Up 22 minutes k8s_dnsmasq-metrics.2bb05ef7_kube-dns-2924299975-txh1v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_436b9370

3921b4e59aca gcr.io/google_containers/kube-dnsmasq-amd64:1.4 "/usr/sbin/dnsmasq --" 22 minutes ago Up 22 minutes k8s_dnsmasq.f7e18a01_kube-dns-2924299975-txh1v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_06c5efa7

18513413ba60 gcr.io/google_containers/pause-amd64:3.0 "/pause" 22 minutes ago Up 22 minutes k8s_POD.d8dbe16c_kube-dns-2924299975-txh1v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_9de0a18d

45132c8d6d3d quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64 "/bin/sh -c 'set -e -" 23 minutes ago Up 23 minutes k8s_install-cni.fc218cef_kube-flannel-ds-0fnxc_kube-system_22034e49-d632-11e6-9d86-0050569c3e9b_88dffd75

4c2a2e46c808 quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64 "/opt/bin/flanneld --" 23 minutes ago Up 23 minutes k8s_kube-flannel.5fdd90ba_kube-flannel-ds-0fnxc_kube-system_22034e49-d632-11e6-9d86-0050569c3e9b_2706c3cb

ad08c8dd177c gcr.io/google_containers/pause-amd64:3.0 "/pause" 23 minutes ago Up 23 minutes k8s_POD.d8dbe16c_kube-flannel-ds-0fnxc_kube-system_22034e49-d632-11e6-9d86-0050569c3e9b_279d8436

847f00759977 gcr.io/google_containers/kube-proxy-amd64:v1.5.1 "kube-proxy --kubecon" 24 minutes ago Up 24 minutes k8s_kube-proxy.2f62b4e5_kube-proxy-9c0bf_kube-system_f5326252-d631-11e6-9d86-0050569c3e9b_c1f31904

f8da0f38f3e1 gcr.io/google_containers/pause-amd64:3.0 "/pause" 24 minutes ago Up 24 minutes k8s_POD.d8dbe16c_kube-proxy-9c0bf_kube-system_f5326252-d631-11e6-9d86-0050569c3e9b_c340d947

c1efa29640d1 gcr.io/google_containers/kube-discovery-amd64:1.0 "/usr/local/bin/kube-" 24 minutes ago Up 24 minutes k8s_kube-discovery.6907cb07_kube-discovery-1769846148-4rsq9_kube-system_f49933be-d631-11e6-9d86-0050569c3e9b_c4827da2

4c6a646d0b2e gcr.io/google_containers/pause-amd64:3.0 "/pause" 24 minutes ago Up 24 minutes k8s_POD.d8dbe16c_kube-discovery-1769846148-4rsq9_kube-system_f49933be-d631-11e6-9d86-0050569c3e9b_8823b66a

ece79181f177 gcr.io/google_containers/pause-amd64:3.0 "/pause" 24 minutes ago Up 24 minutes k8s_dummy.702d1bd5_dummy-2088944543-r2mw3_kube-system_f38f3ede-d631-11e6-9d86-0050569c3e9b_ade728ba

9c3364c623df gcr.io/google_containers/pause-amd64:3.0 "/pause" 24 minutes ago Up 24 minutes k8s_POD.d8dbe16c_dummy-2088944543-r2mw3_kube-system_f38f3ede-d631-11e6-9d86-0050569c3e9b_838c58b5

a64a3363a82b gcr.io/google_containers/kube-controller-manager-amd64:v1.5.1 "kube-controller-mana" 25 minutes ago Up 25 minutes k8s_kube-controller-manager.84edb2e5_kube-controller-manager-master_kube-system_7b7c15f8228e3413d3b0d0bad799b1ea_697ef6ee

27625502c298 gcr.io/google_containers/kube-apiserver-amd64:v1.5.1 "kube-apiserver --ins" 25 minutes ago Up 25 minutes k8s_kube-apiserver.5942f3e3_kube-apiserver-master_kube-system_aeb59dd32f3217b366540250d2c35d8c_38a83844

5b2cc5cb9ac1 gcr.io/google_containers/pause-amd64:3.0 "/pause" 25 minutes ago Up 25 minutes k8s_POD.d8dbe16c_kube-controller-manager-master_kube-system_7b7c15f8228e3413d3b0d0bad799b1ea_2f88a796

e12ef7b3c1f0 gcr.io/google_containers/etcd-amd64:3.0.14-kubeadm "etcd --listen-client" 25 minutes ago Up 25 minutes k8s_etcd.c323986f_etcd-master_kube-system_3a26566bb004c61cd05382212e3f978f_ef6eb513

84a731cbce18 gcr.io/google_containers/pause-amd64:3.0 "/pause" 25 minutes ago Up 25 minutes k8s_POD.d8dbe16c_kube-apiserver-master_kube-system_aeb59dd32f3217b366540250d2c35d8c_a3a2ea4e

612b021457a1 gcr.io/google_containers/kube-scheduler-amd64:v1.5.1 "kube-scheduler --add" 25 minutes ago Up 25 minutes k8s_kube-scheduler.bb7d750_kube-scheduler-master_kube-system_0545c2e223307b5ab8c74b0ffed56ac7_a49fab86

ac0d8698f79f gcr.io/google_containers/pause-amd64:3.0 "/pause" 25 minutes ago Up 25 minutes k8s_POD.d8dbe16c_etcd-master_kube-system_3a26566bb004c61cd05382212e3f978f_9a6b7925

2a16a2217bf3 gcr.io/google_containers/pause-amd64:3.0 "/pause" 25 minutes ago Up 25 minutes k8s_POD.d8dbe16c_kube-scheduler-master_kube-system_0545c2e223307b5ab8c74b0ffed56ac7_d2b51317





The kube-apiserver's IP is the host IP, from which we can infer that the container is using the host network; this can also be seen from the network attribute of its corresponding pause container:



root@master:~# docker ps |grep apiserver

27625502c298 gcr.io/google_containers/kube-apiserver-amd64:v1.5.1 "kube-apiserver --ins" 26 minutes ago Up 26 minutes k8s_kube-apiserver.5942f3e3_kubeapiserver-master_kube-system_aeb59dd32f3217b366540250d2c35d8c_38a83844

84a731cbce18 gcr.io/google_containers/pause-amd64:3.0 "/pause" 26 minutes ago Up 26 minutes k8s_POD.d8dbe16c_kube-apiserver-master_kube-system_aeb59dd32f3217b366540250d2c35d8c_a3a2ea4e



Problem 1

If something goes wrong partway through kubeadm init, for example init hangs because you forgot to configure the mirror/proxy beforehand, you will probably press ctrl+c to abort it. After reconfiguring and running kubeadm init again, you may then see the following kubeadm output:

# kubeadm init --pod-network-cidr=10.244.0.0/16

[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.

[preflight] Running pre-flight checks

[preflight] Some fatal errors occurred:

Port 10250 is in use

/etc/kubernetes/manifests is not empty

/etc/kubernetes/pki is not empty

/var/lib/kubelet is not empty

/etc/kubernetes/admin.conf already exists

/etc/kubernetes/kubelet.conf already exists

[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`



kubeadm automatically checks whether the current environment contains "leftovers" from a previous run. If it does, they must be cleaned up before init can be run again. We can clean the environment with "kubeadm reset" and start over.



# kubeadm reset

[preflight] Running pre-flight checks

[reset] Draining node: "iz25beglnhtz"

[reset] Removing node: "iz25beglnhtz"

[reset] Stopping the kubelet service

[reset] Unmounting mounted directories in "/var/lib/kubelet"

[reset] Removing kubernetes-managed containers

[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/etcd]

[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]

[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf]





5. Since we want to use the Flannel network, we need to run the following installation command:

#kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

configmap "kube-flannel-cfg" created

daemonset "kube-flannel-ds" created



Wait a few seconds, then look at the cluster information on the master node again:

root@master:~# ps -ef |grep kube |grep flannel

root 29457 29441 0 14:09 ? 00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr

root 29498 29481 0 14:09 ? 00:00:00 /bin/sh -c set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done



root@master:~# kubectl get pods --all-namespaces

NAMESPACE NAME READY STATUS RESTARTS AGE

kube-system dummy-2088944543-r2mw3 1/1 Running 0 30m

kube-system etcd-master 1/1 Running 0 31m

kube-system kube-apiserver-master 1/1 Running 0 31m

kube-system kube-controller-manager-master 1/1 Running 0 31m

kube-system kube-discovery-1769846148-4rsq9 1/1 Running 0 30m

kube-system kube-dns-2924299975-txh1v 4/4 Running 0 30m

kube-system kube-flannel-ds-0fnxc 2/2 Running 0 29m

kube-system kube-flannel-ds-lpgpv 2/2 Running 0 23m

kube-system kube-flannel-ds-s05nr 2/2 Running 0 18m

kube-system kube-proxy-9c0bf 1/1 Running 0 30m

kube-system kube-proxy-t8hxr 1/1 Running 0 18m

kube-system kube-proxy-zd0v2 1/1 Running 0 23m

kube-system kube-scheduler-master 1/1 Running 0 31m



At least all of the cluster's core components are now running. It looks like the setup has succeeded.





Next come the operations on the worker nodes.



6. Minion node: join the cluster



Here we use kubeadm's second command: kubeadm join.



Run it on the minion node (note: make sure port 9898 on the master node is open in the firewall):
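
If the master is running ufw (the default firewall front end on Ubuntu 16.04), opening that port could look like the sketch below; skip it if no host firewall is active:
ufw allow 9898/tcp   # kube-discovery listens on 9898 in this kubeadm release
ufw status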

Prerequisite: the node must already have the kube components installed as described above.

7. Install kubelet and kubeadm

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -

What I actually used was:

http://119.29.98.145:8070/zhi/apt-key.gpg



The steps are the same as on the master host:



curl -s http://119.29.98.145:8070/zhi/apt-key.gpg | apt-key add -



cat <<EOF >/etc/apt/sources.list.d/kubernetes.list



deb http://apt.kubernetes.io/ kubernetes-xenial main



EOF



apt-get update



apt-get install -y docker.io

apt-get install -y kubelet kubeadm kubectl kubernetes-cni

Remember the master's token:

root@node01:~# kubeadm join --token=2909ca.c0b0772a8817f9e3 xxx.xxx.xxx.xxx   (the master's IP)

8. On the master node, check the current cluster state:

root@master:~# kubectl get node

NAME STATUS AGE

master Ready,master 59m

node01 Ready 51m

node02 Ready 46m
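
With all three nodes Ready, a quick smoke test is to schedule something and watch where it lands (a sketch; nginx is just an arbitrary public test image):

root@master:~# kubectl run nginx --image=nginx --replicas=2

root@master:~# kubectl get pods -o wide   # the two pods should land on node01/node02 and reach Running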