May 06, 2020

A rolling update (Rolling Update) keeps the service available during an upgrade by using an update strategy to control how many replicas are replaced at a time.
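
The pace of the update is governed by the Deployment's update strategy. When nothing is specified, the defaults (25% max unavailable, 25% max surge) apply, as seen later in the kubectl describe output. A minimal sketch of setting these fields explicitly (the values here are illustrative, not taken from the manifests below):

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one replica may be taken down at a time
      maxSurge: 1         # at most one extra replica may be created above the desired count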

Prepare a Deployment manifest that uses the httpd:2.4.41 image

[root@k8s-01 ~]# vi httpd-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
spec:
  replicas: 3
  selector:
    matchLabels:
      run: httpd
  template:
    metadata:
      labels:
        run: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.41
        ports:
        - containerPort: 80

Apply the manifest and list the Deployment, ReplicaSet, and Pods

[root@k8s-01 ~]# kubectl apply -f httpd-deployment.yaml
deployment.apps/httpd created
[root@k8s-01 ~]#
[root@k8s-01 ~]# kubectl get deployments.apps httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES         SELECTOR
httpd   3/3     3            3           20s   httpd        httpd:2.4.41   run=httpd
[root@k8s-01 ~]#
[root@k8s-01 ~]# kubectl get replicasets.apps -o wide
NAME               DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES         SELECTOR
httpd-5bb8cdb99c   3         3         3       36s   httpd        httpd:2.4.41   pod-template-hash=5bb8cdb99c,run=httpd
[root@k8s-01 ~]#
[root@k8s-01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
httpd-5bb8cdb99c-454mz   1/1     Running   0          51s   10.244.2.4   k8s-03   <none>           <none>
httpd-5bb8cdb99c-qlzbh   1/1     Running   0          51s   10.244.1.5   k8s-02   <none>           <none>
httpd-5bb8cdb99c-rpt59   1/1     Running   0          51s   10.244.1.6   k8s-02   <none>           <none>
[root@k8s-01 ~]#

Change the manifest to use the httpd:2.4.43 image

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
spec:
  replicas: 3
  selector:
    matchLabels:
      run: httpd
  template:
    metadata:
      labels:
        run: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.43
        ports:
        - containerPort: 80
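
Editing the manifest is one way to trigger the update; the image tag can equally be bumped from the command line (a sketch assuming the Deployment and container names used above):

kubectl set image deployment/httpd httpd=httpd:2.4.43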

Apply the manifest and list the Deployment and ReplicaSets

[root@k8s-01 ~]# kubectl apply -f httpd-deployment.yaml
deployment.apps/httpd configured
[root@k8s-01 ~]# kubectl get deployments.apps httpd -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES         SELECTOR
httpd   3/3     3            3           3m2s   httpd        httpd:2.4.43   run=httpd
[root@k8s-01 ~]# kubectl get replicasets.apps -o wide
NAME               DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES         SELECTOR
httpd-5bb8cdb99c   0         0         0       3m11s   httpd        httpd:2.4.41   pod-template-hash=5bb8cdb99c,run=httpd
httpd-7c68f97dc5   3         3         3       24s     httpd        httpd:2.4.43   pod-template-hash=7c68f97dc5,run=httpd
[root@k8s-01 ~]#

View the rolling update details (the old-image Pods are replaced one at a time)

[root@k8s-01 ~]# kubectl describe deployments.apps httpd
Name:                   httpd
Namespace:              default
CreationTimestamp:      Wed, 06 May 2020 09:20:14 +0000
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 2
                        kubectl.kubernetes.io/last-applied-configuration:
                          {"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"name":"httpd","namespace":"default"},"spec":{"replicas":3,"selec...
Selector:               run=httpd
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  run=httpd
  Containers:
   httpd:
    Image:        httpd:2.4.43
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   httpd-7c68f97dc5 (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  4m27s  deployment-controller  Scaled up replica set httpd-5bb8cdb99c to 3
  Normal  ScalingReplicaSet  100s   deployment-controller  Scaled up replica set httpd-7c68f97dc5 to 1
  Normal  ScalingReplicaSet  93s    deployment-controller  Scaled down replica set httpd-5bb8cdb99c to 2
  Normal  ScalingReplicaSet  93s    deployment-controller  Scaled up replica set httpd-7c68f97dc5 to 2
  Normal  ScalingReplicaSet  85s    deployment-controller  Scaled down replica set httpd-5bb8cdb99c to 1
  Normal  ScalingReplicaSet  85s    deployment-controller  Scaled up replica set httpd-7c68f97dc5 to 3
  Normal  ScalingReplicaSet  84s    deployment-controller  Scaled down replica set httpd-5bb8cdb99c to 0
[root@k8s-01 ~]#
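
The rollout can also be followed and, if necessary, reverted with the kubectl rollout subcommands (a sketch; revision numbers depend on the cluster's history):

kubectl rollout status deployment/httpd     # block until the rollout finishes
kubectl rollout history deployment/httpd    # list the recorded revisions
kubectl rollout undo deployment/httpd       # roll back to the previous revision
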
May 06, 2020

List the services in the cluster (type ClusterIP)

[root@k8s-01 ~]# kubectl get service -o wide
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE   SELECTOR
httpd-service   ClusterIP   10.109.145.140   <none>        8080/TCP   78m   run=httpd
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP    85m   <none>
[root@k8s-01 ~]#

Modify the Service manifest to use type NodePort and apply it

[root@k8s-01 ~]# vi httpd-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: httpd-service
spec:
  type: NodePort
  selector:
    run: httpd
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
[root@k8s-01 ~]# kubectl apply -f httpd-service.yaml
service/httpd-service configured
[root@k8s-01 ~]#

List the services in the cluster (type NodePort)

[root@k8s-01 ~]# kubectl get service -o wide
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE   SELECTOR
httpd-service   NodePort    10.109.145.140   <none>        8080:30093/TCP   81m   run=httpd
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP          88m   <none>
[root@k8s-01 ~]#

Access the in-cluster service via node IP + port (iptables rules provide the load-balanced packet forwarding)

[root@k8s-01 ~]# curl 167.99.108.90:30093
<html><body><h1>It works!</h1></body></html>
[root@k8s-01 ~]# curl 206.189.165.254:30093
<html><body><h1>It works!</h1></body></html>
[root@k8s-01 ~]# curl 167.99.108.90:30093
<html><body><h1>It works!</h1></body></html>
[root@k8s-01 ~]#
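
The forwarding is programmed by kube-proxy. Assuming kube-proxy runs in its default iptables mode, the NodePort rules can be inspected on any node, for example:

iptables -t nat -nL KUBE-NODEPORTS | grep 30093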

Pin the NodePort to a fixed port number (by default a random port from the 30000-32767 range is assigned)

[root@k8s-01 ~]# vi httpd-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: httpd-service
spec:
  type: NodePort
  selector:
    run: httpd
  ports:
  - protocol: TCP
    nodePort: 31234
    port: 8080
    targetPort: 80
[root@k8s-01 ~]# kubectl apply -f httpd-service.yaml
service/httpd-service configured
[root@k8s-01 ~]#

List the services in the cluster

[root@k8s-01 ~]# kubectl  get services -o wide
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE    SELECTOR
httpd-service   NodePort    10.109.145.140   <none>        8080:31234/TCP   93m    run=httpd
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP          100m   <none>
[root@k8s-01 ~]#

Meaning of the three port fields (see the access example after this list)

nodePort: the port exposed on each node
port: the port the Service's cluster IP listens on
targetPort: the port the backend Pod (container) listens on
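
Putting the three together, the same backend can be reached at each layer (a sketch using the addresses from the transcripts above; <pod-ip> is a placeholder for any backend Pod IP):

curl 167.99.108.90:31234     # node IP : nodePort  (reachable from outside the cluster)
curl 10.109.145.140:8080     # cluster IP : port   (reachable from nodes and Pods)
curl <pod-ip>:80             # Pod IP : targetPort (reachable from nodes and Pods)
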
May 06, 2020

In a Kubernetes cluster, a Service logically represents a group of Pods; the association with those Pods is established through labels.

Prepare the Deployment manifest

[root@k8s-01 ~]# vi httpd-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
spec:
  replicas: 3
  selector:
    matchLabels:
      run: httpd
  template:
    metadata:
      labels:
        run: httpd
    spec:
      containers:
      - name: httpd
        image: httpd:2.4.41
        ports:
        - containerPort: 80
[root@k8s-01 ~]# kubectl apply -f httpd-deployment.yaml 
deployment.apps/httpd created
[root@k8s-01 ~]#

List the Pods in the cluster

[root@k8s-01 ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
httpd-5bb8cdb99c-g5m95   1/1     Running   0          4m29s   10.244.2.3   k8s-03   <none>           <none>
httpd-5bb8cdb99c-hzjqd   1/1     Running   0          4m29s   10.244.1.3   k8s-02   <none>           <none>
httpd-5bb8cdb99c-s4q25   1/1     Running   0          4m29s   10.244.1.4   k8s-02   <none>           <none>
[root@k8s-01 ~]#

Use curl to simulate a browser request against the Pod IP addresses (Pod IPs are only reachable from containers and nodes inside the cluster)

[root@k8s-01 ~]# curl 10.244.2.3
<html><body><h1>It works!</h1></body></html>
[root@k8s-01 ~]# curl 10.244.1.3
<html><body><h1>It works!</h1></body></html>
[root@k8s-01 ~]# curl 10.244.1.4
<html><body><h1>It works!</h1></body></html>
[root@k8s-01 ~]#

[root@k8s-02 ~]# curl 10.244.2.3
<html><body><h1>It works!</h1></body></html>
[root@k8s-02 ~]# curl 10.244.1.3
<html><body><h1>It works!</h1></body></html>
[root@k8s-02 ~]# curl 10.244.1.4
<html><body><h1>It works!</h1></body></html>
[root@k8s-02 ~]#

[root@k8s-03 ~]# curl 10.244.2.3
<html><body><h1>It works!</h1></body></html>
[root@k8s-03 ~]# curl 10.244.1.3
<html><body><h1>It works!</h1></body></html>
[root@k8s-03 ~]# curl 10.244.1.4
<html><body><h1>It works!</h1></body></html>
[root@k8s-03 ~]#

Ping the Pod IPs

[root@k8s-01 ~]# ping -c 2 10.244.2.3
PING 10.244.2.3 (10.244.2.3) 56(84) bytes of data.
64 bytes from 10.244.2.3: icmp_seq=1 ttl=63 time=2.03 ms
64 bytes from 10.244.2.3: icmp_seq=2 ttl=63 time=0.660 ms

--- 10.244.2.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.660/1.348/2.036/0.688 ms
[root@k8s-01 ~]# ping -c 2 10.244.1.3
PING 10.244.1.3 (10.244.1.3) 56(84) bytes of data.
64 bytes from 10.244.1.3: icmp_seq=1 ttl=63 time=1.58 ms
64 bytes from 10.244.1.3: icmp_seq=2 ttl=63 time=0.641 ms

--- 10.244.1.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.641/1.115/1.589/0.474 ms
[root@k8s-01 ~]# ping -c 2 10.244.1.4
PING 10.244.1.4 (10.244.1.4) 56(84) bytes of data.
64 bytes from 10.244.1.4: icmp_seq=1 ttl=63 time=0.658 ms
64 bytes from 10.244.1.4: icmp_seq=2 ttl=63 time=0.483 ms

--- 10.244.1.4 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.483/0.570/0.658/0.090 ms
[root@k8s-01 ~]#

Create the Service manifest

[root@k8s-01 ~]# vi httpd-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: httpd-service
spec:
  selector:
    run: httpd
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
[root@k8s-01 ~]# kubectl apply -f httpd-service.yaml
service/httpd-service created
[root@k8s-01 ~]#

List the Services in the cluster

[root@k8s-01 ~]# kubectl get services -o wide
NAME            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE    SELECTOR
httpd-service   ClusterIP   10.109.145.140   <none>        8080/TCP   4m9s   run=httpd
kubernetes      ClusterIP   10.96.0.1        <none>        443/TCP    10m    <none>
[root@k8s-01 ~]#

Try to ping the cluster IP (it cannot be pinged by default: the ClusterIP is a virtual IP implemented by kube-proxy rules rather than an address bound to a network interface, so nothing answers ICMP)

[root@k8s-01 ~]# ping 10.109.145.140
PING 10.109.145.140 (10.109.145.140) 56(84) bytes of data.
^C
--- 10.109.145.140 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 1999ms

[root@k8s-01 ~]#
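
The cluster IP exists only as NAT rules maintained by kube-proxy. Assuming the default iptables mode, the rules that handle this Service can be listed on a node, for example:

iptables -t nat -nL KUBE-SERVICES | grep 10.109.145.140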

Use the Service's cluster IP to reach the backend Pods/containers carrying the run=httpd label

[root@k8s-01 ~]# curl 10.109.145.140:8080
<html><body><h1>It works!</h1></body></html>
[root@k8s-01 ~]# curl 10.109.145.140:8080
<html><body><h1>It works!</h1></body></html>
[root@k8s-01 ~]# curl 10.109.145.140:8080
<html><body><h1>It works!</h1></body></html>
[root@k8s-01 ~]# curl -I 10.109.145.140:8080
HTTP/1.1 200 OK
Date: Wed, 06 May 2020 07:24:57 GMT
Server: Apache/2.4.41 (Unix)
Last-Modified: Mon, 11 Jun 2007 18:53:14 GMT
ETag: "2d-432a5e4a73a80"
Accept-Ranges: bytes
Content-Length: 45
Content-Type: text/html

[root@k8s-01 ~]#

Describe the Service to confirm the backend Pod IPs behind the cluster IP

[root@k8s-01 ~]# kubectl describe services httpd-service
Name:              httpd-service
Namespace:         default
Labels:            <none>
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"httpd-service","namespace":"default"},"spec":{"ports":[{"port":80...
Selector:          run=httpd
Type:              ClusterIP
IP:                10.109.145.140
Port:              <unset>  8080/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.3:80,10.244.1.4:80,10.244.2.3:80
Session Affinity:  None
Events:            <none>
[root@k8s-01 ~]#
[root@k8s-01 ~]# kubectl get endpoints httpd-service
NAME            ENDPOINTS                                   AGE
httpd-service   10.244.1.3:80,10.244.1.4:80,10.244.2.3:80   5m23s
[root@k8s-01 ~]#
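
As a cross-check, listing Pods by the Service's label selector should return the same Pod IPs as the Endpoints object (a sketch):

kubectl get pods -l run=httpd -o wide
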
April 27, 2020

Unlike a Deployment, a DaemonSet runs exactly one replica on each node, which makes it suitable for node-level daemon services.

Look at the system components that are DaemonSets (kube-proxy and kube-flannel-ds-amd64)

List the DaemonSets in the kube-system namespace

[root@k8s01 ~]# kubectl get daemonsets.apps --namespace=kube-system 
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-flannel-ds-amd64     5         5         5       5            5           <none>                   6d16h
kube-flannel-ds-arm       0         0         0       0            0           <none>                   6d16h
kube-flannel-ds-arm64     0         0         0       0            0           <none>                   6d16h
kube-flannel-ds-ppc64le   0         0         0       0            0           <none>                   6d16h
kube-flannel-ds-s390x     0         0         0       0            0           <none>                   6d16h
kube-proxy                5         5         5       5            5           kubernetes.io/os=linux   6d16h
[root@k8s01 ~]#

List the Pods in the kube-system namespace (every node runs one replica of each DaemonSet)

[root@k8s01 ~]# kubectl get pods --namespace=kube-system -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
coredns-66bff467f8-5x8nf        1/1     Running   0          6d16h   10.244.1.2     k8s02   <none>           <none>
coredns-66bff467f8-mgcd2        1/1     Running   0          6d16h   10.244.0.2     k8s01   <none>           <none>
etcd-k8s01                      1/1     Running   0          6d16h   172.31.14.12   k8s01   <none>           <none>
kube-apiserver-k8s01            1/1     Running   0          6d16h   172.31.14.12   k8s01   <none>           <none>
kube-controller-manager-k8s01   1/1     Running   0          6d16h   172.31.14.12   k8s01   <none>           <none>
kube-flannel-ds-amd64-4ngbr     1/1     Running   0          6d16h   172.31.6.113   k8s03   <none>           <none>
kube-flannel-ds-amd64-j9qmh     1/1     Running   0          4d      172.31.1.139   k8s04   <none>           <none>
kube-flannel-ds-amd64-kmw29     1/1     Running   0          6d16h   172.31.3.249   k8s02   <none>           <none>
kube-flannel-ds-amd64-l57kp     1/1     Running   0          6d16h   172.31.14.12   k8s01   <none>           <none>
kube-flannel-ds-amd64-rr8sv     1/1     Running   1          4d      172.31.15.1    k8s05   <none>           <none>
kube-proxy-22fd2                1/1     Running   0          6d16h   172.31.3.249   k8s02   <none>           <none>
kube-proxy-97hft                1/1     Running   0          4d      172.31.1.139   k8s04   <none>           <none>
kube-proxy-jwwp2                1/1     Running   0          6d16h   172.31.6.113   k8s03   <none>           <none>
kube-proxy-mw6xf                1/1     Running   0          4d      172.31.15.1    k8s05   <none>           <none>
kube-proxy-wnf4q                1/1     Running   0          6d16h   172.31.14.12   k8s01   <none>           <none>
kube-scheduler-k8s01            1/1     Running   0          6d16h   172.31.14.12   k8s01   <none>           <none>
[root@k8s01 ~]#

Review the kube-flannel-ds-amd64 DaemonSet definition inside the flannel manifest

[root@k8s01 ~]# vi kube-flannel.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg

Run a DaemonSet resource of our own (a Fluentd log-collection agent)

[root@k8s01 ~]# vi daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      # this toleration is to have the daemonset runnable on master nodes
      # remove it if your masters can't run pods
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

Apply the manifest

[root@k8s01 ~]# kubectl apply -f daemonset.yaml 
daemonset.apps/fluentd-elasticsearch created
[root@k8s01 ~]# kubectl get daemonsets.apps 
No resources found in default namespace.
[root@k8s01 ~]# kubectl get daemonsets.apps --namespace=kube-system 
NAME                      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
fluentd-elasticsearch     5         5         5       5            5           <none>                   28s
kube-flannel-ds-amd64     5         5         5       5            5           <none>                   6d18h
kube-flannel-ds-arm       0         0         0       0            0           <none>                   6d18h
kube-flannel-ds-arm64     0         0         0       0            0           <none>                   6d18h
kube-flannel-ds-ppc64le   0         0         0       0            0           <none>                   6d18h
kube-flannel-ds-s390x     0         0         0       0            0           <none>                   6d18h
kube-proxy                5         5         5       5            5           kubernetes.io/os=linux   6d18h
[root@k8s01 ~]#

List the Pods in the kube-system namespace (one fluentd-elasticsearch Pod is running on every node)

[root@k8s01 ~]# kubectl get pods --namespace=kube-system -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
coredns-66bff467f8-5x8nf        1/1     Running   0          6d18h   10.244.1.2     k8s02   <none>           <none>
coredns-66bff467f8-mgcd2        1/1     Running   0          6d18h   10.244.0.2     k8s01   <none>           <none>
etcd-k8s01                      1/1     Running   0          6d18h   172.31.14.12   k8s01   <none>           <none>
fluentd-elasticsearch-64c2h     1/1     Running   0          84s     10.244.5.9     k8s05   <none>           <none>
fluentd-elasticsearch-f8989     1/1     Running   0          84s     10.244.0.3     k8s01   <none>           <none>
fluentd-elasticsearch-lcgn7     1/1     Running   0          84s     10.244.3.4     k8s04   <none>           <none>
fluentd-elasticsearch-ss2zm     1/1     Running   0          84s     10.244.1.20    k8s02   <none>           <none>
fluentd-elasticsearch-wkd45     1/1     Running   0          84s     10.244.2.39    k8s03   <none>           <none>
kube-apiserver-k8s01            1/1     Running   0          6d18h   172.31.14.12   k8s01   <none>           <none>
kube-controller-manager-k8s01   1/1     Running   0          6d18h   172.31.14.12   k8s01   <none>           <none>
kube-flannel-ds-amd64-4ngbr     1/1     Running   0          6d18h   172.31.6.113   k8s03   <none>           <none>
kube-flannel-ds-amd64-j9qmh     1/1     Running   0          4d2h    172.31.1.139   k8s04   <none>           <none>
kube-flannel-ds-amd64-kmw29     1/1     Running   0          6d18h   172.31.3.249   k8s02   <none>           <none>
kube-flannel-ds-amd64-l57kp     1/1     Running   0          6d18h   172.31.14.12   k8s01   <none>           <none>
kube-flannel-ds-amd64-rr8sv     1/1     Running   1          4d2h    172.31.15.1    k8s05   <none>           <none>
kube-proxy-22fd2                1/1     Running   0          6d18h   172.31.3.249   k8s02   <none>           <none>
kube-proxy-97hft                1/1     Running   0          4d2h    172.31.1.139   k8s04   <none>           <none>
kube-proxy-jwwp2                1/1     Running   0          6d18h   172.31.6.113   k8s03   <none>           <none>
kube-proxy-mw6xf                1/1     Running   0          4d2h    172.31.15.1    k8s05   <none>           <none>
kube-proxy-wnf4q                1/1     Running   0          6d18h   172.31.14.12   k8s01   <none>           <none>
kube-scheduler-k8s01            1/1     Running   0          6d18h   172.31.14.12   k8s01   <none>           <none>
[root@k8s01 ~]#
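
A quick sanity check that the DaemonSet runs exactly one replica per node (a sketch using the name=fluentd-elasticsearch label from the manifest above; both counts should match):

kubectl get nodes --no-headers | wc -l
kubectl get pods --namespace=kube-system -l name=fluentd-elasticsearch --no-headers | wc -l
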
April 27, 2020

List the current Pods in the cluster and the nodes they are running on

[root@k8s01 ~]# kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-bbfdbf4b7-8khd4   1/1     Running   0          3d23h   10.244.2.35   k8s03   <none>           <none>
nginx-deployment-bbfdbf4b7-9g825   1/1     Running   0          3d23h   10.244.1.17   k8s02   <none>           <none>
nginx-deployment-bbfdbf4b7-hsvfg   1/1     Running   0          3d23h   10.244.2.36   k8s03   <none>           <none>
nginx-deployment-bbfdbf4b7-jpt96   1/1     Running   0          3d23h   10.244.2.34   k8s03   <none>           <none>
nginx-deployment-bbfdbf4b7-vlnlk   1/1     Running   0          3d23h   10.244.1.18   k8s02   <none>           <none>
[root@k8s01 ~]# kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   5/5     5            5           5d15h
[root@k8s01 ~]#

Delete the nginx-deployment resource (its ReplicaSet and Pods are removed along with it)

[root@k8s01 ~]# kubectl delete deployments.apps nginx-deployment 
deployment.apps "nginx-deployment" deleted
[root@k8s01 ~]# kubectl get pods
No resources found in default namespace.
[root@k8s01 ~]#

List the nodes

[root@k8s01 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE     VERSION
k8s01   Ready    master   6d15h   v1.18.2
k8s02   Ready    <none>   6d15h   v1.18.2
k8s03   Ready    <none>   6d15h   v1.18.2
k8s04   Ready    <none>   3d23h   v1.18.2
k8s05   Ready    <none>   3d23h   v1.18.2
[root@k8s01 ~]#

Apply the nginx-deployment manifest

[root@k8s01 ~]# cat nginx-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.10
        ports:
        - containerPort: 80
[root@k8s01 ~]# kubectl apply -f nginx-deployment.yaml 
deployment.apps/nginx-deployment created
[root@k8s01 ~]# kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-cc5db57d4-dvr4p   1/1     Running   0          11s   10.244.2.37   k8s03   <none>           <none>
nginx-deployment-cc5db57d4-fnq9c   1/1     Running   0          11s   10.244.3.2    k8s04   <none>           <none>
[root@k8s01 ~]#

Show the default labels on each node

[root@k8s01 ~]# kubectl get nodes --show-labels 
NAME    STATUS   ROLES    AGE     VERSION   LABELS
k8s01   Ready    master   6d15h   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s01,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s02   Ready    <none>   6d15h   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s02,kubernetes.io/os=linux
k8s03   Ready    <none>   6d15h   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s03,kubernetes.io/os=linux
k8s04   Ready    <none>   3d23h   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s04,kubernetes.io/os=linux
k8s05   Ready    <none>   3d23h   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s05,kubernetes.io/os=linux
[root@k8s01 ~]#

Add a label key/value pair to a specific node

[root@k8s01 ~]# kubectl label nodes k8s05 disktype=ssd
node/k8s05 labeled
[root@k8s01 ~]# kubectl get nodes --show-labels 
NAME    STATUS   ROLES    AGE     VERSION   LABELS
k8s01   Ready    master   6d15h   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s01,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s02   Ready    <none>   6d15h   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s02,kubernetes.io/os=linux
k8s03   Ready    <none>   6d15h   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s03,kubernetes.io/os=linux
k8s04   Ready    <none>   3d23h   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s04,kubernetes.io/os=linux
k8s05   Ready    <none>   3d23h   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s05,kubernetes.io/os=linux
[root@k8s01 ~]#
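
The nodes carrying the new label can also be selected directly (a sketch):

kubectl get nodes -l disktype=ssd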

Modify the Deployment manifest to schedule onto nodes with that label (nodeSelector) and raise the replica count to 6

[root@k8s01 ~]# vi nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 6
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17.10
        ports:
        - containerPort: 80
      nodeSelector:
        disktype: ssd

Apply the manifest: the existing Pods are replaced and the new Pods are all scheduled onto node k8s05 (if no node carried the disktype=ssd label, the new Pods would stay Pending)

[root@k8s01 ~]# kubectl get pods -o wide
NAME                               READY   STATUS              RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-cc5db57d4-5lzsz   1/1     Running             0          12s     10.244.3.3    k8s04   <none>           <none>
nginx-deployment-cc5db57d4-dvr4p   1/1     Running             0          9m53s   10.244.2.37   k8s03   <none>           <none>
nginx-deployment-cc5db57d4-fnq9c   1/1     Running             0          9m53s   10.244.3.2    k8s04   <none>           <none>
nginx-deployment-cc5db57d4-hwmk4   1/1     Running             0          12s     10.244.1.19   k8s02   <none>           <none>
nginx-deployment-cc5db57d4-qt26r   1/1     Running             0          12s     10.244.2.38   k8s03   <none>           <none>
nginx-deployment-ddc6847d-4qx2m    0/1     ContainerCreating   0          12s     <none>        k8s05   <none>           <none>
nginx-deployment-ddc6847d-cvhv4    0/1     ContainerCreating   0          12s     <none>        k8s05   <none>           <none>
nginx-deployment-ddc6847d-dcztn    0/1     ContainerCreating   0          12s     <none>        k8s05   <none>           <none>
[root@k8s01 ~]# kubectl get pods -o wide
NAME                               READY   STATUS        RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-cc5db57d4-dvr4p   0/1     Terminating   0          10m   10.244.2.37   k8s03   <none>           <none>
nginx-deployment-cc5db57d4-fnq9c   0/1     Terminating   0          10m   10.244.3.2    k8s04   <none>           <none>
nginx-deployment-ddc6847d-26hl9    1/1     Running       0          13s   10.244.5.7    k8s05   <none>           <none>
nginx-deployment-ddc6847d-4qx2m    1/1     Running       0          26s   10.244.5.3    k8s05   <none>           <none>
nginx-deployment-ddc6847d-cvhv4    1/1     Running       0          26s   10.244.5.4    k8s05   <none>           <none>
nginx-deployment-ddc6847d-d6f99    1/1     Running       0          14s   10.244.5.6    k8s05   <none>           <none>
nginx-deployment-ddc6847d-dcztn    1/1     Running       0          26s   10.244.5.5    k8s05   <none>           <none>
nginx-deployment-ddc6847d-dj5x4    1/1     Running       0          12s   10.244.5.8    k8s05   <none>           <none>
[root@k8s01 ~]# kubectl get pods -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP           NODE    NOMINATED NODE   READINESS GATES
nginx-deployment-ddc6847d-26hl9   1/1     Running   0          21s   10.244.5.7   k8s05   <none>           <none>
nginx-deployment-ddc6847d-4qx2m   1/1     Running   0          34s   10.244.5.3   k8s05   <none>           <none>
nginx-deployment-ddc6847d-cvhv4   1/1     Running   0          34s   10.244.5.4   k8s05   <none>           <none>
nginx-deployment-ddc6847d-d6f99   1/1     Running   0          22s   10.244.5.6   k8s05   <none>           <none>
nginx-deployment-ddc6847d-dcztn   1/1     Running   0          34s   10.244.5.5   k8s05   <none>           <none>
nginx-deployment-ddc6847d-dj5x4   1/1     Running   0          20s   10.244.5.8   k8s05   <none>           <none>
[root@k8s01 ~]#

Remove the label (this does not evict Pods that are already running; nodeSelector is only evaluated at scheduling time)

[root@k8s01 ~]# kubectl label nodes k8s05 disktype-
node/k8s05 labeled
[root@k8s01 ~]# kubectl get nodes --show-labels 
NAME    STATUS   ROLES    AGE     VERSION   LABELS
k8s01   Ready    master   6d15h   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s01,kubernetes.io/os=linux,node-role.kubernetes.io/master=
k8s02   Ready    <none>   6d15h   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s02,kubernetes.io/os=linux
k8s03   Ready    <none>   6d15h   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s03,kubernetes.io/os=linux
k8s04   Ready    <none>   3d23h   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s04,kubernetes.io/os=linux
k8s05   Ready    <none>   3d23h   v1.18.2   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s05,kubernetes.io/os=linux
[root@k8s01 ~]#
April 23, 2020

Add new nodes to the cluster

172.31.3.209 k8s01
172.31.8.132 k8s02
172.31.10.229 k8s03
172.31.1.139 k8s04
172.31.15.1 k8s05

A new node joins the cluster with

kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>

The token generated on the control-plane node is valid for 24 hours; once it has expired, a new one must be created.
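
The lifetime can be chosen when the token is created; a sketch of the --ttl option (a value of 0 means the token never expires, so use it with care):

kubeadm token create --ttl 48h
kubeadm token create --ttl 0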

List the existing tokens

[root@k8s01 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
ca673s.97ektx8klpsjfovt   8h          2020-04-23T10:35:25Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
qxycbf.ri8i2zygahp5je8m   8h          2020-04-23T10:35:43Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
[root@k8s01 ~]#

Generate a new token

[root@k8s01 ~]# kubeadm token create
W0423 02:26:28.166475    9469 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
lf1qej.q4wq7xo23xigg672
[root@k8s01 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
ca673s.97ektx8klpsjfovt   8h          2020-04-23T10:35:25Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
lf1qej.q4wq7xo23xigg672   23h         2020-04-24T02:26:28Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
qxycbf.ri8i2zygahp5je8m   8h          2020-04-23T10:35:43Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
[root@k8s01 ~]#

Regenerate the hash value (it does not change, because it is derived from the cluster CA certificate)

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'

[root@k8s01 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
> openssl dgst -sha256 -hex | sed 's/^.* //'
d435ee7f3795a10b58762be903a78a99c719e3520fb029d718505095b37e9859
[root@k8s01 ~]#

Join node 4

[root@k8s04 ~]# kubeadm join --token lf1qej.q4wq7xo23xigg672 172.31.14.12:6443 --discovery-token-ca-cert-hash sha256:d435ee7f3795a10b58762be903a78a99c719e3520fb029d718505095b37e9859
W0423 02:28:44.283472 19177 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s04 ~]#

Join node 5

[root@k8s05 ~]# kubeadm join --token lf1qej.q4wq7xo23xigg672 172.31.14.12:6443 --discovery-token-ca-cert-hash sha256:d435ee7f3795a10b58762be903a78a99c719e3520fb029d718505095b37e9859
W0423 02:28:51.716851 19271 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s05 ~]#

List the nodes (both new nodes joined successfully)

[root@k8s01 ~]# kubectl get nodes -o wide
NAME    STATUS   ROLES    AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s01   Ready    master   2d16h   v1.18.2   172.31.14.12   <none>        CentOS Linux 7 (Core)   3.10.0-1062.12.1.el7.x86_64   docker://19.3.8
k8s02   Ready    <none>   2d16h   v1.18.2   172.31.3.249   <none>        CentOS Linux 7 (Core)   3.10.0-1062.12.1.el7.x86_64   docker://19.3.8
k8s03   Ready    <none>   2d16h   v1.18.2   172.31.6.113   <none>        CentOS Linux 7 (Core)   3.10.0-1062.12.1.el7.x86_64   docker://19.3.8
k8s04   Ready    <none>   78s     v1.18.2   172.31.1.139   <none>        CentOS Linux 7 (Core)   3.10.0-1062.12.1.el7.x86_64   docker://19.3.8
k8s05   Ready    <none>   70s     v1.18.2   172.31.15.1    <none>        CentOS Linux 7 (Core)   3.10.0-1062.12.1.el7.x86_64   docker://19.3.8
[root@k8s01 ~]#

Create a new token and print the complete join command in a single step

[root@k8s01 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
ca673s.97ektx8klpsjfovt   7h          2020-04-23T10:35:25Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
lf1qej.q4wq7xo23xigg672   23h         2020-04-24T02:26:28Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
qxycbf.ri8i2zygahp5je8m   7h          2020-04-23T10:35:43Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
[root@k8s01 ~]# kubeadm token create --print-join-command
W0423 02:41:47.487117   15377 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
kubeadm join 172.31.14.12:6443 --token vc6toc.jhhp9jatexn4ed7m     --discovery-token-ca-cert-hash sha256:d435ee7f3795a10b58762be903a78a99c719e3520fb029d718505095b37e9859 
[root@k8s01 ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                USAGES                   DESCRIPTION                                                EXTRA GROUPS
ca673s.97ektx8klpsjfovt   7h          2020-04-23T10:35:25Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
lf1qej.q4wq7xo23xigg672   23h         2020-04-24T02:26:28Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
qxycbf.ri8i2zygahp5je8m   7h          2020-04-23T10:35:43Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
vc6toc.jhhp9jatexn4ed7m   23h         2020-04-24T02:41:47Z   authentication,signing   <none>                                                     system:bootstrappers:kubeadm:default-node-token
[root@k8s01 ~]#
April 21, 2020

Base environment installation script (for CentOS 7 on Amazon AWS EC2)

#!/bin/bash
# Node prerequisites for Kubernetes on CentOS 7: SELinux, sysctl, Docker, kubeadm/kubelet/kubectl

# Disable SELinux
setenforce 0;
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config;
# Let bridged traffic pass through iptables
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system;    # load settings from /etc/sysctl.d (sysctl -p only reads /etc/sysctl.conf)
# Install Docker CE from the official repository
yum makecache;
yum install -y yum-utils;
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo;
yum install -y docker-ce docker-ce-cli containerd.io;
# Cluster host entries
cat <<EOF >> /etc/hosts
172.31.3.209 k8s01
172.31.8.132 k8s02
172.31.10.229 k8s03
EOF
# Docker daemon options: systemd cgroup driver, json-file log rotation, overlay2 storage
mkdir /etc/docker;
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
# Enable and start Docker
systemctl daemon-reload;
systemctl enable docker;
systemctl restart docker;
# Kubernetes package repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
# Install kubeadm, kubelet and kubectl, and enable kubelet
yum install -y kubectl kubelet kubeadm;
systemctl enable kubelet;

Run the script

[root@k8s01 ~]# vi deploy.sh
[root@k8s01 ~]# chmod 700 deploy.sh 
[root@k8s01 ~]# ./deploy.sh

Initialize the master node (--pod-network-cidr=10.244.0.0/16 matches the default flannel network)

kubeadm init --apiserver-advertise-address=172.31.14.12 --pod-network-cidr=10.244.0.0/16

[root@k8s01 ~]# kubeadm init --apiserver-advertise-address=172.31.14.12 --pod-network-cidr=10.244.0.0/16

Set up the local kubectl environment

[root@k8s01 ~]# mkdir -p $HOME/.kube
[root@k8s01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Join worker nodes to the cluster

kubeadm join 172.31.14.12:6443 --token ghr4s0.13nh5q6f6ywt2oso \
--discovery-token-ca-cert-hash sha256:d435ee7f3795a10b58762be903a78a99c719e3520fb029d718505095b37e9859

Join node 2

[root@k8s02 ~]# kubeadm join 172.31.14.12:6443 --token ghr4s0.13nh5q6f6ywt2oso \
> --discovery-token-ca-cert-hash sha256:d435ee7f3795a10b58762be903a78a99c719e3520fb029d718505095b37e9859
W0420 10:23:48.432125 9198 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s02 ~]#

Join node 3

[root@k8s03 ~]# kubeadm join 172.31.14.12:6443 --token ghr4s0.13nh5q6f6ywt2oso \
> --discovery-token-ca-cert-hash sha256:d435ee7f3795a10b58762be903a78a99c719e3520fb029d718505095b37e9859
W0420 10:24:14.829097 9202 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@k8s03 ~]#

Install the flannel network add-on

[root@k8s01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
[root@k8s01 ~]#

Get node information

[root@k8s01 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
k8s01   Ready    master   12m   v1.18.2
k8s02   Ready    <none>   10m   v1.18.2
k8s03   Ready    <none>   10m   v1.18.2
[root@k8s01 ~]#

Check the cluster component status

[root@k8s01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
[root@k8s01 ~]# 

List the local Docker images

[root@k8s01 ~]# docker image ls
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.18.2             0d40868643c6        3 days ago          117MB
k8s.gcr.io/kube-scheduler            v1.18.2             a3099161e137        3 days ago          95.3MB
k8s.gcr.io/kube-apiserver            v1.18.2             6ed75ad404bd        3 days ago          173MB
k8s.gcr.io/kube-controller-manager   v1.18.2             ace0a8c17ba9        3 days ago          162MB
quay.io/coreos/flannel               v0.12.0-amd64       4e9f801d2217        5 weeks ago         52.8MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        2 months ago        683kB
k8s.gcr.io/coredns                   1.6.7               67da37a9a360        2 months ago        43.8MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        5 months ago        288MB
[root@k8s01 ~]#

 

January 29, 2015

https://docs.gluster.org/en/latest/Quick-Start-Guide/Quickstart/
https://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart

GlusterFS is a scalable distributed file system that aggregates disk resources from multiple server nodes under a single global namespace.

Distributed file system nodes

glusterfs-01 138.197.217.220 10.138.18.152
glusterfs-02 157.245.169.92 10.138.146.225
glusterfs-03 165.227.21.222 10.138.178.108

Configure the hosts file on all nodes

[root@glusterfs-01 ~]# vi /etc/hosts
10.138.18.152 glusterfs-01
10.138.146.225 glusterfs-02
10.138.178.108 glusterfs-03

Check the current disks and partitions

[root@glusterfs-01 ~]# fdisk -l

Disk /dev/vda: 64.4 GB, 64424509440 bytes, 125829120 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b6061

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048   125829086    62913519+  83  Linux

Disk /dev/vdb: 0 MB, 466944 bytes, 912 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@glusterfs-01 ~]#

Create the partition

[root@glusterfs-01 ~]# fdisk /dev/sda
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x99c4ee31.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p): e
Partition number (1-4, default 1): 
First sector (2048-209715199, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-209715199, default 209715199): 
Using default value 209715199
Partition 1 of type Extended and of size 100 GiB is set

Command (m for help): n
Partition type:
   p   primary (0 primary, 1 extended, 3 free)
   l   logical (numbered from 5)
Select (default p): l
Adding logical partition 5
First sector (4096-209715199, default 4096): 
Using default value 4096
Last sector, +sectors or +size{K,M,G} (4096-209715199, default 209715199): 
Using default value 209715199
Partition 5 of type Linux and of size 100 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@glusterfs-01 ~]#

Check the disks and partitions again

[root@glusterfs-01 ~]# fdisk -l

Disk /dev/vda: 64.4 GB, 64424509440 bytes, 125829120 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b6061

   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *        2048   125829086    62913519+  83  Linux

Disk /dev/vdb: 0 MB, 466944 bytes, 912 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0xbb370b51

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1            2048   209715199   104856576    5  Extended
/dev/sda5            4096   209715199   104855552   83  Linux
[root@glusterfs-01 ~]#

Format and mount the data disk on all nodes

# mkfs.xfs -i size=512 /dev/sda5
# mkdir -p /data/brick1
# echo '/dev/sda5 /data/brick1 xfs defaults 1 2' >> /etc/fstab
# mount -a && mount

[root@glusterfs-01 ~]# mkfs.xfs -i size=512 /dev/sda5
meta-data=/dev/sda5              isize=512    agcount=4, agsize=6553472 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=26213888, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=12799, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
[root@glusterfs-01 ~]# mkdir -p /data/brick1
[root@glusterfs-01 ~]# echo '/dev/sda5 /data/brick1 xfs defaults 1 2' >> /etc/fstab
[root@glusterfs-01 ~]# mount -a && mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=917804k,nr_inodes=229451,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_prio,net_cls)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuacct,cpu)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/vda1 on / type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel)
mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=32,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=13335)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=188220k,mode=700)
/dev/sda5 on /data/brick1 type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
[root@glusterfs-01 ~]#

Install the GlusterFS packages on all nodes

[root@glusterfs-01 ~]# yum -y install centos-release-gluster
[root@glusterfs-01 ~]# yum -y install glusterfs-server

[root@glusterfs-02 ~]# yum -y install centos-release-gluster
[root@glusterfs-02 ~]# yum -y install glusterfs-server

[root@glusterfs-03 ~]# yum -y install centos-release-gluster
[root@glusterfs-03 ~]# yum -y install glusterfs-server

Enable and start the glusterfsd system service on all nodes

[root@glusterfs-01 ~]# systemctl enable glusterfsd
Created symlink from /etc/systemd/system/multi-user.target.wants/glusterfsd.service to /usr/lib/systemd/system/glusterfsd.service.
[root@glusterfs-01 ~]# systemctl start glusterfsd
[root@glusterfs-01 ~]# systemctl status glusterfsd
● glusterfsd.service - GlusterFS brick processes (stopping only)
   Loaded: loaded (/usr/lib/systemd/system/glusterfsd.service; enabled; vendor preset: disabled)
   Active: active (exited) since Tue 2020-05-26 07:28:17 UTC; 8s ago
  Process: 10737 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 10737 (code=exited, status=0/SUCCESS)

May 26 07:28:17 glusterfs-01 systemd[1]: Starting GlusterFS brick processes (stopping only)...
May 26 07:28:17 glusterfs-01 systemd[1]: Started GlusterFS brick processes (stopping only).
[root@glusterfs-01 ~]# 

[root@glusterfs-02 ~]# systemctl enable glusterfsd
Created symlink from /etc/systemd/system/multi-user.target.wants/glusterfsd.service to /usr/lib/systemd/system/glusterfsd.service.
[root@glusterfs-02 ~]# systemctl start glusterfsd
[root@glusterfs-02 ~]# systemctl status glusterfsd
● glusterfsd.service - GlusterFS brick processes (stopping only)
   Loaded: loaded (/usr/lib/systemd/system/glusterfsd.service; enabled; vendor preset: disabled)
   Active: active (exited) since Tue 2020-05-26 07:29:21 UTC; 11s ago
  Process: 18817 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 18817 (code=exited, status=0/SUCCESS)

May 26 07:29:20 glusterfs-02 systemd[1]: Starting GlusterFS brick processes (stopping only)...
May 26 07:29:21 glusterfs-02 systemd[1]: Started GlusterFS brick processes (stopping only).
[root@glusterfs-02 ~]# 

[root@glusterfs-03 ~]# systemctl enable glusterfsd
Created symlink from /etc/systemd/system/multi-user.target.wants/glusterfsd.service to /usr/lib/systemd/system/glusterfsd.service.
[root@glusterfs-03 ~]# systemctl start glusterfsd
[root@glusterfs-03 ~]# systemctl status glusterfsd
● glusterfsd.service - GlusterFS brick processes (stopping only)
   Loaded: loaded (/usr/lib/systemd/system/glusterfsd.service; enabled; vendor preset: disabled)
   Active: active (exited) since Tue 2020-05-26 07:30:27 UTC; 7s ago
  Process: 18444 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 18444 (code=exited, status=0/SUCCESS)

May 26 07:30:27 glusterfs-03 systemd[1]: Starting GlusterFS brick processes (stopping only)...
May 26 07:30:27 glusterfs-03 systemd[1]: Started GlusterFS brick processes (stopping only).
[root@glusterfs-03 ~]#
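
Note that glusterfsd.service is a stop-only stub (its ExecStart is /bin/true, as the status output above shows); the management daemon that actually listens on TCP 24007 in the next step is glusterd. A minimal sketch, assuming passwordless SSH from one admin host, to make sure glusterd itself is enabled and running on every node:

# Assumption: glusterd is the daemon the cluster depends on; enable and start it everywhere.
for node in glusterfs-01 glusterfs-02 glusterfs-03; do
  ssh root@${node} 'systemctl enable glusterd; systemctl start glusterd; systemctl is-active glusterd'
done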

Check listening ports

[root@glusterfs-01 ~]# netstat -lntuop
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name     Timer
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1195/master          off (0.00/0/0)
tcp        0      0 0.0.0.0:24007           0.0.0.0:*               LISTEN      1047/glusterd        off (0.00/0/0)
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd            off (0.00/0/0)
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1247/sshd            off (0.00/0/0)
tcp6       0      0 ::1:25                  :::*                    LISTEN      1195/master          off (0.00/0/0)
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd            off (0.00/0/0)
tcp6       0      0 :::22                   :::*                    LISTEN      1247/sshd            off (0.00/0/0)
udp        0      0 0.0.0.0:111             0.0.0.0:*                           1/systemd            off (0.00/0/0)
udp        0      0 127.0.0.1:323           0.0.0.0:*                           647/chronyd          off (0.00/0/0)
udp        0      0 0.0.0.0:802             0.0.0.0:*                           629/rpcbind          off (0.00/0/0)
udp6       0      0 :::111                  :::*                                1/systemd            off (0.00/0/0)
udp6       0      0 ::1:323                 :::*                                647/chronyd          off (0.00/0/0)
udp6       0      0 :::802                  :::*                                629/rpcbind          off (0.00/0/0)
[root@glusterfs-01 ~]#
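
The output above confirms glusterd listening on TCP 24007. If firewalld or iptables is active on the nodes, the management ports and the brick ports must be reachable between them; a sketch assuming firewalld, where the 49152-49251 brick range is an assumption (one port is consumed per brick):

# Run on every node; adjust the brick port range to the number of bricks you plan to create.
firewall-cmd --permanent --add-port=24007-24008/tcp --add-port=49152-49251/tcp
firewall-cmd --reload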

Check the version

[root@glusterfs-01 ~]# glusterfs -V
glusterfs 7.5
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.
[root@glusterfs-01 ~]#

Add the nodes to the trusted storage pool

[root@glusterfs-01 ~]# gluster peer probe glusterfs-02
peer probe: success. 
[root@glusterfs-01 ~]# gluster peer probe glusterfs-03
peer probe: success. 
[root@glusterfs-01 ~]#
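
Probing the other two nodes from glusterfs-01 is enough to form the pool; the only prerequisite is that every node can resolve the others' hostnames, for example via /etc/hosts (the addresses below are placeholders):

# Append on every node; replace the IPs with the real addresses of your environment.
cat >> /etc/hosts <<'EOF'
192.168.1.101 glusterfs-01
192.168.1.102 glusterfs-02
192.168.1.103 glusterfs-03
EOF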

Check peer status on each node

[root@glusterfs-01 ~]# gluster peer status
Number of Peers: 2

Hostname: glusterfs-02
Uuid: 9375a552-1cce-414c-8850-997800dd1f6e
State: Peer in Cluster (Connected)

Hostname: glusterfs-03
Uuid: c490e4ee-03f7-4b83-9456-6cccd101020f
State: Peer in Cluster (Connected)
[root@glusterfs-01 ~]#

[root@glusterfs-02 ~]# gluster peer status
Number of Peers: 2

Hostname: glusterfs-01
Uuid: 605bacf2-abb4-4083-be2b-0d17c843bc68
State: Peer in Cluster (Connected)

Hostname: glusterfs-03
Uuid: c490e4ee-03f7-4b83-9456-6cccd101020f
State: Peer in Cluster (Connected)
[root@glusterfs-02 ~]#

[root@glusterfs-03 ~]# gluster peer status
Number of Peers: 2

Hostname: glusterfs-01
Uuid: 605bacf2-abb4-4083-be2b-0d17c843bc68
State: Peer in Cluster (Connected)

Hostname: glusterfs-02
Uuid: 9375a552-1cce-414c-8850-997800dd1f6e
State: Peer in Cluster (Connected)
[root@glusterfs-03 ~]#

Create a volume (replica 3)

[root@glusterfs-01 ~]# mkdir -p /data/brick1/gv0
[root@glusterfs-02 ~]# mkdir -p /data/brick1/gv0
[root@glusterfs-03 ~]# mkdir -p /data/brick1/gv0

[root@glusterfs-01 ~]# gluster volume create gv0 replica 3 glusterfs-01:/data/brick1/gv0 glusterfs-02:/data/brick1/gv0 glusterfs-03:/data/brick1/gv0
volume create: gv0: success: please start the volume to access data
[root@glusterfs-01 ~]#

[root@glusterfs-01 ~]# gluster volume start gv0
volume start: gv0: success
[root@glusterfs-01 ~]#

View the volume info

[root@glusterfs-01 ~]# gluster volume info
 
Volume Name: gv0
Type: Replicate
Volume ID: aaa143ff-c7db-4b12-9d2f-4199c2cf76c9
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: glusterfs-01:/data/brick1/gv0
Brick2: glusterfs-02:/data/brick1/gv0
Brick3: glusterfs-03:/data/brick1/gv0
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@glusterfs-01 ~]#

View the volume status

[root@glusterfs-01 ~]# gluster volume status
Status of volume: gv0
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterfs-01:/data/brick1/gv0         49152     0          Y       1580 
Brick glusterfs-02:/data/brick1/gv0         49152     0          Y       10275
Brick glusterfs-03:/data/brick1/gv0         49152     0          Y       10248
Self-heal Daemon on localhost               N/A       N/A        Y       1601 
Self-heal Daemon on glusterfs-03            N/A       N/A        Y       10269
Self-heal Daemon on glusterfs-02            N/A       N/A        Y       10296
 
Task Status of Volume gv0
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@glusterfs-01 ~]#

Mount the newly created replica-3 filesystem via any node

In a GlusterFS cluster, the server named in the mount command is only used to fetch the volume configuration (the volfile). After that, the client communicates directly with the servers listed in the volfile, which need not even include the server used for mounting.

[root@glusterfs-01 ~]# mount -t glusterfs glusterfs-03:/gv0 /mnt
[root@glusterfs-01 ~]# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
devtmpfs on /dev type devtmpfs (rw,nosuid,size=917804k,nr_inodes=229451,mode=755)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,mode=755)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_prio,net_cls)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpuacct,cpu)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
configfs on /sys/kernel/config type configfs (rw,relatime)
/dev/vda1 on / type xfs (rw,relatime,attr2,inode64,noquota)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=24,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=12616)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
/dev/sda5 on /data/brick1 type xfs (rw,relatime,attr2,inode64,noquota)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=188220k,mode=700)
glusterfs-03:/gv0 on /mnt type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@glusterfs-01 ~]#
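
Because the server named at mount time is only used to fetch the volfile, the mount can be made more robust by listing backup volfile servers. The option name below is taken from the mount.glusterfs documentation and should be treated as an assumption; check the man page for your version:

# Fall back to the other peers for the volume configuration if glusterfs-03 is down at mount time.
mount -t glusterfs -o backup-volfile-servers=glusterfs-01:glusterfs-02 glusterfs-03:/gv0 /mnt

# A matching /etc/fstab entry (also a sketch):
# glusterfs-03:/gv0  /mnt  glusterfs  defaults,_netdev,backup-volfile-servers=glusterfs-01:glusterfs-02  0 0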

Write 20 files

[root@glusterfs-01 ~]# for i in `seq -w 1 20`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
[root@glusterfs-01 ~]#

Confirm the number of files written

[root@glusterfs-01 ~]# ls -lA /mnt/copy* | wc -l
20
[root@glusterfs-01 ~]#

View the written files in each node's local brick directory

[root@glusterfs-01 ~]# ls /data/brick1/gv0/
copy-test-01 copy-test-03 copy-test-05 copy-test-07 copy-test-09 copy-test-11 copy-test-13 copy-test-15 copy-test-17 copy-test-19
copy-test-02 copy-test-04 copy-test-06 copy-test-08 copy-test-10 copy-test-12 copy-test-14 copy-test-16 copy-test-18 copy-test-20
[root@glusterfs-01 ~]#

[root@glusterfs-02 ~]# ls /data/brick1/gv0/
copy-test-01 copy-test-03 copy-test-05 copy-test-07 copy-test-09 copy-test-11 copy-test-13 copy-test-15 copy-test-17 copy-test-19
copy-test-02 copy-test-04 copy-test-06 copy-test-08 copy-test-10 copy-test-12 copy-test-14 copy-test-16 copy-test-18 copy-test-20
[root@glusterfs-02 ~]#

[root@glusterfs-03 ~]# ls /data/brick1/gv0/
copy-test-01 copy-test-03 copy-test-05 copy-test-07 copy-test-09 copy-test-11 copy-test-13 copy-test-15 copy-test-17 copy-test-19
copy-test-02 copy-test-04 copy-test-06 copy-test-08 copy-test-10 copy-test-12 copy-test-14 copy-test-16 copy-test-18 copy-test-20
[root@glusterfs-03 ~]#
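
A quick way to confirm that the three replicas really hold the same data is to checksum one of the test files on every brick; on a healthy replica-3 volume all three sums should be identical (a sketch, again assuming passwordless SSH):

for node in glusterfs-01 glusterfs-02 glusterfs-03; do
  ssh root@${node} 'md5sum /data/brick1/gv0/copy-test-01'
done
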
6月 232014
 

Route from the FDC Seattle (US) datacenter to the Alibaba Cloud Hong Kong node

[root@mon-sea ~]$ tracepath 58.96.169.52
1: mon-sea (192.240.101.xxx) 0.167ms pmtu 1500
1: 50.7.72.129 (50.7.72.129) 5.492ms
1: 50.7.72.129 (50.7.72.129) 0.969ms
2: ae1.mpr1.sea1.us.above.net (208.184.53.193) 0.462ms
3: ae10.cr1.sjc2.us.above.net (64.125.21.25) 17.555ms
4: ae6.mpr1.pao1.us.above.net (64.125.31.46) 18.233ms
5: ge-0-0-1.0.ejr02.pao001.flagtel.com (198.32.176.137) 18.799ms asymm 6
6: so-0-1-1.0.pjr01.wad001.flagtel.com (62.216.128.2) 170.897ms asymm 13
7: ge-0-3-0.0.pjr02.hkg005.flagtel.com (85.95.26.89) 172.159ms asymm 12
8: so-5-0-0.0.cjr04.hkg003.flagtel.com (85.95.25.214) 174.487ms
9: xe-2-2-0.0.cji02.hkg003.flagtel.com (62.216.128.102) 172.114ms asymm 10
10: 80.77.0.198 (80.77.0.198) 170.967ms
11: 202.123.74.121 (202.123.74.121) 172.332ms
12: 58.96.160.245 (58.96.160.245) 171.111ms
13: 58.96.160.234 (58.96.160.234) 172.061ms
14: 58.96.160.241 (58.96.160.241) 170.377ms asymm 13
15: no reply
16: 58.96.169.52 (58.96.169.52) 171.787ms !H
Resume: pmtu 1500
[root@mon-sea ~]$

Latency

[root@mon-sea ~]$ ping -c 8 58.96.169.52
PING 58.96.169.52 (58.96.169.52) 56(84) bytes of data.
64 bytes from 58.96.169.52: icmp_seq=1 ttl=52 time=171 ms
64 bytes from 58.96.169.52: icmp_seq=2 ttl=52 time=174 ms
64 bytes from 58.96.169.52: icmp_seq=3 ttl=52 time=172 ms
64 bytes from 58.96.169.52: icmp_seq=4 ttl=52 time=171 ms
64 bytes from 58.96.169.52: icmp_seq=5 ttl=52 time=173 ms
64 bytes from 58.96.169.52: icmp_seq=6 ttl=52 time=182 ms
64 bytes from 58.96.169.52: icmp_seq=7 ttl=52 time=183 ms
64 bytes from 58.96.169.52: icmp_seq=8 ttl=52 time=172 ms

--- 58.96.169.52 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7183ms
rtt min/avg/max/mdev = 171.109/175.191/183.139/4.566 ms
[root@mon-sea ~]$
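
The same path, loss and latency picture can be captured in a single report with mtr, which combines traceroute and ping (assuming the mtr package is installed on the vantage point):

# -r prints a one-shot report instead of the interactive view; -c 10 sends ten probes per hop.
mtr -r -c 10 58.96.169.52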

Route from the FDC Los Angeles (US) datacenter to the Alibaba Cloud Hong Kong node

[root@mon-lax ~]$ tracepath 58.96.169.52
1: mon-lax (50.7.103.xx) 0.165ms pmtu 1500
1: 50.7.102.201 (50.7.102.201) 1.236ms
1: 50.7.102.201 (50.7.102.201) 0.918ms
2: ae1.er3.lax112.us.above.net (208.184.110.153) 0.452ms
3: ae6.cr1.lax112.us.above.net (64.125.20.221) 0.493ms
4: ae1.cr2.sjc2.us.above.net (64.125.31.233) 10.502ms
5: ae3.cr1.sjc2.us.above.net (64.125.26.26) 8.720ms asymm 4
6: ae6.mpr1.pao1.us.above.net (64.125.31.46) 9.541ms asymm 5
7: ge-0-0-1.0.ejr02.pao001.flagtel.com (198.32.176.137) 9.959ms
8: so-0-1-1.0.pjr01.wad001.flagtel.com (62.216.128.2) 161.433ms asymm 14
9: ge-0-3-0.0.pjr02.hkg005.flagtel.com (85.95.26.89) 183.356ms asymm 13
10: so-5-0-0.0.cjr04.hkg003.flagtel.com (85.95.25.214) 166.270ms asymm 9
11: xe-2-2-0.0.cji02.hkg003.flagtel.com (62.216.128.102) 161.535ms
12: 80.77.0.198 (80.77.0.198) 164.378ms asymm 11
13: 202.123.74.121 (202.123.74.121) 179.274ms asymm 12
14: 58.96.160.245 (58.96.160.245) 162.333ms asymm 13
15: 58.96.160.234 (58.96.160.234) 166.776ms asymm 13
16: 58.96.160.241 (58.96.160.241) 161.198ms asymm 14
17: no reply
18: 58.96.169.52 (58.96.169.52) 161.058ms !H
Resume: pmtu 1500
[root@mon-lax ~]$
[root@mon-lax ~]$ ping -c 8 58.96.169.52
PING 58.96.169.52 (58.96.169.52) 56(84) bytes of data.
64 bytes from 58.96.169.52: icmp_seq=1 ttl=51 time=161 ms
64 bytes from 58.96.169.52: icmp_seq=2 ttl=51 time=163 ms
64 bytes from 58.96.169.52: icmp_seq=3 ttl=51 time=161 ms
64 bytes from 58.96.169.52: icmp_seq=4 ttl=51 time=162 ms
64 bytes from 58.96.169.52: icmp_seq=5 ttl=51 time=162 ms
64 bytes from 58.96.169.52: icmp_seq=6 ttl=51 time=164 ms
64 bytes from 58.96.169.52: icmp_seq=7 ttl=51 time=162 ms
64 bytes from 58.96.169.52: icmp_seq=8 ttl=51 time=162 ms

--- 58.96.169.52 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7171ms
rtt min/avg/max/mdev = 161.219/162.574/164.032/0.949 ms
[root@mon-lax ~]$

Route from the Alibaba Cloud Beijing node to the Hong Kong node

[root@AY1405192126447871b3Z ~]# tracepath 58.96.169.52
1: 182.92.x.xx (182.92.x.xx) 0.124ms pmtu 1500
1: 182.92.3.249 (182.92.3.249) 0.490ms
1: 182.92.3.249 (182.92.3.249) 0.494ms
2: 10.106.128.34 (10.106.128.34) 0.474ms
3: 10.255.32.114 (10.255.32.114) 3.658ms
4: 180.149.140.33 (180.149.140.33) 2.347ms asymm 5
5: 180.149.128.101 (180.149.128.101) 4.384ms
6: 180.149.128.113 (180.149.128.113) 7.684ms
7: 202.97.53.102 (202.97.53.102) 3.716ms asymm 8
8: 202.97.53.234 (202.97.53.234) 4.918ms asymm 9
9: 202.97.61.54 (202.97.61.54) 41.146ms
10: no reply
11: 0.ge-6-0-2-XT3.HKG2.Alter.Net (210.80.3.109) 55.096ms asymm 12
12: 0.gigabitethernet6-0-0.GW9.HKG2.Alter.Net (210.80.3.74) 38.709ms asymm 13
13: towngastelecom-gw.customer.alter.net (202.130.165.14) 44.837ms
14: 202.123.74.121 (202.123.74.121) 73.009ms asymm 15
15: 58.96.160.245 (58.96.160.245) 49.868ms asymm 14
16: 58.96.160.234 (58.96.160.234) 48.499ms
17: 58.96.160.241 (58.96.160.241) 45.385ms asymm 16
18: no reply
19: 58.96.169.52 (58.96.169.52) 47.707ms !H
Resume: pmtu 1500
[root@AY1405192126447871b3Z ~]#

[root@AY1405192126447871b3Z ~]# ping -c 8 58.96.169.52
PING 58.96.169.52 (58.96.169.52) 56(84) bytes of data.
64 bytes from 58.96.169.52: icmp_seq=1 ttl=48 time=44.8 ms
64 bytes from 58.96.169.52: icmp_seq=2 ttl=48 time=43.1 ms
64 bytes from 58.96.169.52: icmp_seq=3 ttl=48 time=42.4 ms
64 bytes from 58.96.169.52: icmp_seq=4 ttl=48 time=42.7 ms
64 bytes from 58.96.169.52: icmp_seq=5 ttl=48 time=46.6 ms
64 bytes from 58.96.169.52: icmp_seq=6 ttl=48 time=52.2 ms
64 bytes from 58.96.169.52: icmp_seq=7 ttl=48 time=44.3 ms
64 bytes from 58.96.169.52: icmp_seq=8 ttl=48 time=46.4 ms

--- 58.96.169.52 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7057ms
rtt min/avg/max/mdev = 42.412/45.364/52.299/3.021 ms
[root@AY1405192126447871b3Z ~]#

Trace over the unoptimized route (detours through Japan)

[harveymei@monitor ~]$ tracepath 58.96.169.52
 1: 182.92.x.xx (182.92.x.xx) 0.115ms pmtu 1500
 1: 182.92.3.249 (182.92.3.249) 0.536ms 
 1: 182.92.3.249 (182.92.3.249) 0.697ms 
 2: 10.106.128.58 (10.106.128.58) 0.527ms 
 3: 10.255.32.110 (10.255.32.110) 1.505ms 
 4: 180.149.140.41 (180.149.140.41) 0.448ms asymm 5 
 5: 202.106.35.189 (202.106.35.189) 5.312ms 
 6: 180.149.128.9 (180.149.128.9) 3.736ms asymm 7 
 7: 202.97.53.34 (202.97.53.34) 3.659ms asymm 8 
 8: 219.158.101.34 (219.158.101.34) 5.899ms asymm 7 
 9: p64-7-0-1.r21.tokyjp05.jp.bb.gin.ntt.net (129.250.66.53) 60.465ms asymm 10 
10: ae-0.r25.tokyjp05.jp.bb.gin.ntt.net (129.250.6.200) 62.278ms asymm 11 
11: ae-8.r25.tokyjp05.jp.bb.gin.ntt.net (129.250.3.157) 50.973ms 
12: ae-1.r01.tokyjp03.jp.bb.gin.ntt.net (129.250.6.166) 124.612ms asymm 13 
13: xe-0-0-0-13.r01.tokyjp03.jp.ce.gin.ntt.net (61.213.160.222) 122.828ms asymm 14 
14: ge-1-1-0.0.pjr02.wad001.flagtel.com (85.95.26.117) 105.993ms asymm 21 
15: so-5-0-0.0.cjr04.hkg003.flagtel.com (85.95.25.214) 173.519ms asymm 18 
16: so-5-0-0.0.cjr04.hkg003.flagtel.com (85.95.25.214) 175.549ms asymm 18 
17: 80.77.0.198 (80.77.0.198) 265.257ms asymm 14 
18: 202.123.74.121 (202.123.74.121) 243.866ms asymm 13 
19: 202.123.74.121 (202.123.74.121) 239.736ms asymm 13 
20: 58.96.160.245 (58.96.160.245) 248.474ms asymm 15 
21: 58.96.169.52 (58.96.169.52) 254.868ms !H
 Resume: pmtu 1500 
[harveymei@monitor ~]$

Local desktop

root@root-desktop:~$ tracepath 58.96.169.52
1: root-desktop.local 0.125ms pmtu 1500
1: 192.168.1.254 1.021ms
1: 192.168.1.254 0.987ms
2: 183.49.125.243 1.122ms pmtu 1492
2: 183.49.124.1 5.230ms
3: 113.106.37.85 2.703ms
4: 121.15.179.54 3.779ms asymm 5
5: 121.34.242.250 9.189ms
6: 202.97.33.206 6.939ms
7: 202.97.60.70 131.812ms
8: 202.97.61.22 77.714ms
9: no reply
10: 0.ge-6-0-2-XT3.HKG2.Alter.Net 14.852ms asymm 11
11: 0.gigabitethernet6-0-0.GW9.HKG2.Alter.Net 11.380ms
12: towngastelecom-gw.customer.alter.net 18.089ms
13: 202.123.74.121 18.825ms
14: 58.96.160.245 117.302ms asymm 13
15: 58.96.160.234 18.335ms
16: 58.96.160.241 29.325ms asymm 15
17: no reply
18: 58.96.169.52 16.126ms !H
Resume: pmtu 1492
root@root-desktop:~$

 

root@root-desktop:~$ ping -c 8 58.96.169.52
PING 58.96.169.52 (58.96.169.52) 56(84) bytes of data.
64 bytes from 58.96.169.52: icmp_req=1 ttl=50 time=17.4 ms
64 bytes from 58.96.169.52: icmp_req=2 ttl=50 time=14.8 ms
64 bytes from 58.96.169.52: icmp_req=3 ttl=50 time=25.4 ms
64 bytes from 58.96.169.52: icmp_req=4 ttl=50 time=21.1 ms
64 bytes from 58.96.169.52: icmp_req=5 ttl=50 time=18.8 ms
64 bytes from 58.96.169.52: icmp_req=6 ttl=50 time=28.1 ms
64 bytes from 58.96.169.52: icmp_req=7 ttl=50 time=14.4 ms
64 bytes from 58.96.169.52: icmp_req=8 ttl=50 time=16.0 ms

--- 58.96.169.52 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7011ms
rtt min/avg/max/mdev = 14.414/19.566/28.197/4.716 ms
root@root-desktop:~$

 

[root@VM_27_135_centos ~]# ping -c 8 58.96.169.52
PING 58.96.169.52 (58.96.169.52) 56(84) bytes of data.
64 bytes from 58.96.169.52: icmp_seq=1 ttl=45 time=16.1 ms
64 bytes from 58.96.169.52: icmp_seq=2 ttl=45 time=16.4 ms
64 bytes from 58.96.169.52: icmp_seq=3 ttl=45 time=16.9 ms
64 bytes from 58.96.169.52: icmp_seq=4 ttl=45 time=16.2 ms
64 bytes from 58.96.169.52: icmp_seq=5 ttl=45 time=20.2 ms
64 bytes from 58.96.169.52: icmp_seq=6 ttl=45 time=16.8 ms
64 bytes from 58.96.169.52: icmp_seq=7 ttl=45 time=17.8 ms
64 bytes from 58.96.169.52: icmp_seq=8 ttl=45 time=16.3 ms

--- 58.96.169.52 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7027ms
rtt min/avg/max/mdev = 16.134/17.120/20.225/1.292 ms
[root@VM_27_135_centos ~]#

6月 202014
 

Tencent Cloud minimum-spec instance CPU info: Xeon X3440, 2.53 GHz, 4 MB cache

[root@VM_27_135_centos ~]# cat /proc/cpuinfo
 processor : 0
 vendor_id : GenuineIntel
 cpu family : 6
 model : 30
 model name : Intel(R) Xeon(R) CPU X3440 @ 2.53GHz
 stepping : 5
 cpu MHz : 2526.998
 cache size : 4096 KB
 physical id : 0
 siblings : 1
 core id : 0
 cpu cores : 1
 apicid : 0
 initial apicid : 0
 fpu : yes
 fpu_exception : yes
 cpuid level : 11
 wp : yes
 flags : fpu vme de pse tsc msr pae mce cx8 apic mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx lm constant_tsc up arch_perfmon rep_good unfair_spinlock pni ssse3 cx16 sse4_1 sse4_2 popcnt hypervisor lahf_lm
 bogomips : 5053.99
 clflush size : 64
 cache_alignment : 64
 address sizes : 40 bits physical, 48 bits virtual
 power management:

Alibaba Cloud minimum-spec instance CPU info
[root@AY1405192126447871b3Z ~]# cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 45
model name : Intel(R) Xeon(R) CPU E5-2430 0 @ 2.20GHz
stepping : 7
cpu MHz : 2200.095
cache size : 15360 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat clflush mmx fxsr sse sse2 ht syscall nx lm up rep_good unfair_spinlock pni ssse3 cx16 sse4_1 sse4_2 popcnt aes hypervisor lahf_lm
bogomips : 4400.19
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

[root@AY1405192126447871b3Z ~]#

Check dmesg to confirm which virtualization platform is in use
[root@VM_27_135_centos ~]# dmesg |grep xen
[root@VM_27_135_centos ~]# dmesg |grep kvm
kvm-clock: Using msrs 4b564d01 and 4b564d00
kvm-clock: cpu 0, msr 0:1c257c1, boot clock
kvm-clock: cpu 0, msr 0:22167c1, primary cpu clock
kvm-stealtime: cpu 0, msr 220e880
Switching to clocksource kvm-clock
[root@VM_27_135_centos ~]#
[root@AY1405192126447871b3Z ~]# dmesg |grep xen
CPU: CPU feature rdtscp disabled on xen guest
CPU: CPU feature constant_tsc disabled on xen guest
xen-platform-pci 0000:00:03.0: PCI INT A -> GSI 28 (level, low) -> IRQ 28
[root@AY1405192126447871b3Z ~]# dmesg |grep kvm
[root@AY1405192126447871b3Z ~]#

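Besides grepping dmesg, the virt-what tool (packaged in the CentOS base repository) reports the detected hypervisor directly, which is handy once the dmesg ring buffer has rotated; a sketch:

# Expected to print e.g. "kvm" on the Tencent Cloud host and "xen"/"xen-hvm" on the Alibaba Cloud host.
yum -y install virt-what
virt-what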

The Tencent Cloud instance has a single NIC, with no public IP bound on the host
[root@VM_27_135_centos ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 52:54:00:D3:AD:AD
inet addr:10.142.27.135 Bcast:10.142.27.255 Mask:255.255.255.0
inet6 addr: fe80::5054:ff:fed3:adad/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:25968 errors:0 dropped:0 overruns:0 frame:0
TX packets:21664 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:190788488 (181.9 MiB) TX bytes:10550093 (10.0 MiB)

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
inet6 addr: ::1/128 Scope:Host
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:4 errors:0 dropped:0 overruns:0 frame:0
TX packets:4 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:200 (200.0 b) TX bytes:200 (200.0 b)

[root@VM_27_135_centos ~]#

The Alibaba Cloud instance (Beijing node) has two NICs, one bound to the private IP and one to the public IP
[root@AY1405192126447871b3Z ~]# ifconfig
eth0 Link encap:Ethernet HWaddr 00:16:3E:00:3B:C0
inet addr:10.162.222.113 Bcast:10.162.223.255 Mask:255.255.240.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:83246285 errors:0 dropped:0 overruns:0 frame:0
TX packets:81922830 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:25346599332 (23.6 GiB) TX bytes:25324814074 (23.5 GiB)
Interrupt:165

eth1 Link encap:Ethernet HWaddr 00:16:3E:00:3B:C2
inet addr:182.92.x.xx Bcast:182.92.11.255 Mask:255.255.252.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:131305480 errors:0 dropped:0 overruns:0 frame:0
TX packets:793230 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:6117093526 (5.6 GiB) TX bytes:270508082 (257.9 MiB)
Interrupt:164

lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:16436 Metric:1
RX packets:68664 errors:0 dropped:0 overruns:0 frame:0
TX packets:68664 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:5239997 (4.9 MiB) TX bytes:5239997 (4.9 MiB)

[root@AY1405192126447871b3Z ~]#

Stock CentOS 6.3 x86_64 system: the default kernel version, and the kernel version after running yum update
[root@VM_27_135_centos ~]# uname -ar
Linux VM_27_135_centos 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
[root@VM_27_135_centos ~]#

[root@VM_27_135_centos ~]# uname -ar
Linux VM_27_135_centos 2.6.32-431.20.3.el6.centos.plus.x86_64 #1 SMP Thu Jun 19 23:04:15 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
[root@VM_27_135_centos ~]#

Latency pinging www.qq.com from my local machine (Shenzhen Telecom 100 Mbps fiber broadband)
harveymei@linux-7zyd:~> ping -c 6 www.qq.com
PING www.qq.com (14.17.32.211) 56(84) bytes of data.
64 bytes from 14.17.32.211: icmp_seq=1 ttl=55 time=2.16 ms
64 bytes from 14.17.32.211: icmp_seq=2 ttl=55 time=3.60 ms
64 bytes from 14.17.32.211: icmp_seq=3 ttl=55 time=3.57 ms
64 bytes from 14.17.32.211: icmp_seq=4 ttl=55 time=3.72 ms
64 bytes from 14.17.32.211: icmp_seq=5 ttl=55 time=4.98 ms
64 bytes from 14.17.32.211: icmp_seq=6 ttl=55 time=3.03 ms

--- www.qq.com ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5005ms
rtt min/avg/max/mdev = 2.160/3.515/4.988/0.846 ms
harveymei@linux-7zyd:~>

Latency pinging www.qq.com from the qcloud host (Tencent Cloud South China Guangzhou node, 1 Mbps)
[root@VM_27_135_centos ~]# ping -c 6 www.qq.com
PING www.qq.com (183.60.15.153) 56(84) bytes of data.
64 bytes from 183.60.15.153: icmp_seq=1 ttl=51 time=11.2 ms
64 bytes from 183.60.15.153: icmp_seq=2 ttl=51 time=11.2 ms
64 bytes from 183.60.15.153: icmp_seq=3 ttl=51 time=11.2 ms
64 bytes from 183.60.15.153: icmp_seq=4 ttl=51 time=11.2 ms
64 bytes from 183.60.15.153: icmp_seq=5 ttl=51 time=11.2 ms
64 bytes from 183.60.15.153: icmp_seq=6 ttl=51 time=11.2 ms

--- www.qq.com ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5018ms
rtt min/avg/max/mdev = 11.202/11.242/11.272/0.109 ms
[root@VM_27_135_centos ~]#

[root@VM_27_135_centos ~]# ping -c 6 www.qq.com
PING www.qq.com (14.17.32.211) 56(84) bytes of data.
64 bytes from 14.17.32.211: icmp_seq=1 ttl=50 time=10.6 ms
64 bytes from 14.17.32.211: icmp_seq=2 ttl=50 time=10.3 ms
64 bytes from 14.17.32.211: icmp_seq=3 ttl=50 time=10.5 ms
64 bytes from 14.17.32.211: icmp_seq=4 ttl=50 time=10.5 ms
64 bytes from 14.17.32.211: icmp_seq=5 ttl=50 time=10.5 ms
64 bytes from 14.17.32.211: icmp_seq=6 ttl=50 time=10.5 ms

--- www.qq.com ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5019ms
rtt min/avg/max/mdev = 10.391/10.549/10.639/0.154 ms
[root@VM_27_135_centos ~]#

Latency pinging www.qq.com from the Alibaba Cloud Beijing node host (1 Mbps)
[root@AY1405192126447871b3Z ~]# ping -c 6 www.qq.com
PING www.qq.com (61.135.157.156) 56(84) bytes of data.
64 bytes from 61.135.157.156: icmp_seq=1 ttl=52 time=5.76 ms
64 bytes from 61.135.157.156: icmp_seq=2 ttl=52 time=5.55 ms
64 bytes from 61.135.157.156: icmp_seq=3 ttl=52 time=5.72 ms
64 bytes from 61.135.157.156: icmp_seq=4 ttl=52 time=5.69 ms
64 bytes from 61.135.157.156: icmp_seq=5 ttl=52 time=5.69 ms
64 bytes from 61.135.157.156: icmp_seq=6 ttl=52 time=5.62 ms

--- www.qq.com ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5014ms
rtt min/avg/max/mdev = 5.550/5.674/5.760/0.091 ms
[root@AY1405192126447871b3Z ~]#

Other hosts and devices on the internal network outside my own account can be pinged (10.142.27.134 and 10.142.27.129 respond; 10.142.27.136 does not)
[root@VM_27_135_centos ~]# ping -c 2 10.142.27.136
PING 10.142.27.136 (10.142.27.136) 56(84) bytes of data.
From 10.142.27.135 icmp_seq=1 Destination Host Unreachable
From 10.142.27.135 icmp_seq=2 Destination Host Unreachable

--- 10.142.27.136 ping statistics ---
2 packets transmitted, 0 received, +2 errors, 100% packet loss, time 3000ms
pipe 2
[root@VM_27_135_centos ~]# ping -c 2 10.142.27.135
PING 10.142.27.135 (10.142.27.135) 56(84) bytes of data.
64 bytes from 10.142.27.135: icmp_seq=1 ttl=64 time=0.024 ms
64 bytes from 10.142.27.135: icmp_seq=2 ttl=64 time=0.028 ms

--- 10.142.27.135 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.024/0.026/0.028/0.002 ms
[root@VM_27_135_centos ~]# ping -c 2 10.142.27.134
PING 10.142.27.134 (10.142.27.134) 56(84) bytes of data.
64 bytes from 10.142.27.134: icmp_seq=1 ttl=64 time=1.15 ms
64 bytes from 10.142.27.134: icmp_seq=2 ttl=64 time=0.501 ms

--- 10.142.27.134 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.501/0.827/1.154/0.327 ms
[root@VM_27_135_centos ~]# ping -c 2 10.142.27.129
PING 10.142.27.129 (10.142.27.129) 56(84) bytes of data.
64 bytes from 10.142.27.129: icmp_seq=1 ttl=64 time=2.35 ms
64 bytes from 10.142.27.129: icmp_seq=2 ttl=64 time=0.699 ms

--- 10.142.27.129 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.699/1.529/2.359/0.830 ms
[root@VM_27_135_centos ~]#

Press release from Towngas Telecom (名氣通), the underlying telecom carrier for the Alibaba Cloud Hong Kong node

About Towngas Telecommunications Company Limited (名氣通)
Towngas Telecommunications Company Limited (Towngas Telecom) is a wholly owned subsidiary of The Hong Kong and China Gas Company Limited (Towngas), formally established in 2004. Its main businesses cover network construction, data centres, smart home and cloud computing services. Carrying on Towngas's culture of quality service, Towngas Telecom operates as a carrier-neutral telecom provider with multiple world-class data centres and network infrastructure services in both Hong Kong and mainland China, and uses gas-pipe fibre technology to lay fibre networks across Hong Kong, offering a broader range of services to major enterprises, international network service providers and professional customers.