June 12, 2020

Rancher's definitions of Kubernetes cluster node roles

https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/production/nodes-and-roles/
https://kubernetes.io/docs/concepts/overview/components/

etcd

Nodes with the etcd role run etcd, the consistent and highly available key-value store that holds the Kubernetes cluster's configuration data. etcd replicates its data to every node with this role.
Note: in the UI, nodes with the etcd role are shown as "Unschedulable", meaning that by default Pods are not scheduled onto these nodes.

controlplane

Nodes with the controlplane role run the Kubernetes master components (excluding etcd, which is a separate role). These components include kube-apiserver, kube-scheduler, kube-controller-manager, and cloud-controller-manager.
Note: in the UI, nodes with the controlplane role are shown as "Unschedulable", meaning that by default Pods are not scheduled onto these nodes.

worker

Nodes with the worker role run the Kubernetes node components, which include the kubelet, kube-proxy, and the container runtime.
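
The assigned roles can also be verified from kubectl once a cluster is up; a minimal sketch, where <node-name> is a placeholder rather than a node from this post:

kubectl get nodes -o wide                          # the ROLES column lists controlplane, etcd, worker
kubectl describe node <node-name> | grep Taints    # roles shown as Unschedulable carry matching taints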

June 11, 2020

Rancher is a complete software stack for teams adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes clusters across any infrastructure, while providing DevOps teams with integrated tools for running containerized workloads.

Disable SELinux

[root@rancher ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
[root@rancher ~]# setenforce 0
[root@rancher ~]# getenforce 
Permissive
[root@rancher ~]#

Install the Docker runtime

[root@rancher ~]# curl https://releases.rancher.com/install-docker/18.09.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 15521  100 15521    0     0  92374      0 --:--:-- --:--:-- --:--:-- 92940
+ '[' centos = redhat ']'
+ sh -c 'yum install -y -q yum-utils'
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
warning: /var/cache/yum/x86_64/7/updates/packages/yum-utils-1.1.31-54.el7_8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for yum-utils-1.1.31-54.el7_8.noarch.rpm is not installed
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 Package    : centos-release-7-6.1810.2.el7.centos.x86_64 (installed)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
+ sh -c 'yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo'
Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
+ '[' stable '!=' stable ']'
+ sh -c 'yum makecache fast'
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.keystealth.org
 * extras: mirror.fileplanet.com
 * updates: mirror.web-ster.com
base                                                                                                                                                     | 3.6 kB  00:00:00     
docker-ce-stable                                                                                                                                         | 3.5 kB  00:00:00     
extras                                                                                                                                                   | 2.9 kB  00:00:00     
updates                                                                                                                                                  | 2.9 kB  00:00:00     
(1/2): docker-ce-stable/x86_64/updateinfo                                                                                                                |   55 B  00:00:00     
(2/2): docker-ce-stable/x86_64/primary_db                                                                                                                |  44 kB  00:00:00     
Metadata Cache Created
+ sh -c 'yum install -y -q docker-ce-18.09.9 docker-ce-cli-18.09.9'
warning: /var/cache/yum/x86_64/7/docker-ce-stable/packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Public key for containerd.io-1.2.13-3.2.el7.x86_64.rpm is not installed
Importing GPG key 0x621E9F35:
 Userid     : "Docker Release (CE rpm) <docker@docker.com>"
 Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
 From       : https://download.docker.com/linux/centos/gpg
setsebool:  SELinux is disabled.
+ '[' -d /run/systemd/system ']'
+ sh -c 'service docker start'
Redirecting to /bin/systemctl start docker.service
+ sh -c 'docker version'
Client:
 Version:           18.09.9
 API version:       1.39
 Go version:        go1.11.13
 Git commit:        039a7df9ba
 Built:             Wed Sep  4 16:51:21 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.9
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.13
  Git commit:       039a7df
  Built:            Wed Sep  4 16:22:32 2019
  OS/Arch:          linux/amd64
  Experimental:     false

If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker your-user

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.

[root@rancher ~]#

List of available Docker version install scripts

https://github.com/rancher/install-docker

Configure the DNS record

rancher.bcoc.site ----> 167.71.149.159
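
Before starting Rancher with --acme-domain, it is worth confirming that the record resolves (a quick check, assuming dig is available on the host):

dig +short rancher.bcoc.site    # expected to return 167.71.149.159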

Install Rancher with persistent storage and a Let's Encrypt certificate

docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /opt/rancher:/var/lib/rancher \
  rancher/rancher:latest \
  --acme-domain rancher.bcoc.site
  
[root@rancher ~]# docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
[root@rancher ~]# docker container ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@rancher ~]# 
[root@rancher ~]# docker run -d --restart=unless-stopped \
>   -p 80:80 -p 443:443 \
>   -v /opt/rancher:/var/lib/rancher \
>   rancher/rancher:latest \
>   --acme-domain rancher.bcoc.site
Unable to find image 'rancher/rancher:latest' locally
latest: Pulling from rancher/rancher
23884877105a: Pull complete 
bc38caa0f5b9: Pull complete 
2910811b6c42: Pull complete 
36505266dcc6: Pull complete 
99447ff7670f: Pull complete 
879c87dc86fd: Pull complete 
5b954e5aebf8: Pull complete 
664e1faf26b5: Pull complete 
bf7ac75d932b: Pull complete 
7e972d16ff5b: Pull complete 
08314b1e671c: Pull complete 
d5ce20b3d070: Pull complete 
20e75cd9c8e9: Pull complete 
80daa2770be8: Pull complete 
7fb927855713: Pull complete 
af20d79674f1: Pull complete 
d6a9086242eb: Pull complete 
887a8f050cee: Pull complete 
834df47e622f: Pull complete 
Digest: sha256:25ab51f5366ee7b7add66bc41203eac4b8654386630432ac4f334f69f8baf706
Status: Downloaded newer image for rancher/rancher:latest
7b54dbd549650b332c9ded7904e044774ddce775f54e3f6802d22f9c2e626057
[root@rancher ~]#

Check the running Rancher container

[root@rancher ~]# docker container ps
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                                      NAMES
7b54dbd54965        rancher/rancher:latest   "entrypoint.sh --acm…"   20 seconds ago      Up 19 seconds       0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   recursing_joliot
[root@rancher ~]#
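
While Rancher starts, the Let's Encrypt issuance for rancher.bcoc.site can be followed in the container log (a minimal check; the container name comes from the docker container ps output above, Ctrl-C to stop following):

docker logs -f recursing_joliot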

Log in to the web console and set a password for the default admin user

Confirm the web console access URL

Console main page

Check the HTTPS certificate details

Create the cluster configuration

Cluster configuration details

Generate the cluster node registration command according to the node role types

Run the following on one or more nodes that already have Docker installed

sudo docker run -d --privileged --restart=unless-stopped --net=host \
-v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.4.4 \
--server https://rancher.bcoc.site --token 7lmgztttzn7z2l8w6t4xhdz9gz2l7rpks6x7gc8222pjddt2mxlwcp \
--etcd --controlplane --worker
Run on rancher-01

[root@rancher-01 ~]# sudo docker run -d --privileged --restart=unless-stopped --net=host \
> -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.4.4 \
> --server https://rancher.bcoc.site --token 7lmgztttzn7z2l8w6t4xhdz9gz2l7rpks6x7gc8222pjddt2mxlwcp \
> --etcd --controlplane --worker
Unable to find image 'rancher/rancher-agent:v2.4.4' locally
v2.4.4: Pulling from rancher/rancher-agent
23884877105a: Pull complete 
bc38caa0f5b9: Pull complete 
2910811b6c42: Pull complete 
36505266dcc6: Pull complete 
839286d9c3a6: Pull complete 
8a1ba646e5a3: Pull complete 
4917caa38753: Pull complete 
b56094248bdf: Pull complete 
77f08dadb4eb: Pull complete 
d925a4b78308: Pull complete 
Digest: sha256:a6b416d7e5f89d28f8f8a54472cabe656378bc8c1903d08e1c2e9e453cdab1ff
Status: Downloaded newer image for rancher/rancher-agent:v2.4.4
eea306867dca30ad9f70dcd764e723fec2b10239212205535ab83f24fc6827ed
[root@rancher-01 ~]#

Run on rancher-02

[root@rancher-02 ~]# sudo docker run -d --privileged --restart=unless-stopped --net=host \
> -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.4.4 \
> --server https://rancher.bcoc.site --token 7lmgztttzn7z2l8w6t4xhdz9gz2l7rpks6x7gc8222pjddt2mxlwcp \
> --etcd --controlplane --worker
Unable to find image 'rancher/rancher-agent:v2.4.4' locally
v2.4.4: Pulling from rancher/rancher-agent
23884877105a: Pull complete 
bc38caa0f5b9: Pull complete 
2910811b6c42: Pull complete 
36505266dcc6: Pull complete 
839286d9c3a6: Pull complete 
8a1ba646e5a3: Pull complete 
4917caa38753: Pull complete 
b56094248bdf: Pull complete 
77f08dadb4eb: Pull complete 
d925a4b78308: Pull complete 
Digest: sha256:a6b416d7e5f89d28f8f8a54472cabe656378bc8c1903d08e1c2e9e453cdab1ff
Status: Downloaded newer image for rancher/rancher-agent:v2.4.4
1f84c5b8afa35475fada986834458c08c565ff7d2b3dd4965a55a2439036e45b
[root@rancher-02 ~]#

The web console shows the cluster being provisioned

Cluster created successfully

June 10, 2020

Install the Apache and Subversion services

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# yum install httpd subversion mod_dav_svn mariadb-server mariadb apr-util-mysql

Installed:
  apr-util-mysql.x86_64 0:1.5.2-6.el7                httpd.x86_64 0:2.4.6-93.el7.centos                 
  mariadb.x86_64 1:5.5.65-1.el7                      mariadb-server.x86_64 1:5.5.65-1.el7               
  mod_dav_svn.x86_64 0:1.7.14-14.el7                 subversion.x86_64 0:1.7.14-14.el7                  

Dependency Installed:
  apr.x86_64 0:1.4.8-5.el7                            apr-util.x86_64 0:1.5.2-6.el7                     
  centos-logos.noarch 0:70.0.6-3.el7.centos           gnutls.x86_64 0:3.3.29-9.el7_6                    
  httpd-tools.x86_64 0:2.4.6-93.el7.centos            libaio.x86_64 0:0.3.109-13.el7                    
  libmodman.x86_64 0:2.0.1-8.el7                      libproxy.x86_64 0:0.4.11-11.el7                   
  mailcap.noarch 0:2.1.41-2.el7                       neon.x86_64 0:0.30.0-4.el7                        
  nettle.x86_64 0:2.7.1-8.el7                         pakchois.x86_64 0:0.4-10.el7                      
  perl.x86_64 4:5.16.3-295.el7                        perl-Carp.noarch 0:1.26-244.el7                   
  perl-Compress-Raw-Bzip2.x86_64 0:2.061-3.el7        perl-Compress-Raw-Zlib.x86_64 1:2.061-4.el7       
  perl-DBD-MySQL.x86_64 0:4.023-6.el7                 perl-DBI.x86_64 0:1.627-4.el7                     
  perl-Data-Dumper.x86_64 0:2.145-3.el7               perl-Encode.x86_64 0:2.51-7.el7                   
  perl-Exporter.noarch 0:5.68-3.el7                   perl-File-Path.noarch 0:2.09-2.el7                
  perl-File-Temp.noarch 0:0.23.01-3.el7               perl-Filter.x86_64 0:1.49-3.el7                   
  perl-Getopt-Long.noarch 0:2.40-3.el7                perl-HTTP-Tiny.noarch 0:0.033-3.el7               
  perl-IO-Compress.noarch 0:2.061-2.el7               perl-Net-Daemon.noarch 0:0.48-5.el7               
  perl-PathTools.x86_64 0:3.40-5.el7                  perl-PlRPC.noarch 0:0.2020-14.el7                 
  perl-Pod-Escapes.noarch 1:1.04-295.el7              perl-Pod-Perldoc.noarch 0:3.20-4.el7              
  perl-Pod-Simple.noarch 1:3.28-4.el7                 perl-Pod-Usage.noarch 0:1.63-3.el7                
  perl-Scalar-List-Utils.x86_64 0:1.27-248.el7        perl-Socket.x86_64 0:2.010-5.el7                  
  perl-Storable.x86_64 0:2.45-3.el7                   perl-Text-ParseWords.noarch 0:3.29-4.el7          
  perl-Time-HiRes.x86_64 4:1.9725-3.el7               perl-Time-Local.noarch 0:1.2300-2.el7             
  perl-constant.noarch 0:1.27-2.el7                   perl-libs.x86_64 4:5.16.3-295.el7                 
  perl-macros.x86_64 4:5.16.3-295.el7                 perl-parent.noarch 1:0.225-244.el7                
  perl-podlators.noarch 0:2.5.1-3.el7                 perl-threads.x86_64 0:1.87-4.el7                  
  perl-threads-shared.x86_64 0:1.43-6.el7             subversion-libs.x86_64 0:1.7.14-14.el7            
  trousers.x86_64 0:0.3.14-2.el7                     

Dependency Updated:
  mariadb-libs.x86_64 1:5.5.65-1.el7 

Check the MySQL DBD driver module info

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# yum info apr-util-mysql
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.keystealth.org
 * extras: repos-lax.psychz.net
 * updates: mirrors.xtom.com
Installed Packages
Name        : apr-util-mysql
Arch        : x86_64
Version     : 1.5.2
Release     : 6.el7
Size        : 24 k
Repo        : installed
From repo   : base
Summary     : APR utility library MySQL DBD driver
URL         : http://apr.apache.org/
License     : ASL 2.0
Description : This package provides the MySQL driver for the apr-util DBD
            : (database abstraction) interface.

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# rpm -lq apr-util-mysql
/usr/lib64/apr-util-1/apr_dbd_mysql-1.so
/usr/lib64/apr-util-1/apr_dbd_mysql.so
[root@centos-s-1vcpu-1gb-sfo3-01 ~]#

Configure the MySQL service and create the database and table

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# systemctl enable mariadb
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
[root@centos-s-1vcpu-1gb-sfo3-01 ~]# systemctl start mariadb
[root@centos-s-1vcpu-1gb-sfo3-01 ~]#

Create the database

MariaDB [(none)]> create database subversion;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> grant SELECT, INSERT, UPDATE, DELETE on subversion.* to apache@localhost;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> set password for apache@localhost=password('apachepwd');
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]>

Create the table

MariaDB [(none)]> use subversion;
Database changed
MariaDB [subversion]> use subversion;
Database changed
MariaDB [subversion]> create table authn (
    -> username varchar(255) not null,
    -> password varchar(255),
    -> status varchar(255),
    -> primary key (username)
    -> );
Query OK, 0 rows affected (0.01 sec)

MariaDB [subversion]>

Insert test data
Generate a password hash (a hashing function can be specified)

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# htpasswd -nb user1 123456
user1:$apr1$hyGT4jgm$xCWktYtKdOZ.y59Zo.t7C1

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# 

MariaDB [subversion]> INSERT INTO `authn` (`username`, `password`, `status`) 
    -> VALUES('user1', '$apr1$hyGT4jgm$xCWktYtKdOZ.y59Zo.t7C1', 'ok');
Query OK, 1 row affected (0.00 sec)

MariaDB [subversion]>

View the table data

MariaDB [subversion]> select * from authn;
+----------+---------------------------------------+--------+
| username | password                              | status |
+----------+---------------------------------------+--------+
| user1    | $apr1$hyGT4jgm$xCWktYtKdOZ.y59Zo.t7C1 | ok     |
+----------+---------------------------------------+--------+
1 row in set (0.00 sec)

MariaDB [subversion]>

Password encryption function reference

https://dev.mysql.com/doc/refman/5.6/en/encryption-functions.html#function_password

Create the repository

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# mkdir /var/www/svn
[root@centos-s-1vcpu-1gb-sfo3-01 ~]# cd /var/www/svn/
[root@centos-s-1vcpu-1gb-sfo3-01 svn]# svnadmin create test
[root@centos-s-1vcpu-1gb-sfo3-01 svn]#
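
One assumption worth noting: if commits will be made through mod_dav_svn, the httpd worker (user apache on CentOS 7) needs write access to the repository, a step the original transcript does not show; a minimal sketch:

chown -R apache:apache /var/www/svn/test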

Configure the Apache environment
Check the relevant installed modules

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# ls /etc/httpd/modules/ |grep dbd
mod_authn_dbd.so
mod_authz_dbd.so
mod_dbd.so
[root@centos-s-1vcpu-1gb-sfo3-01 ~]# ls /etc/httpd/modules/ |grep socache
mod_authn_socache.so
mod_cache_socache.so
mod_socache_dbm.so
mod_socache_memcache.so
mod_socache_shmcb.so
[root@centos-s-1vcpu-1gb-sfo3-01 ~]#

mod_authn_dbd configuration references

http://httpd.apache.org/docs/2.4/mod/mod_authn_dbd.html
http://httpd.apache.org/docs/2.4/mod/mod_dbd.html
http://httpd.apache.org/docs/2.4/mod/mod_authn_socache.html

Set the ServerName

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# vi /etc/httpd/conf/httpd.conf
ServerName 64.227.106.245

Add a new configuration file

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# vi /etc/httpd/conf.d/repository.conf
# mod_dbd configuration
# UPDATED to include authentication caching
DBDriver mysql
DBDParams "host=localhost port=3306 dbname=subversion user=apache pass=apachepwd"

DBDMin  4
DBDKeep 8
DBDMax  20
DBDExptime 300

<Location /repos>
  DAV svn
  SVNParentPath /var/www/svn

  # mod_authn_core and mod_auth_basic configuration
  # for mod_authn_dbd
  AuthType Basic
  AuthName "Subversion repository"
  
  # To cache credentials, put socache ahead of dbd here
  AuthBasicProvider socache dbd

  # Also required for caching: tell the cache to cache dbd lookups!
  AuthnCacheProvideFor dbd
  AuthnCacheContext my-server
  
  SVNPathAuthz off

  # Authorization: Authenticated users only
  Require valid-user
  
  # mod_authn_dbd SQL query to authenticate a user
  AuthDBDUserPWQuery "SELECT password FROM authn WHERE username = %s"
</Location>
[root@centos-s-1vcpu-1gb-sfo3-01 ~]# apachectl -t
Syntax OK
[root@centos-s-1vcpu-1gb-sfo3-01 ~]#

Start the Apache service

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# systemctl enable httpd
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@centos-s-1vcpu-1gb-sfo3-01 ~]# systemctl start httpd
[root@centos-s-1vcpu-1gb-sfo3-01 ~]#

Check the listening ports

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      969/master          
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      1586/mysqld         
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1018/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      969/master          
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd           
tcp6       0      0 :::80                   :::*                    LISTEN      12198/httpd         
tcp6       0      0 :::22                   :::*                    LISTEN      1018/sshd           
[root@centos-s-1vcpu-1gb-sfo3-01 ~]#

Access the repository in a browser to verify authentication

http://64.227.106.245/repos/test
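
The same Basic authentication can be exercised from the command line (a sketch, assuming curl and the user1 test account created above):

curl -u user1:123456 http://64.227.106.245/repos/test/
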
May 29, 2020

Take node server6 offline

[root@server6 ~]# init 0

Check the volume info

[root@server1 ~]# gluster volume info

Volume Name: data-volume
Type: Disperse
Volume ID: fd3fdef5-a1c5-41c6-83f7-a9df9e3ccbb3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: server1:/mnt/volume_sfo2_01/brick1
Brick2: server2:/mnt/volume_sfo2_02/brick2
Brick3: server3:/mnt/volume_sfo2_03/brick3
Brick4: server4:/mnt/volume_sfo2_04/brick4
Brick5: server5:/mnt/volume_sfo2_05/brick5
Brick6: server6:/mnt/volume_sfo2_06/brick6
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
[root@server1 ~]#

Check the volume status: server6 and its brick are no longer listed

[root@server1 ~]# gluster volume status
Status of volume: data-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server1:/mnt/volume_sfo2_01/brick1    49152     0          Y       10380
Brick server2:/mnt/volume_sfo2_02/brick2    49152     0          Y       10273
Brick server3:/mnt/volume_sfo2_03/brick3    49152     0          Y       10269
Brick server4:/mnt/volume_sfo2_04/brick4    49152     0          Y       10276
Brick server5:/mnt/volume_sfo2_05/brick5    49152     0          Y       10274
Self-heal Daemon on localhost               N/A       N/A        Y       10401
Self-heal Daemon on server3                 N/A       N/A        Y       10290
Self-heal Daemon on server5                 N/A       N/A        Y       10295
Self-heal Daemon on server2                 N/A       N/A        Y       10294
Self-heal Daemon on server4                 N/A       N/A        Y       10297
 
Task Status of Volume data-volume
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@server1 ~]#

Check the peer status: server6 is shown as disconnected

[root@server1 ~]# gluster peer status
Number of Peers: 5

Hostname: server2
Uuid: d331a6e5-b533-42a6-bd78-b07f33edbb0f
State: Peer in Cluster (Connected)

Hostname: server3
Uuid: c925e178-a154-4e00-b678-a0b9a30187a8
State: Peer in Cluster (Connected)

Hostname: server4
Uuid: 278a51f2-e399-4182-8f37-9c47e35205d3
State: Peer in Cluster (Connected)

Hostname: server5
Uuid: a0be5978-e05b-46bc-83b7-c34ae212cf21
State: Peer in Cluster (Connected)

Hostname: server6
Uuid: a505fa8b-72f3-47a1-af1c-19ca01c1245a
State: Peer in Cluster (Disconnected)
[root@server1 ~]#

Write files

[root@server7 ~]# for i in `seq -w 21 40`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
[root@server7 ~]# ls /mnt/
copy-test-01 copy-test-05 copy-test-09 copy-test-13 copy-test-17 copy-test-21 copy-test-25 copy-test-29 copy-test-33 copy-test-37
copy-test-02 copy-test-06 copy-test-10 copy-test-14 copy-test-18 copy-test-22 copy-test-26 copy-test-30 copy-test-34 copy-test-38
copy-test-03 copy-test-07 copy-test-11 copy-test-15 copy-test-19 copy-test-23 copy-test-27 copy-test-31 copy-test-35 copy-test-39
copy-test-04 copy-test-08 copy-test-12 copy-test-16 copy-test-20 copy-test-24 copy-test-28 copy-test-32 copy-test-36 copy-test-40
[root@server7 ~]#

Take node server5 offline

[root@server5 ~]# init 0

Check the peer status

[root@server2 ~]# gluster peer status
Number of Peers: 5

Hostname: server1
Uuid: a97fa9a8-e97f-421e-b92c-07ef39a488cd
State: Peer in Cluster (Connected)

Hostname: server3
Uuid: c925e178-a154-4e00-b678-a0b9a30187a8
State: Peer in Cluster (Connected)

Hostname: server4
Uuid: 278a51f2-e399-4182-8f37-9c47e35205d3
State: Peer in Cluster (Connected)

Hostname: server5
Uuid: a0be5978-e05b-46bc-83b7-c34ae212cf21
State: Peer in Cluster (Disconnected)

Hostname: server6
Uuid: a505fa8b-72f3-47a1-af1c-19ca01c1245a
State: Peer in Cluster (Disconnected)
[root@server2 ~]#

Check the volume status

[root@server2 ~]# gluster volume status
Status of volume: data-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server1:/mnt/volume_sfo2_01/brick1    49152     0          Y       10380
Brick server2:/mnt/volume_sfo2_02/brick2    49152     0          Y       10273
Brick server3:/mnt/volume_sfo2_03/brick3    49152     0          Y       10269
Brick server4:/mnt/volume_sfo2_04/brick4    49152     0          Y       10276
Self-heal Daemon on localhost               N/A       N/A        Y       10294
Self-heal Daemon on server1                 N/A       N/A        Y       10401
Self-heal Daemon on server3                 N/A       N/A        Y       10290
Self-heal Daemon on server4                 N/A       N/A        Y       10297
 
Task Status of Volume data-volume
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@server2 ~]#

Write files

[root@server7 ~]# for i in `seq -w 41 60`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
[root@server7 ~]# ls /mnt/
copy-test-01 copy-test-06 copy-test-11 copy-test-16 copy-test-21 copy-test-26 copy-test-31 copy-test-36 copy-test-41 copy-test-46 copy-test-51 copy-test-56
copy-test-02 copy-test-07 copy-test-12 copy-test-17 copy-test-22 copy-test-27 copy-test-32 copy-test-37 copy-test-42 copy-test-47 copy-test-52 copy-test-57
copy-test-03 copy-test-08 copy-test-13 copy-test-18 copy-test-23 copy-test-28 copy-test-33 copy-test-38 copy-test-43 copy-test-48 copy-test-53 copy-test-58
copy-test-04 copy-test-09 copy-test-14 copy-test-19 copy-test-24 copy-test-29 copy-test-34 copy-test-39 copy-test-44 copy-test-49 copy-test-54 copy-test-59
copy-test-05 copy-test-10 copy-test-15 copy-test-20 copy-test-25 copy-test-30 copy-test-35 copy-test-40 copy-test-45 copy-test-50 copy-test-55 copy-test-60
[root@server7 ~]#

Take node server4 offline

[root@server4 ~]# init 0

Check the peer status

[root@server2 ~]# gluster peer status
Number of Peers: 5

Hostname: server1
Uuid: a97fa9a8-e97f-421e-b92c-07ef39a488cd
State: Peer in Cluster (Connected)

Hostname: server3
Uuid: c925e178-a154-4e00-b678-a0b9a30187a8
State: Peer in Cluster (Connected)

Hostname: server4
Uuid: 278a51f2-e399-4182-8f37-9c47e35205d3
State: Peer in Cluster (Disconnected)

Hostname: server5
Uuid: a0be5978-e05b-46bc-83b7-c34ae212cf21
State: Peer in Cluster (Disconnected)

Hostname: server6
Uuid: a505fa8b-72f3-47a1-af1c-19ca01c1245a
State: Peer in Cluster (Disconnected)
[root@server2 ~]#

Check the volume status

[root@server2 ~]# gluster volume status
Status of volume: data-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server1:/mnt/volume_sfo2_01/brick1    49152     0          Y       10380
Brick server2:/mnt/volume_sfo2_02/brick2    49152     0          Y       10273
Brick server3:/mnt/volume_sfo2_03/brick3    49152     0          Y       10269
Self-heal Daemon on localhost               N/A       N/A        Y       10294
Self-heal Daemon on server3                 N/A       N/A        Y       10290
Self-heal Daemon on server1                 N/A       N/A        Y       10401
 
Task Status of Volume data-volume
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@server2 ~]#

Check the mounted directory on the client node (no longer accessible: losing a third brick exceeds the volume's redundancy of 2, so the mount goes offline)

[root@server7 ~]# ls /mnt 
ls: cannot access /mnt: Transport endpoint is not connected
[root@server7 ~]#
May 28, 2020

Configure a dispersed volume of six 100 GB bricks that tolerates up to two failed bricks

206.189.167.54 server1
64.227.54.71 server2
64.227.54.85 server3
206.189.171.152 server4
64.227.54.28 server5
64.227.54.35 server6
206.189.171.164 server7
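
These name-to-address mappings are assumed to be kept in /etc/hosts on every node (a minimal sketch; any DNS setup that resolves the hostnames works equally well):

cat >> /etc/hosts <<'EOF'
206.189.167.54 server1
64.227.54.71 server2
64.227.54.85 server3
206.189.171.152 server4
64.227.54.28 server5
64.227.54.35 server6
206.189.171.164 server7
EOF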

Add the nodes to the trusted storage pool

[root@server1 ~]# gluster peer probe server2
peer probe: success. 
[root@server1 ~]# gluster peer probe server3
peer probe: success. 
[root@server1 ~]# gluster peer probe server4
peer probe: success. 
[root@server1 ~]# gluster peer probe server5
peer probe: success. 
[root@server1 ~]# gluster peer probe server6
peer probe: success. 
[root@server1 ~]#

Check the peer status

[root@server1 ~]# gluster peer status
Number of Peers: 5

Hostname: server2
Uuid: d331a6e5-b533-42a6-bd78-b07f33edbb0f
State: Peer in Cluster (Connected)

Hostname: server3
Uuid: c925e178-a154-4e00-b678-a0b9a30187a8
State: Peer in Cluster (Connected)

Hostname: server4
Uuid: 278a51f2-e399-4182-8f37-9c47e35205d3
State: Peer in Cluster (Connected)

Hostname: server5
Uuid: a0be5978-e05b-46bc-83b7-c34ae212cf21
State: Peer in Cluster (Connected)

Hostname: server6
Uuid: a505fa8b-72f3-47a1-af1c-19ca01c1245a
State: Peer in Cluster (Connected)
[root@server1 ~]#

Create the brick directories on each node

[root@server1 ~]# mkdir -p /mnt/volume_sfo2_01/brick1
[root@server2 ~]# mkdir -p /mnt/volume_sfo2_02/brick2
[root@server3 ~]# mkdir -p /mnt/volume_sfo2_03/brick3
[root@server4 ~]# mkdir -p /mnt/volume_sfo2_04/brick4
[root@server5 ~]# mkdir -p /mnt/volume_sfo2_05/brick5
[root@server6 ~]# mkdir -p /mnt/volume_sfo2_06/brick6

If the redundancy parameter is not specified when creating the volume, the system computes the optimal value and prompts for confirmation

[root@server1 ~]# gluster volume create data-volume disperse 6 transport tcp \
> server1:/mnt/volume_sfo2_01/brick1 server2:/mnt/volume_sfo2_02/brick2 \
> server3:/mnt/volume_sfo2_03/brick3 server4:/mnt/volume_sfo2_04/brick4 \
> server5:/mnt/volume_sfo2_05/brick5 server6:/mnt/volume_sfo2_06/brick6
The optimal redundancy for this configuration is 2. Do you want to create the volume with this value ? (y/n) n

Usage:
volume create <NEW-VOLNAME> [stripe <COUNT>] [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> <TA-BRICK>... [force]

[root@server1 ~]#

Create the volume with the disperse and redundancy parameters specified explicitly

gluster volume create data-volume disperse 6 redundancy 2 transport tcp \
server1:/mnt/volume_sfo2_01/brick1 server2:/mnt/volume_sfo2_02/brick2 \
server3:/mnt/volume_sfo2_03/brick3 server4:/mnt/volume_sfo2_04/brick4 \
server5:/mnt/volume_sfo2_05/brick5 server6:/mnt/volume_sfo2_06/brick6

[root@server1 ~]# gluster volume create data-volume disperse 6 redundancy 2 transport tcp \
> server1:/mnt/volume_sfo2_01/brick1 server2:/mnt/volume_sfo2_02/brick2 \
> server3:/mnt/volume_sfo2_03/brick3 server4:/mnt/volume_sfo2_04/brick4 \
> server5:/mnt/volume_sfo2_05/brick5 server6:/mnt/volume_sfo2_06/brick6
volume create: data-volume: success: please start the volume to access data
[root@server1 ~]#

Start the volume and check its info and status

[root@server1 ~]# gluster volume start data-volume
volume start: data-volume: success
[root@server1 ~]#

[root@server1 ~]# gluster volume info

Volume Name: data-volume
Type: Disperse
Volume ID: fd3fdef5-a1c5-41c6-83f7-a9df9e3ccbb3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: server1:/mnt/volume_sfo2_01/brick1
Brick2: server2:/mnt/volume_sfo2_02/brick2
Brick3: server3:/mnt/volume_sfo2_03/brick3
Brick4: server4:/mnt/volume_sfo2_04/brick4
Brick5: server5:/mnt/volume_sfo2_05/brick5
Brick6: server6:/mnt/volume_sfo2_06/brick6
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
[root@server1 ~]#

Install the required GlusterFS components on the client

[root@server7 ~]# yum -y install centos-release-gluster
[root@server7 ~]# yum -y install glusterfs glusterfs-fuse glusterfs-rdma

Mount the data-volume volume and check the disk info (400 GB of usable storage)

[root@server7 ~]# mount -t glusterfs server6:/data-volume /mnt/
[root@server7 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 60G 1020M 59G 2% /
devtmpfs 897M 0 897M 0% /dev
tmpfs 920M 0 920M 0% /dev/shm
tmpfs 920M 17M 903M 2% /run
tmpfs 920M 0 920M 0% /sys/fs/cgroup
tmpfs 184M 0 184M 0% /run/user/0
server6:/data-volume 400G 4.2G 396G 2% /mnt
[root@server7 ~]#

Write test files from the client

[root@server7 ~]# for i in `seq -w 1 20`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
[root@server7 ~]# ls /mnt/
copy-test-01 copy-test-03 copy-test-05 copy-test-07 copy-test-09 copy-test-11 copy-test-13 copy-test-15 copy-test-17 copy-test-19
copy-test-02 copy-test-04 copy-test-06 copy-test-08 copy-test-10 copy-test-12 copy-test-14 copy-test-16 copy-test-18 copy-test-20
[root@server7 ~]#

Check the distribution of the written files on the server nodes

[root@server1 ~]# ls /mnt/volume_sfo2_01/brick1/
copy-test-01  copy-test-03  copy-test-05  copy-test-07  copy-test-09  copy-test-11  copy-test-13  copy-test-15  copy-test-17  copy-test-19
copy-test-02  copy-test-04  copy-test-06  copy-test-08  copy-test-10  copy-test-12  copy-test-14  copy-test-16  copy-test-18  copy-test-20
[root@server1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  974M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_01
tmpfs           184M     0  184M   0% /run/user/0
[root@server1 ~]# 

[root@server2 ~]# ls /mnt/volume_sfo2_02/brick2/
copy-test-01  copy-test-03  copy-test-05  copy-test-07  copy-test-09  copy-test-11  copy-test-13  copy-test-15  copy-test-17  copy-test-19
copy-test-02  copy-test-04  copy-test-06  copy-test-08  copy-test-10  copy-test-12  copy-test-14  copy-test-16  copy-test-18  copy-test-20
[root@server2 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  974M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_02
tmpfs           184M     0  184M   0% /run/user/0
[root@server2 ~]# 

[root@server3 ~]# ls /mnt/volume_sfo2_03/brick3/
copy-test-01  copy-test-03  copy-test-05  copy-test-07  copy-test-09  copy-test-11  copy-test-13  copy-test-15  copy-test-17  copy-test-19
copy-test-02  copy-test-04  copy-test-06  copy-test-08  copy-test-10  copy-test-12  copy-test-14  copy-test-16  copy-test-18  copy-test-20
[root@server3 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  974M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_03
tmpfs           184M     0  184M   0% /run/user/0
[root@server3 ~]# 

[root@server4 ~]# ls /mnt/volume_sfo2_04/brick4/
copy-test-01  copy-test-03  copy-test-05  copy-test-07  copy-test-09  copy-test-11  copy-test-13  copy-test-15  copy-test-17  copy-test-19
copy-test-02  copy-test-04  copy-test-06  copy-test-08  copy-test-10  copy-test-12  copy-test-14  copy-test-16  copy-test-18  copy-test-20
[root@server4 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  974M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_04
tmpfs           184M     0  184M   0% /run/user/0
[root@server4 ~]# 

[root@server5 ~]# ls /mnt/volume_sfo2_05/brick5/
copy-test-01  copy-test-03  copy-test-05  copy-test-07  copy-test-09  copy-test-11  copy-test-13  copy-test-15  copy-test-17  copy-test-19
copy-test-02  copy-test-04  copy-test-06  copy-test-08  copy-test-10  copy-test-12  copy-test-14  copy-test-16  copy-test-18  copy-test-20
[root@server5 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  975M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_05
tmpfs           184M     0  184M   0% /run/user/0
[root@server5 ~]#  

[root@server6 ~]# ls /mnt/volume_sfo2_06/brick6/
copy-test-01  copy-test-03  copy-test-05  copy-test-07  copy-test-09  copy-test-11  copy-test-13  copy-test-15  copy-test-17  copy-test-19
copy-test-02  copy-test-04  copy-test-06  copy-test-08  copy-test-10  copy-test-12  copy-test-14  copy-test-16  copy-test-18  copy-test-20
[root@server6 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  974M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_06
tmpfs           184M     0  184M   0% /run/user/0
[root@server6 ~]#
May 27, 2020

Server nodes

DigitalOcean/2Core/2G/60G+100G
165.227.27.221 server1
159.89.152.41 server2
159.89.151.236 server3
167.172.118.183 server4
167.172.126.43 server5
64.225.47.139 server6

Client node

DigitalOcean/2Core/2G/60G
64.225.47.123 server7

Check the available disk info

[root@server1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  901M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_01
tmpfs           184M     0  184M   0% /run/user/0
[root@server1 ~]# 

[root@server2 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  901M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_02
tmpfs           184M     0  184M   0% /run/user/0
[root@server2 ~]# 

[root@server3 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  901M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_03
tmpfs           184M     0  184M   0% /run/user/0
[root@server3 ~]# 

[root@server4 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  901M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_04
tmpfs           184M     0  184M   0% /run/user/0
[root@server4 ~]# 

[root@server5 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  974M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_05
tmpfs           184M     0  184M   0% /run/user/0
[root@server5 ~]# 

[root@server6 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  973M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_06
tmpfs           184M     0  184M   0% /run/user/0
[root@server6 ~]#

Install and start the GlusterFS service on the server nodes

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config;
yum -y install centos-release-gluster;
yum -y install glusterfs-server;
systemctl enable glusterfsd;
systemctl start glusterfsd;

Add the nodes to the trusted storage pool

Once the trusted storage pool is established and the nodes can communicate with one another, only a node that is already a trusted member can probe a new node into the pool; a new node cannot directly operate on nodes that already belong to the trusted pool.

[root@server1 ~]# gluster peer probe server2
peer probe: success. 
[root@server1 ~]# gluster peer probe server3
peer probe: success. 
[root@server1 ~]# gluster peer probe server4
peer probe: success. 
[root@server1 ~]# gluster peer probe server5
peer probe: success. 
[root@server1 ~]# gluster peer probe server6
peer probe: success. 
[root@server1 ~]# gluster peer status
Number of Peers: 5

Hostname: server2
Uuid: 6231013f-07cc-4701-93b3-34d4c623a890
State: Peer in Cluster (Connected)

Hostname: server3
Uuid: aa808d87-4e7c-4ecd-bcf0-13ea03f844a8
State: Peer in Cluster (Connected)

Hostname: server4
Uuid: d153d847-ad46-4c85-8336-f8e553d5aab6
State: Peer in Cluster (Connected)

Hostname: server5
Uuid: a90c2969-67eb-4792-b5ce-6b4b3d782675
State: Peer in Cluster (Connected)

Hostname: server6
Uuid: 3ed5adc9-d3f7-40eb-8bbd-45f0882f55cd
State: Peer in Cluster (Connected)
[root@server1 ~]#

Create the brick directories on each node

[root@server1 ~]# mkdir -p /mnt/volume_sfo2_01/brick1
[root@server2 ~]# mkdir -p /mnt/volume_sfo2_02/brick2
[root@server3 ~]# mkdir -p /mnt/volume_sfo2_03/brick3
[root@server4 ~]# mkdir -p /mnt/volume_sfo2_04/brick4
[root@server5 ~]# mkdir -p /mnt/volume_sfo2_05/brick5
[root@server6 ~]# mkdir -p /mnt/volume_sfo2_06/brick6

Create a 6-node distributed replicated volume with 3 replicas

gluster volume create data-volume replica 3 transport tcp \
server1:/mnt/volume_sfo2_01/brick1 server2:/mnt/volume_sfo2_02/brick2 \
server3:/mnt/volume_sfo2_03/brick3 server4:/mnt/volume_sfo2_04/brick4 \
server5:/mnt/volume_sfo2_05/brick5 server6:/mnt/volume_sfo2_06/brick6

[root@server1 ~]# gluster volume create data-volume replica 3 transport tcp \
> server1:/mnt/volume_sfo2_01/brick1 server2:/mnt/volume_sfo2_02/brick2 \
> server3:/mnt/volume_sfo2_03/brick3 server4:/mnt/volume_sfo2_04/brick4 \
> server5:/mnt/volume_sfo2_05/brick5 server6:/mnt/volume_sfo2_06/brick6
volume create: data-volume: success: please start the volume to access data
[root@server1 ~]#

Check the volume info

[root@server1 ~]# gluster volume info

Volume Name: data-volume
Type: Distributed-Replicate
Volume ID: 2a2103ab-17e4-47b5-9d4c-96e460ac419c
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: server1:/mnt/volume_sfo2_01/brick1
Brick2: server2:/mnt/volume_sfo2_02/brick2
Brick3: server3:/mnt/volume_sfo2_03/brick3
Brick4: server4:/mnt/volume_sfo2_04/brick4
Brick5: server5:/mnt/volume_sfo2_05/brick5
Brick6: server6:/mnt/volume_sfo2_06/brick6
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@server1 ~]# 

Start the volume and check its info and status

[root@server1 ~]# gluster volume start data-volume
volume start: data-volume: success
[root@server1 ~]# gluster volume info
 
Volume Name: data-volume
Type: Distributed-Replicate
Volume ID: 2a2103ab-17e4-47b5-9d4c-96e460ac419c
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: server1:/mnt/volume_sfo2_01/brick1
Brick2: server2:/mnt/volume_sfo2_02/brick2
Brick3: server3:/mnt/volume_sfo2_03/brick3
Brick4: server4:/mnt/volume_sfo2_04/brick4
Brick5: server5:/mnt/volume_sfo2_05/brick5
Brick6: server6:/mnt/volume_sfo2_06/brick6
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@server1 ~]# gluster volume status
Status of volume: data-volume
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick server1:/mnt/volume_sfo2_01/brick1    49152     0          Y       9805 
Brick server2:/mnt/volume_sfo2_02/brick2    49152     0          Y       9843 
Brick server3:/mnt/volume_sfo2_03/brick3    49152     0          Y       9690 
Brick server4:/mnt/volume_sfo2_04/brick4    49152     0          Y       9734 
Brick server5:/mnt/volume_sfo2_05/brick5    49152     0          Y       10285
Brick server6:/mnt/volume_sfo2_06/brick6    49152     0          Y       10470
Self-heal Daemon on localhost               N/A       N/A        Y       9826 
Self-heal Daemon on server5                 N/A       N/A        Y       10306
Self-heal Daemon on server2                 N/A       N/A        Y       9864 
Self-heal Daemon on server6                 N/A       N/A        Y       10491
Self-heal Daemon on server3                 N/A       N/A        Y       9711 
Self-heal Daemon on server4                 N/A       N/A        Y       9755 
 
Task Status of Volume data-volume
------------------------------------------------------------------------------
There are no active volume tasks
 
[root@server1 ~]#

Install the required GlusterFS components on the client

[root@server7 ~]# yum -y install centos-release-gluster
[root@server7 ~]# yum -y install glusterfs glusterfs-fuse glusterfs-rdma

Mount the data-volume volume and check the disk info (200 GB of usable storage)

[root@server7 ~]# mount -t glusterfs server6:/data-volume /mnt/
[root@server7 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda1              60G 1003M   60G   2% /
devtmpfs              897M     0  897M   0% /dev
tmpfs                 920M     0  920M   0% /dev/shm
tmpfs                 920M   17M  903M   2% /run
tmpfs                 920M     0  920M   0% /sys/fs/cgroup
tmpfs                 184M     0  184M   0% /run/user/0
server6:/data-volume  200G  2.1G  198G   2% /mnt
[root@server7 ~]# 

[root@server7 ~]# mount |grep server6
server6:/data-volume on /mnt type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root@server7 ~]#
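
The 200 GB of usable space follows from the layout: six 100 GB bricks form 2 distribute subvolumes of 3 replicas each, so usable size = 6 x 100 GB / 3 = 200 GB.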

Check the communication status with the server nodes

Write test files from the client

[root@server7 ~]# for i in `seq -w 1 20`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
[root@server7 ~]# ls /mnt/
copy-test-01 copy-test-03 copy-test-05 copy-test-07 copy-test-09 copy-test-11 copy-test-13 copy-test-15 copy-test-17 copy-test-19
copy-test-02 copy-test-04 copy-test-06 copy-test-08 copy-test-10 copy-test-12 copy-test-14 copy-test-16 copy-test-18 copy-test-20
[root@server7 ~]#

Check the distribution of the written files on the server nodes

[root@server1 ~]# ls /mnt/volume_sfo2_01/brick1/
copy-test-04 copy-test-05 copy-test-09 copy-test-15 copy-test-17 copy-test-18 copy-test-20
[root@server1 ~]#

[root@server2 ~]# ls /mnt/volume_sfo2_02/brick2/
copy-test-04 copy-test-05 copy-test-09 copy-test-15 copy-test-17 copy-test-18 copy-test-20
[root@server2 ~]#

[root@server3 ~]# ls /mnt/volume_sfo2_03/brick3/
copy-test-04 copy-test-05 copy-test-09 copy-test-15 copy-test-17 copy-test-18 copy-test-20
[root@server3 ~]#

[root@server4 ~]# ls /mnt/volume_sfo2_04/brick4/
copy-test-01 copy-test-03 copy-test-07 copy-test-10 copy-test-12 copy-test-14 copy-test-19
copy-test-02 copy-test-06 copy-test-08 copy-test-11 copy-test-13 copy-test-16
[root@server4 ~]#

[root@server5 ~]# ls /mnt/volume_sfo2_05/brick5/
copy-test-01 copy-test-03 copy-test-07 copy-test-10 copy-test-12 copy-test-14 copy-test-19
copy-test-02 copy-test-06 copy-test-08 copy-test-11 copy-test-13 copy-test-16
[root@server5 ~]#

[root@server6 ~]# ls /mnt/volume_sfo2_06/brick6/
copy-test-01 copy-test-03 copy-test-07 copy-test-10 copy-test-12 copy-test-14 copy-test-19
copy-test-02 copy-test-06 copy-test-08 copy-test-11 copy-test-13 copy-test-16
[root@server6 ~]#
May 26, 2020

A distributed volume distributes files randomly across the bricks in the volume. Distributed volumes scale well but provide no data redundancy; that has to be supplied by the servers' hardware or software.

The command format for creating a distributed volume is as follows:

# gluster volume create <NEW-VOLNAME> [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...
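
A concrete invocation might look like the following sketch (dist-volume is an illustrative name; the brick paths reuse the hosts from the GlusterFS posts above):

gluster volume create dist-volume transport tcp \
server1:/mnt/volume_sfo2_01/brick1 server2:/mnt/volume_sfo2_02/brick2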

A replicated volume keeps copies of files across multiple bricks in the volume. When creating a replicated volume, the number of bricks should equal the replica count, and to protect against server and disk failures each brick should be placed on a separate server. Replicated volumes provide high availability and high reliability of data.

The command format for creating a replicated volume is as follows:

# gluster volume create <NEW-VOLNAME> [replica <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...

A distributed replicated volume is a combination of the distributed and replicated types. When creating one, the number of bricks must be a multiple of the specified replica count. Unless the force parameter is used, GlusterFS by default allows only one brick of a replica set per server node. Distributed replicated volumes can improve file read performance.

The command format for creating a distributed replicated volume is as follows:

# gluster volume create <NEW-VOLNAME> [replica <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...

A dispersed volume is based on erasure coding: files are encoded and striped across the bricks of the volume with a configurable level of redundancy. Dispersed volumes improve storage utilization at some cost in performance. The redundancy value of a dispersed volume is the number of bricks that may fail without interrupting read and write operations on the volume.

The redundancy value must be greater than 0, and the total number of bricks must be greater than twice the redundancy value, which means a dispersed volume requires at least 3 bricks. If no redundancy value is given at creation time, the system computes the optimal value and prompts for confirmation.

The usable storage space of a dispersed volume is calculated as follows:

<Usable size> = <Brick size> * (#Bricks - Redundancy)
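
For the dispersed volume built above (six 100 GB bricks with redundancy 2), this gives 100 GB * (6 - 2) = 400 GB, matching the 400G reported by df on the client.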

The command format for creating a dispersed volume is as follows:

# gluster volume create <NEW-VOLNAME> [disperse [<COUNT>]] [redundancy <COUNT>] [transport tcp | rdma | tcp,rdma] <NEW-BRICK>...

A distributed dispersed volume is the equivalent of a distributed replicated volume, except that it stores data on the bricks through dispersed subvolumes rather than replica sets.

May 13, 2020

Exercise 1 code and notes

# -*- coding: utf-8 -*-
print "Hello World!"
print "Hello Again"
print "I like typing this."
print "This is fun."
print 'Yay! Printing.'
print "I'd much rather you 'not'."
print 'I "said" do not touch this.'
# The # character starts a comment
# Declare Unicode UTF-8 encoding to avoid garbled characters

Exercise 2 code and notes

# -*- coding: utf-8 -*-
# A comment, this is so you can read your program later.
# Anything after the # is ignored by python.

print "I could have code like this." # and the comment after is ignored

# You can also use a comment to "disable" or comment out a piece of code:
# print "This won't run."

print "This will run."
# A comment can describe a line of code in natural language, or temporarily disable (comment out) that line
# The comment symbol is called an octothorpe or pound character
# A # inside quotes is just an ordinary character in the string

Exercise 3 code and notes

# -*- coding: utf-8 -*-
print "I will now count my chickens:"

print "Hens", 25 + 30 / 6
print "Roosters", 100 - 25 * 3 % 4

print "Now I will count the eggs:"

print 3 + 2 + 1 - 5 + 4 % 2 - 1 / 4 + 6

print "is it true that 3 + 2 < 5 - 7?"

print 3 + 2 < 5 - 7

print "What is 3 + 2?", 3 + 2
print "What is 5 - 7?", 5 - 7

print "Oh, that's why it's False."

print "How about some more."

print "Is it greater?", 5 > -2
print "Is it greater or equal?", 5 >= -2
print "is it less or equal?", 5 <= -2
# The percent sign % gives the remainder: 75 divided by 4 is 18 with a remainder of 3
# Operator precedence: parentheses, exponents, multiplication, division, addition, subtraction
# 1/4 discards the fractional part (integer division in Python 2)
May 9, 2020

Apply the Calico network manifest

[root@k8s-01 ~]# kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[root@k8s-01 ~]#

Check the status of the Calico-related Pods

[root@k8s-01 ~]# kubectl get pods --namespace=kube-system -o wide
NAME                                       READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
calico-kube-controllers-7d4d547dd6-b6rvr   1/1     Running   0          45m   10.244.165.194   k8s-03   <none>           <none>
calico-node-dccgc                          1/1     Running   0          45m   64.225.118.77    k8s-03   <none>           <none>
calico-node-l2lcp                          1/1     Running   0          45m   157.245.178.77   k8s-02   <none>           <none>
calico-node-zwj8n                          1/1     Running   0          45m   64.225.39.115    k8s-01   <none>           <none>
coredns-5644d7b6d9-tgw7c                   1/1     Running   0          49m   10.244.165.195   k8s-03   <none>           <none>
coredns-5644d7b6d9-tljw2                   1/1     Running   0          49m   10.244.165.193   k8s-03   <none>           <none>
etcd-k8s-01                                1/1     Running   0          48m   64.225.39.115    k8s-01   <none>           <none>
kube-apiserver-k8s-01                      1/1     Running   0          48m   64.225.39.115    k8s-01   <none>           <none>
kube-controller-manager-k8s-01             1/1     Running   0          48m   64.225.39.115    k8s-01   <none>           <none>
kube-proxy-7s8pn                           1/1     Running   0          49m   64.225.39.115    k8s-01   <none>           <none>
kube-proxy-9kxxr                           1/1     Running   0          49m   64.225.118.77    k8s-03   <none>           <none>
kube-proxy-r7w4z                           1/1     Running   0          49m   157.245.178.77   k8s-02   <none>           <none>
kube-scheduler-k8s-01                      1/1     Running   0          48m   64.225.39.115    k8s-01   <none>           <none>
[root@k8s-01 ~]#
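
With the CNI applied, a natural follow-up check (not captured in the original output) is to confirm that every node now reports Ready:

kubectl get nodes -o wide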