May 28, 2020
Configuring a dispersed volume with six 100GB bricks that tolerates up to two bricks failing
206.189.167.54  server1
64.227.54.71    server2
64.227.54.85    server3
206.189.171.152 server4
64.227.54.28    server5
64.227.54.35    server6
206.189.171.164 server7
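Before the peers can be probed, each server node has to resolve the other hostnames and have glusterd running. That preparation is not part of the original transcript; the lines below are a minimal sketch, assuming CentOS nodes using the same centos-release-gluster repository that the client uses further down, with the mappings above appended to /etc/hosts.

# On every server node (server1..server6) -- assumed prerequisite, not shown in the transcript
cat >> /etc/hosts <<'EOF'
206.189.167.54  server1
64.227.54.71    server2
64.227.54.85    server3
206.189.171.152 server4
64.227.54.28    server5
64.227.54.35    server6
206.189.171.164 server7
EOF
yum -y install centos-release-gluster     # Gluster SIG repository
yum -y install glusterfs-server           # server packages, including glusterd
systemctl enable --now glusterd           # start the management daemon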
Add the nodes to the trusted storage pool
[root@server1 ~]# gluster peer probe server2
peer probe: success.
[root@server1 ~]# gluster peer probe server3
peer probe: success.
[root@server1 ~]# gluster peer probe server4
peer probe: success.
[root@server1 ~]# gluster peer probe server5
peer probe: success.
[root@server1 ~]# gluster peer probe server6
peer probe: success.
[root@server1 ~]#
Check the peer status
[root@server1 ~]# gluster peer status
Number of Peers: 5

Hostname: server2
Uuid: d331a6e5-b533-42a6-bd78-b07f33edbb0f
State: Peer in Cluster (Connected)

Hostname: server3
Uuid: c925e178-a154-4e00-b678-a0b9a30187a8
State: Peer in Cluster (Connected)

Hostname: server4
Uuid: 278a51f2-e399-4182-8f37-9c47e35205d3
State: Peer in Cluster (Connected)

Hostname: server5
Uuid: a0be5978-e05b-46bc-83b7-c34ae212cf21
State: Peer in Cluster (Connected)

Hostname: server6
Uuid: a505fa8b-72f3-47a1-af1c-19ca01c1245a
State: Peer in Cluster (Connected)
[root@server1 ~]#
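For a more compact view of the same pool membership (including the local node), gluster pool list can be run as well; it is only a suggestion here and its output is not part of the original capture.

# Compact one-line-per-node view of the trusted pool (output omitted)
gluster pool list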
Create a brick directory on each node
[root@server1 ~]# mkdir -p /mnt/volume_sfo2_01/brick1
[root@server2 ~]# mkdir -p /mnt/volume_sfo2_02/brick2
[root@server3 ~]# mkdir -p /mnt/volume_sfo2_03/brick3
[root@server4 ~]# mkdir -p /mnt/volume_sfo2_04/brick4
[root@server5 ~]# mkdir -p /mnt/volume_sfo2_05/brick5
[root@server6 ~]# mkdir -p /mnt/volume_sfo2_06/brick6
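The df output further down shows each brick directory sitting on a dedicated 100GB device (/dev/sda) mounted at /mnt/volume_sfo2_0X. If those devices are not prepared yet, a minimal sketch for server1 might look like this; XFS and the /dev/sda device name are assumptions inferred from the df output, so adjust them to your environment.

# On server1 -- assumed preparation of the 100GB backing device (destructive; adjust device/filesystem)
mkfs.xfs /dev/sda                      # format the dedicated 100GB disk
mkdir -p /mnt/volume_sfo2_01
mount /dev/sda /mnt/volume_sfo2_01     # add an /etc/fstab entry to make this persistent
mkdir -p /mnt/volume_sfo2_01/brick1    # brick directory referenced in the volume create below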
If the redundancy parameter is omitted when creating the volume, the system calculates the optimal value and prompts for confirmation
[root@server1 ~]# gluster volume create data-volume disperse 6 transport tcp \
> server1:/mnt/volume_sfo2_01/brick1 server2:/mnt/volume_sfo2_02/brick2 \
> server3:/mnt/volume_sfo2_03/brick3 server4:/mnt/volume_sfo2_04/brick4 \
> server5:/mnt/volume_sfo2_05/brick5 server6:/mnt/volume_sfo2_06/brick6
The optimal redundancy for this configuration is 2. Do you want to create the volume with this value ? (y/n) n

Usage:
volume create <NEW-VOLNAME> [stripe <COUNT>] [[replica <COUNT> [arbiter <COUNT>]]|[replica 2 thin-arbiter 1]] [disperse [<COUNT>]] [disperse-data <COUNT>] [redundancy <COUNT>] [transport <tcp|rdma|tcp,rdma>] <NEW-BRICK> <TA-BRICK>... [force]

[root@server1 ~]#
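As the usage string shows, the same 4+2 layout could also be obtained by answering y at the prompt, or by spelling out the number of data bricks with disperse-data. The form below is an equivalent sketch, not taken from the original transcript.

# Equivalent form: 4 data bricks + 2 redundancy bricks = 6 bricks total
gluster volume create data-volume disperse-data 4 redundancy 2 transport tcp \
  server1:/mnt/volume_sfo2_01/brick1 server2:/mnt/volume_sfo2_02/brick2 \
  server3:/mnt/volume_sfo2_03/brick3 server4:/mnt/volume_sfo2_04/brick4 \
  server5:/mnt/volume_sfo2_05/brick5 server6:/mnt/volume_sfo2_06/brick6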
Create the volume with both the disperse and redundancy parameters specified
[root@server1 ~]# gluster volume create data-volume disperse 6 redundancy 2 transport tcp \
> server1:/mnt/volume_sfo2_01/brick1 server2:/mnt/volume_sfo2_02/brick2 \
> server3:/mnt/volume_sfo2_03/brick3 server4:/mnt/volume_sfo2_04/brick4 \
> server5:/mnt/volume_sfo2_05/brick5 server6:/mnt/volume_sfo2_06/brick6
volume create: data-volume: success: please start the volume to access data
[root@server1 ~]#
Start the volume, then check its info and status
[root@server1 ~]# gluster volume start data-volume
volume start: data-volume: success
[root@server1 ~]#
[root@server1 ~]# gluster volume info

Volume Name: data-volume
Type: Disperse
Volume ID: fd3fdef5-a1c5-41c6-83f7-a9df9e3ccbb3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: server1:/mnt/volume_sfo2_01/brick1
Brick2: server2:/mnt/volume_sfo2_02/brick2
Brick3: server3:/mnt/volume_sfo2_03/brick3
Brick4: server4:/mnt/volume_sfo2_04/brick4
Brick5: server5:/mnt/volume_sfo2_05/brick5
Brick6: server6:/mnt/volume_sfo2_06/brick6
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
[root@server1 ~]#
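The capture above only includes gluster volume info; to also see per-brick processes, ports and online state (the "status" the heading refers to), gluster volume status can be run as well. Its output is omitted here because it was not part of the original transcript.

# Per-brick process, port and online/offline state (output not shown)
gluster volume status data-volume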
Install the required GlusterFS packages on the client
[root@server7 ~]# yum -y install centos-release-gluster
[root@server7 ~]# yum -y install glusterfs glusterfs-fuse glusterfs-rdma
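The FUSE client fetches the volume file from the mount server and then connects to every brick by hostname, so server7 also needs to resolve server1 through server6. Presumably the same mappings listed at the top of this post are added on the client; a minimal sketch:

# On server7 (client) -- assumed, mirrors the mapping listed at the top of the post
cat >> /etc/hosts <<'EOF'
206.189.167.54  server1
64.227.54.71    server2
64.227.54.85    server3
206.189.171.152 server4
64.227.54.28    server5
64.227.54.35    server6
EOF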
Mount the data-volume volume and check the disk information (actual usable storage: (6 - 2) x 100GB = 400GB)
[root@server7 ~]# mount -t glusterfs server6:/data-volume /mnt/
[root@server7 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda1              60G 1020M   59G   2% /
devtmpfs              897M     0  897M   0% /dev
tmpfs                 920M     0  920M   0% /dev/shm
tmpfs                 920M   17M  903M   2% /run
tmpfs                 920M     0  920M   0% /sys/fs/cgroup
tmpfs                 184M     0  184M   0% /run/user/0
server6:/data-volume  400G  4.2G  396G   2% /mnt
[root@server7 ~]#
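To survive client reboots, the mount is usually made persistent in /etc/fstab. The entry below is a suggested sketch rather than part of the original setup; the backup-volfile-servers mount option (supported by recent glusterfs-fuse versions) lets the client fall back to other nodes if server6 is unreachable at mount time.

# /etc/fstab on server7 -- suggested persistent mount (not from the original transcript)
server6:/data-volume  /mnt  glusterfs  defaults,_netdev,backup-volfile-servers=server1:server2  0 0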
Write some test files from the client
[root@server7 ~]# for i in `seq -w 1 20`; do cp -rp /var/log/messages /mnt/copy-test-$i; done
[root@server7 ~]# ls /mnt/
copy-test-01  copy-test-03  copy-test-05  copy-test-07  copy-test-09  copy-test-11  copy-test-13  copy-test-15  copy-test-17  copy-test-19
copy-test-02  copy-test-04  copy-test-06  copy-test-08  copy-test-10  copy-test-12  copy-test-14  copy-test-16  copy-test-18  copy-test-20
[root@server7 ~]#
Check how the written files are distributed across the server nodes
[root@server1 ~]# ls /mnt/volume_sfo2_01/brick1/
copy-test-01  copy-test-03  copy-test-05  copy-test-07  copy-test-09  copy-test-11  copy-test-13  copy-test-15  copy-test-17  copy-test-19
copy-test-02  copy-test-04  copy-test-06  copy-test-08  copy-test-10  copy-test-12  copy-test-14  copy-test-16  copy-test-18  copy-test-20
[root@server1 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  974M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_01
tmpfs           184M     0  184M   0% /run/user/0
[root@server1 ~]#

[root@server2 ~]# ls /mnt/volume_sfo2_02/brick2/
copy-test-01  copy-test-03  copy-test-05  copy-test-07  copy-test-09  copy-test-11  copy-test-13  copy-test-15  copy-test-17  copy-test-19
copy-test-02  copy-test-04  copy-test-06  copy-test-08  copy-test-10  copy-test-12  copy-test-14  copy-test-16  copy-test-18  copy-test-20
[root@server2 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  974M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_02
tmpfs           184M     0  184M   0% /run/user/0
[root@server2 ~]#

[root@server3 ~]# ls /mnt/volume_sfo2_03/brick3/
copy-test-01  copy-test-03  copy-test-05  copy-test-07  copy-test-09  copy-test-11  copy-test-13  copy-test-15  copy-test-17  copy-test-19
copy-test-02  copy-test-04  copy-test-06  copy-test-08  copy-test-10  copy-test-12  copy-test-14  copy-test-16  copy-test-18  copy-test-20
[root@server3 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  974M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_03
tmpfs           184M     0  184M   0% /run/user/0
[root@server3 ~]#

[root@server4 ~]# ls /mnt/volume_sfo2_04/brick4/
copy-test-01  copy-test-03  copy-test-05  copy-test-07  copy-test-09  copy-test-11  copy-test-13  copy-test-15  copy-test-17  copy-test-19
copy-test-02  copy-test-04  copy-test-06  copy-test-08  copy-test-10  copy-test-12  copy-test-14  copy-test-16  copy-test-18  copy-test-20
[root@server4 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  974M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_04
tmpfs           184M     0  184M   0% /run/user/0
[root@server4 ~]#

[root@server5 ~]# ls /mnt/volume_sfo2_05/brick5/
copy-test-01  copy-test-03  copy-test-05  copy-test-07  copy-test-09  copy-test-11  copy-test-13  copy-test-15  copy-test-17  copy-test-19
copy-test-02  copy-test-04  copy-test-06  copy-test-08  copy-test-10  copy-test-12  copy-test-14  copy-test-16  copy-test-18  copy-test-20
[root@server5 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  975M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_05
tmpfs           184M     0  184M   0% /run/user/0
[root@server5 ~]#

[root@server6 ~]# ls /mnt/volume_sfo2_06/brick6/
copy-test-01  copy-test-03  copy-test-05  copy-test-07  copy-test-09  copy-test-11  copy-test-13  copy-test-15  copy-test-17  copy-test-19
copy-test-02  copy-test-04  copy-test-06  copy-test-08  copy-test-10  copy-test-12  copy-test-14  copy-test-16  copy-test-18  copy-test-20
[root@server6 ~]# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        60G  974M   60G   2% /
devtmpfs        897M     0  897M   0% /dev
tmpfs           920M     0  920M   0% /dev/shm
tmpfs           920M   17M  903M   2% /run
tmpfs           920M     0  920M   0% /sys/fs/cgroup
/dev/sda        100G   33M  100G   1% /mnt/volume_sfo2_06
tmpfs           184M     0  184M   0% /run/user/0
[root@server6 ~]#
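The title promises that the volume survives the loss of any two bricks, which this walkthrough does not demonstrate. One optional way to verify it, suggested here and not part of the original transcript, is to take two of the six server nodes offline and confirm from the client that the data stays readable and writable (server6 only supplied the volume file at mount time, so the already-mounted client does not depend on it either).

# Optional verification sketch -- take any two server nodes offline ...
[root@server4 ~]# poweroff
[root@server5 ~]# poweroff

# ... then, on the client, reads and writes should still succeed
[root@server7 ~]# ls /mnt/
[root@server7 ~]# md5sum /mnt/copy-test-01
[root@server7 ~]# cp -p /var/log/messages /mnt/copy-test-21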