June 30, 2020

MariaDB backup tool package information

[root@centos-s-2vcpu-2gb-sfo3-01 ~]# dnf info mariadb-backup
Last metadata expiration check: 0:02:04 ago on Tue 30 Jun 2020 02:14:23 AM UTC.
Installed Packages
Name         : mariadb-backup
Epoch        : 3
Version      : 10.3.17
Release      : 1.module_el8.1.0+257+48736ea6
Architecture : x86_64
Size         : 28 M
Source       : mariadb-10.3.17-1.module_el8.1.0+257+48736ea6.src.rpm
Repository   : @System
From repo    : AppStream
Summary      : The mariabackup tool for physical online backups
URL          : http://mariadb.org
License      : GPLv2 with exceptions and LGPLv2 and BSD
Description  : MariaDB Backup is an open source tool provided by MariaDB for performing
             : physical online backups of InnoDB, Aria and MyISAM tables.
             : For InnoDB, "hot online" backups are possible.

[root@centos-s-2vcpu-2gb-sfo3-01 ~]#

Backup tool command-line options

[root@centos-s-2vcpu-2gb-sfo3-01 ~]# mariabackup
mariabackup based on MariaDB server 10.3.17-MariaDB Linux (x86_64)
Open source backup tool for InnoDB and XtraDB

Copyright (C) 2009-2015 Percona LLC and/or its affiliates.
Portions Copyright (C) 2000, 2011, MySQL AB & Innobase Oy. All Rights Reserved.

This program is free software; you can redistribute it and/or
modify it under the terms of the GNU General Public License
as published by the Free Software Foundation version 2
of the License.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You can download full text of the license on http://www.gnu.org/licenses/gpl-2.0.txt

Usage: mariabackup [--defaults-file=#] [--backup | --prepare | --copy-back | --move-back] [OPTIONS]

Default options are read from the following files in the given order:
/etc/my.cnf ~/.my.cnf
The following groups are read: xtrabackup mariabackup mysqld server mysqld-10.3 mariadb mariadb-10.3 client-server galera
The following options may be given as the first argument:
--print-defaults          Print the program argument list and exit.
--no-defaults             Don't read default options from any option file.
The following specify which files/extra groups are read (specified before remaining options):
--defaults-file=#         Only read default options from the given file #.
--defaults-extra-file=#   Read this file after the global files are read.
--defaults-group-suffix=# Additionally read default groups with # appended as a suffix.
  -V, --verbose       display verbose output
  -v, --version       print xtrabackup version information
  --target-dir=name   destination directory
  --backup            take backup to target-dir
  --prepare           prepare a backup for starting mysql server on the backup.
  --export            create files to import to another database when prepare.
  --print-param       print parameter of mysqld needed for copyback.
  --use-memory=#      The value is used instead of buffer_pool_size
  --throttle=#        limit count of IO operations (pairs of read&write) per
                      second to IOS values (for '--backup')
  --log[=name]        Ignored option for MySQL option compatibility
  --log-copy-interval=#
                      time interval between checks done by log copying thread
                      in milliseconds (default is 1 second).
  --extra-lsndir=name (for --backup): save an extra copy of the
                      xtrabackup_checkpoints file in this directory.
  --incremental-lsn=name
                      (for --backup): copy only .ibd pages newer than specified
                      LSN 'high:low'. ##ATTENTION##: If a wrong LSN value is
                      specified, it is impossible to diagnose this, causing the
                      backup to be unusable. Be careful!
  --incremental-basedir=name
                      (for --backup): copy only .ibd pages newer than backup at
                      specified directory.
  --incremental-dir=name
                      (for --prepare): apply .delta files and logfile in the
                      specified directory.
  --tables=name       filtering by regexp for table names.
  --tables-file=name  filtering by list of the exact database.table name in the
                      file.
  --databases=name    filtering by list of databases.
  --databases-file=name
                      filtering by list of databases in the file.
  --tables-exclude=name
                      filtering by regexp for table names. Operates the same
                      way as --tables, but matched names are excluded from
                      backup. Note that this option has a higher priority than
                      --tables.
  --databases-exclude=name
                      Excluding databases based on name, Operates the same way
                      as --databases, but matched names are excluded from
                      backup. Note that this option has a higher priority than
                      --databases.
  --stream=name       Stream all backup files to the standard output in the
                      specified format.Supported format is 'xbstream'.
  --compress[=name]   Compress individual backup files using the specified
                      compression algorithm. Currently the only supported
                      algorithm is 'quicklz'. It is also the default algorithm,
                      i.e. the one used when --compress is used without an
                      argument.
  --compress-threads=#
                      Number of threads for parallel data compression. The
                      default value is 1.
  --compress-chunk-size=#
                      Size of working buffer(s) for compression threads in
                      bytes. The default value is 64K.
  --incremental-force-scan
                      Perform a full-scan incremental backup even in the
                      presence of changed page bitmap data
  --close-files       do not keep files opened. Use at your own risk.
  --core-file         Write core on fatal signals
  --copy-back         Copy all the files in a previously made backup from the
                      backup directory to their original locations.
  --move-back         Move all the files in a previously made backup from the
                      backup directory to the actual datadir location. Use with
                      caution, as it removes backup files.
  --galera-info       This options creates the xtrabackup_galera_info file
                      which contains the local node state at the time of the
                      backup. Option should be used when performing the backup
                      of MariaDB Galera Cluster. Has no effect when backup
                      locks are used to create the backup.
  --slave-info        This option is useful when backing up a replication slave
                      server. It prints the binary log position and name of the
                      master server. It also writes this information to the
                      "xtrabackup_slave_info" file as a "CHANGE MASTER"
                      command. A new slave for this master can be set up by
                      starting a slave server on this backup and issuing a
                      "CHANGE MASTER" command with the binary log position
                      saved in the "xtrabackup_slave_info" file.
  --no-lock           Use this option to disable table lock with "FLUSH TABLES
                      WITH READ LOCK". Use it only if ALL your tables are
                      InnoDB and you DO NOT CARE about the binary log position
                      of the backup. This option shouldn't be used if there are
                      any DDL statements being executed or if any updates are
                      happening on non-InnoDB tables (this includes the system
                      MyISAM tables in the mysql database), otherwise it could
                      lead to an inconsistent backup. If you are considering to
                      use --no-lock because your backups are failing to acquire
                      the lock, this could be because of incoming replication
                      events preventing the lock from succeeding. Please try
                      using --safe-slave-backup to momentarily stop the
                      replication slave thread, this may help the backup to
                      succeed and you then don't need to resort to using this
                      option.
  --safe-slave-backup Stop slave SQL thread and wait to start backup until
                      Slave_open_temp_tables in "SHOW STATUS" is zero. If there
                      are no open temporary tables, the backup will take place,
                      otherwise the SQL thread will be started and stopped
                      until there are no open temporary tables. The backup will
                      fail if Slave_open_temp_tables does not become zero after
                      --safe-slave-backup-timeout seconds. The slave SQL thread
                      will be restarted when the backup finishes.
  --rsync             Uses the rsync utility to optimize local file transfers.
                      When this option is specified, innobackupex uses rsync to
                      copy all non-InnoDB files instead of spawning a separate
                      cp for each file, which can be much faster for servers
                      with a large number of databases or tables.  This option
                      cannot be used together with --stream.
  --force-non-empty-directories
                      This option, when specified, makes --copy-back or
                      --move-back transfer files to non-empty directories. Note
                      that no existing files will be overwritten. If
                      --copy-back or --nove-back has to copy a file from the
                      backup directory which already exists in the destination
                      directory, it will still fail with an error.
  --no-version-check  This option disables the version check which is enabled
                      by the --version-check option.
  --no-backup-locks   This option controls if backup locks should be used
                      instead of FLUSH TABLES WITH READ LOCK on the backup
                      stage. The option has no effect when backup locks are not
                      supported by the server. This option is enabled by
                      default, disable with --no-backup-locks.
  --decompress        Decompresses all files with the .qp extension in a backup
                      previously made with the --compress option.
  -u, --user=name     This option specifies the MySQL username used when
                      connecting to the server, if that's not the current user.
                      The option accepts a string argument. See mysql --help
                      for details.
  -H, --host=name     This option specifies the host to use when connecting to
                      the database server with TCP/IP.  The option accepts a
                      string argument. See mysql --help for details.
  -P, --port=#        This option specifies the port to use when connecting to
                      the database server with TCP/IP.  The option accepts a
                      string argument. See mysql --help for details.
  -p, --password=name This option specifies the password to use when connecting
                      to the database. It accepts a string argument.  See mysql
                      --help for details.
  --protocol=name     The protocol to use for connection (tcp, socket, pipe,
                      memory).
  -S, --socket=name   This option specifies the socket to use when connecting
                      to the local database server with a UNIX domain socket.
                      The option accepts a string argument. See mysql --help
                      for details.
  --incremental-history-name=name
                      This option specifies the name of the backup series
                      stored in the PERCONA_SCHEMA.xtrabackup_history history
                      record to base an incremental backup on. Xtrabackup will
                      search the history table looking for the most recent
                      (highest innodb_to_lsn), successful backup in the series
                      and take the to_lsn value to use as the starting lsn for
                      the incremental backup. This will be mutually exclusive
                      with --incremental-history-uuid, --incremental-basedir
                      and --incremental-lsn. If no valid lsn can be found (no
                      series by that name, no successful backups by that name)
                      xtrabackup will return with an error. It is used with the
                      --incremental option.
  --incremental-history-uuid=name
                      This option specifies the UUID of the specific history
                      record stored in the PERCONA_SCHEMA.xtrabackup_history to
                      base an incremental backup on.
                      --incremental-history-name, --incremental-basedir and
                      --incremental-lsn. If no valid lsn can be found (no
                      success record with that uuid) xtrabackup will return
                      with an error. It is used with the --incremental option.
  --remove-original   Remove .qp files after decompression.
  --ftwrl-wait-query-type=name
                      This option specifies which types of queries are allowed
                      to complete before innobackupex will issue the global
                      lock. Default is all.. One of: ALL, UPDATE, SELECT
  --kill-long-query-type=name
                      This option specifies which types of queries should be
                      killed to unblock the global lock. Default is "all".. One
                      of: ALL, UPDATE, SELECT
  --history[=name]    This option enables the tracking of backup history in the
                      PERCONA_SCHEMA.xtrabackup_history table. An optional
                      history series name may be specified that will be placed
                      with the history record for the current backup being
                      taken.
  --kill-long-queries-timeout=#
                      This option specifies the number of seconds innobackupex
                      waits between starting FLUSH TABLES WITH READ LOCK and
                      killing those queries that block it. Default is 0
                      seconds, which means innobackupex will not attempt to
                      kill any queries.
  --ftwrl-wait-timeout=#
                      This option specifies time in seconds that innobackupex
                      should wait for queries that would block FTWRL before
                      running it. If there are still such queries when the
                      timeout expires, innobackupex terminates with an error.
                      Default is 0, in which case innobackupex does not wait
                      for queries to complete and starts FTWRL immediately.
  --ftwrl-wait-threshold=#
                      This option specifies the query run time threshold which
                      is used by innobackupex to detect long-running queries
                      with a non-zero value of --ftwrl-wait-timeout. FTWRL is
                      not started until such long-running queries exist. This
                      option has no effect if --ftwrl-wait-timeout is 0.
                      Default value is 60 seconds.
  --debug-sleep-before-unlock=#
                      This is a debug-only option used by the XtraBackup test
                      suite.
  --safe-slave-backup-timeout=#
                      How many seconds --safe-slave-backup should wait for
                      Slave_open_temp_tables to become zero. (default 300)
  --binlog-info[=name]
                      This option controls how XtraBackup should retrieve
                      server's binary log coordinates corresponding to the
                      backup. Possible values are OFF, ON, LOCKLESS and AUTO.
                      See the XtraBackup manual for more information. One of:
                      off, lockless, on, auto
  --secure-auth       Refuse client connecting to server if it uses old
                      (pre-4.1.1) protocol.
                      (Defaults to on; use --skip-secure-auth to disable.)
  --ssl               Enable SSL for connection (automatically enabled with
                      other flags).
  --ssl-ca=name       CA file in PEM format (check OpenSSL docs, implies
                      --ssl).
  --ssl-capath=name   CA directory (check OpenSSL docs, implies --ssl).
  --ssl-cert=name     X509 cert in PEM format (implies --ssl).
  --ssl-cipher=name   SSL cipher to use (implies --ssl).
  --ssl-key=name      X509 key in PEM format (implies --ssl).
  --ssl-crl=name      Certificate revocation list (implies --ssl).
  --ssl-crlpath=name  Certificate revocation list path (implies --ssl).
  --ssl-verify-server-cert
                      Verify server's "Common Name" in its cert against
                      hostname used when connecting. This option is disabled by
                      default.
  -h, --datadir=name  Path to the database root.
  -t, --tmpdir=name   Path for temporary files. Several paths may be specified,
                      separated by a colon (:), in this case they are used in a
                      round-robin fashion.
  --parallel=#        Number of threads to use for parallel datafiles transfer.
                      The default value is 1.
  --extended-validation
                      Enable extended validation for Innodb data pages during
                      backup phase. Will slow down backup considerably, in case
                      encryption is used. May fail if tables are created during
                      the backup.
  --encrypted-backup  In --backup, assume that nonzero key_version implies that
                      the page is encrypted. Use --backup
                      --skip-encrypted-backup to allow copying unencrypted that
                      were originally created before MySQL 5.1.48.
                      (Defaults to on; use --skip-encrypted-backup to disable.)
  --log[=name]        Ignored option for MySQL option compatibility
  --log-bin[=name]    Base name for the log sequence
  --innodb[=name]     Ignored option for MySQL option compatibility
  --innodb-adaptive-hash-index
                      Enable InnoDB adaptive hash index (enabled by default).
                      Disable with --skip-innodb-adaptive-hash-index.
                      (Defaults to on; use --skip-innodb-adaptive-hash-index to disable.)
  --innodb-autoextend-increment=#
                      Data file autoextend increment in megabytes
  --innodb-data-file-path=name
                      Path to individual files and their sizes.
  --innodb-data-home-dir=name
                      The common part for InnoDB table spaces.
  --innodb-doublewrite
                      Enable InnoDB doublewrite buffer during --prepare.
  --innodb-io-capacity[=#]
                      Number of IOPs the server can do. Tunes the background IO
                      rate
  --innodb-file-io-threads=#
                      Number of file I/O threads in InnoDB.
  --innodb-read-io-threads=#
                      Number of background read I/O threads in InnoDB.
  --innodb-write-io-threads=#
                      Number of background write I/O threads in InnoDB.
  --innodb-file-per-table
                      Stores each InnoDB table to an .ibd file in the database
                      dir.
  --innodb-flush-method=name
                      With which method to flush data.. One of: fsync, O_DSYNC,
                      littlesync, nosync, O_DIRECT, O_DIRECT_NO_FSYNC
  --innodb-log-buffer-size=#
                      The size of the buffer which InnoDB uses to write log to
                      the log files on disk.
  --innodb-log-file-size=#
                      Ignored for mysqld option compatibility
  --innodb-log-files-in-group=#
                      Ignored for mysqld option compatibility
  --innodb-log-group-home-dir=name
                      Path to InnoDB log files.
  --innodb-max-dirty-pages-pct=#
                      Percentage of dirty pages allowed in bufferpool.
  --innodb-use-native-aio
                      Use native AIO if supported on this platform.
                      (Defaults to on; use --skip-innodb-use-native-aio to disable.)
  --innodb-page-size=#
                      The universal page size of the database.
  --innodb-buffer-pool-filename=name
                      Ignored for mysqld option compatibility
  --debug-sync=name   Debug sync point. This is only used by the xtrabackup
                      test suite
  --innodb-checksum-algorithm=name
                      The algorithm InnoDB uses for page checksumming. [CRC32,
                      STRICT_CRC32, INNODB, STRICT_INNODB, NONE, STRICT_NONE].
                      One of: crc32, strict_crc32, innodb, strict_innodb, none,
                      strict_none
  --innodb-undo-directory=name
                      Directory where undo tablespace files live, this path can
                      be absolute.
  --innodb-undo-tablespaces=#
                      Number of undo tablespaces to use.
  --innodb-compression-level=#
                      Compression level used for zlib compression.
  --defaults-group=name
                      defaults group in config file (default "mysqld").
  --plugin-dir=name   Server plugin directory. Used to load encryption plugin
                      during 'prepare' phase.Has no effect in the 'backup'
                      phase (plugin directory during backup is the same as
                      server's)
  --innodb-log-checksums
                      Whether to require checksums for InnoDB redo log blocks
                      (Defaults to on; use --skip-innodb-log-checksums to disable.)
  --open-files-limit=#
                      the maximum number of file descriptors to reserve with
                      setrlimit().
  --lock-ddl-per-table
                      Lock DDL for each table before xtrabackup starts to copy
                      it and until the backup is completed.
  --rocksdb-datadir=name
                      RocksDB data directory.This option is only  used with
                      --copy-back or --move-back option
  --rocksdb-backup    Backup rocksdb data, if rocksdb plugin is installed.Used
                      only with --backup option. Can be useful for partial
                      backups, to exclude all rocksdb data
                      (Defaults to on; use --skip-rocksdb-backup to disable.)
  --check-privileges  Check database user privileges fro the backup user
                      (Defaults to on; use --skip-check-privileges to disable.)

Variables (--variable-name=value)
and boolean options {FALSE|TRUE}  Value (after reading options)
--------------------------------- ----------------------------------------
datadir                           /var/lib/mysql
tmpdir                            /var/tmp
parallel                          1
extended-validation               FALSE
encrypted-backup                  TRUE
log                               (No default value)
log-bin                           (No default value)
innodb                            (No default value)
innodb-adaptive-hash-index        TRUE
innodb-autoextend-increment       8
innodb-data-file-path             (No default value)
innodb-data-home-dir              (No default value)
innodb-doublewrite                FALSE
innodb-io-capacity                200
innodb-file-io-threads            4
innodb-read-io-threads            4
innodb-write-io-threads           4
innodb-file-per-table             FALSE
innodb-flush-method               fsync
innodb-log-buffer-size            1048576
innodb-log-file-size              50331648
innodb-log-files-in-group         1
innodb-log-group-home-dir         (No default value)
innodb-max-dirty-pages-pct        90
innodb-use-native-aio             TRUE
innodb-page-size                  16384
innodb-buffer-pool-filename       (No default value)
debug-sync                        (No default value)
innodb-checksum-algorithm         crc32
innodb-undo-directory             (No default value)
innodb-undo-tablespaces           0
innodb-compression-level          6
defaults-group                    mysqld
plugin-dir                        (No default value)
innodb-log-checksums              TRUE
open-files-limit                  0
lock-ddl-per-table                FALSE
rocksdb-datadir                   (No default value)
rocksdb-backup                    TRUE
check-privileges                  TRUE

Variables (--variable-name=value)
and boolean options {FALSE|TRUE}  Value (after reading options)
--------------------------------- ----------------------------------------
verbose                           FALSE
version                           FALSE
target-dir                        /root/xtrabackup_backupfiles/
backup                            FALSE
prepare                           FALSE
export                            FALSE
print-param                       FALSE
use-memory                        104857600
throttle                          0
log                               (No default value)
log-copy-interval                 1000
extra-lsndir                      (No default value)
incremental-lsn                   (No default value)
incremental-basedir               (No default value)
incremental-dir                   (No default value)
tables                            (No default value)
tables-file                       (No default value)
databases                         (No default value)
databases-file                    (No default value)
tables-exclude                    (No default value)
databases-exclude                 (No default value)
stream                            (No default value)
compress                          (No default value)
compress-threads                  1
compress-chunk-size               65536
incremental-force-scan            FALSE
close-files                       FALSE
copy-back                         FALSE
move-back                         FALSE
galera-info                       FALSE
slave-info                        FALSE
no-lock                           FALSE
safe-slave-backup                 FALSE
rsync                             FALSE
force-non-empty-directories       FALSE
no-version-check                  FALSE
no-backup-locks                   FALSE
decompress                        FALSE
user                              (No default value)
host                              (No default value)
port                              0
socket                            (No default value)
incremental-history-name          (No default value)
incremental-history-uuid          (No default value)
remove-original                   FALSE
ftwrl-wait-query-type             ALL
kill-long-query-type              SELECT
kill-long-queries-timeout         0
ftwrl-wait-timeout                0
ftwrl-wait-threshold              60
debug-sleep-before-unlock         0
safe-slave-backup-timeout         300
binlog-info                       on
secure-auth                       TRUE
ssl                               FALSE
ssl-ca                            (No default value)
ssl-capath                        (No default value)
ssl-cert                          (No default value)
ssl-cipher                        (No default value)
ssl-key                           (No default value)
ssl-crl                           (No default value)
ssl-crlpath                       (No default value)
ssl-verify-server-cert            FALSE
[root@centos-s-2vcpu-2gb-sfo3-01 ~]#
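
A minimal full backup, prepare, and restore cycle built from the options above might look like the following sketch. The target directory and the backup account are placeholders; the account needs sufficient privileges on the server (--check-privileges, on by default, can flag missing grants).

# Take a full physical backup into an (ideally empty) target directory
mariabackup --backup --target-dir=/backup/full --user=backup_user --password=backup_pwd

# Prepare the backup so the InnoDB data files are consistent and restorable
mariabackup --prepare --target-dir=/backup/full

# Restore: the server must be stopped, and --copy-back expects an empty datadir
# (or --force-non-empty-directories, described above)
systemctl stop mariadb
mariabackup --copy-back --target-dir=/backup/full
chown -R mysql:mysql /var/lib/mysql
systemctl start mariadb
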
June 23, 2020

CentOS 8 MariaDB Server system resource limit configuration

https://mariadb.com/kb/en/configuring-linux-for-mariadb/
https://mariadb.com/kb/en/systemd/#configuring-the-open-files-limit

Option 1

Configure the open files limit at the system level (the default is 1024)

[root@centos-s-1vcpu-2gb-sfo3-01 ~]# ulimit -a
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 7182
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 7182
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
[root@centos-s-1vcpu-2gb-sfo3-01 ~]#

Configure with a wildcard or a specific username

[root@centos-s-1vcpu-2gb-sfo3-01 ~]# vi /etc/security/limits.conf
* soft nofile 65535
* hard nofile 65535

mysql soft nofile 65535
mysql hard nofile 65535

Confirm the change took effect (from a new login session)

[root@centos-s-1vcpu-2gb-sfo3-01 ~]# ulimit -n
65535
[root@centos-s-1vcpu-2gb-sfo3-01 ~]# ulimit -Sn
65535
[root@centos-s-1vcpu-2gb-sfo3-01 ~]# ulimit -Hn
65535
[root@centos-s-1vcpu-2gb-sfo3-01 ~]#
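
Note that limits.conf is applied by PAM at session login, so it affects new shell sessions but not a service that systemd has already started. To see what the running mysqld process actually received, check its limits in /proc:

# pidof resolves the mysqld PID; /proc/<pid>/limits lists the effective limits
cat /proc/$(pidof mysqld)/limits | grep 'open files'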

Option 2

Configure the open files limit through the systemd service startup parameters

MariaDB Server service unit file (do not modify it in place; it is overwritten on package upgrades)

[root@centos-s-1vcpu-2gb-sfo3-01 ~]# cat /usr/lib/systemd/system/mariadb.service
# It's not recommended to modify this file in-place, because it will be
# overwritten during package upgrades. If you want to customize, the
# best way is to create a file "/etc/systemd/system/mariadb.service",
# containing
# .include /usr/lib/systemd/system/mariadb.service
# ...make your changes here...
# or create a file "/etc/systemd/system/mariadb.service.d/foo.conf",
# which doesn't need to include ".include" call and which will be parsed
# after the file mariadb.service itself is parsed.
#
# For more info about custom unit files, see systemd.unit(5) or
# http://fedoraproject.org/wiki/Systemd#How_do_I_customize_a_unit_file.2F_add_a_custom_unit_file.3F

# For example, if you want to increase mysql's open-files-limit to 10000,
# you need to increase systemd's LimitNOFILE setting, so create a file named
# "/etc/systemd/system/mariadb.service.d/limits.conf" containing:
# [Service]
# LimitNOFILE=10000

# Note: /usr/lib/... is recommended in the .include line though /lib/...
# still works.
# Don't forget to reload systemd daemon after you change unit configuration:
# root> systemctl --system daemon-reload

# Use [mysqld.INSTANCENAME] as sections in my.cnf to configure this instance.

[Unit]
Description=MariaDB 10.3 database server
Documentation=man:mysqld(8)
Documentation=https://mariadb.com/kb/en/library/systemd/
After=network.target

[Install]
WantedBy=multi-user.target
Alias=mysql.service
Alias=mysqld.service

[Service]
Type=notify
User=mysql
Group=mysql

ExecStartPre=/usr/libexec/mysql-check-socket
# '%n' expands to 'Full unit name'; man systemd.unit
ExecStartPre=/usr/libexec/mysql-prepare-db-dir %n
# MYSQLD_OPTS here is for users to set in /etc/systemd/system/mariadb@.service.d/MY_SPECIAL.conf
# Note: we set --basedir to prevent probes that might trigger SELinux alarms,
# per bug #547485
ExecStart=/usr/libexec/mysqld --basedir=/usr $MYSQLD_OPTS $_WSREP_NEW_CLUSTER
ExecStartPost=/usr/libexec/mysql-check-upgrade

# Setting this to true can break replication and the Type=notify settings
# See also bind-address mysqld option.
PrivateNetwork=false

KillMode=process
KillSignal=SIGTERM

# Don't want to see an automated SIGKILL ever
SendSIGKILL=no

# Restart crashed server only, on-failure would also restart, for example, when
# my.cnf contains unknown option
Restart=on-abort
RestartSec=5s

UMask=007

# Give a reasonable amount of time for the server to start up/shut down
TimeoutSec=300

# Place temp files in a secure directory, not /tmp
PrivateTmp=true
[root@centos-s-1vcpu-2gb-sfo3-01 ~]#

Generic setting (LimitNOFILE=infinity)

sudo tee /etc/systemd/system/mariadb.service.d/limitnofile.conf <<EOF
[Service]

LimitNOFILE=infinity
EOF
sudo systemctl daemon-reload

Where the actual value behind infinity comes from

[centos@mariadb ~]$ cat /proc/sys/fs/nr_open
1048576
[centos@mariadb ~]$

Recommended setting (a large positive integer)

sudo tee /etc/systemd/system/mariadb.service.d/limitnofile.conf <<EOF
[Service]

LimitNOFILE=1048576
EOF
sudo systemctl daemon-reload
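
daemon-reload only re-reads the unit files; the new limit takes effect once the service is restarted. A quick verification of both the configured and the effective value:

sudo systemctl restart mariadb
# The limit configured for the unit
systemctl show mariadb -p LimitNOFILE
# The effective limit of the running mysqld process
cat /proc/$(systemctl show mariadb -p MainPID --value)/limits | grep 'open files'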

MariaDB max_connections parameter details

https://mariadb.com/docs/reference/mdb/system-variables/max_connections/#mdb-system-variables-max-connections
https://mariadb.com/kb/en/server-system-variables/#max_connections
Minimum Value 10
Maximum Value 100000
Default Value 151

Set the MariaDB max_connections configuration (to the upper-limit value)

[root@centos-s-1vcpu-2gb-sfo3-01 ~]# vi /etc/my.cnf.d/mariadb-server.cnf
[mysqld]
max_connections = 100000
[root@centos-s-1vcpu-2gb-sfo3-01 ~]# systemctl restart mariadb
[root@centos-s-1vcpu-2gb-sfo3-01 ~]#

Confirm the max_connections setting is in effect

MariaDB [(none)]> show variables like "max_connections";
+-----------------+--------+
| Variable_name   | Value  |
+-----------------+--------+
| max_connections | 100000 |
+-----------------+--------+
1 row in set (0.002 sec)

MariaDB [(none)]>
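
Each connection consumes file descriptors, so it is also worth checking the open_files_limit the server ended up with; if the OS-level limit is too low, mysqld may scale max_connections back down at startup, which is why the LimitNOFILE change above matters:

mysql -e 'SHOW VARIABLES LIKE "open_files_limit";'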
June 12, 2020

https://rancher.com/docs/rke/latest/en/installation/
https://rancher.com/docs/rke/latest/en/example-yamls/
https://kubernetes.io/docs/tasks/tools/install-kubectl/
https://rancher.com/docs/rke/latest/en/kubeconfig/

Node hostnames and IP addresses (public IP, internal IP, hostname)

167.172.114.10 10.138.218.141 rancher-01
159.65.106.35 10.138.218.144 rancher-02
159.65.102.101 10.138.218.146 rancher-03

Base environment configuration on each node

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config;
echo "167.172.114.10 rancher-01">>/etc/hosts;
echo "159.65.106.35 rancher-02">>/etc/hosts;
echo "159.65.102.101 rancher-03">>/etc/hosts;
init 6
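
After the reboot, confirm that SELinux is disabled and that the host entries resolve:

getenforce
getent hosts rancher-01 rancher-02 rancher-03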

Docker runtime configuration on each node

curl https://releases.rancher.com/install-docker/18.09.sh | sh;
useradd rancher;
usermod -aG docker rancher
echo "rancherpwd" | passwd --stdin rancher

Generate and distribute an SSH key pair for the nodes

Generate the key pair

[root@rancher-01 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:sfL3YnyrNZsioS3ThuOTRME7AIyLxm4Yq396LAaeQOY root@rancher-01
The key's randomart image is:
+---[RSA 2048]----+
| o.. .           |
|. . . o          |
|o.   . o.        |
|+=    +  o       |
|Bo   ...S        |
|=E    .o.        |
|=... . *.o. o    |
|.oo + O =.=o.+   |
| oo= ..* o.==.   |
+----[SHA256]-----+
[root@rancher-01 ~]#
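
The same key pair can also be generated non-interactively, equivalent to accepting the defaults above:

ssh-keygen -t rsa -b 2048 -N '' -f ~/.ssh/id_rsa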

Distribute the public key

[root@rancher-01 ~]# ssh-copy-id -i .ssh/id_rsa.pub rancher@rancher-01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host 'rancher-01 (::1)' can't be established.
ECDSA key fingerprint is SHA256:NTaQJddPf6G3saQd2d6iQnF+Txp6YpkwhyiNuSImgNg.
ECDSA key fingerprint is MD5:ee:13:1b:70:95:ab:28:30:20:38:64:69:44:bd:1a:4a.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
rancher@rancher-01's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'rancher@rancher-01'"
and check to make sure that only the key(s) you wanted were added.

[root@rancher-01 ~]# ssh-copy-id -i .ssh/id_rsa.pub rancher@rancher-02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host 'rancher-02 (159.65.106.35)' can't be established.
ECDSA key fingerprint is SHA256:bZ2ZGx9IIzSGC2fkMEtWULbau8RcAeOOCwh+4QOMU2g.
ECDSA key fingerprint is MD5:48:d9:55:3c:9e:91:8a:47:c1:1a:3e:77:c7:f2:21:a7.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
rancher@rancher-02's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'rancher@rancher-02'"
and check to make sure that only the key(s) you wanted were added.

[root@rancher-01 ~]# ssh-copy-id -i .ssh/id_rsa.pub rancher@rancher-03
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
The authenticity of host 'rancher-03 (159.65.102.101)' can't be established.
ECDSA key fingerprint is SHA256:74nZvSQC34O7LrXlRzu/k0MsQzFcucn/n6c8X9CREwM.
ECDSA key fingerprint is MD5:37:2c:97:0e:d2:8e:4b:f5:7e:c5:b2:34:b5:f2:86:60.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
rancher@rancher-03's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'rancher@rancher-03'"
and check to make sure that only the key(s) you wanted were added.

[root@rancher-01 ~]#

Download and install RKE (Rancher Kubernetes Engine)

[root@rancher-01 ~]# yum -y install wget
[root@rancher-01 ~]# wget https://github.com/rancher/rke/releases/download/v1.1.2/rke_linux-amd64
[root@rancher-01 ~]# ls
anaconda-ks.cfg original-ks.cfg rke_linux-amd64
[root@rancher-01 ~]# mv rke_linux-amd64 /usr/bin/rke
[root@rancher-01 ~]# chmod +x /usr/bin/rke

Check the RKE version

[root@rancher-01 ~]# rke --version
rke version v1.1.2
[root@rancher-01 ~]#

Generate the RKE cluster configuration file (requires OpenSSH Server 6.7 or later; do not use the root user for SSH; specify the Docker socket path /var/run/docker.sock)

[root@rancher-01 ~]# rke config --name cluster.yml
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]:
[+] Number of Hosts [1]: 3
[+] SSH Address of host (1) [none]: 167.172.114.10
[+] SSH Port of host (1) [22]:
[+] SSH Private Key Path of host (167.172.114.10) [none]:
[-] You have entered empty SSH key path, trying fetch from SSH key parameter
[+] SSH Private Key of host (167.172.114.10) [none]: ^C
[root@rancher-01 ~]# rke config --name cluster.yml
[+] Cluster Level SSH Private Key Path [~/.ssh/id_rsa]:
[+] Number of Hosts [1]: 3
[+] SSH Address of host (1) [none]: 167.172.114.10
[+] SSH Port of host (1) [22]:
[+] SSH Private Key Path of host (167.172.114.10) [none]: ~/.ssh/id_rsa
[+] SSH User of host (167.172.114.10) [ubuntu]: rancher
[+] Is host (167.172.114.10) a Control Plane host (y/n)? [y]:
[+] Is host (167.172.114.10) a Worker host (y/n)? [n]:
[+] Is host (167.172.114.10) an etcd host (y/n)? [n]: y
[+] Override Hostname of host (167.172.114.10) [none]: rancher-01
[+] Internal IP of host (167.172.114.10) [none]: 10.138.218.141
[+] Docker socket path on host (167.172.114.10) [/var/run/docker.sock]:
[+] SSH Address of host (2) [none]: 159.65.106.35
[+] SSH Port of host (2) [22]:
[+] SSH Private Key Path of host (159.65.106.35) [none]: ~/.ssh/id_rsa
[+] SSH User of host (159.65.106.35) [ubuntu]: rancher
[+] Is host (159.65.106.35) a Control Plane host (y/n)? [y]: n
[+] Is host (159.65.106.35) a Worker host (y/n)? [n]: y
[+] Is host (159.65.106.35) an etcd host (y/n)? [n]:
[+] Override Hostname of host (159.65.106.35) [none]: rancher-02
[+] Internal IP of host (159.65.106.35) [none]: 10.138.218.144
[+] Docker socket path on host (159.65.106.35) [/var/run/docker.sock]:
[+] SSH Address of host (3) [none]: 159.65.102.101
[+] SSH Port of host (3) [22]:
[+] SSH Private Key Path of host (159.65.102.101) [none]: ~/.ssh/id_rsa
[+] SSH User of host (159.65.102.101) [ubuntu]: rancher
[+] Is host (159.65.102.101) a Control Plane host (y/n)? [y]: n
[+] Is host (159.65.102.101) a Worker host (y/n)? [n]: y
[+] Is host (159.65.102.101) an etcd host (y/n)? [n]:
[+] Override Hostname of host (159.65.102.101) [none]: rancher-03
[+] Internal IP of host (159.65.102.101) [none]: 10.138.218.146
[+] Docker socket path on host (159.65.102.101) [/var/run/docker.sock]:
[+] Network Plugin Type (flannel, calico, weave, canal) [canal]:
[+] Authentication Strategy [x509]:
[+] Authorization Mode (rbac, none) [rbac]:
[+] Kubernetes Docker image [rancher/hyperkube:v1.17.6-rancher2]:
[+] Cluster domain [cluster.local]:
[+] Service Cluster IP Range [10.43.0.0/16]:
[+] Enable PodSecurityPolicy [n]:
[+] Cluster Network CIDR [10.42.0.0/16]:
[+] Cluster DNS Service IP [10.43.0.10]:
[+] Add addon manifest URLs or YAML files [no]:
[root@rancher-01 ~]#

Review the RKE cluster configuration file

[root@rancher-01 ~]# cat cluster.yml
# If you intened to deploy Kubernetes in an air-gapped environment,
# please consult the documentation on how to configure custom RKE images.
nodes:
- address: 167.172.114.10
  port: "22"
  internal_address: 10.138.218.141
  role:
  - controlplane
  - etcd
  hostname_override: rancher-01
  user: rancher
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 159.65.106.35
  port: "22"
  internal_address: 10.138.218.144
  role:
  - worker
  hostname_override: rancher-02
  user: rancher
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
- address: 159.65.102.101
  port: "22"
  internal_address: 10.138.218.146
  role:
  - worker
  hostname_override: rancher-03
  user: rancher
  docker_socket: /var/run/docker.sock
  ssh_key: ""
  ssh_key_path: ~/.ssh/id_rsa
  ssh_cert: ""
  ssh_cert_path: ""
  labels: {}
  taints: []
services:
  etcd:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    external_urls: []
    ca_cert: ""
    cert: ""
    key: ""
    path: ""
    uid: 0
    gid: 0
    snapshot: null
    retention: ""
    creation: ""
    backup_config: null
  kube-api:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    service_cluster_ip_range: 10.43.0.0/16
    service_node_port_range: ""
    pod_security_policy: false
    always_pull_images: false
    secrets_encryption_config: null
    audit_log: null
    admission_configuration: null
    event_rate_limit: null
  kube-controller:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_cidr: 10.42.0.0/16
    service_cluster_ip_range: 10.43.0.0/16
  scheduler:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
  kubelet:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
    cluster_domain: cluster.local
    infra_container_image: ""
    cluster_dns_server: 10.43.0.10
    fail_swap_on: false
    generate_serving_certificate: false
  kubeproxy:
    image: ""
    extra_args: {}
    extra_binds: []
    extra_env: []
network:
  plugin: canal
  options: {}
  mtu: 0
  node_selector: {}
  update_strategy: null
authentication:
  strategy: x509
  sans: []
  webhook: null
addons: ""
addons_include: []
system_images:
  etcd: rancher/coreos-etcd:v3.4.3-rancher1
  alpine: rancher/rke-tools:v0.1.56
  nginx_proxy: rancher/rke-tools:v0.1.56
  cert_downloader: rancher/rke-tools:v0.1.56
  kubernetes_services_sidecar: rancher/rke-tools:v0.1.56
  kubedns: rancher/k8s-dns-kube-dns:1.15.0
  dnsmasq: rancher/k8s-dns-dnsmasq-nanny:1.15.0
  kubedns_sidecar: rancher/k8s-dns-sidecar:1.15.0
  kubedns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  coredns: rancher/coredns-coredns:1.6.5
  coredns_autoscaler: rancher/cluster-proportional-autoscaler:1.7.1
  nodelocal: rancher/k8s-dns-node-cache:1.15.7
  kubernetes: rancher/hyperkube:v1.17.6-rancher2
  flannel: rancher/coreos-flannel:v0.11.0-rancher1
  flannel_cni: rancher/flannel-cni:v0.3.0-rancher6
  calico_node: rancher/calico-node:v3.13.4
  calico_cni: rancher/calico-cni:v3.13.4
  calico_controllers: rancher/calico-kube-controllers:v3.13.4
  calico_ctl: rancher/calico-ctl:v3.13.4
  calico_flexvol: rancher/calico-pod2daemon-flexvol:v3.13.4
  canal_node: rancher/calico-node:v3.13.4
  canal_cni: rancher/calico-cni:v3.13.4
  canal_flannel: rancher/coreos-flannel:v0.11.0
  canal_flexvol: rancher/calico-pod2daemon-flexvol:v3.13.4
  weave_node: weaveworks/weave-kube:2.6.4
  weave_cni: weaveworks/weave-npc:2.6.4
  pod_infra_container: rancher/pause:3.1
  ingress: rancher/nginx-ingress-controller:nginx-0.32.0-rancher1
  ingress_backend: rancher/nginx-ingress-controller-defaultbackend:1.5-rancher1
  metrics_server: rancher/metrics-server:v0.3.6
  windows_pod_infra_container: rancher/kubelet-pause:v0.1.3
ssh_key_path: ~/.ssh/id_rsa
ssh_cert_path: ""
ssh_agent_auth: false
authorization:
  mode: rbac
  options: {}
ignore_docker_version: false
kubernetes_version: ""
private_registries: []
ingress:
  provider: ""
  options: {}
  node_selector: {}
  extra_args: {}
  dns_policy: ""
  extra_envs: []
  extra_volumes: []
  extra_volume_mounts: []
  update_strategy: null
cluster_name: ""
cloud_provider:
  name: ""
prefix_path: ""
addon_job_timeout: 0
bastion_host:
  address: ""
  port: ""
  user: ""
  ssh_key: ""
  ssh_key_path: ""
  ssh_cert: ""
  ssh_cert_path: ""
monitoring:
  provider: ""
  options: {}
  node_selector: {}
  update_strategy: null
  replicas: null
restore:
  restore: false
  snapshot_name: ""
dns: null
[root@rancher-01 ~]#

Run the cluster deployment

[root@rancher-01 ~]# rke up --config cluster.yml
INFO[0000] Running RKE version: v1.1.2
INFO[0000] Initiating Kubernetes cluster
INFO[0000] [dialer] Setup tunnel for host [159.65.102.101]
INFO[0000] [dialer] Setup tunnel for host [159.65.106.35]
INFO[0000] [dialer] Setup tunnel for host [167.172.114.10]
INFO[0000] Checking if container [cluster-state-deployer] is running on host [167.172.114.10], try #1
INFO[0000] Pulling image [rancher/rke-tools:v0.1.56] on host [167.172.114.10], try #1
INFO[0005] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0005] Starting container [cluster-state-deployer] on host [167.172.114.10], try #1
INFO[0005] [state] Successfully started [cluster-state-deployer] container on host [167.172.114.10]
INFO[0006] Checking if container [cluster-state-deployer] is running on host [159.65.106.35], try #1
INFO[0006] Pulling image [rancher/rke-tools:v0.1.56] on host [159.65.106.35], try #1
INFO[0012] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.106.35]
INFO[0012] Starting container [cluster-state-deployer] on host [159.65.106.35], try #1
INFO[0012] [state] Successfully started [cluster-state-deployer] container on host [159.65.106.35]
INFO[0012] Checking if container [cluster-state-deployer] is running on host [159.65.102.101], try #1
INFO[0012] Pulling image [rancher/rke-tools:v0.1.56] on host [159.65.102.101], try #1
INFO[0020] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.102.101]
INFO[0020] Starting container [cluster-state-deployer] on host [159.65.102.101], try #1
INFO[0021] [state] Successfully started [cluster-state-deployer] container on host [159.65.102.101]
INFO[0021] [certificates] Generating CA kubernetes certificates
INFO[0021] [certificates] Generating Kubernetes API server aggregation layer requestheader client CA certificates
INFO[0021] [certificates] GenerateServingCertificate is disabled, checking if there are unused kubelet certificates
INFO[0021] [certificates] Generating Kubernetes API server certificates
INFO[0022] [certificates] Generating Service account token key
INFO[0022] [certificates] Generating Kube Controller certificates
INFO[0022] [certificates] Generating Kube Scheduler certificates
INFO[0022] [certificates] Generating Kube Proxy certificates
INFO[0022] [certificates] Generating Node certificate
INFO[0022] [certificates] Generating admin certificates and kubeconfig
INFO[0022] [certificates] Generating Kubernetes API server proxy client certificates
INFO[0023] [certificates] Generating kube-etcd-10-138-218-141 certificate and key
INFO[0023] Successfully Deployed state file at [./cluster.rkestate]
INFO[0023] Building Kubernetes cluster
INFO[0023] [dialer] Setup tunnel for host [159.65.102.101]
INFO[0023] [dialer] Setup tunnel for host [167.172.114.10]
INFO[0023] [dialer] Setup tunnel for host [159.65.106.35]
INFO[0023] [network] Deploying port listener containers
INFO[0023] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0023] Starting container [rke-etcd-port-listener] on host [167.172.114.10], try #1
INFO[0024] [network] Successfully started [rke-etcd-port-listener] container on host [167.172.114.10]
INFO[0024] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0024] Starting container [rke-cp-port-listener] on host [167.172.114.10], try #1
INFO[0024] [network] Successfully started [rke-cp-port-listener] container on host [167.172.114.10]
INFO[0024] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.106.35]
INFO[0024] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.102.101]
INFO[0024] Starting container [rke-worker-port-listener] on host [159.65.102.101], try #1
INFO[0024] Starting container [rke-worker-port-listener] on host [159.65.106.35], try #1
INFO[0024] [network] Successfully started [rke-worker-port-listener] container on host [159.65.102.101]
INFO[0024] [network] Successfully started [rke-worker-port-listener] container on host [159.65.106.35]
INFO[0024] [network] Port listener containers deployed successfully
INFO[0024] [network] Running control plane -> etcd port checks
INFO[0024] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0024] Starting container [rke-port-checker] on host [167.172.114.10], try #1
INFO[0025] [network] Successfully started [rke-port-checker] container on host [167.172.114.10]
INFO[0025] Removing container [rke-port-checker] on host [167.172.114.10], try #1
INFO[0025] [network] Running control plane -> worker port checks
INFO[0025] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0025] Starting container [rke-port-checker] on host [167.172.114.10], try #1
INFO[0025] [network] Successfully started [rke-port-checker] container on host [167.172.114.10]
INFO[0025] Removing container [rke-port-checker] on host [167.172.114.10], try #1
INFO[0025] [network] Running workers -> control plane port checks
INFO[0025] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.106.35]
INFO[0025] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.102.101]
INFO[0025] Starting container [rke-port-checker] on host [159.65.106.35], try #1
INFO[0025] Starting container [rke-port-checker] on host [159.65.102.101], try #1
INFO[0025] [network] Successfully started [rke-port-checker] container on host [159.65.106.35]
INFO[0025] Removing container [rke-port-checker] on host [159.65.106.35], try #1
INFO[0026] [network] Successfully started [rke-port-checker] container on host [159.65.102.101]
INFO[0026] Removing container [rke-port-checker] on host [159.65.102.101], try #1
INFO[0026] [network] Checking KubeAPI port Control Plane hosts
INFO[0026] [network] Removing port listener containers
INFO[0026] Removing container [rke-etcd-port-listener] on host [167.172.114.10], try #1
INFO[0026] [remove/rke-etcd-port-listener] Successfully removed container on host [167.172.114.10]
INFO[0026] Removing container [rke-cp-port-listener] on host [167.172.114.10], try #1
INFO[0026] [remove/rke-cp-port-listener] Successfully removed container on host [167.172.114.10]
INFO[0026] Removing container [rke-worker-port-listener] on host [159.65.106.35], try #1
INFO[0026] Removing container [rke-worker-port-listener] on host [159.65.102.101], try #1
INFO[0026] [remove/rke-worker-port-listener] Successfully removed container on host [159.65.102.101]
INFO[0026] [remove/rke-worker-port-listener] Successfully removed container on host [159.65.106.35]
INFO[0026] [network] Port listener containers removed successfully
INFO[0026] [certificates] Deploying kubernetes certificates to Cluster nodes
INFO[0026] Checking if container [cert-deployer] is running on host [159.65.106.35], try #1
INFO[0026] Checking if container [cert-deployer] is running on host [159.65.102.101], try #1
INFO[0026] Checking if container [cert-deployer] is running on host [167.172.114.10], try #1
INFO[0026] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.106.35]
INFO[0026] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.102.101]
INFO[0026] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0026] Starting container [cert-deployer] on host [167.172.114.10], try #1
INFO[0026] Starting container [cert-deployer] on host [159.65.106.35], try #1
INFO[0026] Starting container [cert-deployer] on host [159.65.102.101], try #1
INFO[0027] Checking if container [cert-deployer] is running on host [167.172.114.10], try #1
INFO[0027] Checking if container [cert-deployer] is running on host [159.65.106.35], try #1
INFO[0027] Checking if container [cert-deployer] is running on host [159.65.102.101], try #1
INFO[0032] Checking if container [cert-deployer] is running on host [167.172.114.10], try #1
INFO[0032] Removing container [cert-deployer] on host [167.172.114.10], try #1
INFO[0032] Checking if container [cert-deployer] is running on host [159.65.106.35], try #1
INFO[0032] Removing container [cert-deployer] on host [159.65.106.35], try #1
INFO[0032] Checking if container [cert-deployer] is running on host [159.65.102.101], try #1
INFO[0032] Removing container [cert-deployer] on host [159.65.102.101], try #1
INFO[0032] [reconcile] Rebuilding and updating local kube config
INFO[0032] Successfully Deployed local admin kubeconfig at [./kube_config_cluster.yml]
INFO[0032] [certificates] Successfully deployed kubernetes certificates to Cluster nodes
INFO[0032] [file-deploy] Deploying file [/etc/kubernetes/audit-policy.yaml] to node [167.172.114.10]
INFO[0032] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0032] Starting container [file-deployer] on host [167.172.114.10], try #1
INFO[0032] Successfully started [file-deployer] container on host [167.172.114.10]
INFO[0032] Waiting for [file-deployer] container to exit on host [167.172.114.10]
INFO[0032] Waiting for [file-deployer] container to exit on host [167.172.114.10]
INFO[0032] Container [file-deployer] is still running on host [167.172.114.10]
INFO[0033] Waiting for [file-deployer] container to exit on host [167.172.114.10]
INFO[0033] Removing container [file-deployer] on host [167.172.114.10], try #1
INFO[0033] [remove/file-deployer] Successfully removed container on host [167.172.114.10]
INFO[0033] [/etc/kubernetes/audit-policy.yaml] Successfully deployed audit policy file to Cluster control nodes
INFO[0033] [reconcile] Reconciling cluster state
INFO[0033] [reconcile] This is newly generated cluster
INFO[0033] Pre-pulling kubernetes images
INFO[0033] Pulling image [rancher/hyperkube:v1.17.6-rancher2] on host [167.172.114.10], try #1
INFO[0033] Pulling image [rancher/hyperkube:v1.17.6-rancher2] on host [159.65.102.101], try #1
INFO[0033] Pulling image [rancher/hyperkube:v1.17.6-rancher2] on host [159.65.106.35], try #1
INFO[0065] Image [rancher/hyperkube:v1.17.6-rancher2] exists on host [167.172.114.10]
INFO[0071] Image [rancher/hyperkube:v1.17.6-rancher2] exists on host [159.65.106.35]
INFO[0080] Image [rancher/hyperkube:v1.17.6-rancher2] exists on host [159.65.102.101]
INFO[0080] Kubernetes images pulled successfully
INFO[0080] [etcd] Building up etcd plane..
INFO[0080] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0080] Starting container [etcd-fix-perm] on host [167.172.114.10], try #1
INFO[0081] Successfully started [etcd-fix-perm] container on host [167.172.114.10]
INFO[0081] Waiting for [etcd-fix-perm] container to exit on host [167.172.114.10]
INFO[0081] Waiting for [etcd-fix-perm] container to exit on host [167.172.114.10]
INFO[0081] Container [etcd-fix-perm] is still running on host [167.172.114.10]
INFO[0082] Waiting for [etcd-fix-perm] container to exit on host [167.172.114.10]
INFO[0082] Removing container [etcd-fix-perm] on host [167.172.114.10], try #1
INFO[0082] [remove/etcd-fix-perm] Successfully removed container on host [167.172.114.10]
INFO[0082] Pulling image [rancher/coreos-etcd:v3.4.3-rancher1] on host [167.172.114.10], try #1
INFO[0085] Image [rancher/coreos-etcd:v3.4.3-rancher1] exists on host [167.172.114.10]
INFO[0085] Starting container [etcd] on host [167.172.114.10], try #1
INFO[0086] [etcd] Successfully started [etcd] container on host [167.172.114.10]
INFO[0086] [etcd] Running rolling snapshot container [etcd-snapshot-once] on host [167.172.114.10]
INFO[0086] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0086] Starting container [etcd-rolling-snapshots] on host [167.172.114.10], try #1
INFO[0086] [etcd] Successfully started [etcd-rolling-snapshots] container on host [167.172.114.10]
INFO[0091] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0091] Starting container [rke-bundle-cert] on host [167.172.114.10], try #1
INFO[0091] [certificates] Successfully started [rke-bundle-cert] container on host [167.172.114.10]
INFO[0091] Waiting for [rke-bundle-cert] container to exit on host [167.172.114.10]
INFO[0091] Container [rke-bundle-cert] is still running on host [167.172.114.10]
INFO[0092] Waiting for [rke-bundle-cert] container to exit on host [167.172.114.10]
INFO[0092] [certificates] successfully saved certificate bundle [/opt/rke/etcd-snapshots//pki.bundle.tar.gz] on host [167.172.114.10]
INFO[0092] Removing container [rke-bundle-cert] on host [167.172.114.10], try #1
INFO[0092] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0092] Starting container [rke-log-linker] on host [167.172.114.10], try #1
INFO[0093] [etcd] Successfully started [rke-log-linker] container on host [167.172.114.10]
INFO[0093] Removing container [rke-log-linker] on host [167.172.114.10], try #1
INFO[0093] [remove/rke-log-linker] Successfully removed container on host [167.172.114.10]
INFO[0093] [etcd] Successfully started etcd plane.. Checking etcd cluster health
INFO[0093] [controlplane] Building up Controller Plane..
INFO[0093] Checking if container [service-sidekick] is running on host [167.172.114.10], try #1
INFO[0093] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0093] Image [rancher/hyperkube:v1.17.6-rancher2] exists on host [167.172.114.10]
INFO[0093] Starting container [kube-apiserver] on host [167.172.114.10], try #1
INFO[0093] [controlplane] Successfully started [kube-apiserver] container on host [167.172.114.10]
INFO[0093] [healthcheck] Start Healthcheck on service [kube-apiserver] on host [167.172.114.10]
INFO[0098] [healthcheck] service [kube-apiserver] on host [167.172.114.10] is healthy
INFO[0098] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0098] Starting container [rke-log-linker] on host [167.172.114.10], try #1
INFO[0099] [controlplane] Successfully started [rke-log-linker] container on host [167.172.114.10]
INFO[0099] Removing container [rke-log-linker] on host [167.172.114.10], try #1
INFO[0099] [remove/rke-log-linker] Successfully removed container on host [167.172.114.10]
INFO[0099] Image [rancher/hyperkube:v1.17.6-rancher2] exists on host [167.172.114.10]
INFO[0099] Starting container [kube-controller-manager] on host [167.172.114.10], try #1
INFO[0099] [controlplane] Successfully started [kube-controller-manager] container on host [167.172.114.10]
INFO[0099] [healthcheck] Start Healthcheck on service [kube-controller-manager] on host [167.172.114.10]
INFO[0104] [healthcheck] service [kube-controller-manager] on host [167.172.114.10] is healthy
INFO[0104] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0104] Starting container [rke-log-linker] on host [167.172.114.10], try #1
INFO[0104] [controlplane] Successfully started [rke-log-linker] container on host [167.172.114.10]
INFO[0104] Removing container [rke-log-linker] on host [167.172.114.10], try #1
INFO[0105] [remove/rke-log-linker] Successfully removed container on host [167.172.114.10]
INFO[0105] Image [rancher/hyperkube:v1.17.6-rancher2] exists on host [167.172.114.10]
INFO[0105] Starting container [kube-scheduler] on host [167.172.114.10], try #1
INFO[0105] [controlplane] Successfully started [kube-scheduler] container on host [167.172.114.10]
INFO[0105] [healthcheck] Start Healthcheck on service [kube-scheduler] on host [167.172.114.10]
INFO[0110] [healthcheck] service [kube-scheduler] on host [167.172.114.10] is healthy
INFO[0110] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0110] Starting container [rke-log-linker] on host [167.172.114.10], try #1
INFO[0110] [controlplane] Successfully started [rke-log-linker] container on host [167.172.114.10]
INFO[0110] Removing container [rke-log-linker] on host [167.172.114.10], try #1
INFO[0110] [remove/rke-log-linker] Successfully removed container on host [167.172.114.10]
INFO[0110] [controlplane] Successfully started Controller Plane..
INFO[0110] [authz] Creating rke-job-deployer ServiceAccount
INFO[0110] [authz] rke-job-deployer ServiceAccount created successfully
INFO[0110] [authz] Creating system:node ClusterRoleBinding
INFO[0110] [authz] system:node ClusterRoleBinding created successfully
INFO[0110] [authz] Creating kube-apiserver proxy ClusterRole and ClusterRoleBinding
INFO[0110] [authz] kube-apiserver proxy ClusterRole and ClusterRoleBinding created successfully
INFO[0110] Successfully Deployed state file at [./cluster.rkestate]
INFO[0110] [state] Saving full cluster state to Kubernetes
INFO[0111] [state] Successfully Saved full cluster state to Kubernetes ConfigMap: full-cluster-state
INFO[0111] [worker] Building up Worker Plane..
INFO[0111] Checking if container [service-sidekick] is running on host [167.172.114.10], try #1
INFO[0111] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.102.101]
INFO[0111] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.106.35]
INFO[0111] [sidekick] Sidekick container already created on host [167.172.114.10]
INFO[0111] Image [rancher/hyperkube:v1.17.6-rancher2] exists on host [167.172.114.10]
INFO[0111] Starting container [kubelet] on host [167.172.114.10], try #1
INFO[0111] Starting container [nginx-proxy] on host [159.65.106.35], try #1
INFO[0111] Starting container [nginx-proxy] on host [159.65.102.101], try #1
INFO[0111] [worker] Successfully started [kubelet] container on host [167.172.114.10]
INFO[0111] [healthcheck] Start Healthcheck on service [kubelet] on host [167.172.114.10]
INFO[0111] [worker] Successfully started [nginx-proxy] container on host [159.65.106.35]
INFO[0111] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.106.35]
INFO[0111] [worker] Successfully started [nginx-proxy] container on host [159.65.102.101]
INFO[0111] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.102.101]
INFO[0111] Starting container [rke-log-linker] on host [159.65.106.35], try #1
INFO[0111] Starting container [rke-log-linker] on host [159.65.102.101], try #1
INFO[0111] [worker] Successfully started [rke-log-linker] container on host [159.65.106.35]
INFO[0111] Removing container [rke-log-linker] on host [159.65.106.35], try #1
INFO[0111] [worker] Successfully started [rke-log-linker] container on host [159.65.102.101]
INFO[0111] Removing container [rke-log-linker] on host [159.65.102.101], try #1
INFO[0111] [remove/rke-log-linker] Successfully removed container on host [159.65.106.35]
INFO[0111] Checking if container [service-sidekick] is running on host [159.65.106.35], try #1
INFO[0111] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.106.35]
INFO[0111] Image [rancher/hyperkube:v1.17.6-rancher2] exists on host [159.65.106.35]
INFO[0112] [remove/rke-log-linker] Successfully removed container on host [159.65.102.101]
INFO[0112] Checking if container [service-sidekick] is running on host [159.65.102.101], try #1
INFO[0112] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.102.101]
INFO[0112] Starting container [kubelet] on host [159.65.106.35], try #1
INFO[0112] Image [rancher/hyperkube:v1.17.6-rancher2] exists on host [159.65.102.101]
INFO[0112] Starting container [kubelet] on host [159.65.102.101], try #1
INFO[0112] [worker] Successfully started [kubelet] container on host [159.65.106.35]
INFO[0112] [healthcheck] Start Healthcheck on service [kubelet] on host [159.65.106.35]
INFO[0112] [worker] Successfully started [kubelet] container on host [159.65.102.101]
INFO[0112] [healthcheck] Start Healthcheck on service [kubelet] on host [159.65.102.101]
INFO[0116] [healthcheck] service [kubelet] on host [167.172.114.10] is healthy
INFO[0116] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0116] Starting container [rke-log-linker] on host [167.172.114.10], try #1
INFO[0116] [worker] Successfully started [rke-log-linker] container on host [167.172.114.10]
INFO[0116] Removing container [rke-log-linker] on host [167.172.114.10], try #1
INFO[0116] [remove/rke-log-linker] Successfully removed container on host [167.172.114.10]
INFO[0116] Image [rancher/hyperkube:v1.17.6-rancher2] exists on host [167.172.114.10]
INFO[0116] Starting container [kube-proxy] on host [167.172.114.10], try #1
INFO[0117] [worker] Successfully started [kube-proxy] container on host [167.172.114.10]
INFO[0117] [healthcheck] Start Healthcheck on service [kube-proxy] on host [167.172.114.10]
INFO[0117] [healthcheck] service [kubelet] on host [159.65.106.35] is healthy
INFO[0117] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.106.35]
INFO[0117] [healthcheck] service [kubelet] on host [159.65.102.101] is healthy
INFO[0117] Starting container [rke-log-linker] on host [159.65.106.35], try #1
INFO[0117] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.102.101]
INFO[0117] Starting container [rke-log-linker] on host [159.65.102.101], try #1
INFO[0117] [worker] Successfully started [rke-log-linker] container on host [159.65.106.35]
INFO[0117] Removing container [rke-log-linker] on host [159.65.106.35], try #1
INFO[0117] [worker] Successfully started [rke-log-linker] container on host [159.65.102.101]
INFO[0117] Removing container [rke-log-linker] on host [159.65.102.101], try #1
INFO[0118] [remove/rke-log-linker] Successfully removed container on host [159.65.106.35]
INFO[0118] Image [rancher/hyperkube:v1.17.6-rancher2] exists on host [159.65.106.35]
INFO[0118] Starting container [kube-proxy] on host [159.65.106.35], try #1
INFO[0118] [remove/rke-log-linker] Successfully removed container on host [159.65.102.101]
INFO[0118] Image [rancher/hyperkube:v1.17.6-rancher2] exists on host [159.65.102.101]
INFO[0118] Starting container [kube-proxy] on host [159.65.102.101], try #1
INFO[0118] [worker] Successfully started [kube-proxy] container on host [159.65.106.35]
INFO[0118] [healthcheck] Start Healthcheck on service [kube-proxy] on host [159.65.106.35]
INFO[0118] [worker] Successfully started [kube-proxy] container on host [159.65.102.101]
INFO[0118] [healthcheck] Start Healthcheck on service [kube-proxy] on host [159.65.102.101]
INFO[0122] [healthcheck] service [kube-proxy] on host [167.172.114.10] is healthy
INFO[0122] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0122] Starting container [rke-log-linker] on host [167.172.114.10], try #1
INFO[0122] [worker] Successfully started [rke-log-linker] container on host [167.172.114.10]
INFO[0122] Removing container [rke-log-linker] on host [167.172.114.10], try #1
INFO[0122] [remove/rke-log-linker] Successfully removed container on host [167.172.114.10]
INFO[0123] [healthcheck] service [kube-proxy] on host [159.65.106.35] is healthy
INFO[0123] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.106.35]
INFO[0123] Starting container [rke-log-linker] on host [159.65.106.35], try #1
INFO[0123] [healthcheck] service [kube-proxy] on host [159.65.102.101] is healthy
INFO[0123] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.102.101]
INFO[0123] Starting container [rke-log-linker] on host [159.65.102.101], try #1
INFO[0123] [worker] Successfully started [rke-log-linker] container on host [159.65.106.35]
INFO[0123] Removing container [rke-log-linker] on host [159.65.106.35], try #1
INFO[0124] [remove/rke-log-linker] Successfully removed container on host [159.65.106.35]
INFO[0124] [worker] Successfully started [rke-log-linker] container on host [159.65.102.101]
INFO[0124] Removing container [rke-log-linker] on host [159.65.102.101], try #1
INFO[0124] [remove/rke-log-linker] Successfully removed container on host [159.65.102.101]
INFO[0124] [worker] Successfully started Worker Plane..
INFO[0124] Image [rancher/rke-tools:v0.1.56] exists on host [167.172.114.10]
INFO[0124] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.106.35]
INFO[0124] Image [rancher/rke-tools:v0.1.56] exists on host [159.65.102.101]
INFO[0124] Starting container [rke-log-cleaner] on host [167.172.114.10], try #1
INFO[0124] Starting container [rke-log-cleaner] on host [159.65.106.35], try #1
INFO[0124] Starting container [rke-log-cleaner] on host [159.65.102.101], try #1
INFO[0124] [cleanup] Successfully started [rke-log-cleaner] container on host [167.172.114.10]
INFO[0124] Removing container [rke-log-cleaner] on host [167.172.114.10], try #1
INFO[0124] [cleanup] Successfully started [rke-log-cleaner] container on host [159.65.106.35]
INFO[0124] Removing container [rke-log-cleaner] on host [159.65.106.35], try #1
INFO[0125] [remove/rke-log-cleaner] Successfully removed container on host [167.172.114.10]
INFO[0125] [cleanup] Successfully started [rke-log-cleaner] container on host [159.65.102.101]
INFO[0125] Removing container [rke-log-cleaner] on host [159.65.102.101], try #1
INFO[0125] [remove/rke-log-cleaner] Successfully removed container on host [159.65.106.35]
INFO[0125] [remove/rke-log-cleaner] Successfully removed container on host [159.65.102.101]
INFO[0125] [sync] Syncing nodes Labels and Taints
INFO[0125] [sync] Successfully synced nodes Labels and Taints
INFO[0125] [network] Setting up network plugin: canal
INFO[0125] [addons] Saving ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0125] [addons] Successfully saved ConfigMap for addon rke-network-plugin to Kubernetes
INFO[0125] [addons] Executing deploy job rke-network-plugin
INFO[0130] [addons] Setting up coredns
INFO[0130] [addons] Saving ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0130] [addons] Successfully saved ConfigMap for addon rke-coredns-addon to Kubernetes
INFO[0130] [addons] Executing deploy job rke-coredns-addon
INFO[0135] [addons] CoreDNS deployed successfully
INFO[0135] [dns] DNS provider coredns deployed successfully
INFO[0135] [addons] Setting up Metrics Server
INFO[0135] [addons] Saving ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0135] [addons] Successfully saved ConfigMap for addon rke-metrics-addon to Kubernetes
INFO[0135] [addons] Executing deploy job rke-metrics-addon
INFO[0140] [addons] Metrics Server deployed successfully
INFO[0140] [ingress] Setting up nginx ingress controller
INFO[0140] [addons] Saving ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0140] [addons] Successfully saved ConfigMap for addon rke-ingress-controller to Kubernetes
INFO[0140] [addons] Executing deploy job rke-ingress-controller
INFO[0145] [ingress] ingress controller nginx deployed successfully
INFO[0145] [addons] Setting up user addons
INFO[0145] [addons] no user addons defined
INFO[0145] Finished building Kubernetes cluster successfully
[root@rancher-01 ~]#

View the generated kubeconfig file

[root@rancher-01 ~]# cat kube_config_cluster.yml
apiVersion: v1
kind: Config
clusters:
- cluster:
    api-version: v1
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN3akNDQWFxZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFTTVJBd0RnWURWUVFERXdkcmRXSmwKTFdOaE1CNFhEVEl3TURZeE1qQTROVFF3T0ZvWERUTXdNRFl4TURBNE5UUXdPRm93RWpFUU1BNEdBMVVFQXhNSAphM1ZpWlMxallUQ0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5NU3FXQWRkSjVECjRxQ0NMWXRHVWY4SjdIdUIvVlpaYldLckY4M3NZSzVaaEFsK0VrRnlhWFUwWU9aZGdHQldTZDVoMDVNa0VJQmkKZEdFK1gxWDd5RVM1R0NUMGUxYTU2Z1hXMXljQUxiYzVxZUxSQmtpV2p5b09KT0tJSHJBWGxOZVpQcHA4Tm9oYgpuOG50c29BaHc2TVcySjRERWQ2L2lZcGUxMkl4QlhQVVE1R1Y3aE5SV3k3WHVoQ2NDclRDQTRrNmVlakNTdTFrCnYyRUxUMTZ1ekc5OStwUFM2MElRd25FTjFzOFV2Rms2VU1ZVzFPTnFQbXBGdE52M0hmTkZ3OGpHVWh5MXZKeEgKVm9Ed2JqUmZQUHlabERvSmFETXBnSFpxSlhTSlFvaHkxaVk3aXBuaHh6L1VHMExyMmZZZk4vZEhDeUJHOWp4NAplM1JwMGRxTWFOVUNBd0VBQWFNak1DRXdEZ1lEVlIwUEFRSC9CQVFEQWdLa01BOEdBMVVkRXdFQi93UUZNQU1CCkFmOHdEUVlKS29aSWh2Y05BUUVMQlFBRGdnRUJBQlJUUVJVSDRSM2tPVEJLQVJZTkV4YW1VZ3BqQjNDNmVFZGgKaVdOU2lJU0FnMHlSUnlLakpIb21ibUpjRHA3UVAxUWtnamg5a3Z3TnpOU1EwQ3NaRmVMaUpJVHc3U3l0d1huaApmZmtQUnpTTXB2ZVpzZ0RVWExadlkzQjM3bFpCSDhSR0xNQTlYRVZRVUdUcDBSWTFqNE5VN2llaUJycDRveG42CnZNajQra0xocE9pZUJjQlZ6MXVLN0Vkbk5uRTZvcVFNemkzNHBNbDJHT3dWbEY1UVVYcVRNTTJoM244MU04WEcKdFpDOFY2RTlzSXZsTjFkT2tPTEpYWnIwQnI1aGdpbVNCd0xKbks3WlhRTkFjL2pzb3FCa3BEM2ZwWG1OUzFNaQpQWnJqb0t6RWlhclVwNXZsQndFY2pRUDI3TlNQOWVROGkzV2dpVE85VEd6djQ4SzdiOUU9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: "https://167.172.114.10:6443"
  name: "local"
contexts:
- context:
    cluster: "local"
    user: "kube-admin-local"
  name: "local"
current-context: "local"
users:
- name: "kube-admin-local"
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM2VENDQWRHZ0F3SUJBZ0lJSEN2MzNVd2FnWTB3RFFZSktvWklodmNOQVFFTEJRQXdFakVRTUE0R0ExVUUKQXhNSGEzVmlaUzFqWVRBZUZ3MHlNREEyTVRJd09EVTBNRGhhRncwek1EQTJNVEF3T0RVME1EbGFNQzR4RnpBVgpCZ05WQkFvVERuTjVjM1JsYlRwdFlYTjBaWEp6TVJNd0VRWURWUVFERXdwcmRXSmxMV0ZrYldsdU1JSUJJakFOCkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQXFMdjUvTDYxdG5ybitZV2VNMDlDWnJNeEI5NkEKSVdSZFQ5M2poNTdYaXdsb0Jtd3NsOStYLzdmZnBGTzZYcXV1QUVDWW4zZEJ2WnMvc256R1I5YUl2NXhpZ1pxRgpDZ0ZCakpsNjE0UVB3N0FGYVJDUTRyMTlxTEdEUS9EMmhhV25YQm4rZU5pNlZsRXlFNVU0cEttVUM1U2FITUdXCmRRR0h2MTZ4bmdyQVllb2gwRzRCbmErV0wyNDNybG5DNVROZ2QwOUJRV2V5Vng5SUppZ3hzcCtkTEMyM2J2MUkKS1VIM0VwV0hJNGFLK05CeWN2SzRMUU9jRUVlWEZuTnRDUmZ3ZkVNeThVbTAwQUZiZG90OGpHajhYTzhlYzlpRgplT21pbUhXZFdDa01uZHJiNDFtSWU3MEVKUGZwM0FxVmRTMkg4azd3MWxaa2NzVkNBa2psbWpYZVlRSURBUUFCCm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUhBd0l3RFFZSktvWkkKaHZjTkFRRUxCUUFEZ2dFQkFKTnNFaUhta0tPVnpnSVJWOEdSRTZqL2lGQ1lWSzVIakVtR0YzTk9KcUhBNUVLZAo0SDVRZWFubTBuRUpzOFVYdithSUhNcTZ3QjRBc3c5MnJsdnV5NUxIZVNJbVN6UCtVbTdqT0hYZGdjK3d2TXI3Cmt6L1VuT3FPNlJPQ3JUZ1Rod1ZtbHYvNTRxTTZJTkI3aWI1YzNZRlRFU2lJbHdxM05KYU1rMDV6QWp6N3lPM3YKaXdDQ1U0ckJRa2l4MGVQVFlLREJYV1lNOFpUakhLby9TT2JYRFBFRTFVYWFnM2FsMU4xUXNiWUcrYlk2ZWt0VQpSdkpxV0lJNTE5Um5kVWxGMW9zaGNySVJRYlFTSll0S0E5clJhVEZ6SUpIOVR5dldJeXcrSHUrYUpBdkpJdTRnCmIvMkpBUzFHZ0orcjQwc1lqL3o1d04xMHBXWVgyS1RTMWxrVUlnYz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBcUx2NS9MNjF0bnJuK1lXZU0wOUNack14Qjk2QUlXUmRUOTNqaDU3WGl3bG9CbXdzCmw5K1gvN2ZmcEZPNlhxdXVBRUNZbjNkQnZacy9zbnpHUjlhSXY1eGlnWnFGQ2dGQmpKbDYxNFFQdzdBRmFSQ1EKNHIxOXFMR0RRL0QyaGFXblhCbitlTmk2VmxFeUU1VTRwS21VQzVTYUhNR1dkUUdIdjE2eG5nckFZZW9oMEc0QgpuYStXTDI0M3JsbkM1VE5nZDA5QlFXZXlWeDlJSmlneHNwK2RMQzIzYnYxSUtVSDNFcFdISTRhSytOQnljdks0CkxRT2NFRWVYRm5OdENSZndmRU15OFVtMDBBRmJkb3Q4akdqOFhPOGVjOWlGZU9taW1IV2RXQ2tNbmRyYjQxbUkKZTcwRUpQZnAzQXFWZFMySDhrN3cxbFprY3NWQ0FramxtalhlWVFJREFRQUJBb0lCQUJyVjRwbE8zMm1KUEpHVApyYWh0WjVzYnpxVjR2cG9RODBJN2dPOVYxT1A0K0FGbGZPWWVtbmNDRUdCN0xIM1lBaEZxTkp2UUJMV2FGbFJWCndkYzFDSVNvNDRYSFJIZGw0YjN4dnZhOXV5QWRRNDhGSW5YZE96bjBHWE5aeEd0WEFEb0dyRkVkN3V6QmR4eGsKTkNFRUUxYVFLTDZBRDJUR2ZJZDBFUDJZcWlZb0syRjFwSGJ3N1FPNGhkcXdpWWRwK2xzcDZDQTd0NGpMTnpjeApkaFZHWkE4eHFpdU9MUndmTk85RXhFN1FyTmZtcGpWcHA2di93Q3FpMkdPWGdHVnp3WUtqWm1Yc2NxclltazN6CjZ5RjNXQVJLTDNsQTk0WWxnYTdHaTAzL0tDdS9CMXFoMVJKRU1wcFhva3RXNVJRdStxNU82aG92ZjNONTlOWkYKdlFmNU10MENnWUVBelljM0dMNk5OQmRGVVlpWDRHK21SM2FzNVU5QkdmcWpnSE1YMWtWZXlIZUc2WFlHT29iawpWSHdxU3pReE95VS9JcFRKeHNIR3RKZ2ZkOU1ncXd3bDloSTBpc3pUT2M5bkxlckVLZXdpdG9yejU2USthVEY5CjNGSjhBTExPOTZxVEk5WkxYV3BSRnZZL2lJMlJZRHZYemJybFZ1ZDkzRzhkUFoydUE0YkFtL2NDZ1lFQTBpdXEKdmRPSUtsTXFzOUVQcWdUZ2lZbXRzd0x1Q1djL1ZjWnpWTm5UZWcrYnBSajFxVmdWMTNyOTB6RTAyYmtCU0g5NgorWlRvWEdVbGEzY1p4c0VKZStwRXFtc3RpMDI5NzJTUzI3ZHNodXFEU2NrdVJLM2RUTW1SVXRubXR1SkJJbFhHCnJhSGJ6aXhVL1lwR1o4VEtpdzFaYmhiS3ZLWTNUSGxlbWxET1VtY0NnWUJlVmY3N0U1TjZZbWdGeVgxMG5hcWoKeUp3SlVMeGY4VVFVMUQ4UHNaMlV4QkFmbm5XemJYRG1PbXVyUXhTSndrbmRWSS9jODlxQjBBVTVtYVczL1FaNwprTldmRSs2cjdUKzl1ckU1VU5LS0dQTmswbVYzSVNsVTlHTklhc3BHc1h1Q0NuMWpMa1owRktrS3czZ0R4TlFECjhSSU5Ob24xb09hNS9tTDk2VjhFOXdLQmdRQ1JXa3ZxbnZwRU0yS01IQ0ZXTjZ0RzArWkNzTnNKdTlOTXNrUWYKUWNzRlZ2Z1JGWk1JL0hlV29HUWRoS0dGbG5LeHZpREJyZCtKenhZekhackJIODQ4V2dnRlNMeWw1QzFnL0ZDcApEbEZMZWJNMCs2TTVNbm1qMnAvY0NnR0xLQzFkM3E3YWROKzgxbUl0TzAxNEJOMERrRWJ5WVdielU0MVpJWE54CkRFTzFMd0tCZ0FpNkhWblZTN3NyZFYrTnRGTk1FMXl4b1g2S2svZ09VZ2ZESzduN2kzL1dWUWFSTGw0Umh4aTUKbzljN0xTbmZFRXptQUhxR0RmNUE4a2hDR01tZ0xyNnZQbkV3bXNDMmo4ankvRnZIMkpPdnp1QW02NFNUM1IvUQpkUktZVXZhT0ZDc3J4bjZiVVdFZnl3L1ZDeDJPWlZmU1AwOHp5Ymx6TDJQWUhWclFHMjAyCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==[root@rancher-01 ~]#
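
The certificate fields in the kubeconfig are base64-encoded PEM blocks and can be decoded for inspection with standard tools. A quick sketch (the awk pattern assumes the single-line layout shown above):

awk '/certificate-authority-data/{print $2}' kube_config_cluster.yml | base64 -d | openssl x509 -noout -subject -dates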

Install the kubectl binary

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl-1.17.6

Check the version information

[root@rancher-01 ~]# kubectl version --client
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.6", GitCommit:"d32e40e20d167e103faf894261614c5b45c44198", GitTreeState:"clean", BuildDate:"2020-05-20T13:16:24Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
[root@rancher-01 ~]#

View cluster node information

[root@rancher-01 ~]# kubectl --kubeconfig kube_config_cluster.yml get nodes -o wide
NAME         STATUS   ROLES               AGE   VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION               CONTAINER-RUNTIME
rancher-01   Ready    controlplane,etcd   12m   v1.17.6   10.138.218.141   <none>        CentOS Linux 7 (Core)   3.10.0-957.27.2.el7.x86_64   docker://18.9.9
rancher-02   Ready    worker              12m   v1.17.6   10.138.218.144   <none>        CentOS Linux 7 (Core)   3.10.0-957.27.2.el7.x86_64   docker://18.9.9
rancher-03   Ready    worker              12m   v1.17.6   10.138.218.146   <none>        CentOS Linux 7 (Core)   3.10.0-957.27.2.el7.x86_64   docker://18.9.9
[root@rancher-01 ~]#
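
Instead of passing --kubeconfig to every command, the path can be exported once through kubectl's standard KUBECONFIG environment variable (a convenience sketch):

export KUBECONFIG=$PWD/kube_config_cluster.yml
kubectl get nodes -o wide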

View cluster component status

[root@rancher-01 ~]# kubectl --kubeconfig kube_config_cluster.yml get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@rancher-01 ~]#

List the namespaces

[root@rancher-01 ~]# kubectl --kubeconfig kube_config_cluster.yml get namespace
NAME              STATUS   AGE
default           Active   16m
ingress-nginx     Active   15m
kube-node-lease   Active   16m
kube-public       Active   16m
kube-system       Active   16m
[root@rancher-01 ~]#

View Pod status in the kube-system namespace

[root@rancher-01 ~]# kubectl --kubeconfig kube_config_cluster.yml get pods --namespace=kube-system -o wide
NAME                                      READY   STATUS      RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
canal-dgt4n                               2/2     Running     0          17m   10.138.218.146   rancher-03   <none>           <none>
canal-v9pkx                               2/2     Running     0          17m   10.138.218.141   rancher-01   <none>           <none>
canal-xdg2l                               2/2     Running     0          17m   10.138.218.144   rancher-02   <none>           <none>
coredns-7c5566588d-d9pvd                  1/1     Running     0          17m   10.42.0.3        rancher-03   <none>           <none>
coredns-7c5566588d-tzkvn                  1/1     Running     0          16m   10.42.2.4        rancher-02   <none>           <none>
coredns-autoscaler-65bfc8d47d-8drw8       1/1     Running     0          17m   10.42.2.3        rancher-02   <none>           <none>
metrics-server-6b55c64f86-tmbpr           1/1     Running     0          16m   10.42.2.2        rancher-02   <none>           <none>
rke-coredns-addon-deploy-job-nt4pd        0/1     Completed   0          17m   10.138.218.141   rancher-01   <none>           <none>
rke-ingress-controller-deploy-job-tnbqq   0/1     Completed   0          16m   10.138.218.141   rancher-01   <none>           <none>
rke-metrics-addon-deploy-job-t4jrv        0/1     Completed   0          17m   10.138.218.141   rancher-01   <none>           <none>
rke-network-plugin-deploy-job-fk8tc       0/1     Completed   0          17m   10.138.218.141   rancher-01   <none>           <none>
[root@rancher-01 ~]#
June 12, 2020
 

Rancher's role definitions for Kubernetes cluster nodes

https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/production/nodes-and-roles/
https://kubernetes.io/docs/concepts/overview/components/

etcd

Nodes with the etcd role run etcd, a consistent and highly available key-value store that holds the Kubernetes cluster's configuration data. etcd replicates this data to every node.
Note: in the UI, nodes with the etcd role are shown as "Unschedulable", meaning that by default no Pods will be scheduled onto them.

controlplane

Nodes with the controlplane role run the Kubernetes master components (excluding etcd, which is a separate role). These include kube-apiserver, kube-scheduler, kube-controller-manager, and cloud-controller-manager.
Note: in the UI, nodes with the controlplane role are shown as "Unschedulable", meaning that by default no Pods will be scheduled onto them.

worker

Nodes with the worker role run the Kubernetes node components, which include the kubelet, kube-proxy, and the container runtime.
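
In an RKE cluster.yml these roles are assigned per node through the role list; a minimal sketch matching the three-node layout used earlier (addresses and the user field are illustrative):

nodes:
  - address: 167.172.114.10
    user: root
    role: [controlplane, etcd]
  - address: 159.65.106.35
    user: root
    role: [worker]
  - address: 159.65.102.101
    user: root
    role: [worker]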

June 11, 2020
 

Rancher is a complete software stack for teams adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes clusters across any infrastructure, while providing DevOps teams with integrated tools for running containerized workloads.

Disable SELinux

[root@rancher ~]# sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
[root@rancher ~]# setenforce 0
[root@rancher ~]# getenforce 
Permissive
[root@rancher ~]#

Install the Docker runtime

[root@rancher ~]# curl https://releases.rancher.com/install-docker/18.09.sh | sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 15521  100 15521    0     0  92374      0 --:--:-- --:--:-- --:--:-- 92940
+ '[' centos = redhat ']'
+ sh -c 'yum install -y -q yum-utils'
Delta RPMs disabled because /usr/bin/applydeltarpm not installed.
warning: /var/cache/yum/x86_64/7/updates/packages/yum-utils-1.1.31-54.el7_8.noarch.rpm: Header V3 RSA/SHA256 Signature, key ID f4a80eb5: NOKEY
Public key for yum-utils-1.1.31-54.el7_8.noarch.rpm is not installed
Importing GPG key 0xF4A80EB5:
 Userid     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
 Fingerprint: 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 Package    : centos-release-7-6.1810.2.el7.centos.x86_64 (installed)
 From       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
+ sh -c 'yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo'
Loaded plugins: fastestmirror
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
+ '[' stable '!=' stable ']'
+ sh -c 'yum makecache fast'
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.keystealth.org
 * extras: mirror.fileplanet.com
 * updates: mirror.web-ster.com
base                                                                                                                                                     | 3.6 kB  00:00:00     
docker-ce-stable                                                                                                                                         | 3.5 kB  00:00:00     
extras                                                                                                                                                   | 2.9 kB  00:00:00     
updates                                                                                                                                                  | 2.9 kB  00:00:00     
(1/2): docker-ce-stable/x86_64/updateinfo                                                                                                                |   55 B  00:00:00     
(2/2): docker-ce-stable/x86_64/primary_db                                                                                                                |  44 kB  00:00:00     
Metadata Cache Created
+ sh -c 'yum install -y -q docker-ce-18.09.9 docker-ce-cli-18.09.9'
warning: /var/cache/yum/x86_64/7/docker-ce-stable/packages/containerd.io-1.2.13-3.2.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
Public key for containerd.io-1.2.13-3.2.el7.x86_64.rpm is not installed
Importing GPG key 0x621E9F35:
 Userid     : "Docker Release (CE rpm) <docker@docker.com>"
 Fingerprint: 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
 From       : https://download.docker.com/linux/centos/gpg
setsebool:  SELinux is disabled.
+ '[' -d /run/systemd/system ']'
+ sh -c 'service docker start'
Redirecting to /bin/systemctl start docker.service
+ sh -c 'docker version'
Client:
 Version:           18.09.9
 API version:       1.39
 Go version:        go1.11.13
 Git commit:        039a7df9ba
 Built:             Wed Sep  4 16:51:21 2019
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.9
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.11.13
  Git commit:       039a7df
  Built:            Wed Sep  4 16:22:32 2019
  OS/Arch:          linux/amd64
  Experimental:     false

If you would like to use Docker as a non-root user, you should now consider
adding your user to the "docker" group with something like:

  sudo usermod -aG docker your-user

Remember that you will have to log out and back in for this to take effect!

WARNING: Adding a user to the "docker" group will grant the ability to run
         containers which can be used to obtain root privileges on the
         docker host.
         Refer to https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
         for more information.

[root@rancher ~]#
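
The install script starts the Docker service but does not necessarily enable it at boot, so enabling it explicitly is a sensible follow-up (a sketch):

systemctl enable docker
systemctl is-enabled docker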

List of install scripts for available Docker versions

https://github.com/rancher/install-docker

Configure the DNS record

rancher.bcoc.site ----> 167.71.149.159
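
Since the --acme-domain option below depends on this record, it is worth confirming that the A record has propagated before starting the container (a sketch using dig; nslookup works as well):

dig +short rancher.bcoc.site
# expected: 167.71.149.159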

Install Rancher with persistent storage and a Let's Encrypt certificate

docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  -v /opt/rancher:/var/lib/rancher \
  rancher/rancher:latest \
  --acme-domain rancher.bcoc.site
  
[root@rancher ~]# docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
[root@rancher ~]# docker container ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
[root@rancher ~]# 
[root@rancher ~]# docker run -d --restart=unless-stopped \
>   -p 80:80 -p 443:443 \
>   -v /opt/rancher:/var/lib/rancher \
>   rancher/rancher:latest \
>   --acme-domain rancher.bcoc.site
Unable to find image 'rancher/rancher:latest' locally
latest: Pulling from rancher/rancher
23884877105a: Pull complete 
bc38caa0f5b9: Pull complete 
2910811b6c42: Pull complete 
36505266dcc6: Pull complete 
99447ff7670f: Pull complete 
879c87dc86fd: Pull complete 
5b954e5aebf8: Pull complete 
664e1faf26b5: Pull complete 
bf7ac75d932b: Pull complete 
7e972d16ff5b: Pull complete 
08314b1e671c: Pull complete 
d5ce20b3d070: Pull complete 
20e75cd9c8e9: Pull complete 
80daa2770be8: Pull complete 
7fb927855713: Pull complete 
af20d79674f1: Pull complete 
d6a9086242eb: Pull complete 
887a8f050cee: Pull complete 
834df47e622f: Pull complete 
Digest: sha256:25ab51f5366ee7b7add66bc41203eac4b8654386630432ac4f334f69f8baf706
Status: Downloaded newer image for rancher/rancher:latest
7b54dbd549650b332c9ded7904e044774ddce775f54e3f6802d22f9c2e626057
[root@rancher ~]#

View the running Rancher container

[root@rancher ~]# docker container ps
CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                                      NAMES
7b54dbd54965        rancher/rancher:latest   "entrypoint.sh --acm…"   20 seconds ago      Up 19 seconds       0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   recursing_joliot
[root@rancher ~]#
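
The Let's Encrypt certificate is requested on first startup; progress can be followed in the container logs (recursing_joliot is the auto-generated container name shown above):

docker logs -f recursing_joliot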

Log in to the web console and set a password for the default admin user

Confirm the web console access URL

Console main page

View the HTTPS certificate details

Create the cluster configuration

Cluster configuration details

Generate the cluster node registration commands by node role

Run on one or more nodes with Docker installed

sudo docker run -d --privileged --restart=unless-stopped --net=host \
-v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.4.4 \
--server https://rancher.bcoc.site --token 7lmgztttzn7z2l8w6t4xhdz9gz2l7rpks6x7gc8222pjddt2mxlwcp \
--etcd --controlplane --worker

Run on rancher-01

[root@rancher-01 ~]# sudo docker run -d --privileged --restart=unless-stopped --net=host \
> -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.4.4 \
> --server https://rancher.bcoc.site --token 7lmgztttzn7z2l8w6t4xhdz9gz2l7rpks6x7gc8222pjddt2mxlwcp \
> --etcd --controlplane --worker
Unable to find image 'rancher/rancher-agent:v2.4.4' locally
v2.4.4: Pulling from rancher/rancher-agent
23884877105a: Pull complete 
bc38caa0f5b9: Pull complete 
2910811b6c42: Pull complete 
36505266dcc6: Pull complete 
839286d9c3a6: Pull complete 
8a1ba646e5a3: Pull complete 
4917caa38753: Pull complete 
b56094248bdf: Pull complete 
77f08dadb4eb: Pull complete 
d925a4b78308: Pull complete 
Digest: sha256:a6b416d7e5f89d28f8f8a54472cabe656378bc8c1903d08e1c2e9e453cdab1ff
Status: Downloaded newer image for rancher/rancher-agent:v2.4.4
eea306867dca30ad9f70dcd764e723fec2b10239212205535ab83f24fc6827ed
[root@rancher-01 ~]#

Run on rancher-02

[root@rancher-02 ~]# sudo docker run -d --privileged --restart=unless-stopped --net=host \
> -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.4.4 \
> --server https://rancher.bcoc.site --token 7lmgztttzn7z2l8w6t4xhdz9gz2l7rpks6x7gc8222pjddt2mxlwcp \
> --etcd --controlplane --worker
Unable to find image 'rancher/rancher-agent:v2.4.4' locally
v2.4.4: Pulling from rancher/rancher-agent
23884877105a: Pull complete 
bc38caa0f5b9: Pull complete 
2910811b6c42: Pull complete 
36505266dcc6: Pull complete 
839286d9c3a6: Pull complete 
8a1ba646e5a3: Pull complete 
4917caa38753: Pull complete 
b56094248bdf: Pull complete 
77f08dadb4eb: Pull complete 
d925a4b78308: Pull complete 
Digest: sha256:a6b416d7e5f89d28f8f8a54472cabe656378bc8c1903d08e1c2e9e453cdab1ff
Status: Downloaded newer image for rancher/rancher-agent:v2.4.4
1f84c5b8afa35475fada986834458c08c565ff7d2b3dd4965a55a2439036e45b
[root@rancher-02 ~]#

The web console shows the cluster being provisioned

The cluster was created successfully

June 10, 2020
 

Install the Apache and Subversion services

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# yum install httpd subversion mod_dav_svn mariadb-server mariadb apr-util-mysql

Installed:
  apr-util-mysql.x86_64 0:1.5.2-6.el7                httpd.x86_64 0:2.4.6-93.el7.centos                 
  mariadb.x86_64 1:5.5.65-1.el7                      mariadb-server.x86_64 1:5.5.65-1.el7               
  mod_dav_svn.x86_64 0:1.7.14-14.el7                 subversion.x86_64 0:1.7.14-14.el7                  

Dependency Installed:
  apr.x86_64 0:1.4.8-5.el7                            apr-util.x86_64 0:1.5.2-6.el7                     
  centos-logos.noarch 0:70.0.6-3.el7.centos           gnutls.x86_64 0:3.3.29-9.el7_6                    
  httpd-tools.x86_64 0:2.4.6-93.el7.centos            libaio.x86_64 0:0.3.109-13.el7                    
  libmodman.x86_64 0:2.0.1-8.el7                      libproxy.x86_64 0:0.4.11-11.el7                   
  mailcap.noarch 0:2.1.41-2.el7                       neon.x86_64 0:0.30.0-4.el7                        
  nettle.x86_64 0:2.7.1-8.el7                         pakchois.x86_64 0:0.4-10.el7                      
  perl.x86_64 4:5.16.3-295.el7                        perl-Carp.noarch 0:1.26-244.el7                   
  perl-Compress-Raw-Bzip2.x86_64 0:2.061-3.el7        perl-Compress-Raw-Zlib.x86_64 1:2.061-4.el7       
  perl-DBD-MySQL.x86_64 0:4.023-6.el7                 perl-DBI.x86_64 0:1.627-4.el7                     
  perl-Data-Dumper.x86_64 0:2.145-3.el7               perl-Encode.x86_64 0:2.51-7.el7                   
  perl-Exporter.noarch 0:5.68-3.el7                   perl-File-Path.noarch 0:2.09-2.el7                
  perl-File-Temp.noarch 0:0.23.01-3.el7               perl-Filter.x86_64 0:1.49-3.el7                   
  perl-Getopt-Long.noarch 0:2.40-3.el7                perl-HTTP-Tiny.noarch 0:0.033-3.el7               
  perl-IO-Compress.noarch 0:2.061-2.el7               perl-Net-Daemon.noarch 0:0.48-5.el7               
  perl-PathTools.x86_64 0:3.40-5.el7                  perl-PlRPC.noarch 0:0.2020-14.el7                 
  perl-Pod-Escapes.noarch 1:1.04-295.el7              perl-Pod-Perldoc.noarch 0:3.20-4.el7              
  perl-Pod-Simple.noarch 1:3.28-4.el7                 perl-Pod-Usage.noarch 0:1.63-3.el7                
  perl-Scalar-List-Utils.x86_64 0:1.27-248.el7        perl-Socket.x86_64 0:2.010-5.el7                  
  perl-Storable.x86_64 0:2.45-3.el7                   perl-Text-ParseWords.noarch 0:3.29-4.el7          
  perl-Time-HiRes.x86_64 4:1.9725-3.el7               perl-Time-Local.noarch 0:1.2300-2.el7             
  perl-constant.noarch 0:1.27-2.el7                   perl-libs.x86_64 4:5.16.3-295.el7                 
  perl-macros.x86_64 4:5.16.3-295.el7                 perl-parent.noarch 1:0.225-244.el7                
  perl-podlators.noarch 0:2.5.1-3.el7                 perl-threads.x86_64 0:1.87-4.el7                  
  perl-threads-shared.x86_64 0:1.43-6.el7             subversion-libs.x86_64 0:1.7.14-14.el7            
  trousers.x86_64 0:0.3.14-2.el7                     

Dependency Updated:
  mariadb-libs.x86_64 1:5.5.65-1.el7 

View the DBD MySQL driver module info

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# yum info apr-util-mysql
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirror.keystealth.org
 * extras: repos-lax.psychz.net
 * updates: mirrors.xtom.com
Installed Packages
Name        : apr-util-mysql
Arch        : x86_64
Version     : 1.5.2
Release     : 6.el7
Size        : 24 k
Repo        : installed
From repo   : base
Summary     : APR utility library MySQL DBD driver
URL         : http://apr.apache.org/
License     : ASL 2.0
Description : This package provides the MySQL driver for the apr-util DBD
            : (database abstraction) interface.

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# rpm -lq apr-util-mysql
/usr/lib64/apr-util-1/apr_dbd_mysql-1.so
/usr/lib64/apr-util-1/apr_dbd_mysql.so
[root@centos-s-1vcpu-1gb-sfo3-01 ~]#

Configure the MySQL service and create the database and table

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# systemctl enable mariadb
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
[root@centos-s-1vcpu-1gb-sfo3-01 ~]# systemctl start mariadb
[root@centos-s-1vcpu-1gb-sfo3-01 ~]#
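
Before creating the database, the fresh installation can optionally be hardened with the bundled helper script (sets a root password, drops the test database, and so on):

mysql_secure_installation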

Create the database

MariaDB [(none)]> create database subversion;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> grant SELECT, INSERT, UPDATE, DELETE on subversion.* to apache@localhost;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> set password for apache@localhost=password('apachepwd');
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]>

Create the table

MariaDB [(none)]> use subversion;
Database changed
MariaDB [subversion]> use subversion;
Database changed
MariaDB [subversion]> create table authn (
    -> username varchar(255) not null,
    -> password varchar(255),
    -> status varchar(255),
    -> primary key (username)
    -> );
Query OK, 0 rows affected (0.01 sec)

MariaDB [subversion]>

Insert test data

Generate a password hash (an encryption function can be specified)

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# htpasswd -nb user1 123456
user1:$apr1$hyGT4jgm$xCWktYtKdOZ.y59Zo.t7C1

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# 

MariaDB [subversion]> INSERT INTO `authn` (`username`, `password`, `status`) 
    -> VALUES('user1', '$apr1$hyGT4jgm$xCWktYtKdOZ.y59Zo.t7C1', 'ok');
Query OK, 1 row affected (0.00 sec)

MariaDB [subversion]>

View the table data

MariaDB [subversion]> select * from authn;
+----------+---------------------------------------+--------+
| username | password                              | status |
+----------+---------------------------------------+--------+
| user1    | $apr1$hyGT4jgm$xCWktYtKdOZ.y59Zo.t7C1 | ok     |
+----------+---------------------------------------+--------+
1 row in set (0.00 sec)

MariaDB [subversion]>
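
Additional users follow the same pattern: generate an htpasswd-style hash and insert a row. A hedged sketch (user2 and its password are purely illustrative):

# extract the hash portion after the colon, then insert it via the mysql client
HASH=$(htpasswd -nb user2 's3cret' | cut -d: -f2)
mysql -u root subversion -e "INSERT INTO authn (username, password, status) VALUES ('user2', '$HASH', 'ok');"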

Password encryption function reference

https://dev.mysql.com/doc/refman/5.6/en/encryption-functions.html#function_password

Create the repository

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# mkdir /var/www/svn
[root@centos-s-1vcpu-1gb-sfo3-01 ~]# cd /var/www/svn/
[root@centos-s-1vcpu-1gb-sfo3-01 svn]# svnadmin create test
[root@centos-s-1vcpu-1gb-sfo3-01 svn]#
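
Commits over HTTP are performed by the httpd worker processes, so the repository must be writable by the apache user (assuming the default apache:apache account on CentOS 7):

chown -R apache:apache /var/www/svn/test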

Configure the Apache environment

View the installed related modules

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# ls /etc/httpd/modules/ |grep dbd
mod_authn_dbd.so
mod_authz_dbd.so
mod_dbd.so
[root@centos-s-1vcpu-1gb-sfo3-01 ~]# ls /etc/httpd/modules/ |grep socache
mod_authn_socache.so
mod_cache_socache.so
mod_socache_dbm.so
mod_socache_memcache.so
mod_socache_shmcb.so
[root@centos-s-1vcpu-1gb-sfo3-01 ~]#

mod_authn_dbd module configuration reference

http://httpd.apache.org/docs/2.4/mod/mod_authn_dbd.html
http://httpd.apache.org/docs/2.4/mod/mod_dbd.html
http://httpd.apache.org/docs/2.4/mod/mod_authn_socache.html

Set the server name

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# vi /etc/httpd/conf/httpd.conf
ServerName 64.227.106.245

Add a configuration file

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# vi /etc/httpd/conf.d/repository.conf
# mod_dbd configuration
# UPDATED to include authentication caching
DBDriver mysql
DBDParams "host=localhost port=3306 dbname=subversion user=apache pass=apachepwd"

DBDMin  4
DBDKeep 8
DBDMax  20
DBDExptime 300

<Location /repos>
  DAV svn
  SVNParentPath /var/www/svn

  # mod_authn_core and mod_auth_basic configuration
  # for mod_authn_dbd
  AuthType Basic
  AuthName "Subversion repository"
  
  # To cache credentials, put socache ahead of dbd here
  AuthBasicProvider socache dbd

  # Also required for caching: tell the cache to cache dbd lookups!
  AuthnCacheProvideFor dbd
  AuthnCacheContext my-server
  
  SVNPathAuthz off

  # Authorization: Authenticated users only
  Require valid-user
  
  # mod_authn_dbd SQL query to authenticate a user
  AuthDBDUserPWQuery "SELECT password FROM authn WHERE username = %s"
</Location>
[root@centos-s-1vcpu-1gb-sfo3-01 ~]# apachectl -t
Syntax OK
[root@centos-s-1vcpu-1gb-sfo3-01 ~]#
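
The configuration relies on mod_dbd, mod_authn_dbd, mod_authn_socache and mod_dav_svn; on CentOS 7 these are loaded automatically from /etc/httpd/conf.modules.d/, which can be double-checked (a sketch):

httpd -M | grep -E 'dbd|socache|dav_svn'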

Start the Apache service

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# systemctl enable httpd
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@centos-s-1vcpu-1gb-sfo3-01 ~]# systemctl start httpd
[root@centos-s-1vcpu-1gb-sfo3-01 ~]#

Check listening ports

[root@centos-s-1vcpu-1gb-sfo3-01 ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      969/master          
tcp        0      0 0.0.0.0:3306            0.0.0.0:*               LISTEN      1586/mysqld         
tcp        0      0 0.0.0.0:111             0.0.0.0:*               LISTEN      1/systemd           
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      1018/sshd           
tcp6       0      0 ::1:25                  :::*                    LISTEN      969/master          
tcp6       0      0 :::111                  :::*                    LISTEN      1/systemd           
tcp6       0      0 :::80                   :::*                    LISTEN      12198/httpd         
tcp6       0      0 :::22                   :::*                    LISTEN      1018/sshd           
[root@centos-s-1vcpu-1gb-sfo3-01 ~]#
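
If firewalld is active on the host (an assumption; it is disabled on some cloud images), port 80 must also be opened:

firewall-cmd --permanent --add-service=http
firewall-cmd --reload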

Access the repository in a browser to verify authentication

http://64.227.106.245/repos/test
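
The same credentials also work for a command-line checkout (a sketch; the client prompts for user1's password):

svn checkout http://64.227.106.245/repos/test --username user1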