TEM Deployment Configuration Files for Different Application Scenarios

TiDB is an open-source distributed relational database independently designed and developed by PingCAP. It can be deployed in different cluster topologies according to user requirements. This document describes cluster topology files for the following common usage scenarios to help users deploy TiDB clusters more efficiently.

Multi-Data-Center Deployment

A multi-data-center deployment usually consists of a production data center, an intra-city disaster recovery center, and a remote disaster recovery center, the architecture commonly known as "three data centers in two cities". It offers the highest level of availability and disaster tolerance: because the three data centers across the two cities are interconnected, if one data center fails or suffers a disaster, the remaining data centers keep running and can quickly take over all workloads, ensuring business continuity. With this deployment, TiDB places multiple replicas of the data across the three data centers. The following example topology file creates a 5-replica TiDB cluster and distributes the 5 replicas across 3 data centers.

# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/data/tidb_cluster/tidb-deploy"
  data_dir: "/data/tidb_cluster/tidb-data"

server_configs:
  tikv:
    server.grpc-compression-type: gzip
  pd:
    replication.location-labels:  ["dc","az","rack","host"]

pd_servers:
  - host: 10.63.10.10
    name: "pd-10"
  - host: 10.63.10.11
    name: "pd-11"
  - host: 10.63.10.12
    name: "pd-12"
  - host: 10.63.10.13
    name: "pd-13"
  - host: 10.63.10.14
    name: "pd-14"

tidb_servers:
  - host: 10.63.10.10
  - host: 10.63.10.11
  - host: 10.63.10.12
  - host: 10.63.10.13
  - host: 10.63.10.14

tikv_servers:
  - host: 10.63.10.30
    config:
      server.labels: { dc: "dc1", az: "az1", rack: "1", host: "30" }
  - host: 10.63.10.31
    config:
      server.labels: { dc: "dc1", az: "az2", rack: "2", host: "31" }
  - host: 10.63.10.32
    config:
      server.labels: { dc: "dc2", az: "az3", rack: "3", host: "32" }
  - host: 10.63.10.33
    config:
      server.labels: { dc: "dc2", az: "az4", rack: "4", host: "33" }
  - host: 10.63.10.34
    config:
      server.labels: { dc: "dc3", az: "az5", rack: "5", host: "34" }
      raftstore.raft-min-election-timeout-ticks: 1000
      raftstore.raft-max-election-timeout-ticks: 1200

monitoring_servers:
  - host: 10.63.10.60

grafana_servers:
  - host: 10.63.10.60

alertmanager_servers:
  - host: 10.63.10.60
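
Note that this topology file does not set the replica count itself, so with PD's default of 3 replicas the cluster would not actually keep 5 copies of the data. Below is a minimal sketch of raising the replica count to 5 with pd-ctl after deployment; the TiUP component version v7.5.0 is a placeholder, and the PD address comes from the topology above.

# Raise the default replica count from 3 to 5 (version and PD address are placeholders)
tiup ctl:v7.5.0 pd -u http://10.63.10.10:2379 config set max-replicas 5
# Confirm the replication settings, including max-replicas and location-labels
tiup ctl:v7.5.0 pd -u http://10.63.10.10:2379 config show replication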

Intra-City Dual-AZ Deployment

The intra-city dual-AZ solution deploys TiDB across two data centers (availability zones) in the same city while still meeting high-availability requirements. Compared with the multi-data-center deployment, it is less expensive and still provides good high availability and disaster tolerance: when one AZ becomes unavailable due to a failure or disaster, the other AZ can quickly take over the workloads and keep the business running. The following example topology file creates a 6-replica TiDB cluster and distributes the 6 replicas across 2 AZs.

# # Global variables are applied to all deployments and used as the default value of
# # the deployments if a specific deployment value is missing.
global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/data/tidb_cluster/tidb-deploy"
  data_dir: "/data/tidb_cluster/tidb-data"
server_configs:
  pd:
    replication.location-labels:  ["az","rack","host"]
pd_servers:
  - host: 10.63.10.10
    name: "pd-10"
  - host: 10.63.10.11
    name: "pd-11"
  - host: 10.63.10.12
    name: "pd-12"
tidb_servers:
  - host: 10.63.10.10
  - host: 10.63.10.11
  - host: 10.63.10.12
tikv_servers:
  - host: 10.63.10.30
    config:
      server.labels: { az: "east", rack: "east-1", host: "30" }
  - host: 10.63.10.31
    config:
      server.labels: { az: "east", rack: "east-2", host: "31" }
  - host: 10.63.10.32
    config:
      server.labels: { az: "east", rack: "east-3", host: "32" }
  - host: 10.63.10.33
    config:
      server.labels: { az: "west", rack: "west-1", host: "33" }
  - host: 10.63.10.34
    config:
      server.labels: { az: "west", rack: "west-2", host: "34" }
  - host: 10.63.10.35
    config:
      server.labels: { az: "west", rack: "west-3", host: "35" }
monitoring_servers:
  - host: 10.63.10.60
grafana_servers:
  - host: 10.63.10.60
alertmanager_servers:
  - host: 10.63.10.60

After the TiDB cluster has been deployed with the topology file above, you also need to prepare a JSON file similar to the following to configure the replica roles.

cat rule.json
[
  {
    "group_id": "pd",
    "group_index": 0,
    "group_override": false,
    "rules": [
      {
        "group_id": "pd",
        "id": "az-east",
        "start_key": "",
        "end_key": "",
        "role": "voter",
        "count": 3,
        "label_constraints": [
          {
            "key": "az",
            "op": "in",
            "values": [
              "east"
            ]
          }
        ],
        "location_labels": [
          "az",
          "rack",
          "host"
        ]
      },
      {
        "group_id": "pd",
        "id": "az-west-1",
        "start_key": "",
        "end_key": "",
        "role": "follower",
        "count": 2,
        "label_constraints": [
          {
            "key": "az",
            "op": "in",
            "values": [
              "west"
            ]
          }
        ],
        "location_labels": [
          "az",
          "rack",
          "host"
        ]
      },
      {
        "group_id": "pd",
        "id": "az-west-2",
        "start_key": "",
        "end_key": "",
        "role": "learner",
        "count": 1,
        "label_constraints": [
          {
            "key": "az",
            "op": "in",
            "values": [
              "west"
            ]
          }
        ],
        "location_labels": [
          "az",
          "rack",
          "host"
        ]
      }
    ]
  }
]

Finally, run a command similar to the following to make the replica role configuration take effect:

pd-ctl config placement-rules rule-bundle save --in="rule.json"
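
To check whether the rule bundle has taken effect, you can list the placement rules that PD currently stores. A small sketch, assuming pd-ctl is connected to the cluster in the same way as in the save command above:

# List all placement rules currently known to PD
pd-ctl config placement-rules show
# Or fetch the entire bundle of the "pd" rule group defined in rule.json
pd-ctl config placement-rules rule-bundle get pd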

Multi-Workload Consolidation Deployment

This deployment consolidates the databases of multiple business systems into a single TiDB cluster to make the most efficient use of resources. It combines the cgroup-based resource control and isolation capability provided by TiUP, the TiDB cluster deployment tool, with the Placement Rules in SQL feature of TiDB. The following example topology file deploys a TiDB cluster onto 3 physical hosts; each host runs 3 TiDB nodes and 3 TiKV nodes, isolated from each other through resource_control, and the TiDB and TiKV nodes on each host are divided into different resource pools by a label named resource_pool. After the cluster is created, you can use Placement Rules in SQL to assign each resource pool to a specific database and dedicate each database to a specific application.

global:
  user: "tidb"
  ssh_port: 22
server_configs:
  pd:
    replication.location-labels:  ["resource_pool","host"]
pd_servers:
  - host: 172.16.11.62
    name: "pd-62"
  - host: 172.16.11.63
    name: "pd-63"
  - host: 172.16.11.64
    name: "pd-64"
tidb_servers:
  - host: 172.16.11.62
    port: 4000
    status_port: 10080
    deploy_dir: "/home/tidb/tidb-deploy/tidb-4000"
    log_dir: "/home/tidb/tidb-deploy/tidb-4000/log"
    config:
      labels:
        resource_pool: pool1
        host: db62-1
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%
  - host: 172.16.11.62
    port: 4001
    status_port: 10081
    deploy_dir: "/home/tidb/tidb-deploy/tidb-4001"
    log_dir: "/home/tidb/tidb-deploy/tidb-4001/log"
    config:
      labels:
        resource_pool: pool2
        host: db62-2
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%
  - host: 172.16.11.62
    port: 4002
    status_port: 10082
    deploy_dir: "/home/tidb/tidb-deploy/tidb-4002"
    log_dir: "/home/tidb/tidb-deploy/tidb-4002/log"
    config:
      labels:
        resource_pool: pool3
        host: db62-3
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%     
  - host: 172.16.11.63
    port: 4000
    status_port: 10080
    deploy_dir: "/home/tidb/tidb-deploy/tidb-4000"
    log_dir: "/home/tidb/tidb-deploy/tidb-4000/log"
    config:
      labels:
        resource_pool: pool1
        host: db63-1
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%
  - host: 172.16.11.63
    port: 4001
    status_port: 10081
    deploy_dir: "/home/tidb/tidb-deploy/tidb-4001"
    log_dir: "/home/tidb/tidb-deploy/tidb-4001/log"
    config:
      labels:
        resource_pool: pool2
        host: db63-2
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%
  - host: 172.16.11.63
    port: 4002
    status_port: 10082
    deploy_dir: "/home/tidb/tidb-deploy/tidb-4002"
    log_dir: "/home/tidb/tidb-deploy/tidb-4002/log"
    config:
      labels:
        resource_pool: pool3
        host: db63-3
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%
  - host: 172.16.11.64
    port: 4000
    status_port: 10080
    deploy_dir: "/home/tidb/tidb-deploy/tidb-4000"
    log_dir: "/home/tidb/tidb-deploy/tidb-4000/log"
    config:
      labels:
        resource_pool: pool1
        host: db64-1
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%
  - host: 172.16.11.64
    port: 4001
    status_port: 10081
    deploy_dir: "/home/tidb/tidb-deploy/tidb-4001"
    log_dir: "/home/tidb/tidb-deploy/tidb-4001/log"
    config:
      labels:
        resource_pool: pool2
        host: db64-2
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%
  - host: 172.16.11.64
    port: 4002
    status_port: 10082
    deploy_dir: "/home/tidb/tidb-deploy/tidb-4002"
    log_dir: "/home/tidb/tidb-deploy/tidb-4002/log"
    config:
      labels:
        resource_pool: pool3
        host: db64-3
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%
tikv_servers:
  - host: 172.16.11.62
    port: 20160
    status_port: 20180
    deploy_dir: "/home/tidb/tidb-deploy/tikv-20160"
    data_dir: "/home/tidb/tidb-data/tikv-20160"
    log_dir: "/home/tidb/tidb-deploy/tikv-20160/log"
    config:
      server.labels:
        resource_pool: pool1
        host: kv62-1
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%
  - host: 172.16.11.62
    port: 20161
    status_port: 20181
    deploy_dir: "/home/tidb/tidb-deploy/tikv-20161"
    data_dir: "/home/tidb/tidb-data/tikv-20161"
    log_dir: "/home/tidb/tidb-deploy/tikv-20161/log"
    config:
      server.labels:
        resource_pool: pool2
        host: kv62-2
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%
  - host: 172.16.11.62
    port: 20162
    status_port: 20182
    deploy_dir: "/home/tidb/tidb-deploy/tikv-20162"
    data_dir: "/home/tidb/tidb-data/tikv-20162"
    log_dir: "/home/tidb/tidb-deploy/tikv-20162/log"
    config:
      server.labels:
        resource_pool: pool3
        host: kv62-3
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%
  - host: 172.16.11.63
    port: 20160
    status_port: 20180
    deploy_dir: "/home/tidb/tidb-deploy/tikv-20160"
    data_dir: "/home/tidb/tidb-data/tikv-20160"
    log_dir: "/home/tidb/tidb-deploy/tikv-20160/log"
    config:
      server.labels:
        resource_pool: pool1
        host: kv63-1
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%
  - host: 172.16.11.63
    port: 20161
    status_port: 20181
    deploy_dir: "/home/tidb/tidb-deploy/tikv-20161"
    data_dir: "/home/tidb/tidb-data/tikv-20161"
    log_dir: "/home/tidb/tidb-deploy/tikv-20161/log"
    config:
      server.labels:
        resource_pool: pool2
        host: kv63-2
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%      
  - host: 172.16.11.63
    port: 20162
    status_port: 20182
    deploy_dir: "/home/tidb/tidb-deploy/tikv-20162"
    data_dir: "/home/tidb/tidb-data/tikv-20162"
    log_dir: "/home/tidb/tidb-deploy/tikv-20162/log"
    config:
      server.labels:
        resource_pool: pool3
        host: kv63-3
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%
  - host: 172.16.11.64
    port: 20160
    status_port: 20180
    deploy_dir: "/home/tidb/tidb-deploy/tikv-20160"
    data_dir: "/home/tidb/tidb-data/tikv-20160"
    log_dir: "/home/tidb/tidb-deploy/tikv-20160/log"
    config:
      server.labels:
        resource_pool: pool1
        host: kv64-1
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%
  - host: 172.16.11.64
    port: 20161
    status_port: 20181
    deploy_dir: "/home/tidb/tidb-deploy/tikv-20161"
    data_dir: "/home/tidb/tidb-data/tikv-20161"
    log_dir: "/home/tidb/tidb-deploy/tikv-20161/log"
    config:
      server.labels:
        resource_pool: pool2
        host: kv64-2
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%
  - host: 172.16.11.64
    port: 20162
    status_port: 20182
    deploy_dir: "/home/tidb/tidb-deploy/tikv-20162"
    data_dir: "/home/tidb/tidb-data/tikv-20162"
    log_dir: "/home/tidb/tidb-deploy/tikv-20162/log"
    config:
      server.labels:
        resource_pool: pool3
        host: kv64-3
    resource_control:
      memory_limit: 4G
      cpu_quota: 200%

monitoring_servers:
  - host: 172.16.11.62
grafana_servers:
  - host: 172.16.11.62

Note: The following is an example of the commands that create placement policies and assign databases to specific placement policies.

CREATE PLACEMENT POLICY `pool_orders` LEADER_CONSTRAINTS="[+resource_pool=pool1]" FOLLOWER_CONSTRAINTS="{+resource_pool=pool2:1,+resource_pool=pool3:1}";
CREATE PLACEMENT POLICY `pool_item` LEADER_CONSTRAINTS="[+resource_pool=pool2]" FOLLOWER_CONSTRAINTS="{+resource_pool=pool1:1,+resource_pool=pool3:1}";
CREATE PLACEMENT POLICY `pool_queries` LEADER_CONSTRAINTS="[+resource_pool=pool3]" FOLLOWER_CONSTRAINTS="{+resource_pool=pool2:1,+resource_pool=pool1:1}";

ALTER DATABASE orders PLACEMENT POLICY=pool_orders;
ALTER DATABASE item PLACEMENT POLICY=pool_item;
ALTER DATABASE queries PLACEMENT POLICY=pool_queries;
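
To confirm that the resource_pool labels are visible to the cluster and that the policies are attached to the intended databases, you can query TiDB through the MySQL client. A sketch, assuming the mysql client is installed, a TiDB node from the topology above listens on 172.16.11.62:4000, and the passwordless root user is a placeholder:

# List the labels (such as resource_pool) reported by the TiKV stores
mysql -h 172.16.11.62 -P 4000 -u root -e "SHOW PLACEMENT LABELS;"
# Show all placement policies, the objects they are attached to, and the scheduling status
mysql -h 172.16.11.62 -P 4000 -u root -e "SHOW PLACEMENT;"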

Minimal Deployment

If you want to experience the smallest complete TiDB cluster topology on a single Linux server, to verify basic functionality, run a simple PoC, or simulate the deployment steps of a production environment, you can use the following YAML file to quickly create a TiDB cluster.

global:
 user: "tidb"
 ssh_port: 22
 deploy_dir: "/tidb-deploy"
 data_dir: "/tidb-data"

# # Monitored variables are applied to all the machines.
monitored:
 node_exporter_port: 9100
 blackbox_exporter_port: 9115

server_configs:
 tidb:
   instance.tidb_slow_log_threshold: 300
 tikv:
   readpool.storage.use-unified-pool: false
   readpool.coprocessor.use-unified-pool: true
 pd:
   replication.enable-placement-rules: true
   replication.location-labels: ["host"]
 tiflash:
   logger.level: "info"

pd_servers:
 - host: 10.0.1.1

tidb_servers:
 - host: 10.0.1.1

tikv_servers:
 - host: 10.0.1.1
   port: 20160
   status_port: 20180
   config:
     server.labels: { host: "logic-host-1" }

 - host: 10.0.1.1
   port: 20161
   status_port: 20181
   config:
     server.labels: { host: "logic-host-2" }

 - host: 10.0.1.1
   port: 20162
   status_port: 20182
   config:
     server.labels: { host: "logic-host-3" }

tiflash_servers:
 - host: 10.0.1.1

monitoring_servers:
 - host: 10.0.1.1

grafana_servers:
 - host: 10.0.1.1
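
Once the file is ready, the cluster can be deployed and started with TiUP. A minimal sketch, assuming the topology above is saved as topo.yaml, TiUP is installed on the control machine, and the cluster name tidb-test and version v7.5.0 are placeholders:

# Deploy the cluster (prompts for the password of the remote user)
tiup cluster deploy tidb-test v7.5.0 ./topo.yaml --user root -p
# Start all components and initialize a random root password for TiDB
tiup cluster start tidb-test --init
# Check the status of every component
tiup cluster display tidb-test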

HTAP Cluster Deployment

Hybrid Transactional and Analytical Processing (HTAP) is a core capability of TiDB, allowing customers to handle online transactional and online analytical workloads in a single stack. A TiDB cluster uses TiKV nodes with row-based storage to serve day-to-day online transactional workloads, and TiFlash nodes with columnar storage to serve analytical workloads. The following YAML file is an example cluster topology that contains both TiKV and TiFlash nodes.

global:
  user: "tidb"
  ssh_port: 22
  deploy_dir: "/tidb-deploy"
  data_dir: "/tidb-data"

server_configs:
  pd:
    replication.enable-placement-rules: true

pd_servers:
  - host: 10.0.1.4
  - host: 10.0.1.5
  - host: 10.0.1.6

tidb_servers:
  - host: 10.0.1.7
  - host: 10.0.1.8
  - host: 10.0.1.9

tikv_servers:
  - host: 10.0.1.1
  - host: 10.0.1.2
  - host: 10.0.1.3

tiflash_servers:
  - host: 10.0.1.11
    data_dir: /tidb-data/tiflash-9000
    deploy_dir: /tidb-deploy/tiflash-9000
  - host: 10.0.1.12
    data_dir: /tidb-data/tiflash-9000
    deploy_dir: /tidb-deploy/tiflash-9000
  - host: 10.0.1.13
    data_dir: /tidb-data/tiflash-9000
    deploy_dir: /tidb-deploy/tiflash-9000

monitoring_servers:
  - host: 10.0.1.10

grafana_servers:
  - host: 10.0.1.10

alertmanager_servers:
  - host: 10.0.1.10
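
Note that deploying TiFlash nodes alone does not replicate any data to them; TiFlash replicas have to be created per table. A sketch, assuming the mysql client, a TiDB node from the topology above at 10.0.1.7:4000, and an existing table test.t; the replica count 2 is only an illustration:

# Create 2 TiFlash (columnar) replicas for the table
mysql -h 10.0.1.7 -P 4000 -u root -e "ALTER TABLE test.t SET TIFLASH REPLICA 2;"
# Check replication progress; PROGRESS reaches 1 when the replicas are fully in sync
mysql -h 10.0.1.7 -P 4000 -u root -e "SELECT TABLE_SCHEMA, TABLE_NAME, AVAILABLE, PROGRESS FROM information_schema.tiflash_replica;"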

For more TiDB cluster deployment topology examples, see the official TiDB documentation.
