1.1. Pre-deployment preparation

  1. Create a dedicated namespace, e.g. hengshi
kubectl create namespace hengshi
  2. Download the k8s yaml deployment files for the version you are installing.

| Version | Deployment file | Install scale | Components | Dependencies |
| ------- | --------------- | ------------- | ---------- | ------------ |
| 3.x | k8s | single-node | metadb, engine, hengshi | |
| 4.0.x | k8s | single-node | metadb, engine, hengshi, minio | |
| 4.1.x | k8s | single-node | metadb, engine, hengshi, minio, redis, flink | |
| 4.2.x | k8s | single-node | metadb, engine, hengshi, minio, redis, flink | |
| 4.3.x | k8s | single-node | metadb, engine, hengshi, minio, redis, flink | |
| 4.4.x | k8s | single-node | metadb, engine, hengshi, minio, redis, flink | |
| 3.x | k8s | cluster | metadb, engine, hengshi | zookeeper |
| 4.0.x | k8s | cluster | metadb, engine, hengshi, minio | zookeeper |
| 4.1.x | k8s | cluster | metadb, engine, hengshi, minio, redis, flink | zookeeper |
| 4.2.x | k8s | cluster | metadb, engine, hengshi, minio, redis, flink | zookeeper |
| 4.3.x | k8s | cluster | metadb, engine, hengshi, minio, redis, flink | zookeeper |
| 4.4.x | k8s | cluster | metadb, engine, hengshi, minio, redis, flink | zookeeper |
  3. Load the offline images and update the image addresses
wget https://download.hengshi.com/releases/hengshi-sense-xxx.tar.gz
docker load -i hengshi-sense-xxx.tar.gz
# Except for gpdb, whose image address differs from the others, replace the image of every other component (hengshi|metadb|redis|flink|minio) with the locally loaded image (prefixed with hengshi-sense) produced by docker load
# hengshi example
image: hengshi-sense:xxxx
# gpdb example
image: gpdb:xxxx
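
A quick way to confirm the images were loaded before editing the yaml files (a minimal check; the exact names and tags depend on the release tarball):

# list the locally loaded images
docker images | grep -E 'hengshi-sense|gpdb'
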
  4. Replace the $(POD_NAMESPACE) variable in gpdb.yaml with the current namespace, e.g. hengshi
sed -i 's/$(POD_NAMESPACE)/hengshi/g' gpdb.yaml
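
To confirm nothing was missed, grep for any remaining occurrences (a minimal sanity check):

# prints only the confirmation message once every occurrence has been replaced
grep -n 'POD_NAMESPACE' gpdb.yaml || echo "POD_NAMESPACE fully replaced"
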
  5. Modify the PVCs

  The volumes of the following services need to be modified (a reference excerpt is sketched after the file list):

    • change storageClassName: csi-rbd to the storageClass available in your cluster
    • change storage: to the storage size required by each service
metadb.yaml
gpdb.yaml
zookeeper.yaml
redis.yaml
minio.yaml
flink.yaml
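
For reference, the fields to edit usually sit in each manifest's volumeClaimTemplates (or PersistentVolumeClaim) section. A minimal excerpt, assuming the claim is named data and 100Gi is the desired size (the actual claim names and sizes in metadb.yaml, gpdb.yaml, etc. may differ):

volumeClaimTemplates:
  - metadata:
      name: data                     # claim name used by the manifest (illustrative)
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: csi-rbd      # replace with your cluster's storageClass
      resources:
        requests:
          storage: 100Gi             # set the size required by this service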

1.2. engine

1.2.1. Deployment (engine)

  1. Initialize and start the engine; wait for the gpdb pods to come up after the apply (a readiness check is sketched after this list)
kubectl -n hengshi apply -f gpdb.yaml
kubectl -n hengshi exec -it master-0 -- /entrypoint.sh -m initsystem
kubectl -n hengshi exec -it master-0 -- /entrypoint.sh -m startsystem
  • For gpdb 6.2.1.1 and earlier versions, after running the commands above you also need to run the following commands; they only need to be executed once, at initialization.

    kubectl -n hengshi exec -it master-0 -- /bin/bash -c "source ~/.bashrc; /opt/hengshi/bin/engine.sh config"
    kubectl -n hengshi exec -it master-0 -- /bin/bash -c "source ~/.bashrc; psql -c \"ALTER USER \${GREENPLUM_USR} WITH SUPERUSER LOGIN PASSWORD '\${GREENPLUM_PWD}'\""
    
  • If you need to change the gpdb passwords, they must be changed in two places:

    • gpdb.yaml
      GREENPLUM_PWD: hengshi202020
      GREENPLUM_QUERY_PWD: query202020
      GREENPLUM_ETL_PWD: etl202020
      
    • configmap.yaml
      HS_ENGINE_PWD: hengshi202020
      ENGINE_QUERY_PASSWORD: query202020
      ENGINE_ETL_PASSWORD: etl202020
      
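Before running initsystem, make sure master-0 and the segment pods created by gpdb.yaml are Running; a minimal check:

# watch until master-0 and all segment pods report Running / READY
kubectl -n hengshi get pods -w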

1.3. hengshi

Two example configurations for external access are shown below; pick whichever one suits your needs.

1.3.1. nodePort

  1. Expose the hengshi service via NodePort (the default); a port lookup example follows the manifest
    # Client service for connecting to any Hengshi Service.
    apiVersion: v1
    kind: Service
    metadata:
      name: hengshi
    spec:
      selector:
        hsapp: hengshi-sense
        hsrole: hengshi
      ports:
      - protocol: TCP
        name: "8080"
        port: 8080
        targetPort: 8080
        #nodePort: 38080 # port reachable from outside the cluster
      - protocol: TCP
        name: "5005"
        port: 5005
        targetPort: 5005
      - protocol: TCP
        name: "11111"
        port: 11111
        targetPort: 11111
      type: NodePort
    
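After applying the service, look up the nodePort that Kubernetes assigned (or the fixed 38080 if the nodePort line above is uncommented) and combine it with any node IP:

# the PORT(S) column shows the 8080:<nodePort>/TCP mapping
kubectl -n hengshi get svc hengshi
# then open http://<any-node-ip>:<nodePort>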

1.3.2. ingress

  1. Expose the hengshi service externally via Ingress (optional); a reachability test follows the manifest
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: hengshi-sense
      namespace: hengshi
      annotations:
        ingress.kubernetes.io/force-ssl-redirect: "false"
        nginx.ingress.kubernetes.io/proxy-connect-timeout: "90"
        nginx.ingress.kubernetes.io/proxy-send-timeout: "90"
        nginx.ingress.kubernetes.io/proxy-read-timeout: "90"
    spec:
      ingressClassName: nginx # the ingressClass of your cluster
      rules:
        - host: xxxx.hengshi.com # eg. www.hengshi.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: hengshi-sense
                    port:
                      number: 8080
    
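Once DNS (or a local hosts entry) points the host at your nginx ingress controller, a quick reachability test might look like this (a sketch; <ingress-controller-ip> is a placeholder for your controller's address):

curl -H "Host: xxxx.hengshi.com" http://<ingress-controller-ip>/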
kubectl -n hengshi apply -f configmap.yaml
kubectl -n hengshi apply -f service.yaml
kubectl -n hengshi apply -f zookeeper.yaml
kubectl -n hengshi apply -f metadb.yaml
kubectl -n hengshi apply -f minio.yaml
kubectl -n hengshi apply -f redis.yaml
kubectl -n hengshi apply -f flink.yaml
kubectl -n hengshi apply -f hengshi.yaml
kubectl -n hengshi apply -f ingress.yaml # apply only if you chose the ingress option
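
After everything has been applied, wait for the rollout to finish and check that all pods come up (the deployment name hengshi-sense matches the one used in the maintenance steps below):

kubectl -n hengshi rollout status deployment/hengshi-sense
kubectl -n hengshi get pods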

1.4. Basic operations and maintenance

1.4.1. Safely stopping the database services (metadb, engine)

kubectl -n hengshi exec -it metadb-0 -- /docker-entrypoint.sh stop metadb single
kubectl -n hengshi exec -it master-0 -- /entrypoint.sh -m stopsystem

1.4.2. Restart (engine)

kubectl -n hengshi exec -it master-0 -- /entrypoint.sh gpstop -r
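
After the restart you can verify the cluster state; a minimal check, assuming the standard Greenplum tooling shipped in the engine image:

# print a summary of master and segment status
kubectl -n hengshi exec -it master-0 -- /bin/bash -c "source ~/.bashrc; gpstate -s"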

1.4.3. Scaling out (engine)

  1. Modify StatefulSet/segment
kubectl -n hengshi edit StatefulSet/segment
  • set the SEGMENTS field to the appnames of all segments after scaling out (e.g. scaling from 2 to 4)
  • set replicas: of StatefulSet/segment to the total number of segments after scaling out
apiVersion: v1
kind: ConfigMap
metadata:
  name: greenplum
data:
  MASTER: "master-0"
  SEGMENTS: |  # list of the 4 segments
    segment-0
    segment-1
    segment-2
    segment-3
...
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: segment
spec:
  replicas: 4 # e.g. 4 segments after scaling out
  • then run kubectl -n hengshi apply -f gpdb.yaml
  • then wait until all newly added and existing segment pods are in the Running state

  • write new_host_file (the list of newly added segments; e.g. the original 2 segments (0,1) are scaled out to 4 segments (0,1,2,3))

kubectl -n hengshi exec -it master-0 -- /bin/bash
cd /opt/hsdata/ && mkdir expand && cd expand
cat <<EOF > new_host_file
segment-2
segment-3
EOF
  2. Run the expansion (a post-expansion status check is sketched after this list)
kubectl -n hengshi exec -it master-0 -- /bin/bash
cd /opt/hsdata/expand
psql postgres -c "create database expand"
gpexpand -f new_host_file  -D expand
  >y
  >0 # a gpexpand_inputfile_yyyymmdd_xxxxxx file is then generated
gpexpand -i gpexpand_inputfile_yyyymmdd_xxxxxx -D expand
  3. Rolling back on failure (engine)
    kubectl -n hengshi exec -it master-0 -- /bin/bash
    cd /opt/hsdata/expand
    gpstart -aR
    gpexpand -r -D expand
    
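To confirm how the expansion went, the status recorded by gpexpand can be queried; a sketch, assuming the status tables were created in the expand database passed via -D:

kubectl -n hengshi exec -it master-0 -- /bin/bash
psql expand -c "SELECT * FROM gpexpand.status"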

1.4.4. Engine data migration

  1. Export the data from the old engine
 # dump db data
kubectl exec -it $old-gp -- /bin/bash
source $HS_HOME/engine-cluster
pg_dumpall > /opt/hsdata/engine.back.sql
exit
  2. Copy the data to the new machine
 # cp db data
kubectl cp $old-gp:/opt/hsdata/engine.back.sql engine.back.sql
kubectl cp engine.back.sql $master-0:/opt/hsdata/engine.back.sql
  3. Import the data into the new environment
 # load db data
kubectl exec -it $master-0 -- /bin/bash
source $HS_HOME/engine-cluster
psql postgres < /opt/hsdata/engine.back.sql
rm /opt/hsdata/engine.back.sql
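
A quick sanity check after the import, run while still inside the master-0 shell (which objects are worth verifying depends on your data):

# list the databases to confirm the dump was restored
psql postgres -c "\l"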

1.5. Configuration for 3.x with gpdb5 (upgrading hengshi 2.x to 3.x without upgrading gpdb)

  • Edit the configuration file on master-0
kubectl -n hengshi exec -it master-0 -- /bin/bash
cd /opt/hengshi/conf; test -f hengshi-sense-env.sh || cp hengshi-sense-env.sh.sample hengshi-sense-env.sh
cat<<EOF >> hengshi-sense-env.sh
ENGINE_QUERY_USER=hengshi_query
ENGINE_QUERY_PASSWORD=query202020
ENGINE_QUERY_QUEUE=hengshi_query_queue
QUERY_QUEUE_ACTIVE_NUM=10
ENGINE_ETL_USER=hengshi_etl
ENGINE_ETL_PASSWORD=etl202020
ENGINE_ETL_QUEUE=hengshi_etl_queue
ETL_QUEUE_ACTIVE_NUM=4
EOF
  • Update the gpdb configuration
kubectl -n hengshi get pod
kubectl -n hengshi cp hengshi-sense-xxxxxxxxx-xxxxx:/opt/hengshi/bin/engine.sh engine.31.sh
kubectl -n hengshi cp engine.31.sh master-0:/opt/hengshi/bin/engine.31.sh
kubectl -n hengshi exec -it master-0 -- /bin/bash
cd /opt/hengshi/bin
./engine.31.sh config
  • Update the configmap
set_kv_config() {
    local config_file="$1"
    local param="$2"
    local val="$3"
    # append "param: val" at the end of the file if the key is not already present
    # (an existing key is left unchanged; make sure the appended line ends up under
    #  the data: section of configmap.yaml with matching indentation)
    grep -E "^\s*${param}\s*:" "${config_file}" > /dev/null \
                || sed -i "$ a ${param}: ${val}" "${config_file}"
}
set_kv_config configmap.yaml ENGINE_QUERY_USER hengshi_query
set_kv_config configmap.yaml ENGINE_QUERY_PASSWORD query202020
set_kv_config configmap.yaml ENGINE_QUERY_QUEUE hengshi_query_queue
set_kv_config configmap.yaml ENGINE_ETL_USER hengshi_etl
set_kv_config configmap.yaml ENGINE_ETL_PASSWORD etl202020
set_kv_config configmap.yaml ENGINE_ETL_QUEUE hengshi_etl_queue
kubectl -n hengshi apply -f configmap.yaml
  • Restart the hengshi pod
kubectl -n hengshi rollout restart deployment/hengshi-sense
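
To confirm the restart completed:

kubectl -n hengshi rollout status deployment/hengshi-sense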

1.6. Deploying as a single-node installation (POC)

1.6.1. Switch the configuration files to the single-node configuration

Before running the script, make sure that configmap.yaml, hengshi.yaml and the other configuration files are in the same directory as config_to_single.sh

./config_to_single.sh

1.6.2. Deploy the engine

Refer to the engine deployment steps in 1.2.1.

1.6.3. Deploy the single-node services

kubectl -n hengshi apply -f configmap.yaml
kubectl -n hengshi apply -f service.yaml
kubectl -n hengshi apply -f metadb.yaml
kubectl -n hengshi apply -f minio.yaml
kubectl -n hengshi apply -f redis.yaml
kubectl -n hengshi apply -f flink.yaml
kubectl -n hengshi apply -f hengshi.yaml
