GIS Cloud Suite Migration

This page describes how to migrate the data of Service Instance, Service Node, File Manager, and Storage Resources from one GIS Cloud Suite environment to another.

New Environment Preparation

  • The licensed version of each product in the new environment should be higher than that in the old environment, and the license in the new environment should include all the modules of the license in the old environment.
  • The GIS Cloud Suite in the new environment should be a freshly created, clean installation. If the NFS in the new environment already contains data, delete that data first.
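
For example, assuming the NFS export used by the new environment is mounted at a hypothetical path such as /mnt/nfs_export, you can check that it is empty before installing:

    # Check the NFS export of the new environment; the path is a placeholder.
    ls -A /mnt/nfs_export
    # If it lists anything left over from a previous installation, remove it first,
    # e.g.: rm -rf /mnt/nfs_export/*   (make sure nothing in it is still needed)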

Migrate Service Instance, Service Node, and File Manager

Migrate Consul Data

  1. Get the NFS mounting directories of ‘consul-server-{index}’ from the old environment (the consul-server cluster has three nodes by default, so the data of all three directories needs to be migrated).

    Execute the following command on the Kubernetes Master machine of the old environment to get the paths of the NFS mounting directories of ‘consul-server-{index}’:

    kubectl get pvc -n {ns} | grep consul-server-{index} | awk '{print $3}' | xargs kubectl describe pv | grep Path

    Notes:

    1. Replace {ns} in the command with the name of your namespace.
    2. Execute the command three times, replacing {index} with 0, 1, and 2, to get the paths of the three directories.
  2. Copy the data from the three directories obtained in step 1 to the three corresponding directories of the new environment (run the command from step 1 in the new environment to get the paths of the NFS mounting directories of ‘consul-server-{index}’ there), as shown in the sketch after this list.
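
The following is a minimal sketch of steps 1 and 2, assuming the old and new NFS directories are both reachable from the machine where you run the copy; the NS variable, the placeholder paths, and the use of rsync are illustrative only:

    NS={ns}   # replace with the name of your namespace
    # Print the NFS paths of the three consul-server volumes in the old environment
    for index in 0 1 2; do
      kubectl get pvc -n $NS | grep consul-server-$index | awk '{print $3}' \
        | xargs kubectl describe pv | grep Path
    done

    # After noting the corresponding path in the new environment for each index,
    # copy the data; the paths below are placeholders (cp -a works as well):
    rsync -a /old/nfs/path/of/consul-server-0/ /new/nfs/path/of/consul-server-0/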

Migrate File Manager Data

  1. Get the NFS mounting directory of ‘gisapplication-data’ from the old environment.

    Execute the following command on the Kubernetes Master machine of the old environment to get the path of the NFS mounting directory of ‘gisapplication-data’:

    kubectl get pvc -n {ns} | grep gisapplication-data | awk '{print $3}' | xargs kubectl describe pv | grep Path

    Notes:

    Replace {ns} in the command with the name of your namespace.

  2. Copy the data from the directory obtained in step 1 to the corresponding directory of the new environment (run the command from step 1 in the new environment to get the path of the NFS mounting directory of ‘gisapplication-data’ there), as illustrated in the sketch after this list.
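
As an alternative illustration of steps 1 and 2, the path can be captured in a shell variable before copying; the variable names, the remote host, and the rsync call are only an example:

    NS={ns}   # replace with the name of your namespace
    # Resolve the NFS path of the gisapplication-data volume in the old environment
    OLD_PATH=$(kubectl get pvc -n $NS | grep gisapplication-data | awk '{print $3}' \
      | xargs kubectl describe pv | grep Path | awk '{print $2}')
    echo "$OLD_PATH"
    # Run the same lookup in the new environment to get NEW_PATH, then copy, e.g.:
    # rsync -a "$OLD_PATH"/ user@new-nfs-server:"$NEW_PATH"/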

Migrate Industry Data

  1. Get the NFS mounting directory of ‘ispeco-mysql-data’ from the old environment.

    Execute the following command on the Kubernetes Master machine of the old environment to get the path of the NFS mounting directory of ‘ispeco-mysql-data’:

    kubectl get pvc -n {ns} | grep ispeco-mysql-data | awk '{print $3}' | xargs kubectl describe pv | grep Path

    Notes:

    Replace {ns} in the command with the name of your namespace.

  2. Copy the data from the directory obtained in step 1 to the corresponding directory of the new environment (run the command from step 1 in the new environment to get the path of the NFS mounting directory of ‘ispeco-mysql-data’ there).

Restart The Services

Restart the consul-server, ispeco-dashboard-api, and ispeco-mysql services.
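
One possible way to restart them is with kubectl, assuming the services run as ordinary StatefulSet/Deployment workloads whose names match the service names; verify the actual workload kinds and names first, since they may differ in your installation:

    NS={ns}   # replace with the name of your namespace
    # Confirm how each service is deployed before restarting it
    kubectl get statefulsets,deployments -n $NS | grep -E 'consul-server|ispeco-dashboard-api|ispeco-mysql'
    # Then restart each workload with the kind/name reported above, for example:
    kubectl rollout restart statefulset/consul-server -n $NS
    kubectl rollout restart deployment/ispeco-dashboard-api -n $NS
    kubectl rollout restart statefulset/ispeco-mysql -n $NS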

Notes:

If permission errors occur when restarting the consul-server service, execute the command chmod 777 -R proxy/ raft/ serf/ in each of the NFS mounting directories of ‘consul-server-{index}’ in the new environment, then restart the consul-server service again (see the sketch below).
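
A small sketch of that fix, where the three directory arguments are placeholders for the consul-server NFS directories of the new environment obtained earlier:

    # Apply the permission fix in each consul-server directory of the new environment;
    # the three directory paths are placeholders.
    for dir in {new-env-consul-dir-0} {new-env-consul-dir-1} {new-env-consul-dir-2}; do
      (cd "$dir" && chmod 777 -R proxy/ raft/ serf/)
    done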

When you have finished the steps above, the data of Service Instance, Service Node, and File Manager has been migrated successfully.

Migrate Storage Resources

You need to migrate the data of Service Instance, Service Node, and File Manager before migrating the data of Storage Resources.

Notes:

The number of nodes in the new environment should be the same as in the old environment. That is, when a service has multiple nodes, the {index} values should be the same in both environments (a quick way to compare the node counts is sketched after these notes).

Only HBase and HDFS Directory in the Storage Resources can be migrated.
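
A quick way to compare the node counts, assuming the storage services run as StatefulSets (check for Deployments as well if your installation uses them); run the command in both environments and make sure the replica counts match:

    NS={ns}   # replace with the name of your namespace
    kubectl get statefulsets -n $NS | grep -E 'hbase|hdfs'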

Migrate HBase

Migrate the data of hbase-namenode

  1. Get the NFS mounting directory of ‘hbase-namenode’ from the old environment.

    Execute the following command on the Kubernetes Master machine of the old environment to get the path of the NFS mounting directory of ‘hbase-namenode’:

    kubectl get pvc -n {ns} | grep hbase-namenode | awk '{print $3}' | xargs kubectl describe pv | grep Path

    Notes:

    Replace {ns} in the command with the name of your namespace.

  2. Copy the data from the directory obtained in step 1 to the corresponding directory of the new environment (run the command from step 1 in the new environment to get the path of the NFS mounting directory of ‘hbase-namenode’ there).

Migrate the data of hbase-datanode

  1. Get the NFS mounting directories of ‘datanode-volume-hbase-datanode-{index}’ from the old environment.

    Execute the following command on the Kubernetes Master machine of the old environment to get the paths of the NFS mounting directories of ‘datanode-volume-hbase-datanode-{index}’:

    kubectl get pvc -n {ns} | grep datanode-volume-hbase-datanode-{index} | awk '{print $3}' | xargs kubectl describe pv | grep Path

    Notes:

    1. Replace {ns} in the command with the name of your namespace.
    2. If the service has multiple nodes, execute the command once per node, replacing {index} with 0, 1, 2, …, N-1 each time, to get the paths of all the directories (a sketch that automates this loop follows this list).
  2. Copy the data from the directories obtained in step 1 to the corresponding directories of the new environment (run the command from step 1 in the new environment to get the paths of the NFS mounting directories of ‘datanode-volume-hbase-datanode-{index}’ there).
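
A sketch of step 1 that reads the node count automatically, assuming the datanodes run as a StatefulSet named ‘hbase-datanode’ (verify the name with kubectl get statefulsets -n {ns} first):

    NS={ns}   # replace with the name of your namespace
    REPLICAS=$(kubectl get statefulset hbase-datanode -n $NS -o jsonpath='{.spec.replicas}')
    # Print the NFS path of every datanode volume in the old environment
    for index in $(seq 0 $((REPLICAS - 1))); do
      kubectl get pvc -n $NS | grep datanode-volume-hbase-datanode-$index | awk '{print $3}' \
        | xargs kubectl describe pv | grep Path
    done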

Migrate the data of hbase-regionserver

  1. Get the NFS mounting directories of ‘hbase-regionserver-data-volume-hbase-regionserver-{index}’ from the old environment.

    Execute the following command on the Kubernetes Master machine of the old environment to get the paths of the NFS mounting directories of ‘hbase-regionserver-data-volume-hbase-regionserver-{index}’:

    kubectl get pvc -n {ns} | grep hbase-regionserver-data-volume-hbase-regionserver-{index} | awk '{print $3}' | xargs kubectl describe pv | grep Path

    Notes:

    1. Replace {ns} in the command with the name of your namespace.
    2. If the service has multiple nodes, execute the command once per node, replacing {index} with 0, 1, 2, …, N-1 each time, to get the paths of all the directories.
  2. Copy the data from the ‘hbase-regionserver-data-volume-hbase-regionserver-0’ directory obtained in step 1 to the corresponding directory of the new environment (run the command from step 1 in the new environment to get the path of the NFS mounting directory of ‘hbase-regionserver-data-volume-hbase-regionserver-0’ there).

Migrate the data of hbase-master

  1. Get the NFS mounting directory of ‘hbase-master-config’ from the old environment.

    Execute the following command on the Kubernetes Master machine of the old environment to get the path of the NFS mounting directory of ‘hbase-master-config’:

    kubectl get pvc -n {ns} | grep hbase-master-config | awk '{print $3}' | xargs kubectl describe pv | grep Path

    Notes:

    Replace {ns} in the command with the name of your namespace.

  2. Copy the data from the directory obtained in step 1 to the corresponding directory of the new environment (run the command from step 1 in the new environment to get the path of the NFS mounting directory of ‘hbase-master-config’ there).

Restart The Services

Restart the hbase-namenode, hbase-datanode, hbase-regionserver, hbase-master, and iserver-datacatalog services.

When you have finished the steps above, the data of HBase has been migrated successfully.

Migrate HDFS Directory

Migrate the data of hdfs-namenode

  1. Get the NFS mounting directory of ‘hdfs-namenode’ from the old environment.

    Execute the following command on the Kubernetes Master machine of the old environment to get the path of the NFS mounting directory of ‘hdfs-namenode’:

    kubectl get pvc -n {ns} | grep hdfs-namenode | awk '{print $3}' | xargs kubectl describe pv | grep Path

    Notes:

    Replace {ns} in the command with the name of your namespace.

  2. Copy the data from the directory obtained in step 1 to the corresponding directory of the new environment (run the command from step 1 in the new environment to get the path of the NFS mounting directory of ‘hdfs-namenode’ there).

Migrate the data of hdfs-datanode

  1. Get the NFS mounting directory of ‘hdfs-datanode’ from the old environment.

    Execute the following command on the Kubernetes Master machine of the old environment to get the path of the NFS mounting directory of ‘hdfs-datanode’:

    kubectl get pvc -n {ns} | grep hdfs-datanode | awk '{print $3}' | xargs kubectl describe pv | grep Path

    Notes:

    Replace {ns} in the command with the name of your namespace.

  2. Copy the data from the directory obtained in step 1 to the corresponding directory of the new environment (run the command from step 1 in the new environment to get the path of the NFS mounting directory of ‘hdfs-datanode’ there).

Restart The Services

Restart the hdfs-namenode, hdfs-datanode, and iserver-datacatalog services.

When you have finished the steps above, the data of HDFS Directory has been migrated successfully.