Tutorial
FAQ
-
How to solve the problem that the Oracle database cannot connect with SuperMap iDesktop?
Answer: Please create a new database in Oracle, and use the new one. Follow the steps below to create a new database:
(1) On the iManager page, click Site Management > Name (the name of your Oracle database site) > the link of the service name > Command Pad to enter the Oracle command pad.
(2) Create a folder named 'supermapshapefile' under the '/u01/app/oracle/oradata' directory (if the folder already exists, skip this step). Execute:
mkdir -p /u01/app/oracle/oradata/supermapshapefile
(3) Change the folder owner to oracle. Execute:
chown oracle /u01/app/oracle/oradata/supermapshapefile
(4) Enter Oracle. Execute:
sqlplus / as sysdba
(5) Fill in the username and password of Oracle. You can view the account information by clicking Site Management > Name (the name of your Oracle database site) > Account on the iManager page.
(6) Create a tablespace with a size of 200M (you can set the size of the tablespace according to your requirements). Execute:
create tablespace supermaptbs datafile '/u01/app/oracle/oradata/supermapshapefile/data.dbf' size 200M;
(7) Create a new user for the tablespace. For example, username: supermapuser; password: supermap123. Execute:
create user supermapuser identified by supermap123 default tablespace supermaptbs;
(8) Grant permissions to the new user. Execute:
grant connect,resource to supermapuser;
grant dba to supermapuser;
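(Optional) A quick way to verify the new account, using the sample credentials above (adjust to your own username and password):
sqlplus supermapuser/supermap123
Then connect to the Oracle database from SuperMap iDesktop with the new user.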
-
If you redeploy or adjust the spec immediately after a previous deployment or spec adjustment, the license assignment fails. How to solve the problem?
Answer: After redeploying or adjusting the spec, please make sure the services have been assigned the license successfully before performing other operations such as another redeploy or spec adjustment.
-
When viewing monitoring statistic charts, the charts do not display data or the data timestamps do not match the real time. How to solve the problem?
Answer: Please make sure the time settings of the local machine and the Kubernetes node machines are the same.
-
How to use the https protocol?
Answer: iManager supports the https protocol. Please perform the following operations to enable it:
(1) Go to the iManager installation directory (the directory in which you executed the ./startup or ./start command to start iManager), find the file named values.yml, and execute the following command:
sudo vi values.yml
(2) Modify the value of "deploy_imanager_service_protocol" to "https" to use the iManager https protocol.
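The relevant entry in values.yml should then read (a sketch, following the key name above; quoting style may vary):
deploy_imanager_service_protocol: https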
(3) Save the setting and restart iManager:
sudo ./startup.sh
-
How to replace the security certificate in iManager for K8s?
Answer: There are two kinds of security certificates in iManager for K8s: one is for the security center (Keycloak), and the other is for the access entrance. Please follow the steps below to replace the two security certificates separately:
Replace the Security Certificate for Keycloak
(1) Execute the following command on the Kubernetes Master machine to find the volume of the security certificate (<namespace> in the command is the namespace of iManager, 'supermap' by default; replace it with your actual namespace if you have changed it):
kubectl -n <namespace> describe pvc pvc-keycloak | grep Volume: | awk -F ' ' '{print $2}' | xargs kubectl describe pv
(2) Store your new security certificate in the volume found in step (1);
Notes:
The security certificate consists of a certificate and a private key; the certificate should be renamed to tls.crt and the private key to tls.key.
(3) Log in to iManager and redeploy the Keycloak service in Basic Services.
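For step (2), if the volume found in step (1) is backed by a hypothetical NFS path such as /nfs/keycloak-cert (use the actual path reported by kubectl describe pv), the copy might look like:
cp your-certificate.crt /nfs/keycloak-cert/tls.crt
cp your-private.key /nfs/keycloak-cert/tls.key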
Replace the Security Certificate for Access Entrance
(1) Execute the following command on the Kubernetes Master machine to find the volume of the security certificate (<namespace> in the command is the namespace of iManager, 'supermap' by default; replace it with your actual namespace if you have changed it):
kubectl -n <namespace> describe pvc pvc-imanager-dashboard-ui | grep Volume: | awk -F ' ' '{print $2}' | xargs kubectl describe pv
(2) Store your new security certificate in the volume found in step (1);
Notes:
The security certificate consists of a certificate and a private key; the certificate should be renamed to 'certificate.crt' and the private key to 'private.key'.
(3) Log in to iManager and redeploy the imanager-dashboard-ui service in Basic Services.
-
How to create the resource with the same name as the Secret when configuring the image pull secret?
Answer: If the configured registry is private, you need to create a Secret resource with the same name as the configured secret value in the iManager namespace of Kubernetes. You also need to create a resource with the same name in the 'istio-system' namespace when enabling Service Mesh (Istio), and in the 'kube-system' namespace when enabling the metrics server. Please enter the following command on the Kubernetes Master machine to create the resource:
kubectl create secret docker-registry <image-pull-secret> --docker-server="<172.16.17.11:5002>" --docker-username=<admin> --docker-password=<adminpassword> -n <supermap>
Notes:
- The contents in the command wrapped in "<>" need to be replaced according to your actual environment (delete the "<>" symbols after replacing): <image-pull-secret> is the name of your Secret; <172.16.17.11:5002> is your registry address; <admin> is the username of your registry; <adminpassword> is the password of that user; <supermap> is the namespace of iManager (replace <supermap> with 'istio-system' or 'kube-system' when you create the resource in the istio-system or kube-system namespace).
- If the namespace istio-system does not exist, execute 'kubectl create ns istio-system' on the Kubernetes Master node machine to create it.
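You can confirm the Secret exists afterwards (a quick check; replace the placeholders and the namespace as described above):
kubectl -n <supermap> get secret <image-pull-secret>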
-
How to solve the error “Error: UPGRADE FAILED: cannot patch “pv-nfs-grafana” with kind StorageClass: StorageClass.storage.k8s.io “pv-nfs-grafana” is invalid …” when you restart iManager?
Answer: The error occurs because the Kubernetes patch operation does not support updating the provisioner of a StorageClass. The error has no negative influence on iManager; please ignore it.
-
How to refresh the certificate of Kubernetes?
Answer: The validity period of the Kubernetes certificates is one year; you need to refresh them when they expire. Please follow the steps below to refresh the certificates of Kubernetes:
(1) Enter the /etc/kubernetes/pki directory on the Kubernetes Master machine and create a backup directory:
cd /etc/kubernetes/pki/
mkdir -p ~/tmp/BACKUP_etc_kubernetes_pki/etcd/
(2) Back up the original certificates:
mv apiserver.crt apiserver-etcd-client.key apiserver-kubelet-client.crt front-proxy-ca.crt front-proxy-client.crt front-proxy-client.key front-proxy-ca.key apiserver-kubelet-client.key apiserver.key apiserver-etcd-client.crt ~/tmp/BACKUP_etc_kubernetes_pki/.
mv etcd/healthcheck-client.* etcd/peer.* etcd/server.* ~/tmp/BACKUP_etc_kubernetes_pki/etcd/
(3) Regenerate the certificates:
kubeadm init phase certs all
(4) Enter the /etc/kubernetes directory and create a backup directory for the configuration files:
cd /etc/kubernetes/
mkdir -p ~/tmp/BACKUP_etc_kubernetes
mv admin.conf controller-manager.conf kubelet.conf scheduler.conf ~/tmp/BACKUP_etc_kubernetes/.
(5) Regenerate the configuration file:
kubeadm init phase kubeconfig all
(6) Restart the Kubernetes Master node:
reboot
(7) Copy and overwrite the original files:
mkdir -p ~/tmp/BACKUP_home_.kube
cp -r ~/.kube/* ~/tmp/BACKUP_home_.kube/.
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
(8) Check the Kubernetes cluster information after updating the certificates:
kubectl cluster-info
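On recent kubeadm versions you can also confirm the new expiry dates (older versions use 'kubeadm alpha certs check-expiration' instead):
kubeadm certs check-expiration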
-
How to configure local storage for built-in HBase environment?
Answer: The NFS volume impacts the read/write performance of HBase; you can optimize performance in the following way:
(1) Modify the value of deploy_disable_hbase_nfs_volume to true in the configuration file (values.yaml):
deploy_disable_hbase_nfs_volume: true
(2) Please refer to the hbase-datanode-local-volume.yaml file, and modify the file according to the actual situation:
apiVersion: v1
kind: PersistentVolume
metadata:
  labels:
    type: icloud-native
  name: icloud-native-hbase-datanode-volume-0   #Point 1
spec:
  storageClassName: local-volume-storage-class
  capacity:
    storage: 10Ti
  accessModes:
  - ReadWriteMany
  local:
    path: /opt/imanager-data/datanode-data   #Point 2
  persistentVolumeReclaimPolicy: Delete
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1   #Point 3
The places that need to be modified are marked with '#' above:
- Point 1: Specify the name of the PV.
- Point 2: The actual path to store the HBase data; please create the directory first. If you want to create multiple PVs on one node, create multiple directories and set the paths to different directories.
Use the command below to create the directory on the node (replace the path with your actual setting):
mkdir -p /opt/imanager-data/datanode-data
- Point 3: Fill in the name of the Kubernetes node (the node should be available for scheduling).
(3) Execute the command on the Kubernetes master node after modifying:
kubectl apply -f hbase-datanode-local-volume.yaml
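You can then confirm the PVs were created (a quick check):
kubectl get pv | grep icloud-native-hbase-datanode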
Notes:
- The number of PVs should be the same as the number of DataNodes in the HBase environment; the default is 3. If you scale the nodes, please create the same number of PVs.
- The PVs can be created on any node of Kubernetes (specified by #Point 3); it is recommended to create them on different nodes.
- The PVs can be created either before or after opening/scaling HBase.
-
How to solve the problem that the system hangs and the system log contains the error "echo 0 > /proc/sys/kernel/hung_task_timeout_secs disables this message"?
Answer: The system hangs because the load rises while the system is running and the dirty data of the file system cannot be written to disk within the stipulated time. Please refer to the following steps to solve the problem:
(1) Edit the file 'sysctl.conf' in the '/etc' directory. Set the method of processing dirty data and the timeout for writing dirty data:
# When the system dirty data reaches this ratio, the system starts to process it
vm.dirty_background_ratio=5
# When the system dirty data reaches this ratio, the system must process it
vm.dirty_ratio=10
# The timeout for writing dirty data; the default is 120 seconds, 0 means no limit
kernel.hung_task_timeout_secs=0
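If the system is still responsive, you can load the new values immediately with a standard sysctl reload (otherwise the restart below applies them):
sysctl -p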
(2) Force restart, execute the command:
reboot -n -f
-
How to solve the problem that a worker node server breaks down in the Kubernetes cluster?
Answer: If your Kubernetes cluster is made up of three or more servers (including the Master node server), when a worker node server breaks down, the services running on it will migrate to the other worker node servers automatically. If your Kubernetes cluster is made up of two servers, when the worker node server breaks down, please follow the steps below to recover the services:
(1) Check the server names on the Master node server:
kubectl get nodes
(2) Change the name of Master node server:
hostnamectl set-hostname <newhostname>
Notes:
<newhostname> in the command is the new name of the Master node server; the new name supports customization.
(3) Enter the directory /etc/kubernetes/manifests:
cd /etc/kubernetes/manifests
(4) Edit the file 'etcd.yaml' and modify the old name of the Master node server to the new name (there are two places to modify). Execute the command to edit the 'etcd.yaml' file:
vi etcd.yaml
(5) Export the YAML file of the Worker node server:
kubectl get node <old-nodeName> -o yaml > node.yaml
Notes:
<old-nodeName> in the command is the name of the Worker node server; you can check the name with the command in step (1).
(6) Edit the 'node.yaml' file, modify the name of the Worker node server in the file to the new name of the Master node server, and add the content node-role.kubernetes.io/master: "" under labels (there are four places to modify in total):
vi node.yaml
(7) Edit the 'kubeadm-config' ConfigMap in the kube-system namespace and modify the old name of the Master node server to the new name (there is only one place to modify):
kubectl -n kube-system edit configmap kubeadm-config
(8) Generate certificates for the new server and replace the old ones. Execute the following operations:
cd /etc/kubernetes/pki/
mkdir -p ~/tmp/BACKUP_etc_kubernetes_pki/etcd/
mv apiserver.crt apiserver-etcd-client.key apiserver-kubelet-client.crt front-proxy-ca.crt front-proxy-client.crt front-proxy-client.key front-proxy-ca.key apiserver-kubelet-client.key apiserver.key apiserver-etcd-client.crt ~/tmp/BACKUP_etc_kubernetes_pki/.
mv etcd/healthcheck-client.* etcd/peer.* etcd/server.* ~/tmp/BACKUP_etc_kubernetes_pki/etcd/
kubeadm init phase certs all
cd /etc/kubernetes
mkdir -p ~/tmp/BACKUP_etc_kubernetes
mv admin.conf controller-manager.conf kubelet.conf scheduler.conf ~/tmp/BACKUP_etc_kubernetes/.
kubeadm init phase kubeconfig all
mkdir -p ~/tmp/BACKUP_home_.kube
cp -r ~/.kube/* ~/tmp/BACKUP_home_.kube/.
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
(9) Apply the ‘node.yaml’ file of Worker node server which is modified in step (6):
kubectl apply -f node.yaml
(10) Delete the old Worker node server:
kubectl delete node <old-nodeName>
(11) Restart kubelet and docker services:
systemctl daemon-reload && systemctl restart kubelet && systemctl restart docker
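Finally, you can verify that the renamed node is Ready (a quick check):
kubectl get nodes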
-
How to solve the problem if the logs include the message "The connection to the server localhost:8080 was refused - did you specify the right host or port? Error: Kubernetes cluster unreachable" when deploying iManager on Kubernetes?
Answer: There are two solutions to solve the above problem, please choose one of the following methods:
Switch to the Root User
Execute the startup command again after switching to the root user:
sudo ./startup.sh
Increase kubectl Usage Permissions
Execute the following commands in any directory of the server:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown <userName>:<userName> $HOME/.kube/config
Notes:
<userName> in the command is your actual username; please replace it.
-
In the service list of the site, clicking the service name to enter the details page and then clicking "log" in the container list shows an error message, but after clicking "log" in the service operation bar on the Site Management page, the Kibana page does not display any log information containing "error". How should we solve this problem?
Answer: You need to change the mounting directory of the "fluentd-es" DaemonSet image of the service daemon. There are two ways to solve the above problem; please choose either:
Modify in the Dashboard Interface of Kubernetes
(1) Visit http://<ip>:31234 to enter the dashboard interface of Kubernetes. <ip> is the IP address of the machine where iManager is located;
(2) Select supermap for the namespace, and click on Daemon Sets under Workloads in the left column;
(3) Select fluentd-es for editing and locate all default mount directories "/var/lib/docker/containers". Modify them to "/data/docker/containers" or your customized Docker storage path;
(4) Click the Update button to complete the changes.
Modify by Using the Command Line
(1) On the Master node server, execute the following command:
kubectl -n supermap edit daemonset fluentd-es
(2) Find all the default mount directories "/var/lib/docker/containers" and modify them to "/data/docker/containers", or to your customized Docker storage path;
(3) Enter ":wq" on the command line to save and exit, completing the changes.
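After the edit, the relevant hostPath volume in the DaemonSet should look roughly like this (a sketch; the volume name may differ in your deployment):
volumes:
- name: varlibdockercontainers   # the name in your deployment may differ
  hostPath:
    path: /data/docker/containers   # changed from /var/lib/docker/containers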
Notes:
After completing the changes, wait a moment and you can view the log information normally through the "Log" function on the Site Management page.
-
At the bottom of the Site Management page, the statistical charts of service resources such as CPU usage are not displayed properly and instead show "No data points". After clicking the service name of the container to be monitored, the same issue occurs when viewing the container's statistical charts at the bottom of the page. Investigation found that the default storage path of Docker is not "/var/lib/docker", which causes this problem. How should we solve it?
Answer: When the storage path of Docker is not the default "/var/lib/docker", cAdvisor, the Docker container performance analysis tool built into kubelet, cannot obtain container information, which prevents the normal display of monitoring statistics. Please refer to the following steps to solve the above problem:
(1) Edit the startup file of kubelet by executing the following command:
vim /usr/lib/systemd/system/kubelet.service
(2) Add the "--docker-root" configuration to the ExecStart parameter and set its value to the actual storage path of Docker, for example "/data/docker".
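The modified ExecStart line might look like this (a sketch; "..." stands for the other flags already present in your unit file, which should be kept):
ExecStart=/usr/bin/kubelet --docker-root=/data/docker ...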
(3) Enter ":wq" on the command line to save and exit, completing the changes;
(4) Restart kubelet to make the configuration take effect; execute the following commands:
systemctl daemon-reload
systemctl restart kubelet
Notes:
When troubleshooting the storage path of Docker, you can check the "graph" value (or "data-root" in newer Docker versions) in the /etc/docker/daemon.json file.
-
If the gisapplication service node of the GIS Cloud Suite site crashes while receiving multiple concurrent requests, causing an exception that leads to service access failure, how should we solve it?
Answer: By default, the gisapplication service node can receive 200 concurrent requests. When the number of requests exceeds 200 and the response time of the service is long, receiving the excess requests will cause the service node to crash. When the above problem occurs, please refer to the following steps to manually add an environment variable to solve it:
(1) In the GIS Cloud Suite site service list in the iManager interface, perform the Edit operation on the "gisapp-*" service node that crashed;
(2) Navigate to the environment variables.
(3) Add an environment variable named icn_ext_param_server_tomcat_threads_max and set a value based on the number of requests to be received. For example, setting the value to 1000 allows the node to receive 1000 concurrent requests.
(4) Click OK and it will take effect after restarting.
Notes:
After increasing the number of requests, it is necessary to appropriately increase the CPU and memory size of the service node via Adjust Spec to ensure stable service operation.
-
How to reset the administrator password?
Answer: In the iManager installation directory, a file named reset-password.sh is used to reset the administrator account. Please follow these steps to reset the administrator account password:
(1) Enter the iManager installation directory (the directory where the iManager installation command is executed);
(2) Execute the reset-password.sh script file to reset:
chmod +x reset-password.sh && ./reset-password.sh
-
How to reset the links and protocols of the iManager access entrance?
Answer: In the iManager installation directory, a file named reset-entrance.sh can reset the access portal to the link and protocol used when installing and deploying iManager. Please follow these steps to reset the access entrance:
(1) Enter the iManager installation directory (the directory where the iManager installation command is executed);
(2) Execute the reset-entrance.sh script file to reset:
chmod +x reset-entrance.sh && ./reset-entrance.sh
-
If an external Keycloak service is configured and, after deploying iManager, logging in to iManager's Keycloak fails with "Internal Server Error" displayed, how can we solve this problem?
Answer: There are two solutions to solve the above problem. Please choose one of the following methods:
Manually Modify in the Keycloak Interface
(1) On the Configure > Realm Settings > Themes page, set the value of the Login Theme to keycloak;
(2) On the Configure > Authentication > Bindings page, set the value of Browser Flow to browser.
Use the Customized Image
You can also choose to directly use the customized Keycloak image provided by SuperMap. Image address: registry.cn-beijing.aliyuncs.com/supermap/keycloak-imanager:11.1.0-amd64
Notes:
- When the Keycloak extension content is not included, the following functions will be lost: theme, user validity verification, automatic locking of inactive users, and the cloud suite token function.
- Keycloak services deployed independently by users also need to be configured with frontendUrl, otherwise verification errors may occur. For booting in Docker mode, the environment variable KEYCLOAK_FRONTEND_URL=http://<ip>:8080/auth needs to be added, where <ip> is the IP of the machine where Keycloak is located; for booting in Installation Package mode, the frontendUrl configuration needs to be added to the standalone.xml file.
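A minimal sketch of the Docker mode configuration, assuming the legacy jboss/keycloak image (adjust the image, version, and port mapping to your environment):
docker run -d -p 8080:8080 -e KEYCLOAK_FRONTEND_URL=http://<ip>:8080/auth jboss/keycloak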
-
iManager for K8s deployed on Huawei Cloud or Alibaba Cloud cannot jump to the login interface, and GIS sites and GIS Cloud Suite sites created on it cannot jump to the login interface either, whether logging in through iPortal or iServer addresses. How can we solve this problem?
Answer: You need to first add the iManager for K8s, iServer, and iPortal addresses to the redirectUris in keycloak-config:
(1) Visit http://<ip>:31234 to enter the dashboard interface of Kubernetes. <ip> is the IP address of the machine where iManager is located;
(2) Select the namespace where iManager for K8s is located, such as supermap, and click on Configure and Store > Config Maps > keycloak-config in the left column;
(3) In the data column below, fill in the redirect addresses of Keycloak after logging in one by one. The <ip> should be filled in as the real redirect IP, and the format should follow this example:
"redirectUris": "http(s)://{ip}:31100/*"
After completion, it will automatically take effect within 30 minutes at the latest. If you want it to take effect immediately, manually trigger the update-keycloak-redirct-uri job under Workload > Cron Jobs.
-
How to use the SAML login protocol in iManager for K8s or cloud suite sites based on Keycloak for identity authentication and access control?
Answer: You can configure a third-party identity provider (IDP) directly in Keycloak. The procedure is as follows:
(1) Visit Keycloak at http(s)://<ip>:<port>/auth, click the Identity providers menu under Configure, and select SAML v2.0 to add the provider;
(2) Go to the Add identity provider page, fill in the Import External IDP Config field with the URL of the IDP-side metadata file, or import the IDP metadata file directly, and save it;
(3) Go to the details page of the newly added identity provider and create a user information mapping relationship in the Mappers column. For example: select "SAML Attribute to Role" for Mapper Type; Attribute Name is the name of an attribute in the SAML response assertion message; Attribute Value is the value of that attribute in the SAML response assertion message; Role is the role in iManager for K8s or the Cloud Suite site. For instance, the test role on the IDP side can be mapped to the admin role in iManager.
(4) On the identity provider details page, in the Export column, you can see the metadata information of the Service Provider side; download the file and provide it to the IDP side for import.
(5) After completing the above configuration, re-visit iManager/Cloud Suite; you will see the saml button on the login page. Click the "saml" button to log in.
-
How to solve the problem if the username or password to login SuperMap License Center(Web) is forgotten?
Answer: The account information is stored in the file license-security.xml, so you can solve the problem by deleting this file manually and restarting the license center. The procedure is as follows:
(1) Enter the conf directory in the SuperMap License Center (Web) package, namely supermap-bslicense-server-xxx/conf/;
(2) Delete the file license-security.xml;
(3) Enter the directory supermap-bslicense-server-xxx/bin/ and re-execute the startup script;
(4) Re-register your username and password.