{"componentChunkName":"component---src-templates-tutorial-js","path":"/en/tutorial/FAQ/","result":{"data":{"markdownRemark":{"html":"<h2 id=\"faq\"><a href=\"#faq\" aria-hidden class=\"anchor\"><svg aria-hidden=\"true\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>FAQ</h2>\n<ol>\n<li>\n<p>How to solve the problem that the Oracle database cannot connect with SuperMap iDesktop?</p>\n<p>Answer: Please create a new database in Oracle, and use the new one. Follow the steps below to create a new database:</p>\n<p>(1) Click <strong>Database > Oracle > Name (the name of your database) > oracle > Command Pad</strong> on the iManager page to enter the Oracle command pad.</p>\n<p>(2) Create a folder named ‘supermapshapefile’ under the ‘/u01/app/oracle/oradata’ directory (if the folder already exists, skip this step). Execute:</p>\n<div class=\"gatsby-highlight\" data-language=\"database\"><pre class=\"gatsby-code-database\"><code class=\"gatsby-code-database\">mkdir -p /u01/app/oracle/oradata/supermapshapefile</code></pre></div>\n<p>(3) Change the folder owner to oracle. Execute:</p>\n<div class=\"gatsby-highlight\" data-language=\"database\"><pre class=\"gatsby-code-database\"><code class=\"gatsby-code-database\">chown oracle /u01/app/oracle/oradata/supermapshapefile</code></pre></div>\n<p>(4) Enter Oracle. Execute:</p>\n<div class=\"gatsby-highlight\" data-language=\"database\"><pre class=\"gatsby-code-database\"><code class=\"gatsby-code-database\">sqlplus / as sysdba</code></pre></div>\n<p>(5) Fill in the username and password of Oracle. 
You can view account information by clicking <strong>Database > Oracle > Name > Account</strong> on the iManager page.</p>\n<p>(6) Create a tablespace with a size of 200M (you can set the size of the tablespace according to your requirements). Execute:</p>\n<div class=\"gatsby-highlight\" data-language=\"database\"><pre class=\"gatsby-code-database\"><code class=\"gatsby-code-database\">create tablespace supermaptbs datafile &#39;/u01/app/oracle/oradata/supermapshapefile/data.dbf&#39; size 200M;</code></pre></div>\n<p>(7) Create a new user for the tablespace. For example, username: supermapuser; password: supermap123. Execute:</p>\n<div class=\"gatsby-highlight\" data-language=\"database\"><pre class=\"gatsby-code-database\"><code class=\"gatsby-code-database\">create user supermapuser identified by supermap123 default tablespace supermaptbs;</code></pre></div>\n<p>(8) Grant permissions to the new user. Execute:</p>\n<div class=\"gatsby-highlight\" data-language=\"database\"><pre class=\"gatsby-code-database\"><code class=\"gatsby-code-database\">grant connect,resource to supermapuser;\ngrant dba to supermapuser;</code></pre></div>\n</li>\n<li>\n<p>If the license assignment fails when you redeploy or adjust the spec immediately after deploying or adjusting the spec, how to solve the problem?</p>\n<p>Answer: After redeploying or adjusting the spec, please make sure the services have been assigned the license successfully before performing other operations such as redeploying or adjusting the spec.</p>\n</li>\n<li>\n<p>When viewing monitoring statistics charts, the charts do not display data or the data timestamps do not match the real time, how to solve the problem?</p>\n<p>Answer: Please make sure the time settings of the local machine and the Kubernetes node machines are the same.</p>\n</li>\n<li>\n<p>How to use the https protocol?</p>\n<p>Answer: Both Keycloak and iManager support the https protocol. Please do the following operations to enable it:</p>\n<p>(1) Go to the iManager installation directory (the directory in 
which you executed the ./startup or ./start command to start iManager), find the file named values.yml, and execute the following command:</p>\n<div class=\"gatsby-highlight\" data-language=\"shell\"><pre class=\"gatsby-code-shell\"><code class=\"gatsby-code-shell\"><span class=\"token function\">sudo</span> <span class=\"token function\">vi</span> values.yml</code></pre></div>\n<p>(2) Modify the value of “deploy_keycloak_service_protocol” to “https” when you want to use the Keycloak https protocol; modify the value of “deploy_imanager_service_protocol” to “https” when you want to use the iManager https protocol.</p>\n<p>(3) Save the setting and restart iManager:</p>\n<div class=\"gatsby-highlight\" data-language=\"shell\"><pre class=\"gatsby-code-shell\"><code class=\"gatsby-code-shell\"><span class=\"token function\">sudo</span> ./startup.sh</code></pre></div>\n</li>\n<li>\n<p>How to replace the security certificate in iManager for K8s?</p>\n<p>Answer: There are two kinds of security certificates in iManager for K8s: one is for the security center (Keycloak), and the other is for the access entrance. Please follow the steps below to replace the two security certificates separately:</p>\n<p><strong>Replace the Security Certificate for Keycloak</strong></p>\n<p>(1) Execute the following command on the Kubernetes Master machine to find the volume of the security certificate (<code class=\"gatsby-code-text\">&lt;namespace&gt;</code> in the command is the namespace of iManager, which is ‘supermap’ by default. 
Please replace it with the actual namespace if you have changed it):</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">kubectl -n &lt;namespace&gt; describe pvc pvc-keycloak | grep Volume: | awk -F &#39; &#39; &#39;{print $2}&#39; | xargs kubectl describe pv</code></pre></div>\n<p>(2) Store your new security certificate in the volume found in step (1);</p>\n<blockquote>\n<p>Notes:<br>\nThe security certificate includes a certificate and a private key; the certificate should be renamed to tls.crt, and the private key should be renamed to tls.key.</p>\n</blockquote>\n<p>(3) Log in to iManager and redeploy the Keycloak service in <strong>Basic Services</strong>.</p>\n<p><strong>Replace the Security Certificate for Access Entrance</strong></p>\n<p>(1) Execute the following command on the Kubernetes Master machine to find the volume of the security certificate (<code class=\"gatsby-code-text\">&lt;namespace&gt;</code> in the command is the namespace of iManager, which is ‘supermap’ by default. 
Please replace it with the actual namespace if you have changed it):</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">kubectl -n &lt;namespace&gt; describe pvc pvc-imanager-dashboard-ui | grep Volume: | awk -F &#39; &#39; &#39;{print $2}&#39; | xargs kubectl describe pv</code></pre></div>\n<p>(2) Store your new security certificate in the volume found in step (1);</p>\n<blockquote>\n<p>Notes:<br>\nThe security certificate includes a certificate and a private key; the certificate should be renamed to ‘certificate.crt’, and the private key should be renamed to ‘private.key’.</p>\n</blockquote>\n<p>(3) Log in to iManager and redeploy the imanager-dashboard-ui service in <strong>Basic Services</strong>.</p>\n</li>\n<li>\n<p>How to create the resource with the same name as the Secret when configuring the image pull secret?</p>\n<p>Answer: When configuring the Secret, you need to create a resource with the same name as the Secret in the iManager namespace of Kubernetes. You also need to create a resource with the same name as the secret value in the namespace ‘istio-system’ when enabling Service Mesh (Istio), and a resource with the same name as the secret value in the namespace ‘kube-system’ when enabling the metrics server. 
Please enter the following command on the Kubernetes Master machine to create the resource:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">kubectl create secret docker-registry &lt;image-pull-secret&gt; --docker-server=&quot;&lt;172.16.17.11:5002&gt;&quot; --docker-username=&lt;admin&gt; --docker-password=&lt;adminpassword&gt; -n &lt;supermap&gt;</code></pre></div>\n<blockquote>\n<p>Notes:  </p>\n<ol>\n<li>The contents in the command with the symbol ”&#x3C;>” need to be replaced according to your actual environment (delete the symbol ”&#x3C;>” after replacing).\n<code class=\"gatsby-code-text\">&lt;image-pull-secret&gt;</code> is the name of your Secret;\n<code class=\"gatsby-code-text\">&lt;172.16.17.11:5002&gt;</code> is your registry address;\n<code class=\"gatsby-code-text\">&lt;admin&gt;</code> is the username of your registry;\n<code class=\"gatsby-code-text\">&lt;adminpassword&gt;</code> is the password of your registry user;\n<code class=\"gatsby-code-text\">&lt;supermap&gt;</code> is the namespace of iManager (replace <code class=\"gatsby-code-text\">&lt;supermap&gt;</code> with ‘istio-system’ or ‘kube-system’ when you create the resource in the namespace istio-system or kube-system).</li>\n<li>If the namespace istio-system does not exist, execute ‘kubectl create ns istio-system’ on the Kubernetes Master node to create it.</li>\n</ol>\n</blockquote>\n</li>\n<li>\n<p>How to solve the error “Error: UPGRADE FAILED: cannot patch “pv-nfs-grafana” with kind StorageClass: StorageClass.storage.k8s.io “pv-nfs-grafana” is invalid …” when you restart iManager?</p>\n<p>Answer: The error occurs because the Kubernetes patch operation does not support updating the provisioner of a StorageClass. 
The error has no negative influence on iManager; please ignore it.</p>\n</li>\n<li>\n<p>How to refresh the certificate of Kubernetes?</p>\n<p>Answer: The validity period of the Kubernetes certificates is one year; you need to refresh them when they expire. Please follow the steps below to refresh the certificate of Kubernetes:</p>\n<p>(1) Enter the /etc directory of the Kubernetes Master machine and back up the files in the kubernetes directory:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">cd /etc\ncp -r kubernetes kubernetes.bak</code></pre></div>\n<p>(2) Create a new configuration file ‘kubeadm.yaml’:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">vi kubeadm.yaml</code></pre></div>\n<p>(3) Configure the kubeadm.yaml file (the IP ‘10.10.129.29’ below is the IP of the Kubernetes Master node; please replace it with the IP of your Kubernetes Master node):</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">kind: MasterConfiguration\napiVersion: kubeadm.k8s.io/v1beta1\nkubernetesVersion: v1.14.0\napi:\n advertiseAddress: 10.10.129.29</code></pre></div>\n<p>(4) Issue new certificates using the configuration file:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">kubeadm init phase certs all --config=kubeadm.yaml</code></pre></div>\n<p>(5) Regenerate the configuration files:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">kubeadm init phase kubeconfig all --config kubeadm.yaml</code></pre></div>\n<p>(6) Restart the containers kube-apiserver, kube-controller-manager, kube-scheduler, and etcd:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">docker ps |grep -E 
&#39;k8s_kube-apiserver|k8s_kube-controller-manager|k8s_kube-scheduler|k8s_etcd_etcd&#39; | awk -F &#39; &#39; &#39;{print $1}&#39; |xargs docker restart</code></pre></div>\n<p>(7) Check the expiration date of the new certificate:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep &#39; Not &#39;</code></pre></div>\n<p>(8) Check the status of the Worker nodes (use the command <code class=\"gatsby-code-text\">kubectl get nodes</code>); if the status of the Worker nodes is ‘NotReady’, execute the following commands to regenerate the configuration files of the Worker nodes:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">mv /var/lib/kubelet/pki /var/lib/kubelet/pki.bak\nsystemctl daemon-reload &amp;&amp; systemctl restart kubelet</code></pre></div>\n<p>(9) Check the status of the Worker nodes again; if the status is still ‘NotReady’, copy the /etc/kubernetes/pki/ca.crt file from the Master node and paste it to the same directory on the Worker nodes, then restart kubelet (use the command <code class=\"gatsby-code-text\">systemctl restart kubelet</code> to restart kubelet; you need to wait about 3 minutes after restarting kubelet).</p>\n</li>\n<li>\n<p>How to configure local storage for the built-in HBase environment?</p>\n<p>Answer: The NFS volume impacts the read/write performance of HBase; you can optimize performance in the following way:</p>\n<p>(1) Modify the value of <code class=\"gatsby-code-text\">deploy_disable_hbase_nfs_volume</code> to <code class=\"gatsby-code-text\">true</code> in the configuration file (values.yaml).</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">deploy_disable_hbase_nfs_volume: true</code></pre></div>\n<p>(2) Please refer to the hbase-datanode-local-volume.yaml file, and modify the file according 
to the actual situation:</p>\n<div class=\"gatsby-highlight\" data-language=\"yaml\"><pre class=\"gatsby-code-yaml\"><code class=\"gatsby-code-yaml\"><span class=\"token key atrule\">apiVersion</span><span class=\"token punctuation\">:</span> v1\n<span class=\"token key atrule\">kind</span><span class=\"token punctuation\">:</span> PersistentVolume\n<span class=\"token key atrule\">metadata</span><span class=\"token punctuation\">:</span>\n <span class=\"token key atrule\">labels</span><span class=\"token punctuation\">:</span>\n   <span class=\"token key atrule\">type</span><span class=\"token punctuation\">:</span> icloud<span class=\"token punctuation\">-</span>native\n <span class=\"token key atrule\">name</span><span class=\"token punctuation\">:</span> icloud<span class=\"token punctuation\">-</span>native<span class=\"token punctuation\">-</span>hbase<span class=\"token punctuation\">-</span>datanode<span class=\"token punctuation\">-</span>volume<span class=\"token punctuation\">-</span><span class=\"token number\">0  </span><span class=\"token comment\">#Point 1</span>\n<span class=\"token key atrule\">spec</span><span class=\"token punctuation\">:</span>\n <span class=\"token key atrule\">storageClassName</span><span class=\"token punctuation\">:</span> local<span class=\"token punctuation\">-</span>volume<span class=\"token punctuation\">-</span>storage<span class=\"token punctuation\">-</span>class\n <span class=\"token key atrule\">capacity</span><span class=\"token punctuation\">:</span>\n   <span class=\"token key atrule\">storage</span><span class=\"token punctuation\">:</span> 10Ti\n <span class=\"token key atrule\">accessModes</span><span class=\"token punctuation\">:</span>\n   <span class=\"token punctuation\">-</span> ReadWriteMany\n <span class=\"token key atrule\">local</span><span class=\"token punctuation\">:</span>\n   <span class=\"token key atrule\">path</span><span class=\"token punctuation\">:</span> /opt/imanager<span class=\"token 
punctuation\">-</span>data/datanode<span class=\"token punctuation\">-</span>data  <span class=\"token comment\">#Point 2</span>\n <span class=\"token key atrule\">persistentVolumeReclaimPolicy</span><span class=\"token punctuation\">:</span> Delete\n <span class=\"token key atrule\">nodeAffinity</span><span class=\"token punctuation\">:</span>\n   <span class=\"token key atrule\">required</span><span class=\"token punctuation\">:</span>\n     <span class=\"token key atrule\">nodeSelectorTerms</span><span class=\"token punctuation\">:</span>\n       <span class=\"token punctuation\">-</span> <span class=\"token key atrule\">matchExpressions</span><span class=\"token punctuation\">:</span>\n         <span class=\"token punctuation\">-</span> <span class=\"token key atrule\">key</span><span class=\"token punctuation\">:</span> kubernetes.io/hostname\n           <span class=\"token key atrule\">operator</span><span class=\"token punctuation\">:</span> In\n           <span class=\"token key atrule\">values</span><span class=\"token punctuation\">:</span>\n             <span class=\"token punctuation\">-</span> node1   <span class=\"token comment\">#Point 3</span></code></pre></div>\n<p>The places that need to be modified are marked with ’#’ above:</p>\n<ul>\n<li>Point 1: Specify the name of the PV.</li>\n<li>Point 2: The actual path to store the HBase data; please create the directory first. 
If you want to create multiple PVs on one node, please create multiple directories and modify the paths to the different directories.</li>\n</ul>\n<p> Use the command below to create the directory on the node (replace the path with your actual setting):</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">mkdir -p /opt/imanager-data/datanode-data</code></pre></div>\n<ul>\n<li>Point 3: Fill in the name of the Kubernetes node (the node should be schedulable).</li>\n</ul>\n<p>(3) Execute the command on the Kubernetes Master node after modifying the file:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">kubectl apply -f hbase-datanode-local-volume.yaml</code></pre></div>\n<blockquote>\n<p>Notes:  </p>\n<ol>\n<li>The number of PVs should be the same as the number of dataNodes in the HBase environment; the default is 3. If you scale the nodes, please create the same number of PVs.  </li>\n<li>The PVs can be created on any node of Kubernetes (specified by #Point 3); it is recommended to create them on different nodes.  </li>\n<li>The PVs can be created either before or after opening/scaling HBase.</li>\n</ol>\n</blockquote>\n</li>\n<li>\n<p>How to solve the problem that the system hangs and the system log contains the error “&quot;echo 0 > /proc/sys/kernel/hung_task_timeout_secs&quot; disables this message.”?</p>\n<p>Answer: The system hangs because the load rises while the system is running, and the file system’s dirty data cannot be written to the disk within the stipulated time. Please refer to the following steps to solve the problem:</p>\n<p>(1) Edit the file ‘sysctl.conf’ in the ‘/etc’ directory. 
Set the dirty-data handling thresholds and the timeout for writing dirty data:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\"># When dirty data reaches this ratio, the system starts to process it\nvm.dirty_background_ratio=5\n# When dirty data reaches this ratio, the system must process it\nvm.dirty_ratio=10\n# The timeout for writing dirty data; the default is 120 seconds, 0 means no limit\nkernel.hung_task_timeout_secs=0</code></pre></div>\n<p>(2) Force a restart; execute the command:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">reboot -n -f</code></pre></div>\n</li>\n<li>\n<p>How to solve the problem that a worker node server breaks down in the Kubernetes cluster?</p>\n<p>Answer: If your Kubernetes cluster is made up of three or more servers (including the Master node server), when a worker node server breaks down, the services running on it will migrate to other worker node servers automatically. 
If your Kubernetes cluster is made up of two servers, when the worker node server breaks down, please follow the steps below to recover services:</p>\n<p>(1) Check the server names on the Master node server:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">kubectl get nodes</code></pre></div>\n<p>(2) Change the name of the Master node server:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">hostnamectl set-hostname &lt;newhostname&gt;</code></pre></div>\n<blockquote>\n<p>Notes:<br>\n<code class=\"gatsby-code-text\">&lt;newhostname&gt;</code> in the command is the new name of the Master node server; the new name can be customized.</p>\n</blockquote>\n<p>(3) Enter the directory /etc/kubernetes/manifests:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">cd /etc/kubernetes/manifests</code></pre></div>\n<p>(4) Edit the ‘etcd.yaml’ file and modify the old name of the Master node server in the file to the new name (as in the screenshot below, there are two places to modify). 
Execute the command to edit ‘etcd.yaml’ file:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">vi etcd.yaml</code></pre></div>\n<p>\n  <a\n    class=\"gatsby-resp-image-link\"\n    href=\"/iManager_K8S/1014/static/d005593fbb6eb7a09c16ce5cd743cb49/b04e4/modifyetcd.png\"\n    style=\"display: block\"\n    target=\"_blank\"\n    rel=\"noopener\"\n  >\n  \n  <span\n    class=\"gatsby-resp-image-wrapper\"\n    style=\"position: relative; display: block;  max-width: 840px; margin-left: auto; margin-right: auto;\"\n  >\n    <span\n      class=\"gatsby-resp-image-background-image\"\n      style=\"padding-bottom: 106.66666666666667%; position: relative; bottom: 0; left: 0; background-image: url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAVCAIAAADJt1n/AAAACXBIWXMAAA7EAAAOxAGVKw4bAAAB8UlEQVQ4y5VUC26bQBT0EbofXDuJHZPGMbssLAss2EmjtPe/VGcAV1WMWio9I4wY5vMGVuJ7J1476Sp9Z3WS6bXhfF00K+lr8Rql8eq5VN9K/VToe7sQv1K7XLatOg3gl1I/5HprlzLrjZVNI5tWuiDzSoH5P2TnQbx1sqqVrWTmaX67WLYsApHG651jZpjNYmb8ZKihnGwq4+hxTryC/McVzIPXRj06kEtbqcwr2DZ+lIMIEIQas7CeWd4yQ7A4R9G3Mrbivfvy0QsYeSnVsZyOaUE7elDxCaz2TvZRANkRn1xiEmr1XKgTNl9wjiV0IUtcuQE/5LIMkApC7etT5ffvEVpk3ci6ZiKDBTxiipPDk0H2fT7AQlIF0TSUEFtiRhge0bfUnxYsEr14lApFHJh3Tp5j+tYe60rjJsN4eDfWjtqOgh+d3porrRnXOaXNeoMHnsnc4G3BX+IxaC6o0pnmXZmhzQfiPXc+WhVtM8lG+CA/uE/9uQY2BjOAiYG3g1P7HKP3TvOYz+15TWbIRsPFj7O4RM5PntCnyqaSzfVsYH4qmDBs43hhW8RHT/+ICmOG5h2K2VfSyC6KLnIf6CNMDlZV6ngHFru1v+O9ASeGy0AqNqBMvAkAfEw2dvokrf/yVt1ZautarJeX/gX4c34BvNmfPB9bu1kAAAAASUVORK5CYII='); background-size: cover; display: block;\"\n    >\n      <img\n        class=\"gatsby-resp-image-image\"\n        style=\"width: 100%; height: 100%; margin: 0; vertical-align: middle; position: absolute; top: 0; left: 0; box-shadow: inset 0px 0px 0px 400px white;\"\n        alt=\"modyifyetcd\"\n        title=\"\"\n        src=\"/iManager_K8S/1014/static/d005593fbb6eb7a09c16ce5cd743cb49/1e088/modifyetcd.png\"\n        
srcset=\"/iManager_K8S/1014/static/d005593fbb6eb7a09c16ce5cd743cb49/65ed1/modifyetcd.png 210w,\n/iManager_K8S/1014/static/d005593fbb6eb7a09c16ce5cd743cb49/d10fb/modifyetcd.png 420w,\n/iManager_K8S/1014/static/d005593fbb6eb7a09c16ce5cd743cb49/1e088/modifyetcd.png 840w,\n/iManager_K8S/1014/static/d005593fbb6eb7a09c16ce5cd743cb49/b04e4/modifyetcd.png 888w\"\n        sizes=\"(max-width: 840px) 100vw, 840px\"\n      />\n    </span>\n  </span>\n  \n  </a>\n    </p>\n<p>(5) Export the YAML file of the Worker node server:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">kubectl get node &lt;old-nodeName&gt; -o yaml &gt; node.yaml</code></pre></div>\n<blockquote>\n<p>Notes:<br>\n<code class=\"gatsby-code-text\">&lt;old-nodeName&gt;</code> in the command is the name of the Worker node server; you can check the name with the command in step (1).</p>\n</blockquote>\n<p>(6) Edit the ‘node.yaml’ file, modify the name of the Worker node server in the file to the new name of the Master node server, and add the content <code class=\"gatsby-code-text\">node-role.kubernetes.io/master: &quot;&quot;</code> in <code class=\"gatsby-code-text\">labels</code> (as in the screenshot below: there are four places to modify, and the content is added under the red line):</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">vi node.yaml</code></pre></div>\n<p>\n  <a\n    class=\"gatsby-resp-image-link\"\n    href=\"/iManager_K8S/1014/static/2af98b23e0cea9fab4dc231e0e1e106a/a016c/modifynode.png\"\n    style=\"display: block\"\n    target=\"_blank\"\n    rel=\"noopener\"\n  >\n  \n  <span\n    class=\"gatsby-resp-image-wrapper\"\n    style=\"position: relative; display: block;  max-width: 762px; margin-left: auto; margin-right: auto;\"\n  >\n    <span\n      class=\"gatsby-resp-image-background-image\"\n      style=\"padding-bottom: 125.23809523809524%; position: relative; bottom: 0; left: 
0; background-image: url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAZCAIAAAC+dZmEAAAACXBIWXMAAA7EAAAOxAGVKw4bAAACfElEQVQ4y41UWW7bMBDVBeqIi2Q7cbzH2kiKkkjJVlz/9Kvo/e/TN3TSIkDtGhgQQ4lv5s0aTX6eed0y14mdli+lTAs5fVQiNrj4e8/bVj4H5Ifkf/Ukp2sSRH4q4WMktpqdPHcdty05XymxVnS+VqSsldiZqy42SrwZsddib8RGi7WOYAMwdh74ycfngZ17ksGzo+em4XUjspqeAnMwZGin6UpgFcE7HpHzwbHeMe8Aw5UhlqYlPE7f8Q55cYQHnaUCXzkvIrGsCBZoc+/IVWUJc4XB7uCFsnIZmG81CaLYaDkrI7FS7DJw1YhDzYua/oEhYnszvLKUxVnI7TWLH6kKOrINjcz7jr334Ezeuo6QUEpLT++UCoZ573iIWYDqyVMyizrZqTTJkvt1hm3WtJPBTdpuYprYuSekZ3BVropM7VfFHedImJqPfTG68ux0Y1dV/ZrXi8omiPa5TObh3ewWGKUf3NrYnbV7XU/RJ027UHa5qVbTLJWhq9L8BvjNxGM/GT2Yf7sMqPDkMize3fbUbY9tqi1yTlX9F/mI688e2oRmvLbnc5HOCzDHFRW+NTARdy7+cRSZoQrnoc6ZwQ+RkvyZgRu0t5oKaxqxqvBIIEMvJbL40Eiiy/jRx5dBuE6OXh6dRJ8dzK0MfwWDHspbN+vcFM5WhVpPMy7zhzyLV0UzhN6Ec2WTRZmkn3N/Hf3/tCeKAdkbYvFSCcS80x9Dj7LP7tLG9D/9OrPRx6Aw9mQLU1XWogwW78eMGaZl0FBLY5kBKVlGnEVgfp82dhDWIMaD1tihxqw/uEMjOS9ph6DPNjph2VUeXb1cW0SLPgEe049lAl3OHwL/BpuFveLYUM0CAAAAAElFTkSuQmCC'); background-size: cover; display: block;\"\n    >\n      <img\n        class=\"gatsby-resp-image-image\"\n        style=\"width: 100%; height: 100%; margin: 0; vertical-align: middle; position: absolute; top: 0; left: 0; box-shadow: inset 0px 0px 0px 400px white;\"\n        alt=\"modifynode\"\n        title=\"\"\n        src=\"/iManager_K8S/1014/static/2af98b23e0cea9fab4dc231e0e1e106a/a016c/modifynode.png\"\n        srcset=\"/iManager_K8S/1014/static/2af98b23e0cea9fab4dc231e0e1e106a/65ed1/modifynode.png 210w,\n/iManager_K8S/1014/static/2af98b23e0cea9fab4dc231e0e1e106a/d10fb/modifynode.png 420w,\n/iManager_K8S/1014/static/2af98b23e0cea9fab4dc231e0e1e106a/a016c/modifynode.png 762w\"\n        sizes=\"(max-width: 762px) 100vw, 762px\"\n      />\n    </span>\n  </span>\n  \n  </a>\n    </p>\n<p>(7) Edit ‘kubeadm-config ConfigMap’ in the namespace of kube-system, modify the old name of Master node server to the new name(as the screenshot below, there is only one place to 
modify):</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">kubectl -n kube-system edit configmap kubeadm-config</code></pre></div>\n<p>\n  <a\n    class=\"gatsby-resp-image-link\"\n    href=\"/iManager_K8S/1014/static/5f544210353f9717bd87c3d6e7471d60/03e1f/modifykubeadm.png\"\n    style=\"display: block\"\n    target=\"_blank\"\n    rel=\"noopener\"\n  >\n  \n  <span\n    class=\"gatsby-resp-image-wrapper\"\n    style=\"position: relative; display: block;  max-width: 489px; margin-left: auto; margin-right: auto;\"\n  >\n    <span\n      class=\"gatsby-resp-image-background-image\"\n      style=\"padding-bottom: 166.66666666666669%; position: relative; bottom: 0; left: 0; background-image: url('data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAAhCAIAAABWXBxEAAAACXBIWXMAAA7EAAAOxAGVKw4bAAADc0lEQVQ4y41ViVLiQBTkB1YzM+FwFQRRyEkyOSYJIF67//9L2z0jgiLWVk2FF5h3dfd79GRZisdGNpUsCtj+w0oFmT8M/+f0ZEVndZfw/Xf0fpx9FfmjH51xQ+alMJWo7SlKsa5FV6MQUZVqsVLT9FyInj8I8Zto6cAQplKz1L+2+a/jQ/JReBqixw/4jxNZoe1S7Br6j2PfD/x+4Cv7dG6DwL8KGe6Ts/O/iZl/1xCFpsZT5oXoDL5R96n32gqN0K3YmZPMo1DqQjy3YmtEU8EZnZOItXEQqCD3/nS4897CcWZRVbzUWp+skIlmqpeWQLS2ilQzFkK3tXpYsZcPZ1lbnnXBmgHb1sBW04S2yw8swMW2EesK1PqTGPX2XBky07Iu4ek9t95Tq5aZwCs8UQsChZki+JG6iekGOEdHZbNhVMjijXhl595bK8tCzVPE5VXUObBPdz71DIWmmjZi/2aGd7ZHVmqDcyJx1m3CsvNC3a8Qhfm7mlQ7LoY/OI9ClWixMRAjG5ul6o7nIKzv5HXIzN5Mxcrho/bacjldk4PgfNlyqSYxgcVsrrQaRxhMFeUqyUl7pmGc+u+pingJGvKeGoiBzKH/ILc6rYS2/pbbE+erCGLw3jq1Kqgh1F9TKkQO2kAtkFCSk+TB6UjS2QoT+SGMpwbgkfbOytvpvKlOYd/zDG5QcFt7O4M8RHuScDAB1ThWZD46O8+gl9lIL5/QIAkHYBDmNCVyGIbhuWWAZdTUaNuD1AwzU7As2KBbxOUwquVZZ17F9HV2EoBQU4EtPDkbeEXEgBI67rznZICqxMbOLe5lGgizzusYmpPLTM1XeEUL3zhjACkv5Nwws4oyeo4iouUH78vMGac8q4fssjMXbX2xNr/WxntpgRBX5xdtf4u27AeTJFsYPd/Voc4ozFkCwR6m1z8yjkqwzsNw3l/GcbpsdJYk/VmCdQl54QnC1G3C1T9PeWAs+H+EDbVfQ6gcNPRDiPESCxSqarmoxKOBwikSoIUoQA5nkck458D2g49lEMsg47oDpWXJzFjamxpXmRDJcaa2kHnqQ22+255shjk90IvBgIyLEsf727llxvHCVMQ59jF3qxWvG88e4iEn/9a2VkxYfa8dZx
i3cy2xW4DcB07ozgHp0CbJqBMgwVhppyf+puwy8YOvbB0R9g/NTgSbt0+nngAAAABJRU5ErkJggg=='); background-size: cover; display: block;\"\n    >\n      <img\n        class=\"gatsby-resp-image-image\"\n        style=\"width: 100%; height: 100%; margin: 0; vertical-align: middle; position: absolute; top: 0; left: 0; box-shadow: inset 0px 0px 0px 400px white;\"\n        alt=\"modifykubeadm\"\n        title=\"\"\n        src=\"/iManager_K8S/1014/static/5f544210353f9717bd87c3d6e7471d60/03e1f/modifykubeadm.png\"\n        srcset=\"/iManager_K8S/1014/static/5f544210353f9717bd87c3d6e7471d60/65ed1/modifykubeadm.png 210w,\n/iManager_K8S/1014/static/5f544210353f9717bd87c3d6e7471d60/d10fb/modifykubeadm.png 420w,\n/iManager_K8S/1014/static/5f544210353f9717bd87c3d6e7471d60/03e1f/modifykubeadm.png 489w\"\n        sizes=\"(max-width: 489px) 100vw, 489px\"\n      />\n    </span>\n  </span>\n  \n  </a>\n    </p>\n<p>(8) Generate certificates for the new server and replace the old certificates. Execute the following operations:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">cd /etc/kubernetes/pki/\nmkdir -p ~/tmp/BACKUP_etc_kubernetes_pki/etcd/\nmv apiserver.crt apiserver-etcd-client.key apiserver-kubelet-client.crt front-proxy-ca.crt front-proxy-client.crt front-proxy-client.key front-proxy-ca.key apiserver-kubelet-client.key apiserver.key apiserver-etcd-client.crt ~/tmp/BACKUP_etc_kubernetes_pki/.\nmv etcd/healthcheck-client.* etcd/peer.* etcd/server.* ~/tmp/BACKUP_etc_kubernetes_pki/etcd/\nkubeadm init phase certs all\n\ncd /etc/kubernetes\nmkdir -p ~/tmp/BACKUP_etc_kubernetes\nmv admin.conf controller-manager.conf kubelet.conf scheduler.conf ~/tmp/BACKUP_etc_kubernetes/.\nkubeadm init phase kubeconfig all\n\nmkdir -p ~/tmp/BACKUP_home_.kube\ncp -r ~/.kube/* ~/tmp/BACKUP_home_.kube/.\ncp -i /etc/kubernetes/admin.conf $HOME/.kube/config</code></pre></div>\n<p>(9) Apply the ‘node.yaml’ file of the Worker node server which was modified in step 
(6):</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">kubectl apply -f node.yaml</code></pre></div>\n<p>(10) Delete the old Worker node server:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">kubectl delete node &lt;old-nodeName&gt;</code></pre></div>\n<p>(11) Restart the kubelet and docker services:</p>\n<div class=\"gatsby-highlight\" data-language=\"sh\"><pre class=\"gatsby-code-sh\"><code class=\"gatsby-code-sh\">systemctl daemon-reload &amp;&amp; systemctl restart kubelet &amp;&amp; systemctl restart docker</code></pre></div>\n</li>\n</ol>","frontmatter":{"title":"Tutorial","next":null,"prev":null},"fields":{"path":"content/tutorial/FAQ.en.md","slug":"/en/tutorial/FAQ/","langKey":"en"}}},"pageContext":{"slug":"/en/tutorial/FAQ/"}}}