Analyzing a root.sh Failure on Node 2 of an 11.2.0.2 Grid Infrastructure Cluster Install on AIX

2020-12-04 | Tags: cluster, root.sh, node, script, patching

Symptom: the root.sh script fails on node 2; the logs are below.


Cluster log:
2020-12-03 15:32:30.631: [ CSSD][2587]clssnmvDHBValidateNCopy: node 1, rac-node1, has a disk HB, but no network HB, DHB has rcfg 502391333, wrtcnt, 74064, LATS 215313735, lastSeqNo 74061, uniqueness 1606972534, timestamp 1606980750/1181116862
2020-12-03 15:32:30.632: [ CSSD][4129]clssnmvDHBValidateNCopy: node 1, rac-node1, has a disk HB, but no network HB, DHB has rcfg 502391333, wrtcnt, 74065, LATS 215313736, lastSeqNo 74062, uniqueness 1606972534, timestamp 1606980750/1181117183
2020-12-03 15:32:31.318: [ CSSD][4900]clssgmWaitOnEventValue: after CmInfo State val 3, eval 1 waited 0
2020-12-03 15:32:31.700: [ CSSD][2587]clssnmvDHBValidateNCopy: node 1, rac-node1, has a disk HB, but no network HB, DHB has rcfg 502391333, wrtcnt, 74066, LATS 215314804, lastSeqNo 74064, uniqueness 1606972534, timestamp 1606980751/1181117867
2020-12-03 15:32:31.702: [ CSSD][4129]clssnmvDHBValidateNCopy: node 1, rac-node1, has a disk HB, but no network HB, DHB has rcfg 502391333, wrtcnt, 74068, LATS 215314806, lastSeqNo 74065, uniqueness 1606972534, timestamp 1606980751/1181118251
2020-12-03 15:32:32.321: [ CSSD][4900]clssgmWaitOnEventValue: after CmInfo State val 3, eval 1 waited 0
2020-12-03 15:32:32.771: [ CSSD][2587]clssnmvDHBValidateNCopy: node 1, rac-node1, has a disk HB, but no network HB, DHB has rcfg 502391333, wrtcnt, 74069, LATS 215315875, lastSeqNo 74066, uniqueness 1606972534, timestamp 1606980752/1181118892
2020-12-03 15:32:32.772: [ CSSD][4129]clssnmvDHBValidateNCopy: node 1, rac-node1, has a disk HB, but no network HB, DHB has rcfg 502391333, wrtcnt, 74071, LATS 215315876, lastSeqNo 74068, uniqueness 1606972534, timestamp 1606980752/1181119321

cssd.log:
2020-12-02 17:30:47.001: [GIPCHTHR][1543] gipchaWorkerUpdateInterface: created local bootstrap interface for node 'rac-node2', haName 'CSS_khfw-cluster', inf 'mcast://230.0.1.0:42424/193.2.192.30'
2020-12-02 17:30:47.001: [GIPCHTHR][1543] gipchaWorkerUpdateInterface: created local interface for node 'rac-node2', haName 'CSS_khfw-cluster', inf '193.2.192.30:10875'
2020-12-02 17:30:47.001: [GIPCHTHR][1543] gipchaWorkerUpdateInterface: created local bootstrap interface for node 'rac-node2', haName 'CSS_khfw-cluster', inf 'mcast://230.0.1.0:42424/192.2.192.30'
2020-12-02 17:30:47.001: [GIPCHTHR][1543] gipchaWorkerUpdateInterface: created local interface for node 'rac-node2', haName 'CSS_khfw-cluster', inf '192.2.192.30:10876'
2020-12-02 17:30:47.001: [GIPCHTHR][1543] gipchaWorkerUpdateInterface: created local bootstrap interface for node 'rac-node2', haName 'CSS_khfw-cluster', inf 'mcast://230.0.1.0:42424/194.2.192.30'
2020-12-02 17:30:47.001: [GIPCHTHR][1543] gipchaWorkerUpdateInterface: created local interface for node 'rac-node2', haName 'CSS_khfw-cluster', inf '194.2.192.30:10877'

Analysis: CSSD reports disk heartbeats but no network heartbeat, even though the private network itself is reachable — which is misleading. The cssd.log makes the real picture clear: the node keeps trying to reach node 2 through the multicast group 230.0.1.0:42424, and that multicast traffic never gets any response. That is what needed investigating. A MOS note (worth reading in full) explains the root cause and provides a script to test whether the switch supports the multicast address:

 Grid Infrastructure Startup During Patching, Install or Upgrade May Fail Due to Multicasting Requirement (Doc ID 1212703.1)

The problem is solved by applying patch 9974223, which makes 224.0.0.251 usable as the multicast address (alternatively, if the switch can be fixed to handle multicast on the 230 group, that works too).

Testing confirmed that multicast on the 230 address is indeed not supported in this network while 224 is, so the patch resolves the issue:
rac-node1-2:/home/oracle/mcasttest$ ./mcasttest.pl -n rac-node1-2,rac-node2 -i en1,en2,en5
########### Setup for node rac-node1-2 ##########
Checking node access 'rac-node1-2'
Checking node login 'rac-node1-2'
Checking/Creating Directory /tmp/mcasttest for binary on node 'rac-node1-2'
Distributing mcast2 binary to node 'rac-node1-2'
########### Setup for node rac-node2 ##########
Checking node access 'rac-node2'
Checking node login 'rac-node2'
Checking/Creating Directory /tmp/mcasttest for binary on node 'rac-node2'
Distributing mcast2 binary to node 'rac-node2'
########### testing Multicast on all nodes ##########

Test for Multicast address 230.0.1.0

Dec 3 17:01:56 | Multicast Failed for en1 using address 230.0.1.0:42000 <<<<< 230 not supported
Dec 3 17:02:26 | Multicast Failed for en2 using address 230.0.1.0:42001
Dec 3 17:02:57 | Multicast Failed for en5 using address 230.0.1.0:42002

Test for Multicast address 224.0.0.251
Dec 3 17:02:58 | Multicast Succeeded for en1 using address 224.0.0.251:42003 <<<<< 224 supported
Dec 3 17:02:59 | Multicast Succeeded for en2 using address 224.0.0.251:42004
Dec 3 17:03:00 | Multicast Succeeded for en5 using address 224.0.0.251:42005

The essence of the problem: cluster initialization has to establish cluster membership, and that bootstrap step relies on multicast at the network layer; only afterwards does unicast traffic between the private-network addresses take over. root.sh failed, and CSSD kept reporting no network heartbeat, precisely because the lower layer never received any of the information needed to build the membership. Solving the multicast problem solves everything; the patch simply switches from the 230 address to the 224 address.
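After patching, one way to confirm the fallback took effect is to look for the same mcast:// bootstrap entries seen above (the path assumes the standard 11.2 log layout under the Grid home; substitute your own node name):

grep 'mcast://' /grid/11.2/log/rac-node2/cssd/ocssd.log | tail -5

With patch 9974223 in place, the bootstrap interfaces should be created against 224.0.0.251 rather than 230.0.1.0.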

That completes the analysis. What follows are the patching steps and other related handling, recorded here for reference. There are quite a few pitfalls below worth knowing about; otherwise the installation will be a rough ride.

Before patching, first roll back the root.sh configuration on all nodes:
/grid/11.2/crs/install/rootcrs.pl -verbose -deconfig -force
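Once the deconfig finishes, verify that no clusterware daemons are left running on either node before patching (a plain process listing is enough):

ps -ef | egrep 'ohasd|ocssd|crsd|gipcd' | grep -v egrep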

Apply the patch:
opatch napply -local -oh /grid/11.2 -id 9974223 <<<< needs extra free space under the software home

The patch fails:
Prerequisite check "CheckApplicable" failed.
The details are:

Patch 9974223:
Copy Action: Destination File "/grid/11.2/bin/crsd.bin" is not writeable.
'oracle.crs, 11.2.0.2.0': Cannot copy file from 'crsd.bin' to '/grid/11.2/bin/crsd.bin'
Copy Action: Destination File "/grid/11.2/bin/gnsd" is not writeable.
'oracle.crs, 11.2.0.2.0': Cannot copy file from 'gnsd' to '/grid/11.2/bin/gnsd'
Copy Action: Destination File "/grid/11.2/bin/gnsd.bin" is not writeable.
'oracle.crs, 11.2.0.2.0': Cannot copy file from 'gnsd.bin' to '/grid/11.2/bin/gnsd.bin'
Copy Action: Destination File "/grid/11.2/bin/oclskd.bin" is not writeable.
'oracle.crs, 11.2.0.2.0': Cannot copy file from 'oclskd.bin' to '/grid/11.2/bin/oclskd.bin'
Copy Action: Destination File "/grid/11.2/bin/octssd.bin" is not writeable.
'oracle.crs, 11.2.0.2.0': Cannot copy file from 'octssd.bin' to '/grid/11.2/bin/octssd.bin'
Copy Action: Destination File "/grid/11.2/bin/ohasd.bin" is not writeable.
'oracle.crs, 11.2.0.2.0': Cannot copy file from 'ohasd.bin' to '/grid/11.2/bin/ohasd.bin'
Copy Action: Destination File "/grid/11.2/bin/ologgerd" is not writeable.
'oracle.crs, 11.2.0.2.0': Cannot copy file from 'ologgerd' to '/grid/11.2/bin/ologgerd'
Copy Action: Destination File "/grid/11.2/bin/orarootagent.bin" is not writeable.
'oracle.crs, 11.2.0.2.0': Cannot copy file from 'orarootagent.bin' to '/grid/11.2/bin/orarootagent.bin'
Copy Action: Destination File "/grid/11.2/bin/osysmond.bin" is not writeable.
'oracle.crs, 11.2.0.2.0': Cannot copy file from 'osysmond.bin' to '/grid/11.2/bin/osysmond.bin'

UtilSession failed:
Prerequisite check "CheckApplicable" failed.
Log file location: /grid/11.2/cfgtoollogs/opatch/opatch2020-12-03_17-42-42PM_1.log

OPatch failed with error code 73
Fix (note that this loosens permissions root.sh originally set — see the permissions discussion near the end):
chmod -R g+w /grid
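Then spot-check that the binaries OPatch complained about are now group-writable, e.g.:

ls -l /grid/11.2/bin/crsd.bin /grid/11.2/bin/ohasd.bin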

Upgrade OPatch:
unzip p6880880_112000_AIX64-5L.zip -d /grid/11.2/
rac-node2:/oradata/database_112020_AIX64-5L$ opatch version
OPatch Version: 11.2.0.3.6

OPatch succeeded.

Next error: some files are in use. Check which files are loaded and need to be released:

root@rac-node1-2:/>genld -l | grep /grid/11.2
9000000072a1000 b22e /grid/11.2/jdk/jre/bin/libnio.a
900000007280000 20399 /grid/11.2/jdk/jre/bin/libnet.a
800000000000000 8a7a /grid/11.2/oui/lib/aix/liboraInstaller.so
900000007268000 17838 /grid/11.2/jdk/jre/bin/libzip.a
90000000289c000 1d41 /grid/11.2/jdk/jre/bin/libwrappers.a
900000006b58000 37b76 /grid/11.2/jdk/jre/bin/libjava.a
90000000723a000 21700 /grid/11.2/jdk/jre/bin/libj9ute23.so
900000007227000 12c7c /grid/11.2/jdk/jre/bin/libiverel23.so
9000000071c0000 66acc /grid/11.2/jdk/jre/bin/libjclscar_23.so
90000000719b000 2427d /grid/11.2/jdk/jre/bin/libj9vrb23.so
900000007168000 32c27 /grid/11.2/jdk/jre/bin/libj9jvmti23.so
900000007139000 2ed2c /grid/11.2/jdk/jre/bin/libj9dyn23.so
9000000070a1000 97a11 /grid/11.2/jdk/jre/bin/libj9gc23.so
900000006bdd000 4abfb9 /grid/11.2/jdk/jre/bin/libj9jit23.so
900000006bcc000 10eef /grid/11.2/jdk/jre/bin/libj9trc23.so
900000006bba000 11bab /grid/11.2/jdk/jre/bin/libj9zlib23.so
900000006b90000 29413 /grid/11.2/jdk/jre/bin/libj9dmp23.so
900000006b08000 4f5e4 /grid/11.2/jdk/jre/bin/libj9prt23.so
90000000289a000 1d2f /grid/11.2/jdk/jre/bin/libj9hookable23.so
900000006a99000 6ee0e /grid/11.2/jdk/jre/bin/libj9vm23.so
900000006a8d000 bdcf /grid/11.2/jdk/jre/bin/libj9thr23.so
900000002898000 1ff3 /grid/11.2/jdk/jre/bin/libjsig.so
900000006a71000 1b631 /grid/11.2/jdk/jre/bin/j9vm/libjvm.so
900000006a61000 f6aa /grid/11.2/jdk/jre/bin/classic/libjvm.so
Related processes:
root@rac-node1-2:/>ps -ef | grep java
root 9371774 9634040 0 Dec 02 - 0:27 /var/opt/tivoli/ep/_jvm/jre/bin/java -Xmx384m -Xminf0.01 -Xmaxf0.4 -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Xbootclasspath/a:/var/opt/tivoli/ep/runtime/core/eclipse/plugins/com.ibm.rcp.base_6.2.1.20091117-1800/rcpbootcp.jar:/var/opt/tivoli/ep/lib/com.ibm.logging.icl_1.1.1.jar:/var/opt/tivoli/ep/lib/jaas2zos.jar:/var/opt/tivoli/ep/lib/jaasmodule.jar:/var/opt/tivoli/ep/lib/lwinative.jar:/var/opt/tivoli/ep/lib/lwinl.jar:/var/opt/tivoli/ep/lib/lwirolemap.jar:/var/opt/tivoli/ep/lib/lwisecurity.jar:/var/opt/tivoli/ep/lib/lwitools.jar:/var/opt/tivoli/ep/lib/passutils.jar:../../runtime/agent/lib/cas-bootcp.jar -Xverify:none -cp eclipse/launch.jar:eclipse/startup.jar:/var/opt/tivoli/ep/runtime/core/eclipse/plugins/com.ibm.rcp.base_6.2.1.20091117-1800/launcher.jar com.ibm.lwi.LaunchLWI
root 16056386 17629248 0 18:24:17 pts/1 0:00 grep java
oracle 17825918 16973838 0 17:57:33 pts/0 4:28 /grid/11.2/jdk/bin/java -mx160m -Xverify:none -cp /grid/11.2/OPatch/ocm/lib/emocmclnt.jar:/grid/11.2/oui/jlib/OraInstaller.jar:/grid/11.2/oui/jlib/OraPrereq.jar:/grid/11.2/oui/jlib/share.jar:/grid/11.2/oui/jlib/orai18n-mapping.jar:/grid/11.2/oui/jlib/xmlparserv2.jar:/grid/11.2/oui/jlib/emCfg.jar:/grid/11.2/oui/jlib/ojmisc.jar:/grid/11.2/OPatch/jlib/opatch.jar:/grid/11.2/OPatch/jlib/opatchsdk.jar:/grid/11.2/OPatch/oplan/jlib/automation.jar:/grid/11.2/OPatch/oplan/jlib/apache-commons/commons-cli-1.0.jar:/grid/11.2/OPatch/jlib/oracle.opatch.classpath.jar:/grid/11.2/OPatch/oplan/jlib/jaxb/activation.jar:/grid/11.2/OPatch/oplan/jlib/jaxb/jaxb-api.jar:/grid/11.2/OPatch/oplan/jlib/jaxb/jaxb-impl.jar:/grid/11.2/OPatch/oplan/jlib/jaxb/jsr173_1.0_api.jar:/grid/11.2/OPatch/oplan/jlib/OsysModel.jar:/grid/11.2/OPatch/oplan/jlib/osysmodel-utils.jar:/grid/11.2/OPatch/oplan/jlib/CRSProductDriver.jar:/grid/11.2/OPatch/oplan/jlib/oracle.oplan.classpath.jar -DOPatch.ORACLE_HOME=/grid/11.2 -DOPatch.DEBUG=false -DOPatch.RUNNING_DIR=/grid/11.2/OPatch -DOPatch.MW_HOME= -DOPatch.WL_HOME= -DOPatch.COMMON_COMPONENTS_HOME= -DOPatch.OUI_LOCATION= -DOPatch.FMW_COMPONENT_HOME= -DOPatch.OPATCH_CLASSPATH= -DOPatch.WEBLOGIC_CLASSPATH= -Xbootclasspath/a:/grid/11.2/OPatch/ocm/lib/emocmclnt.jar:/grid/11.2/OPatch/ocm/lib/emocmcommon.jar:/grid/11.2/OPatch/ocm/lib/emocmclnt-14.jar:/grid/11.2/OPatch/ocm/lib/osdt_core3.jar:/grid/11.2/OPatch/ocm/lib/osdt_jce.jar:/grid/11.2/OPatch/ocm/lib/http_client.jar:/grid/11.2/OPatch/ocm/lib/regexp.jar:/grid/11.2/OPatch/ocm/lib/jcert.jar:/grid/11.2/OPatch/ocm/lib/jnet.jar:/grid/11.2/OPatch/ocm/lib/jsse.jar:/grid/11.2/OPatch/ocm/lib/log4j-core.jar:/grid/11.2/OPatch/ocm/lib/xmlparserv2.jar oracle/opatch/OPatch napply -local -oh /grid/11.2 -id 9974223 -invPtrLoc /grid/11.2/oraInst.loc
Fix: kill the related processes:
root@rac-node1-2:/>kill 9371774 9634040
root@rac-node1-2:/>kill -9 17825918 16973838
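On AIX it also helps to run slibclean as root after stopping the processes; it unloads shared libraries that no longer have active users from kernel memory, which is often what keeps OPatch from overwriting them. Re-run the genld check afterwards:

/usr/sbin/slibclean
genld -l | grep /grid/11.2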
After the above was resolved, the original session had timed out and dropped. Re-running the patch then failed with:
OPatchSession cannot load inventory for the given Oracle Home /grid/11.2. Possible causes are:
No read or write permission to ORACLE_HOME/.patch_storage
Central Inventory is locked by another OUI instance
No read permission to Central Inventory
The lock file exists in ORACLE_HOME/.patch_storage
The Oracle Home does not exist in Central Inventory

UtilSession failed: Lock file left by a different patch, OPatch will not try re-using the lock file.
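First confirm that no OPatch session is actually still running:

ps -ef | grep -i opatch | grep -v grep

If nothing is, the lock is stale and can be cleared.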

Fix: clear the stale lock under the Grid home:
cd $ORACLE_HOME/.patch_storage
cp patch_locked patch_locked.old
rm -rf patch_locked
touch patch_free

Re-running the patch now succeeds:
opatch napply -local -oh /grid/11.2 -id 9974223

Verify:
rac-node1-2:/home/oracle$ opatch lspatches;
9974223;


Check free space:
df -kgm | grep grid


Node 2:
rac-node2:/oradata/database_112020_AIX64-5L/9974223$ date
Thu Dec 3 19:55:11 CST 2020

rac-node2:/oradata/database_112020_AIX64-5L/9974223$ opatch lspatches;
9974223;


Run the scripts
Running root.sh on each node throws the error below, but the script keeps going (and, as the full output shows, still completes successfully):
/grid/11.2/bin/lsdb.bin: Failed to initialize Cluster Context
skgxn error number 1311719766
operation skgxnqtsz
location SKGXN not av
errno 0: Error 0
/grid/11.2/bin/lsdb.bin: Cannot allocate memory of size 0
User oracle has the required capabilities to run CSSD in realtime mode
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.

Full output on node 1:
root@rac-node1-2:/grid/11.2>./root.sh
Running Oracle 11g root script...

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /grid/11.2

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /grid/11.2/crs/install/crsconfig_params
/grid/11.2/bin/lsdb.bin: Failed to initialize Cluster Context
skgxn error number 1311719766
operation skgxnqtsz
location SKGXN not av
errno 0: Error 0
/grid/11.2/bin/lsdb.bin: Cannot allocate memory of size 0
User oracle has the required capabilities to run CSSD in realtime mode
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9359: The AIX ODM entry for 'acfsctl' was successfully added.
ACFS-9359: The AIX ODM entry for 'advmctl' was successfully added.
ACFS-9359: The AIX ODM entry for 'advmvol' was successfully added.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac-node1-2'
CRS-2676: Start of 'ora.mdnsd' on 'rac-node1-2' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac-node1-2'
CRS-2676: Start of 'ora.gpnpd' on 'rac-node1-2' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac-node1-2'
CRS-2672: Attempting to start 'ora.gipcd' on 'rac-node1-2'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac-node1-2' succeeded
CRS-2676: Start of 'ora.gipcd' on 'rac-node1-2' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac-node1-2'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac-node1-2'
CRS-2676: Start of 'ora.diskmon' on 'rac-node1-2' succeeded
CRS-2676: Start of 'ora.cssd' on 'rac-node1-2' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded

Node 2:
root@rac-node2:/>cd /grid
root@rac-node2:/grid>ls
11.2 lost+found oraInventory
root@rac-node2:/grid>cd 11.2
root@rac-node2:/grid/11.2>./root.sh
Running Oracle 11g root script...

The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /grid/11.2

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /grid/11.2/crs/install/crsconfig_params
/grid/11.2/bin/lsdb.bin: Failed to initialize Cluster Context
skgxn error number 1311719766
operation skgxnqtsz
location SKGXN not av
errno 0: Error 0
/grid/11.2/bin/lsdb.bin: Cannot allocate memory of size 0
User oracle has the required capabilities to run CSSD in realtime mode
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'system'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9359: The AIX ODM entry for 'acfsctl' was successfully added.
ACFS-9359: The AIX ODM entry for 'advmctl' was successfully added.
ACFS-9359: The AIX ODM entry for 'advmvol' was successfully added.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node rac-node1-2, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
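With node 2 joined, a quick cluster-wide sanity check can be done with the standard 11.2 tools:

/grid/11.2/bin/crsctl check cluster -all
/grid/11.2/bin/olsnodes -n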


At first the SCAN IP resource was not healthy; starting the SCAN listener fixed it:


srvctl start scan_listener
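And confirm the SCAN resources afterwards:

srvctl status scan
srvctl status scan_listener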
In hindsight, the permission changes made for patching were a problem: chown -R oracle.dba /grid was run, but many files under the Grid home are supposed to be owned by root, and it is the root.sh run that sets those ownerships when the cluster is configured. So although root.sh later completed successfully after patching, a lot of files ended up with the wrong ownership and permissions.
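A quick way to gauge how far the ownership drifted: count the files under the Grid home still owned by root (a healthy, configured 11.2 Grid home contains a fair number of root-owned binaries; after a recursive chown to oracle the count drops to almost zero):

find /grid/11.2 -user root | wc -l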

Given that, the recommendation here is to reinstall the clusterware; that is the more reliable way to fix this.



Installing the cluster database software then reports an error:
[INS-35354] The system on which you are attempting to install Oracle RAC is not part of a valid cluster.

Fix: modify the inventory.xml file, adding the CRS="true" attribute to the Grid home entry.
root@rac-node1-2:/grid/oraInventory/ContentsXML>ls -lrt
total 24
-rw-rw---- 1 oracle dba 489 Dec 02 16:39 inventory.xml
-rw-rw---- 1 oracle dba 270 Dec 03 18:47 libs.xml
-rw-rw---- 1 oracle dba 307 Dec 03 18:47 comps.xml
root@rac-node1-2:/grid/oraInventory/ContentsXML>cat inv*
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2010, Oracle. All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
<SAVED_WITH>11.2.0.2.0</SAVED_WITH>
<MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="Ora11g_gridinfrahome1" LOC="/grid/11.2" TYPE="O" IDX="1">
<NODE_LIST>
<NODE NAME="rac-node1-2"/>
<NODE NAME="rac-node2"/>
</NODE_LIST>
</HOME>
</HOME_LIST>
</INVENTORY>
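For reference, after the fix the HOME element for the Grid home should carry the CRS flag:

<HOME NAME="Ora11g_gridinfrahome1" LOC="/grid/11.2" TYPE="O" IDX="1" CRS="true">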

Rather than hand-editing the file, set the flag with runInstaller -updateNodeList, run locally on each node:

rac-node1-2:/grid/11.2/oui/bin$ ./runInstaller -updateNodeList "CLUSTER_NODES={rac-node1-2,rac-node2}" ORACLE_HOME="/grid/11.2" ORACLE_HOME_NAME="Ora11g_gridinfrahome1" LOCAL_NODE="rac-node1-2" CRS=true;
rac-node2:/grid/11.2/oui/bin$ ./runInstaller -updateNodeList "CLUSTER_NODES={rac-node1-2,rac-node2}" ORACLE_HOME="/grid/11.2" ORACLE_HOME_NAME="Ora11g_gridinfrahome1" LOCAL_NODE="rac-node2" CRS=true;

Execution results
Node 1:
rac-node1-2:/grid/11.2/oui/bin$ ./runInstaller -updateNodeList "CLUSTER_NODES={rac-node1-2,rac-node2}" ORACLE_HOME="/grid/11.2" ORACLE_HOME_NAME="Ora11g_gridinfrahome1" LOCAL_NODE="rac-node1-2" CRS=true
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 33280 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /grid/oraInventory
'UpdateNodeList' was successful.
Node 2:
rac-node2:/grid/11.2/oui/bin$ ./runInstaller -updateNodeList "CLUSTER_NODES={rac-node1-2,rac-node2}" ORACLE_HOME="/grid/11.2" ORACLE_HOME_NAME="Ora11g_gridinfrahome1" LOCAL_NODE="rac-node2" CRS=true
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB. Actual 33280 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /grid/oraInventory
'UpdateNodeList' was successful.








