
Top 5 Grid Infrastructure Startup Issues [ID 1368382.1]


A small issue came up during recent work; here is a quick note of the cause and the fix.

Applies to:

Oracle Database - Enterprise Edition - Version 11.2.0.1 and later
Information in this document applies to any platform.

Purpose

The purpose of this note is to provide a summary of the top 5 issues that may prevent the successful startup of the Grid Infrastructure (GI) stack.

Scope

This note applies to 11gR2 Grid Infrastructure only.

To determine the status of GI, please run the following commands:

1. $GRID_HOME/bin/crsctl check crs
2. $GRID_HOME/bin/crsctl stat res -t -init
3. $GRID_HOME/bin/crsctl stat res -t
4. ps -ef | egrep 'init|d.bin'
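When the stack is healthy, the first command typically returns output similar to the following (if any of these services report a CRS-45xx error instead, match it against the issues below):

```
$ $GRID_HOME/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
```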


Details


Issue #1: CRS-4639: Could not contact Oracle High Availability Services; ohasd.bin is not running, or ohasd.bin is running but init.ohasd or other processes are missing

Symptoms:

1. Command '$GRID_HOME/bin/crsctl check crs' returns error:
   CRS-4639: Could not contact Oracle High Availability Services
2. Command 'ps -ef | grep init' does not show a line similar to:
   root 4878 1 0 Sep12 ? 00:00:02 /bin/sh /etc/init.d/init.ohasd run
3. Command 'ps -ef | grep d.bin' does not show a line similar to:
   root 21350 1 6 22:24 ? 00:00:01 /u01/app/11.2.0/grid/bin/ohasd.bin reboot
   Or it may show only the "ohasd.bin reboot" process without any other processes

Possible Causes:

1. The file '/etc/inittab' does not contain the line
   h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
2. runlevel 3 has not been reached; some rc3 script is hanging
3. The init process (pid 1) did not spawn the process defined in /etc/inittab (h1), or a bad entry before init.ohasd such as xx:wait:<process> blocked the start of init.ohasd
4. CRS autostart is disabled
5. The Oracle Local Registry ($GRID_HOME/cdata/<node>.olr) is missing or corrupted

Solutions:

1. Add the following line to /etc/inittab:
   h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null
   and then run "init q" as the root user.
2. Run command 'ps -ef | grep rc' and kill any remaining rc3 scripts that appear to be stuck.
3. Remove the bad entry before init.ohasd. Consult the OS vendor if "init q" does not spawn the "init.ohasd run" process.
4. Enable CRS autostart:
   # crsctl enable crs
   # crsctl start crs
5. Restore the OLR from backup, as the root user:
   # touch $GRID_HOME/cdata/<node>.olr
   # chown root:oinstall $GRID_HOME/cdata/<node>.olr
   # ocrconfig -local -restore $GRID_HOME/cdata/<node>/backup_<date>_<num>.olr
   # crsctl start crs

   If no OLR backup exists for any reason, a deconfig followed by rerunning root.sh is required to recreate the OLR, as the root user:
   # $GRID_HOME/crs/install/rootcrs.pl -deconfig -force
   # $GRID_HOME/root.sh
6. If the above does not help, check the OS messages file for ohasd.bin logger messages and manually execute the crswrapexece.pl command mentioned there, with LD_LIBRARY_PATH set to <GRID_HOME>/lib, to continue debugging.
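The inittab-related checks in causes 1-3 can be scripted. A minimal sketch, assuming Linux with a SysV-style /etc/inittab (the check_inittab helper is hypothetical, not part of the GI tooling):

```shell
#!/bin/sh
# Sketch: check an inittab-style file for the init.ohasd respawn entry.
check_inittab() {
    # $1: path to an inittab-style file
    if grep -q 'respawn:/etc/init.d/init.ohasd run' "$1" 2>/dev/null; then
        echo "init.ohasd entry present"
    else
        echo "init.ohasd entry MISSING - add it and run 'init q' as root"
    fi
}

# On a live node:
check_inittab /etc/inittab
```

If the entry is present but the process still does not appear in 'ps -ef', suspect a hanging rc3 script or a blocking entry earlier in /etc/inittab, per causes 2 and 3.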


Issue #2: CRS-4530: Communications failure contacting Cluster Synchronization Services daemon, ocssd.bin is not running

Symptoms:

1. Command '$GRID_HOME/bin/crsctl check crs' returns errors:
   CRS-4638: Oracle High Availability Services is online
   CRS-4535: Cannot communicate with Cluster Ready Services
   CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
   CRS-4534: Cannot communicate with Event Manager
2. Command 'ps -ef | grep d.bin' does not show a line similar to:
   oragrid 21543 1 1 22:24 ? 00:00:01 /u01/app/11.2.0/grid/bin/ocssd.bin
3. ocssd.bin is running but aborts with message "CLSGPNP_CALL_AGAIN" in ocssd.log
4. ocssd.log shows:

   2012-01-27 13:42:58.796: [    CSSD][19]clssnmvDHBValidateNCopy: node 1, racnode1, has a disk HB, but no network HB, DHB has rcfg 223132864, wrtcnt, 1112, LATS 783238209, lastSeqNo 1111, uniqueness 1327692232, timestamp 1327693378/787089065

5. For clusters of 3 or more nodes: 2 nodes form a cluster fine, but the 3rd node fails after joining; ocssd.log shows:

   2012-02-09 11:33:53.048: [    CSSD][1120926016](:CSSNM00008:)clssnmCheckDskInfo: Aborting local node to avoid splitbrain. Cohort of 2 nodes with leader 2, racnode2, is smaller than cohort of 2 nodes led by node 1, racnode1, based on map type 2
   2012-02-09 11:33:53.048: [    CSSD][1120926016]###################################
   2012-02-09 11:33:53.048: [    CSSD][1120926016]clssscExit: CSSD aborting from thread clssnmRcfgMgrThread

6. ocssd.bin startup times out after 10 minutes:

   2012-04-08 12:04:33.153: [    CSSD][1]clssscmain: Starting CSS daemon, version 11.2.0.3.0, in (clustered) mode with uniqueness value 1333911873
   ......
   2012-04-08 12:14:31.994: [    CSSD][5]clssgmShutDown: Received abortive shutdown request from client.
   2012-04-08 12:14:31.994: [    CSSD][5]###################################
   2012-04-08 12:14:31.994: [    CSSD][5]clssscExit: CSSD aborting from thread GMClientListener
   2012-04-08 12:14:31.994: [    CSSD][5]###################################
   2012-04-08 12:14:31.994: [    CSSD][5](:CSSSC00012:)clssscExit: A fatal error occurred and the CSS daemon is terminating abnormally

Possible Causes:

1. Voting disk is missing or inaccessible
2. Multicast is not working (for 11.2.0.2+)
3. Private network is not working; 'ping' or 'traceroute <private host>' shows destination unreachable. Or a firewall is enabled on the private network while ping/traceroute work fine
4. Private network is pingable with a normal ping command but not with a jumbo-frame-sized payload (e.g. 'ping -s 8900 <private ip>') when jumbo frames are enabled (MTU: 9000+). Or some cluster nodes have jumbo frames set (MTU: 9000) while the problem node does not (MTU: 1500)
5. gpnpd does not come up, stuck in dispatch thread, Bug 10105195
6. Too many disks discovered via asm_diskstring, or slow scan of disks due to Bug 13454354 (Solaris 11.2.0.3 only)

Solutions:

1. Restore voting disk access by checking storage access, disk permissions, etc.
   If the voting disk is missing from the OCR ASM diskgroup, start CRS in exclusive mode and recreate the voting disk:
   # crsctl start crs -excl
   # crsctl replace votedisk <+OCRVOTE diskgroup>
2. Refer to Document 1212703.1 for the multicast test and fix.
3. Consult with the network administrator to restore private network access or disable the firewall for the private network (for Linux, check 'service iptables status' and 'service ip6tables status').
4. Engage the network admin to enable jumbo frames at the switch layer if they are enabled at the network card.
5. Kill the gpnpd.bin process on the surviving node; refer to Document 10105195.8.
   Once the above issues are resolved, restart the Grid Infrastructure stack.
   If ping/traceroute all work for the private network and a failed 11.2.0.1 to 11.2.0.2 upgrade has occurred, check Bug 13416559 for a workaround.
6. Limit the number of ASM disks scanned by supplying a more specific asm_diskstring; refer to Bug 13583387.
   For Solaris 11.2.0.3 only, apply patch 13250497; see Document 1451367.1.
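The jumbo-frame check in cause 4 is easy to get wrong because of header overhead: the largest ICMP payload that fits in one frame is the MTU minus 28 bytes (20-byte IP header plus 8-byte ICMP header), so MTU 9000 allows at most an 8972-byte ping payload. A minimal sketch (Linux ping syntax; the interconnect address is a placeholder):

```shell
#!/bin/sh
# Largest ICMP echo payload that fits in one frame for a given MTU:
# MTU - 20 (IP header) - 8 (ICMP header).
icmp_payload_for_mtu() {
    echo $(( $1 - 28 ))
}

# Probe the interconnect without fragmentation (Linux 'ping -M do');
# replace <private ip> with the remote node's private address:
#   ping -M do -c 3 -s "$(icmp_payload_for_mtu 9000)" <private ip>
```

If the full-size probe fails while a default-size ping succeeds, jumbo frames are likely not enabled end to end (NIC, switch, and remote NIC), matching cause 4.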


Issue #3: CRS-4535: Cannot communicate with Cluster Ready Services, crsd.bin is not running

Symptoms:

1. Command '$GRID_HOME/bin/crsctl check crs' returns errors:
   CRS-4638: Oracle High Availability Services is online
   CRS-4535: Cannot communicate with Cluster Ready Services
   CRS-4529: Cluster Synchronization Services is online
   CRS-4534: Cannot communicate with Event Manager
2. Command 'ps -ef | grep d.bin' does not show a line similar to:
   root 23017 1 1 22:34 ? 00:00:00 /u01/app/11.2.0/grid/bin/crsd.bin reboot
3. Even if the crsd.bin process exists, command 'crsctl stat res -t -init' shows:
   ora.crsd
      1    ONLINE    INTERMEDIATE

Possible Causes:

1. ocssd.bin is not running or resource ora.cssd is not ONLINE
2. The +ASM<n> instance cannot start
3. OCR is inaccessible
4. Network configuration has been changed, causing a gpnp profile.xml mismatch
5. The $GRID_HOME/crs/init/<host>.pid file for crsd has been removed or renamed manually; crsd.log shows: 'Error3 -2 writing PID to the file'
6. ocr.loc content mismatches that of other cluster nodes; crsd.log shows: 'Shutdown CacheLocal. my hash ids don't match'

Solutions:

1. Check the solution for Issue #2; ensure ocssd.bin is running and ora.cssd is ONLINE.
2. For 11.2.0.2+, ensure that the resource ora.cluster_interconnect.haip is ONLINE; refer to Document 1383737.1 for ASM startup issues related to HAIP.
3. Ensure the OCR disk is available and accessible. If the OCR is lost for any reason, refer to Document 1062983.1 on how to restore the OCR.
4. Restore the network configuration to match the interface defined in $GRID_HOME/gpnp/<node>/profiles/peer/profile.xml; refer to Document 283684.1 for private network modification.
5. Touch the <host>.pid file under $GRID_HOME/crs/init.
   For 11.2.0.1, the file is owned by the <grid> user.
   For 11.2.0.2, the file is owned by the root user.
6. Use the 'ocrconfig -repair' command to fix the ocr.loc content. For example, as the root user:
   # ocrconfig -repair -add +OCR2 (to add an entry)
   # ocrconfig -repair -delete +OCR2 (to remove an entry)
   ohasd.bin needs to be up and running for the above command to work.

Once the above issues are resolved, either restart the GI stack or start crsd.bin via:
   # crsctl start res ora.crsd -init
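For cause 6, a quick way to spot an ocr.loc mismatch before resorting to 'ocrconfig -repair' is to compare the ocrconfig entries from each node's copy of the file (on Linux it normally lives under /etc/oracle; the helper name and the /tmp paths below are placeholders for copies pulled from each node):

```shell
#!/bin/sh
# Compare the ocrconfig_loc entries of two ocr.loc files, ignoring
# ordering, comments, and blank lines.
ocrloc_matches() {
    a=$(grep '^ocrconfig' "$1" 2>/dev/null | sort)
    b=$(grep '^ocrconfig' "$2" 2>/dev/null | sort)
    [ "$a" = "$b" ]
}

# Usage (placeholder paths):
#   ocrloc_matches /tmp/node1_ocr.loc /tmp/node2_ocr.loc \
#       && echo "ocr.loc consistent" \
#       || echo "mismatch - fix with 'ocrconfig -repair' as root"
```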


Issue #4: Agent or mdnsd.bin, gpnpd.bin, gipcd.bin not running

???? Symptoms:

????1. orarootagent not running. ohasd.log shows:
2012-12-21 02:14:05.071: [ ? ?AGFW][24] {0:0:2} Created alert : (:CRSAGF00123:) : ?Failed to start the agent process: /grid/11.2.0/grid_2/bin/orarootagent Category: -1 Operation: fail Loc: canexec2 OS error: 0 Other : no exe permission, file [/grid/11.2.0/grid_2/bin/orarootagent]?
2. mdnsd.bin, gpnpd.bin or gipcd.bin not running, here is a sample for mdnsd log file:
2012-12-31 21:37:27.601: [? clsdmt][1088776512]Creating PID [4526] file for home /u01/app/11.2.0/grid host lc1n1 bin mdns to /u01/app/11.2.0/grid/mdns/init/
2012-12-31 21:37:27.602: [? clsdmt][1088776512]Error3 -2 writing PID [4526] to the file []?
2012-12-31 21:37:27.602: [? clsdmt][1088776512]Failed to record pid for MDNSD
or
2012-12-31 21:39:52.656: [? clsdmt][1099217216]Creating PID [4645] file for home /u01/app/11.2.0/grid host lc1n1 bin mdns to /u01/app/11.2.0/grid/mdns/init/
2012-12-31 21:39:52.656: [? clsdmt][1099217216]Writing PID [4645] to the file [/u01/app/11.2.0/grid/mdns/init/lc1n1.pid]
2012-12-31 21:39:52.656: [? clsdmt][1099217216]Failed to record pid for MDNSD
3. oraagent or appagent not running, crsd.log shows:
2012-12-01 00:06:24.462: [ ? ?AGFW][1164069184] {0:2:27} Created alert : (:CRSAGF00130:) : ?Failed to start the agent /u01/app/grid/11.2.0/bin/appagent_oracle

Possible Causes:

1. orarootagent is missing execute permission
2. The <node>.pid file associated with the process is missing, or the file has wrong ownership or permissions
3. Wrong permissions/ownership within GRID_HOME

Solutions:

1. Either compare the permissions/ownership with a good node's GRID_HOME and make corrections accordingly, or as the root user:
   # cd <GRID_HOME>/crs/install
   # ./rootcrs.pl -unlock
   # ./rootcrs.pl -patch
   This will stop the clusterware stack, set permission/ownership to root for the required files, and restart the clusterware stack.
2. If the corresponding <node>.pid does not exist, touch the file with correct ownership and permissions; otherwise correct the <node>.pid ownership/permissions as required, then restart the clusterware stack.
   Here is the list of <node>.pid files under <GRID_HOME>, owned by root:root, permission 644:
   ./ologgerd/init/<node>.pid
   ./osysmond/init/<node>.pid
   ./ctss/init/<node>.pid
   ./ohasd/init/<node>.pid
   ./crs/init/<node>.pid
   Owned by <grid>:oinstall, permission 644:
   ./mdns/init/<node>.pid
   ./evm/init/<node>.pid
   ./gipc/init/<node>.pid
   ./gpnp/init/<node>.pid
3. For cause 3, refer to solution 1.
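The ownership/permission table in solution 2 can be checked mechanically. A minimal sketch using Linux 'stat' syntax (the check_pid_file helper and the example paths are hypothetical; on Solaris/AIX the stat flags differ):

```shell
#!/bin/sh
# Verify a pid file's owner:group and octal mode against expected values.
check_pid_file() {
    # $1: file  $2: expected owner:group  $3: expected octal mode
    got=$(stat -c '%U:%G %a' "$1" 2>/dev/null) || { echo "$1: missing"; return 1; }
    if [ "$got" = "$2 $3" ]; then
        echo "$1: OK"
    else
        echo "$1: got [$got], want [$2 $3]"
    fi
}

# Example (placeholder paths and node name):
#   check_pid_file /u01/app/11.2.0/grid/crs/init/racnode1.pid root:root 644
#   check_pid_file /u01/app/11.2.0/grid/mdns/init/racnode1.pid grid:oinstall 644
```

Running the checks against each path in the table above on both a good and a bad node quickly isolates which pid file has drifted.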


Issue #5: ASM instance does not start, ora.asm is OFFLINE

Symptoms:

1. Command 'ps -ef | grep asm' shows no ASM processes
2. Command 'crsctl stat res -t -init' shows:
   ora.asm
      1    ONLINE    OFFLINE

Possible Causes:

1. ASM spfile is corrupted
2. ASM discovery string is incorrect, and therefore the voting disk/OCR cannot be discovered
3. ASMLib configuration problem
4. ASM instances are using different cluster_interconnects; HAIP being OFFLINE on one node prevents the second ASM instance from starting

Solutions:

1. Create a temporary pfile to start the ASM instance, then recreate the spfile; see Document 1095214.1 for more details.
2. Refer to Document 1077094.1 to correct the ASM discovery string.
3. Refer to Document 1050164.1 to fix the ASMLib configuration.
4. Refer to Document 1383737.1 for the solution. For more information about HAIP, refer to Document 1210883.1.
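The temporary-pfile approach in solution 1 looks roughly like the following; every parameter value here is an illustrative assumption and should be taken from the ASM alert log and your actual diskgroup layout instead (see Document 1095214.1 for the full procedure):

```
$ cat /tmp/init+ASM1.ora          # hypothetical minimal pfile
instance_type=asm
asm_diskstring='/dev/oracleasm/disks/*'
asm_diskgroups='OCRVOTE','DATA'

$ sqlplus / as sysasm
SQL> startup pfile='/tmp/init+ASM1.ora';
SQL> create spfile='+OCRVOTE' from pfile='/tmp/init+ASM1.ora';
```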


For further debugging of GI startup issues, please refer to Document 1050908.1 Troubleshoot Grid Infrastructure Startup Issues.


Top 5 Grid Infrastructure Startup Issues [ID 1368382.1]

