
RAC Installation Errors


1. Oracle CRS stack is already configured

  Running root.sh on the node reports:

  [root@rac2 crs]# ./root.sh

  WARNING: directory '/u01' is not owned by root

  Checking to see if Oracle CRS stack is already configured

  Oracle CRS stack is already configured and will be running under init(1M)

  The actual cause is that the storage device file names do not match those used in the previous installation.

  Delete the file below and rerun root.sh:

  [root@rac2 crs]# rm /etc/oracle/scls_scr/rac2/oracle/cssfatal
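
  The flag file under /etc/oracle/scls_scr is what makes root.sh believe CRS is already configured. A minimal cleanup sketch, assuming the node name rac2 and the CRS home used elsewhere in this article (both are assumptions; adjust to your environment):

  # Hypothetical cleanup -- verify the paths before removing anything
  NODE=rac2
  CRS_HOME=/u01/app/oracle/product/10.2.0/crs
  rm -f /etc/oracle/scls_scr/$NODE/oracle/cssfatal   # clear the "already configured" flag
  cd $CRS_HOME && ./root.sh                          # rerun root.sh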

  2. dd the device files

  During RAC installation, the second node hangs at:

  Waiting for the Oracle CRSD and EVMD to start

  Waiting for the Oracle CRSD and EVMD to start

  Waiting for the Oracle CRSD and EVMD to start

  Waiting for the Oracle CRSD and EVMD to start

  It turned out that the voting disk and OCR had not been cleared completely. The command Oracle provides is:

  dd if=/dev/zero of=/dev/raw/raw1 bs=8192 count=12800
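
  If both cluster devices need wiping, the same dd pattern applies to each raw device. A sketch assuming /dev/raw/raw1 holds the OCR and /dev/raw/raw2 the voting disk (this mapping is an assumption; confirm it against /etc/sysconfig/rawdevices or your udev rules first, since dd is destructive):

  # Assumed device mapping -- double-check before running
  dd if=/dev/zero of=/dev/raw/raw1 bs=8192 count=12800   # clear the OCR device
  dd if=/dev/zero of=/dev/raw/raw2 bs=8192 count=12800   # clear the voting disk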

  3. An error commonly encountered when running root.sh on the last node:

  CSS is active on these nodes.

  node1

  node2

  CSS is active on all nodes.

  Waiting for the Oracle CRSD and EVMD to start

  Waiting for the Oracle CRSD and EVMD to start

  Oracle CRS stack installed and running under init(1M)

  Running vipca(silent) for configuring nodeapps

  /u01/app/oracle/product/10.2.0/crs/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory

  When this error appears, add an unset line to the following scripts on each node:

  [root@node2 bin]# vi /u01/app/oracle/product/10.2.0/crs/bin/vipca

  if [ "$arch" = "i686" -o "$arch" = "ia64" ]

  then

  LD_ASSUME_KERNEL=2.4.19

  export LD_ASSUME_KERNEL

  fi

  unset LD_ASSUME_KERNEL ------ added line

  #End workaround

  ;;

  [root@node2 bin]# vi /u01/app/oracle/product/10.2.0/crs/bin/srvctl

  LD_ASSUME_KERNEL=2.4.19

  export LD_ASSUME_KERNEL

  unset LD_ASSUME_KERNEL ------ added line

  [root@node1 bin]# vi /u01/app/oracle/product/10.2.0/crs/bin/vipca

  if [ "$arch" = "i686" -o "$arch" = "ia64" ]

  then

  LD_ASSUME_KERNEL=2.4.19

  export LD_ASSUME_KERNEL

  fi

  unset LD_ASSUME_KERNEL ------ added line

  #End workaround

  ;;

  [root@node1 bin]# vi /u01/app/oracle/product/10.2.0/crs/bin/srvctl

  LD_ASSUME_KERNEL=2.4.19

  export LD_ASSUME_KERNEL

  unset LD_ASSUME_KERNEL ------ added line

  Then run root.sh again on node 2:

  [root@node2 bin]# /u01/app/oracle/product/10.2.0/crs/root.sh
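
  Instead of editing both scripts by hand on every node, the same workaround can be scripted. The libpthread failure occurs because LD_ASSUME_KERNEL=2.4.19 points the loader at old LinuxThreads libraries that no longer exist on newer kernels, so unsetting it right after the export neutralizes it. A sketch assuming GNU sed and the CRS home shown above:

  # Assumes GNU sed (-i) and the default CRS home -- back up first
  CRS_HOME=/u01/app/oracle/product/10.2.0/crs
  for f in $CRS_HOME/bin/vipca $CRS_HOME/bin/srvctl; do
    cp "$f" "$f.bak"
    # drop LD_ASSUME_KERNEL immediately after it is exported
    sed -i '/export LD_ASSUME_KERNEL/a unset LD_ASSUME_KERNEL' "$f"
  done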

  4. ASM: Device is already labeled for ASM disk

  When a disk or partition has previously been used by an ASM disk group, attempting to relabel it with ASMLib as an Oracle ASM disk raises the error above.

  The failure looks like this:

  oracle@vvfs$ /etc/init.d/oracleasm createdisk VOL1 /dev/sda1

  Marking disk "/dev/sda1" as an ASM disk: asmtool:

  Device "/dev/sda1" is already labeled for ASM disk "" [FAILED]

  oracle@vvfs$ /etc/init.d/oracleasm deletedisk VOL1

  Removing ASM disk "VOL1" [FAILED]

  The fix is simple: zero out the disk header:

  dd if=/dev/zero of=<your_raw_device> bs=1024 count=100

  Now the operation succeeds:

  oracle@vvfs$ dd if=/dev/zero of=/dev/sda1 bs=1024 count=100

  100+0 records in

  100+0 records out

  oracle@vvfs$ /etc/init.d/oracleasm createdisk VOL1 /dev/sda1

  Marking disk "/dev/sda1" as an ASM disk: [ OK ]

  5. Error 0(Native: listNetInterfaces:[3])

  vipca and srvctl raise this error when no public or cluster_interconnect interface is registered in the OCR. List the available interfaces and register them with oifcfg:

  [root@node2 bin]# ./oifcfg iflist

  eth1 10.10.17.0

  virbr0 192.168.122.0

  eth0 192.168.100.0

  [root@node2 bin]# ./oifcfg setif -global eth0/192.168.100.0:public

  [root@node2 bin]# ./oifcfg setif -global eth1/10.10.17.0:cluster_interconnect

  [root@node2 bin]# ./oifcfg getif

  eth0 192.168.100.0 global public

  eth1 10.10.17.0 global cluster_interconnect

  Rerun crs/install/rootdelete.sh on the affected node,

  then run crs/root.sh again.
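
  Put together, the recovery sequence on the failing node might look like the sketch below (the CRS home path is an assumption carried over from the earlier steps):

  # Sketch: clean up the failed configuration, then rerun root.sh
  CRS_HOME=/u01/app/oracle/product/10.2.0/crs
  $CRS_HOME/install/rootdelete.sh   # undo the earlier failed root.sh
  $CRS_HOME/root.sh                 # rerun; vipca should now find the interfaces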
