What is the Oracle ASM Filter Driver (ASMFD)?
In short, it is a new feature that can replace both ASMLIB and hand-written udev rules, and it adds an I/O Filter capability on top, which is reflected in its name. ASMFD is currently available only on Linux, and it requires Oracle ASM 12.1.0.2, the latest release at the time of writing. Historically, because Linux does not guarantee the order in which block devices are discovered, there was no guarantee after a reboot that what used to be /dev/sda was still sda, so such raw device names could not be used directly as ASM disk paths. ASMLIB solved this by stamping each device with a label and letting ASM create its disks against that fixed name. Later, when downloading ASMLIB began to require a ULN account, everyone moved to udev instead, and I wrote several articles on how to set up udev rules on Linux, for example:
How to use udev for Oracle ASM in Oracle Linux 6
Oracle Datafiles & Block Device & Parted & Udev
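For reference, the udev approach those articles describe comes down to one rule per disk: match a stable identifier (typically the scsi_id of the device) and bind it to a fixed name with the right ownership. A typical rule looks roughly like the sketch below; the scsi_id value and names are placeholders, not taken from a real system:

```
# /etc/udev/rules.d/99-oracle-asmdevices.rules -- illustrative sketch only
KERNEL=="sd*", SUBSYSTEM=="block", PROGRAM=="/sbin/scsi_id -g -u -d /dev/$name", \
  RESULT=="1ATA_VBOX_HARDDISK_VB0d2cc127-000000", NAME="asm-disk1", \
  OWNER="grid", GROUP="asmdba", MODE="0660"
```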
Now Oracle has introduced ASMFD, which replaces ASMLIB and does away with the tedium of maintaining udev rules files by hand, and, most importantly, adds the I/O Filter capability.
What is the I/O Filter feature?
The official documentation says:
The Oracle ASM Filter Driver rejects any I/O requests that are invalid. This action eliminates accidental overwrites of Oracle ASM disks that would cause corruption in the disks and files within the disk group. For example, the Oracle ASM Filter Driver filters out all non-Oracle I/Os which could cause accidental overwrites.
In other words, the feature rejects all invalid I/O requests; its main purpose is to prevent accidental overwrites of the devices underlying ASM disks. In the tests later on, you will see that even something as brutal as a root user zeroing the whole disk with dd is filtered out.
Pretty nice. So how do we enable it?
An existing ASM deployment normally uses devices bound via ASMLIB or udev, so here I go straight to renaming those existing devices to the new AFD names.
--Check the state of the ASMFD module (AFD for short): not installed yet.
[grid@dbserver1 ~]$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'NOT INSTALLED' and filtering is 'DEFAULT' on host 'dbserver1.vbox.com'
--Get the current ASM disk discovery string; here it is the udev-bound names.
[grid@dbserver1 ~]$ asmcmd dsget
parameter:/dev/asm*
profile:/dev/asm*
--Set the ASM disk discovery string, adding the new disk string.
--While setting this parameter, ASM checks the disks that are already mounted; if any of them is not covered by the new discovery string, the command fails.
[grid@dbserver1 ~]$ asmcmd dsset AFD:*
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-15014: path '/dev/asm-disk7' is not in the discovery set (DBD ERROR: OCIStmtExecute)
--So the new path must be appended after the original one, giving multiple discovery strings. The command runs for a while, depending on how many ASM disks there are.
[grid@dbserver1 ~]$ asmcmd dsset '/dev/asm*','AFD:*'
[grid@dbserver1 ~]$ asmcmd dsget
parameter:/dev/asm*, AFD:*
profile:/dev/asm*,AFD:*
--Check the nodes in the GI cluster.
[grid@dbserver1 ~]$ olsnodes -a
dbserver1 Hub
dbserver2 Hub
--The following commands must be run on every Hub node; a rolling approach is fine. Some of them require the root user and some the grid user, so watch whether the prompt is # or $.
--First, stop CRS.
[root@dbserver1 ~]# crsctl stop crs
--Run the AFD configure step. This unpacks the software, installs it, and loads the driver, which takes some time.
[root@dbserver1 ~]# asmcmd afd_configure
Connected to an idle instance.
AFD-627: AFD distribution files found.
AFD-636: Installing requested AFD software.
AFD-637: Loading installed AFD drivers.
AFD-9321: Creating udev for AFD.
AFD-9323: Creating module dependencies - this may take some time.
AFD-9154: Loading 'oracleafd.ko' driver.
AFD-649: Verifying AFD devices.
AFD-9156: Detecting control device '/dev/oracleafd/admin'.
AFD-638: AFD installation correctness verified.
Modifying resource dependencies - this may take some time.
--Afterwards, check the AFD state again: it is now loaded.
[root@dbserver1 ~]# asmcmd afd_state
Connected to an idle instance.
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'DEFAULT' on host 'dbserver1.vbox.com'
--Next, configure AFD's own disk discovery string; this works much like ASMLIB used to.
--CRS must be running before the AFD disk discovery string can be set, otherwise the errors below occur. They also show that this setting is stored in each node's own OLR, so it must be set on every node.
[root@dbserver1 ~]# asmcmd afd_dsget
Connected to an idle instance.
ASMCMD-9511: failed to obtain required AFD disk string from Oracle Local Repository
[root@dbserver1 ~]#
[root@dbserver1 ~]# asmcmd afd_dsset '/dev/sd*'
Connected to an idle instance.
ASMCMD-9512: failed to update AFD disk string in Oracle Local Repository.
--Start CRS.
[root@dbserver1 ~]# crsctl start crs
CRS-4123: Oracle High Availability Services has been started.
--At this point the ASM alert log shows that the mounted disks still use their original paths, but it also shows that libafd has been loaded successfully.
2014-11-20 17:01:04.545000 +08:00
NOTE: Loaded library: /opt/oracle/extapi/64/asm/orcl/1/libafd12.so
ORACLE_BASE from environment = /u03/app/grid
SQL> ALTER DISKGROUP ALL MOUNT /* asm agent call crs *//* {0:9:3} */
NOTE: Diskgroup used for Voting files is:
CRSDG
Diskgroup with spfile:CRSDG
NOTE: Diskgroup used for OCR is:CRSDG
NOTE: Diskgroups listed in ASM_DISKGROUP are
DATADG
NOTE: cache registered group CRSDG 1/0xB8E8EA0B
NOTE: cache began mount (first) of group CRSDG 1/0xB8E8EA0B
NOTE: cache registered group DATADG 2/0xB8F8EA0C
NOTE: cache began mount (first) of group DATADG 2/0xB8F8EA0C
NOTE: Assigning number (1,2) to disk (/dev/asm-disk3)
NOTE: Assigning number (1,1) to disk (/dev/asm-disk2)
NOTE: Assigning number (1,0) to disk (/dev/asm-disk1)
NOTE: Assigning number (1,5) to disk (/dev/asm-disk10)
NOTE: Assigning number (1,3) to disk (/dev/asm-disk8)
NOTE: Assigning number (1,4) to disk (/dev/asm-disk9)
NOTE: Assigning number (2,3) to disk (/dev/asm-disk7)
NOTE: Assigning number (2,2) to disk (/dev/asm-disk6)
NOTE: Assigning number (2,1) to disk (/dev/asm-disk5)
NOTE: Assigning number (2,5) to disk (/dev/asm-disk12)
NOTE: Assigning number (2,0) to disk (/dev/asm-disk4)
NOTE: Assigning number (2,6) to disk (/dev/asm-disk13)
NOTE: Assigning number (2,4) to disk (/dev/asm-disk11)
--Set afd_ds to the devices underlying the ASM disks, so that no hand-written udev rules are needed from now on.
[grid@dbserver1 ~]$ asmcmd afd_dsset '/dev/sd*'
[grid@dbserver1 ~]$ asmcmd afd_dsget
AFD discovery string: /dev/sd*
--I made a mistake here in my test: I had set the string to "dev/sd*", missing the leading slash, so no disks were discovered at this step. If this step does discover disks in your test, please let me know.
[grid@dbserver1 ~]$ asmcmd afd_lsdsk
There are no labelled devices.
--One more reminder: every command up to this point must be run on all nodes of the cluster.
--Next, migrate the original ASM disk paths from udev to AFD.
--First, check the current disk paths.
[root@dbserver1 ~]# ocrcheck -config
Oracle Cluster Registry configuration is :
Device/File Name : +CRSDG
[root@dbserver1 ~]# crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE 4838a0ee7bfa4fbebf8ff9f58642c965 (/dev/asm-disk1) [CRSDG]
2. ONLINE 72057097a36e4f02bfc7b5e23672e4cc (/dev/asm-disk2) [CRSDG]
3. ONLINE 7906e2fb24d24faebf9b82bba6564be3 (/dev/asm-disk3) [CRSDG]
Located 3 voting disk(s).
[root@dbserver1 ~]# su - grid
[grid@dbserver1 ~]$ asmcmd lsdsk -G CRSDG
Path
/dev/asm-disk1
/dev/asm-disk10
/dev/asm-disk2
/dev/asm-disk3
/dev/asm-disk8
/dev/asm-disk9
--Since the disks holding the OCR are about to be modified, the cluster must be stopped first.
[root@dbserver1 ~]# crsctl stop cluster -all
--Labeling directly fails, because /dev/asm-disk1 is already provisioned for ASM.
[grid@dbserver1 ~]$ asmcmd afd_label asmdisk01 /dev/asm-disk1
Connected to an idle instance.
ASMCMD-9513: ASM disk label set operation failed.
disk /dev/asm-disk1 is already provisioned for ASM
--The --migrate keyword is required for this to succeed.
[grid@dbserver1 ~]$ asmcmd afd_label asmdisk01 /dev/asm-disk1 --migrate
Connected to an idle instance.
[grid@dbserver1 ~]$ asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label Filtering Path
================================================================================
ASMDISK01 ENABLED /dev/asm-disk1
--My test ASM setup has 13 disks in total, so label each one in turn.
asmcmd afd_label asmdisk01 /dev/asm-disk1 --migrate
asmcmd afd_label asmdisk02 /dev/asm-disk2 --migrate
asmcmd afd_label asmdisk03 /dev/asm-disk3 --migrate
asmcmd afd_label asmdisk04 /dev/asm-disk4 --migrate
asmcmd afd_label asmdisk05 /dev/asm-disk5 --migrate
asmcmd afd_label asmdisk06 /dev/asm-disk6 --migrate
asmcmd afd_label asmdisk07 /dev/asm-disk7 --migrate
asmcmd afd_label asmdisk08 /dev/asm-disk8 --migrate
asmcmd afd_label asmdisk09 /dev/asm-disk9 --migrate
asmcmd afd_label asmdisk10 /dev/asm-disk10 --migrate
asmcmd afd_label asmdisk11 /dev/asm-disk11 --migrate
asmcmd afd_label asmdisk12 /dev/asm-disk12 --migrate
asmcmd afd_label asmdisk13 /dev/asm-disk13 --migrate
[grid@dbserver1 ~]$ asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label Filtering Path
================================================================================
ASMDISK01 ENABLED /dev/asm-disk1
ASMDISK02 ENABLED /dev/asm-disk2
ASMDISK03 ENABLED /dev/asm-disk3
ASMDISK04 ENABLED /dev/asm-disk4
ASMDISK05 ENABLED /dev/asm-disk5
ASMDISK06 ENABLED /dev/asm-disk6
ASMDISK07 ENABLED /dev/asm-disk7
ASMDISK08 ENABLED /dev/asm-disk8
ASMDISK09 ENABLED /dev/asm-disk9
ASMDISK10 ENABLED /dev/asm-disk10
ASMDISK11 ENABLED /dev/asm-disk11
ASMDISK12 ENABLED /dev/asm-disk12
ASMDISK13 ENABLED /dev/asm-disk13
--On the other nodes there is no need to label again; a scan is enough, very much like the ASMLIB workflow.
[grid@dbserver2 ~]$ asmcmd afd_scan
Connected to an idle instance.
[grid@dbserver2 ~]$ asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label Filtering Path
================================================================================
ASMDISK12 ENABLED /dev/asm-disk12
ASMDISK09 ENABLED /dev/asm-disk9
ASMDISK08 ENABLED /dev/asm-disk8
ASMDISK11 ENABLED /dev/asm-disk11
ASMDISK10 ENABLED /dev/asm-disk10
ASMDISK13 ENABLED /dev/asm-disk13
ASMDISK01 ENABLED /dev/asm-disk1
ASMDISK04 ENABLED /dev/asm-disk4
ASMDISK06 ENABLED /dev/asm-disk6
ASMDISK07 ENABLED /dev/asm-disk7
ASMDISK05 ENABLED /dev/asm-disk5
ASMDISK03 ENABLED /dev/asm-disk3
ASMDISK02 ENABLED /dev/asm-disk2
--Restart the cluster.
[root@dbserver1 ~]# crsctl start cluster -all
--The ASM alert log now shows the new names in use. The WARNING below means that AFD does not yet support Advanced Format disks: regular disks have 512-byte sectors, while Advanced Format disks have 4K sectors.
2014-11-20 17:46:16.695000 +08:00
* allocate domain 1, invalid = TRUE
* instance 2 validates domain 1
NOTE: cache began mount (not first) of group CRSDG 1/0x508D0B98
NOTE: cache registered group DATADG 2/0x509D0B99
* allocate domain 2, invalid = TRUE
* instance 2 validates domain 2
NOTE: cache began mount (not first) of group DATADG 2/0x509D0B99
WARNING: Library 'AFD Library - Generic , version 3 (KABI_V3)' does not support advanced format disks
NOTE: Assigning number (1,0) to disk (AFD:ASMDISK01)
NOTE: Assigning number (1,1) to disk (AFD:ASMDISK02)
NOTE: Assigning number (1,2) to disk (AFD:ASMDISK03)
NOTE: Assigning number (1,3) to disk (AFD:ASMDISK08)
NOTE: Assigning number (1,4) to disk (AFD:ASMDISK09)
NOTE: Assigning number (1,5) to disk (AFD:ASMDISK10)
NOTE: Assigning number (2,0) to disk (AFD:ASMDISK04)
NOTE: Assigning number (2,1) to disk (AFD:ASMDISK05)
NOTE: Assigning number (2,2) to disk (AFD:ASMDISK06)
NOTE: Assigning number (2,3) to disk (AFD:ASMDISK07)
NOTE: Assigning number (2,4) to disk (AFD:ASMDISK11)
NOTE: Assigning number (2,5) to disk (AFD:ASMDISK12)
NOTE: Assigning number (2,6) to disk (AFD:ASMDISK13)
--Check the disk paths: everything is now in AFD style.
[grid@dbserver1 ~]$ asmcmd lsdsk
Path
AFD:ASMDISK01
AFD:ASMDISK02
AFD:ASMDISK03
AFD:ASMDISK04
AFD:ASMDISK05
AFD:ASMDISK06
AFD:ASMDISK07
AFD:ASMDISK08
AFD:ASMDISK09
AFD:ASMDISK10
AFD:ASMDISK11
AFD:ASMDISK12
AFD:ASMDISK13
--But the data dictionary still shows the old disk paths as well.
SQL> select NAME,LABEL,PATH from V$ASM_DISK;
NAME LABEL PATH
-------------------- ------------------------------- --------------------------------------------------
/dev/asm-disk7
/dev/asm-disk6
/dev/asm-disk13
/dev/asm-disk12
/dev/asm-disk11
/dev/asm-disk4
/dev/asm-disk2
/dev/asm-disk9
/dev/asm-disk3
/dev/asm-disk5
/dev/asm-disk10
/dev/asm-disk8
/dev/asm-disk1
CRSDG_0000 ASMDISK01 AFD:ASMDISK01
CRSDG_0001 ASMDISK02 AFD:ASMDISK02
CRSDG_0002 ASMDISK03 AFD:ASMDISK03
DATADG_0000 ASMDISK04 AFD:ASMDISK04
DATADG_0001 ASMDISK05 AFD:ASMDISK05
DATADG_0002 ASMDISK06 AFD:ASMDISK06
DATADG_0003 ASMDISK07 AFD:ASMDISK07
CRSDG_0003 ASMDISK08 AFD:ASMDISK08
CRSDG_0004 ASMDISK09 AFD:ASMDISK09
CRSDG_0005 ASMDISK10 AFD:ASMDISK10
DATADG_0004 ASMDISK11 AFD:ASMDISK11
DATADG_0005 ASMDISK12 AFD:ASMDISK12
DATADG_0006 ASMDISK13 AFD:ASMDISK13
26 rows selected.
--The old paths now need to be removed from the ASM disk discovery string (note: this is not the same command as setting the AFD discovery string), leaving only the AFD path.
[grid@dbserver1 ~]$ asmcmd dsset 'AFD:*'
[grid@dbserver1 ~]$ asmcmd dsget
parameter:AFD:*
profile:AFD:*
--Restart ASM once more, and everything is clean.
SQL> select NAME,LABEL,PATH from V$ASM_DISK;
NAME LABEL PATH
-------------------- ------------------------------- ------------------------------------------------------------
CRSDG_0000 ASMDISK01 AFD:ASMDISK01
CRSDG_0001 ASMDISK02 AFD:ASMDISK02
CRSDG_0002 ASMDISK03 AFD:ASMDISK03
DATADG_0000 ASMDISK04 AFD:ASMDISK04
DATADG_0001 ASMDISK05 AFD:ASMDISK05
DATADG_0002 ASMDISK06 AFD:ASMDISK06
DATADG_0003 ASMDISK07 AFD:ASMDISK07
CRSDG_0003 ASMDISK08 AFD:ASMDISK08
CRSDG_0004 ASMDISK09 AFD:ASMDISK09
CRSDG_0005 ASMDISK10 AFD:ASMDISK10
DATADG_0004 ASMDISK11 AFD:ASMDISK11
DATADG_0005 ASMDISK12 AFD:ASMDISK12
DATADG_0006 ASMDISK13 AFD:ASMDISK13
13 rows selected.
--Cleanup: move the old udev rules file out of the way. Again, run this on every node. From the next server reboot onwards, AFD is fully in charge.
[root@dbserver1 ~]# mv /etc/udev/rules.d/99-oracle-asmdevices.rules ~oracle/
Anything else worth noticing?
As it turns out, AFD itself uses udev. Oops.
[grid@dbserver1 ~]$ cat /etc/udev/rules.d/53-afd.rules
#
# AFD devices
KERNEL=="oracleafd/.*", OWNER="grid", GROUP="asmdba", MODE="0770"
KERNEL=="oracleafd/*", OWNER="grid", GROUP="asmdba", MODE="0770"
KERNEL=="oracleafd/disks/*", OWNER="grid", GROUP="asmdba", MODE="0660"
The labeled disks can be found under /dev/oracleafd/disks.
[root@dbserver2 disks]# ls -l /dev/oracleafd/disks
total 52
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK01
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK02
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK03
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK04
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK05
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK06
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK07
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK08
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK09
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK10
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK11
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK12
-rw-r--r-- 1 root root 9 Nov 20 18:52 ASMDISK13
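A side observation, based only on the 9-byte file size shown above: each of these entries appears to hold nothing more than the path of its backing device, since a string such as "/dev/sdc" plus a trailing newline is exactly 9 bytes. This is a guess, not something the documentation states:

```shell
# "/dev/sdc" plus a newline is exactly the 9 bytes seen in the listing above.
printf '/dev/sdc\n' | wc -c
# On a live system, the guess could be checked (as root) with:
#   cat /dev/oracleafd/disks/ASMDISK01
```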
One big difference here: the owner of all these disks is now root, and only root has write permission. Many articles claim this is where AFD's filtering shows itself, because neither the oracle nor the grid user can write directly to the ASM disks any more, which naturally gives a layer of protection. For example, the following command fails with a permission error.
[oracle@dbserver1 disks]$ echo "do some evil" > ASMDISK99
-bash: ASMDISK99: Permission denied
But if you think that is all there is to AFD's protection, you are seriously underestimating Oracle; file permissions alone would hardly justify the word Filter in the name. Read on.
The mapping between AFD disks and their underlying devices is also visible at the operating system level.
[grid@dbserver1 /]$ ls -l /dev/disk/by-label/
total 0
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK01 -> ../../sdc
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK02 -> ../../sdd
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK03 -> ../../sde
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK04 -> ../../sdf
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK05 -> ../../sdg
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK06 -> ../../sdh
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK07 -> ../../sdi
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK08 -> ../../sdj
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK09 -> ../../sdk
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK10 -> ../../sdl
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK11 -> ../../sdm
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK12 -> ../../sdn
lrwxrwxrwx 1 root root 9 Nov 20 19:17 ASMDISK13 -> ../../sdo
After another server reboot, the afd_lsdsk output shows the underlying device paths, but Filtering has become DISABLED. Do not worry that the Label-to-Path mapping differs from the listings above: some were taken on node 1 and some on node 2, and this is exactly what AFD delivers. No matter in which order each machine discovers its block devices, as long as the AFD label is bound, everything works.
ASMCMD> afd_lsdsk
--------------------------------------------------------------------------------
Label Filtering Path
================================================================================
ASMDISK01 DISABLED /dev/sdd
ASMDISK02 DISABLED /dev/sde
ASMDISK03 DISABLED /dev/sdf
ASMDISK04 DISABLED /dev/sdg
ASMDISK05 DISABLED /dev/sdh
ASMDISK06 DISABLED /dev/sdi
ASMDISK07 DISABLED /dev/sdj
ASMDISK08 DISABLED /dev/sdk
ASMDISK09 DISABLED /dev/sdl
ASMDISK10 DISABLED /dev/sdm
ASMDISK11 DISABLED /dev/sdn
ASMDISK12 DISABLED /dev/sdo
ASMDISK13 DISABLED /dev/sdp
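The labels survive device reordering because, much like ASMLIB, AFD stamps an ORCLDISK<NAME> provision string into the disk header itself, as the kfed and od output later in this post shows. So the label can always be recovered from the first sector of a device, regardless of which /dev/sdX name the kernel handed out on this boot. The helper below is a hypothetical sketch for illustration, not an Oracle utility:

```shell
# Hypothetical sketch: recover an ASMLIB/AFD-style label from a device
# by scanning its first sector for the ORCLDISK<NAME> provision string.
afd_label_of() {
    dd if="$1" bs=512 count=1 2>/dev/null | strings | sed -n 's/^ORCLDISK//p'
}

# Usage on a live system (as root), whatever order the kernel found the disks:
#   for d in /dev/sd?; do echo "$d -> $(afd_label_of "$d")"; done
```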
Finally, time to test the I/O Filter feature; I have been waiting for this!
Right, this is the main event.
First, how to enable or disable filtering. In my tests, enabling or disabling it for an individual disk did not take effect; only the global setting worked.
[grid@dbserver1 ~]$ asmcmd help afd_filter
afd_filter
Sets the AFD filtering mode on a given disk path.
If the command is executed without specifying a disk path then
filtering is set at node level.
Synopsis
afd_filter {-e | -d } [<disk-path>]
Description
The options for afd_filter are described below
-e - enable AFD filtering mode
-d - disable AFD filtering mode
Examples
The following example uses afd_filter to enable AFD filtering
on a given diskpath.
ASMCMD [+] >afd_filter -e /dev/sdq
See Also
afd_lsdsk afd_state
Enable the Filter.
[grid@dbserver1 ~]$ asmcmd afd_filter -e
[grid@dbserver1 ~]$ asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label Filtering Path
================================================================================
ASMDISK01 ENABLED /dev/sdb
ASMDISK02 ENABLED /dev/sdc
ASMDISK03 ENABLED /dev/sdd
ASMDISK04 ENABLED /dev/sde
ASMDISK05 ENABLED /dev/sdf
ASMDISK06 ENABLED /dev/sdg
ASMDISK07 ENABLED /dev/sdh
ASMDISK08 ENABLED /dev/sdi
ASMDISK09 ENABLED /dev/sdj
ASMDISK10 ENABLED /dev/sdk
ASMDISK11 ENABLED /dev/sdl
ASMDISK12 ENABLED /dev/sdm
ASMDISK13 ENABLED /dev/sdn
Just to be safe, so as not to wreck my own lab environment, I added an extra disk for this test.
[root@dbserver1 ~]# asmcmd afd_label asmdisk99 /dev/sdo
Connected to an idle instance.
[root@dbserver1 ~]# asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label Filtering Path
================================================================================
ASMDISK01 ENABLED /dev/sdb
ASMDISK02 ENABLED /dev/sdc
ASMDISK03 ENABLED /dev/sdd
ASMDISK04 ENABLED /dev/sde
ASMDISK05 ENABLED /dev/sdf
ASMDISK06 ENABLED /dev/sdg
ASMDISK07 ENABLED /dev/sdh
ASMDISK08 ENABLED /dev/sdi
ASMDISK09 ENABLED /dev/sdj
ASMDISK10 ENABLED /dev/sdk
ASMDISK11 ENABLED /dev/sdl
ASMDISK12 ENABLED /dev/sdm
ASMDISK13 ENABLED /dev/sdn
ASMDISK99 ENABLED /dev/sdo
Create a new disk group on it.
[grid@dbserver1 ~]$ sqlplus / as sysasm
SQL> create diskgroup DGTEST external redundancy disk 'AFD:ASMDISK99';
Diskgroup created.
First read the disk header with kfed to verify that everything is as expected.
[grid@dbserver1 ~]$ kfed read AFD:ASMDISK99
kfbh.endian: 1 ; 0x000: 0x01
kfbh.hard: 130 ; 0x001: 0x82
kfbh.type: 1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt: 1 ; 0x003: 0x01
kfbh.block.blk: 0 ; 0x004: blk=0
kfbh.block.obj: 2147483648 ; 0x008: disk=0
kfbh.check: 1854585587 ; 0x00c: 0x6e8abaf3
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKASMDISK99 ; 0x000: length=17
kfdhdb.driver.reserved[0]: 1145918273 ; 0x008: 0x444d5341
kfdhdb.driver.reserved[1]: 961237833 ; 0x00c: 0x394b5349
kfdhdb.driver.reserved[2]: 57 ; 0x010: 0x00000039
kfdhdb.driver.reserved[3]: 0 ; 0x014: 0x00000000
kfdhdb.driver.reserved[4]: 0 ; 0x018: 0x00000000
kfdhdb.driver.reserved[5]: 0 ; 0x01c: 0x00000000
kfdhdb.compat: 168820736 ; 0x020: 0x0a100000
kfdhdb.dsknum: 0 ; 0x024: 0x0000
kfdhdb.grptyp: 1 ; 0x026: KFDGTP_EXTERNAL
kfdhdb.hdrsts: 3 ; 0x027: KFDHDR_MEMBER
kfdhdb.dskname: ASMDISK99 ; 0x028: length=9
kfdhdb.grpname: DGTEST ; 0x048: length=6
kfdhdb.fgname: ASMDISK99 ; 0x068: length=9
Now try zeroing the entire disk with dd. The dd command itself reports nothing unusual; the "No space left on device" message only means it reached the end of the device.
[root@dbserver1 ~]# dd if=/dev/zero of=/dev/sdo
dd: writing to `/dev/sdo': No space left on device
409601+0 records in
409600+0 records out
209715200 bytes (210 MB) copied, 19.9602 s, 10.5 MB/s
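A quick sanity check on those dd statistics: dd was run without a bs option, so it used its default block size of 512 bytes, and 409600 such records are exactly the 209715200 bytes (200 MiB) of the test disk:

```shell
# 409600 records x 512 bytes (dd's default block size) = 209715200 bytes
echo $((409600 * 512))
# ... which is 200 MiB
echo $((409600 * 512 / 1024 / 1024))
```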
Then dismount and remount the disk group. Had the disk really been zeroed, the remount would certainly fail; instead it mounts normally.
SQL> alter diskgroup DGTEST dismount;
Diskgroup altered.
SQL> alter diskgroup DGTEST mount;
Diskgroup altered.
Not impressed yet? Then create a tablespace on it, insert some data, and issue a checkpoint; still everything works fine.
SQL> create tablespace test datafile '+DGTEST' size 100M;
Tablespace created.
SQL> create table t_afd (n number) tablespace test;
Table created.
SQL> insert into t_afd values(1);
1 row created.
SQL> commit;
Commit complete.
SQL> alter system checkpoint;
System altered.
SQL> select count(*) from t_afd;
COUNT(*)
----------
1
Strangely, though, reading /dev/sdo directly at the operating system level at this point shows that everything has been zeroed out.
[root@dbserver1 ~]# od -c -N 256 /dev/sdo
0000000 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
*
0000400
The strings command finds no meaningful characters either.
[root@dbserver1 disks]# strings /dev/sdo
[root@dbserver1 disks]#
But do not be fooled by appearances into thinking the disk has really been wiped. While dd was running, /var/log/messages recorded a flood of entries stating explicitly that write I/O on this ASM-managed device is not supported; this is where the Filter actually earns its name.
afd_mkrequest_fn: write IO on ASM managed device (major=8/minor=224) not supported
kfed, however, can still read the intact header.
[grid@dbserver1 ~]$ kfed read AFD:ASMDISK99
kfbh.endian: 1 ; 0x000: 0x01
kfbh.hard: 130 ; 0x001: 0x82
kfbh.type: 1 ; 0x002: KFBTYP_DISKHEAD
kfbh.datfmt: 1 ; 0x003: 0x01
kfbh.block.blk: 0 ; 0x004: blk=0
kfbh.block.obj: 2147483648 ; 0x008: disk=0
kfbh.check: 1854585587 ; 0x00c: 0x6e8abaf3
kfbh.fcn.base: 0 ; 0x010: 0x00000000
kfbh.fcn.wrap: 0 ; 0x014: 0x00000000
kfbh.spare1: 0 ; 0x018: 0x00000000
kfbh.spare2: 0 ; 0x01c: 0x00000000
kfdhdb.driver.provstr: ORCLDISKASMDISK99 ; 0x000: length=17
......
Only after the server is rebooted (restarting just ASM or just the cluster still shows the zeroed data at the OS level) does all the data come back. I have not confirmed what redirection technique Oracle uses to achieve this effect; one plausible but unverified explanation is that the rejected writes only ever landed in the Linux page cache, which then served the zeros back to buffered OS-level reads while the physical disk was never touched, and the reboot simply dropped that cache.
[root@dbserver1 ~]# od -c -N 256 /dev/sdo
0000000 001 202 001 001 \0 \0 \0 \0 \0 \0 \0 200 u 177 D I
0000020 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000040 O R C L D I S K A S M D I S K 9
0000060 9 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000100 \0 \0 020 \n \0 \0 001 003 A S M D I S K 9
0000120 9 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000140 \0 \0 \0 \0 \0 \0 \0 \0 D G T E S T \0 \0
0000160 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000200 \0 \0 \0 \0 \0 \0 \0 \0 A S M D I S K 9
0000220 9 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000240 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
*
0000300 \0 \0 \0 \0 \0 \0 \0 \0 022 257 367 001 \0 X \0 247
0000320 022 257 367 001 \0 h 036 344 \0 002 \0 020 \0 \0 020 \0
0000340 200 274 001 \0 310 \0 \0 \0 002 \0 \0 \0 001 \0 \0 \0
0000360 002 \0 \0 \0 002 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0 \0
0000400
[root@dbserver1 ~]#
[root@dbserver1 ~]# strings /dev/sdo | grep ASM
ORCLDISKASMDISK99
ASMDISK99
ASMDISK99
ORCLDISKASMDISK99
ASMDISK99
ASMDISK99
ASMDISK99
ASMDISK99
ASMPARAMETERFILE
ASMPARAMETERBAKFILE
ASM_STALE
Finally, disable the Filter and test again.
[root@dbserver1 ~]# asmcmd afd_filter -d
Connected to an idle instance.
[root@dbserver1 ~]# asmcmd afd_lsdsk
Connected to an idle instance.
--------------------------------------------------------------------------------
Label Filtering Path
================================================================================
ASMDISK01 DISABLED /dev/sdb
ASMDISK02 DISABLED /dev/sdc
ASMDISK03 DISABLED /dev/sdd
ASMDISK04 DISABLED /dev/sde
ASMDISK05 DISABLED /dev/sdf
ASMDISK06 DISABLED /dev/sdg
ASMDISK07 DISABLED /dev/sdh
ASMDISK08 DISABLED /dev/sdi
ASMDISK09 DISABLED /dev/sdj
ASMDISK10 DISABLED /dev/sdk
ASMDISK11 DISABLED /dev/sdl
ASMDISK12 DISABLED /dev/sdm
ASMDISK13 DISABLED /dev/sdn
ASMDISK99 DISABLED /dev/sdo
Zero the whole disk with the same dd command.
[root@dbserver1 ~]# dd if=/dev/zero of=/dev/sdo
dd: writing to `/dev/sdo': No space left on device
409601+0 records in
409600+0 records out
209715200 bytes (210 MB) copied, 4.46444 s, 47.0 MB/s
Remounting the disk group now fails as expected; the disk group cannot be mounted.
SQL> alter diskgroup DGTEST dismount;
Diskgroup altered.
SQL> alter diskgroup DGTEST mount;
alter diskgroup DGTEST mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15017: diskgroup "DGTEST" cannot be mounted
ORA-15040: diskgroup is incomplete
Restarting the database also shows that it cannot open normally, because the datafile of that tablespace no longer exists.
SQL> startup
ORACLE instance started.
Total System Global Area 838860800 bytes
Fixed Size 2929936 bytes
Variable Size 385878768 bytes
Database Buffers 226492416 bytes
Redo Buffers 5455872 bytes
In-Memory Area 218103808 bytes
Database mounted.
ORA-01157: cannot identify/lock data file 15 - see DBWR trace file
ORA-01110: data file 15: '+DGTEST/CDB12C/DATAFILE/test.256.864163075'
Any conclusions?
Isn't all of the above enough? That's that!
How is this installed?
You install GI first and only then do this configuration,
but installing GI requires the OCR and voting disks to be in place before it can succeed, so is there some special preparation needed before the GI install?
@Leung
Since 11g there is no need to configure the OCR and voting disks beforehand; they are created automatically during the GI installation.