Using VNX Snapshots on Linux
August 5, 2013
My previous post covered using VNX Snapshots on Windows; now let's see how to use this functionality on Linux, specifically Red Hat.
Configuration:
VNX 5700 – Block OE 05.32.000.5.206
Red Hat Enterprise Linux 6.4
PowerPath – 5.7 SP 1 P 01 (build 6)
Snapcli – 3.32.0.0.6-1 (64-bit)
2 storage groups
As we can see from the screenshot above, I have two LUNs presented to the source server (LUN 353 and LUN 382). On the host, the LUNs have been partitioned with fdisk, aligned, and a volume group has been created on top of them.
[root@source ~]# powermt display dev=all
Pseudo name=emcpowera
VNX ID=APM00112345678 [stg-group1]
Logical device ID=60060160131C420092A5A2B8ECF0E211 [LUN 353]
state=alive; policy=CLAROpt; queued-IOs=0
Owner: default=SP A, current=SP A
Array failover mode: 4
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --   -- Stats ---
###  HW Path              I/O Paths    Interf.   Mode     State   Q-IOs Errors
==============================================================================
   2 fnic                    sdp       SP B0     active   alive      0      0
   2 fnic                    sdn       SP B1     active   alive      0      0
   2 fnic                    sdl       SP A3     active   alive      0      0
   2 fnic                    sdj       SP A2     active   alive      0      0
   1 fnic                    sdh       SP B3     active   alive      0      0
   1 fnic                    sdf       SP B2     active   alive      0      0
   1 fnic                    sdd       SP A0     active   alive      0      0
   1 fnic                    sdb       SP A1     active   alive      0      0

Pseudo name=emcpowerb
VNX ID=APM00112345678 [stg-group1]
Logical device ID=60060160131C4200D263B55218F7E211 [LUN 382]
state=alive; policy=CLAROpt; queued-IOs=0
Owner: default=SP A, current=SP A
Array failover mode: 4
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --   -- Stats ---
###  HW Path              I/O Paths    Interf.   Mode     State   Q-IOs Errors
==============================================================================
   2 fnic                    sdq       SP B0     active   alive      0      0
   2 fnic                    sdo       SP B1     active   alive      0      0
   2 fnic                    sdm       SP A3     active   alive      0      0
   2 fnic                    sdk       SP A2     active   alive      0      0
   1 fnic                    sdi       SP B3     active   alive      0      0
   1 fnic                    sdg       SP B2     active   alive      0      0
   1 fnic                    sde       SP A0     active   alive      0      0
   1 fnic                    sdc       SP A1     active   alive      0      0

[root@source ~]# vgdisplay VG_VNX -v
    Using volume group(s) on command line
    Finding volume group "VG_VNX"
  --- Volume group ---
  VG Name               VG_VNX
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               19.99 GiB
  PE Size               4.00 MiB
  Total PE              5118
  Alloc PE / Size       4864 / 19.00 GiB
  Free  PE / Size       254 / 1016.00 MiB
  VG UUID               hbfpXb-q7mU-KMi0-nFxP-9lw2-bi8q-S0vJpY

  --- Logical volume ---
  LV Path                /dev/VG_VNX/vnx_lv
  LV Name                vnx_lv
  VG Name                VG_VNX
  LV UUID                hqEH3q-zchl-ZeRd-KcuA-3PLi-seMi-8PxWsR
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2013-07-28 14:19:05 -0400
  LV Status              available
  # open                 1
  LV Size                19.00 GiB
  Current LE             4864
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

  --- Physical volumes ---
  PV Name               /dev/emcpowera1
  PV UUID               YAGYyX-tTpm-NFDC-2PRw-nntO-jRUs-Z54k00
  PV Status             allocatable
  Total PE / Free PE    2559 / 0

  PV Name               /dev/emcpowerb1
  PV UUID               d2bjdo-qrw3-J00L-KkF6-I7zq-7qGo-3d1Bxm
  PV Status             allocatable
  Total PE / Free PE    2559 / 254
1) Step one is to create a consistency group. Because our volume group consists of two LUNs, both LUNs need to be snapped at the same time, and a consistency group allows us to do just that.
[root@management ~]# naviseccli -h 10.210.6.19 snap -group -create -name vnx_consistency_group -res 353,382
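The creation call can be wrapped in a small function so the SP address, group name, and LUN list are easy to change; this is a minimal sketch using the values from this walkthrough, with a dry-run default (RUN=echo) so sourcing it does nothing to the array. Set RUN="" to actually execute.

```shell
# create_cg: wrap the consistency-group creation in one reusable call.
# RUN=echo (the default) prints the command instead of running it.
create_cg() {
    local run=${RUN:-echo}
    local sp=$1 name=$2 luns=$3
    # -res takes a comma-separated list of source LUN numbers
    $run naviseccli -h "$sp" snap -group -create -name "$name" -res "$luns"
}

create_cg 10.210.6.19 vnx_consistency_group 353,382
```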
2) Next we need to create SMPs (Snapshot Mount Points) and present them to stg-group2. Think of an SMP as a placeholder device that a snapshot will later be attached to; since we have two LUNs, we need to create two SMPs.
[root@management ~]# naviseccli -h 10.210.6.19 lun -create -type Snap -primaryLunName "LUN 353" -name SMP_LUN_353 -allowInbandSnapAttach yes -sp A
[root@management ~]# naviseccli -h 10.210.6.19 lun -create -type Snap -primaryLunName "LUN 382" -name SMP_LUN_382 -allowInbandSnapAttach yes -sp A
3) Now let's identify each SMP's Snapshot Mount Point number and then attach both to storage group "stg-group2".
[root@management ~]# naviseccli -h 10.210.6.19 lun -list -l 353 -snapMountPoints
LOGICAL UNIT NUMBER 353
Name:  LUN 353
Snapshot Mount Points:  7533
[root@management ~]# naviseccli -h 10.210.6.19 lun -list -l 382 -snapMountPoints
LOGICAL UNIT NUMBER 382
Name:  LUN 382
Snapshot Mount Points:  7532
Since stg-group2 does not have any LUNs in it, we are going to start with HLU 0.
[root@management ~]# naviseccli -h 10.210.6.19 storagegroup -addhlu -gname stg-group2 -alu 7533 -hlu 0
[root@management ~]# naviseccli -h 10.210.6.19 storagegroup -addhlu -gname stg-group2 -alu 7532 -hlu 1
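Steps 2 and 3 can be automated for any number of source LUNs. This is a sketch under the assumptions of this walkthrough (SP address, LUN names, SMP naming convention): it creates one SMP per LUN and leaves the ALU lookup (7533/7532 above) as a comment, since those numbers are only known after creation. RUN=echo (the default) makes it a dry run.

```shell
# create_smps: create one Snapshot Mount Point per source LUN.
# RUN=echo (the default) prints the commands instead of running them.
create_smps() {
    local run=${RUN:-echo}
    local sp=$1; shift
    local lun num
    for lun in "$@"; do
        num=${lun#LUN }                       # "LUN 353" -> "353"
        $run naviseccli -h "$sp" lun -create -type Snap \
            -primaryLunName "$lun" -name "SMP_LUN_$num" \
            -allowInbandSnapAttach yes -sp A
        # the SMP's ALU for the -addhlu step then comes from:
        #   naviseccli -h "$sp" lun -list -l "$num" -snapMountPoints
    done
}

create_smps 10.210.6.19 "LUN 353" "LUN 382"
```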
4) Now that the SMPs are presented to the target host, let's rescan the bus and see what happens. I am testing on Red Hat 6.4, so I am using these commands to rescan the bus:
[root@target ~]# ls -l /sys/class/scsi_host/
lrwxrwxrwx. 1 root root 0 Jul 28 13:04 host1 -> ../../devices/pci0000:00/0000:00:02.0/0000:02:00.0/0000:03:00.0/0000:04:00.0/0000:05:01.0/0000:07:00.0/host1/scsi_host/host1
lrwxrwxrwx. 1 root root 0 Jul 28 13:04 host2 -> ../../devices/pci0000:00/0000:00:02.0/0000:02:00.0/0000:03:00.0/0000:04:00.0/0000:05:02.0/0000:08:00.0/host2/scsi_host/host2
[root@target ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@target ~]# echo "- - -" > /sys/class/scsi_host/host2/scan
[root@target ~]# powermt check
[root@target ~]# powermt set policy=co dev=all
[root@target ~]# powermt save
[root@target ~]# powermt display dev=all
Pseudo name=emcpowera
VNX ID=APM00112345678 []
Logical device ID=60060160131C42004456510ACEF7E211 []
state=alive; policy=CLAROpt; queued-IOs=0
Owner: default=SP A, current=SP A
Array failover mode: 4
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --   -- Stats ---
###  HW Path              I/O Paths    Interf.   Mode     State   Q-IOs Errors
==============================================================================
   2 fnic                    sdq       SP B1     active   alive      0      0
   2 fnic                    sdp       SP B0     active   alive      0      0
   2 fnic                    sdo       SP A3     active   alive      0      0
   2 fnic                    sdn       SP A2     active   alive      0      0
   1 fnic                    sdm       SP B2     active   alive      0      0
   1 fnic                    sdl       SP B3     active   alive      0      0
   1 fnic                    sdk       SP A0     active   alive      0      0
   1 fnic                    sdj       SP A1     active   alive      0      0

Pseudo name=emcpowerb
VNX ID=APM00112345678 []
Logical device ID=60060160131C420006DFEB996EF6E211 []
state=alive; policy=CLAROpt; queued-IOs=0
Owner: default=SP A, current=SP A
Array failover mode: 4
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --   -- Stats ---
###  HW Path              I/O Paths    Interf.   Mode     State   Q-IOs Errors
==============================================================================
   2 fnic                    sdi       SP B1     active   alive      0      0
   2 fnic                    sdh       SP B0     active   alive      0      0
   2 fnic                    sdg       SP A3     active   alive      0      0
   2 fnic                    sdf       SP A2     active   alive      0      0
   1 fnic                    sde       SP B2     active   alive      0      0
   1 fnic                    sdd       SP B3     active   alive      0      0
   1 fnic                    sdc       SP A0     active   alive      0      0
   1 fnic                    sdb       SP A1     active   alive      0      0
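The two per-host echo commands above can be generalized into a loop that rescans every SCSI host it finds. This is a small sketch; the base directory is a parameter only so the loop can be exercised outside /sys, and on a real host you would call it with no argument (as root), then follow with the powermt commands.

```shell
# rescan_scsi: write "- - -" to every host's scan file to trigger a rescan.
rescan_scsi() {
    local base=${1:-/sys/class/scsi_host}
    local scan
    for scan in "$base"/host*/scan; do
        [ -e "$scan" ] || continue            # no hosts matched the glob
        echo "- - -" > "$scan"
    done
}
```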
5) We are ready to create snapshots. On the source server, flush memory to disk:
[root@source ~]# /usr/snapcli/snapcli flush -o /dev/emcpowera1,/dev/emcpowerb1
Flushed /dev/emcpowera1,/dev/emcpowerb1.
6) Create the snapshot using the consistency group; run this command on the source server. Notice how we specify each PowerPath device that is a member of the volume group.
[root@source ~]# /usr/snapcli/snapcli create -s vnx_snapshot -o /dev/emcpowera1,/dev/emcpowerb1 -c vnx_consistency_group
Attempting to create consistent snapshot vnx_snapshot.
Successfully created consistent snapshot vnx_snapshot.
on object /dev/emcpowera1.
on object /dev/emcpowerb1.
7) Attach the snapshot to the SMPs created earlier; run this command on the target server.
[root@target ~]# /usr/snapcli/snapcli attach -s vnx_snapshot -f
Scanning for new devices.
Attached snapshot vnx_snapshot on device /dev/emcpowerb.
Attached snapshot vnx_snapshot on device /dev/emcpowera.
8) When the snapshot gets attached, the volume group gets automatically imported. We can verify it by running this command on the target server:
[root@target ~]# vgdisplay -v VG_VNX
    Using volume group(s) on command line
    Finding volume group "VG_VNX"
  --- Volume group ---
  VG Name               VG_VNX
  System ID
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               19.99 GiB
  PE Size               4.00 MiB
  Total PE              5118
  Alloc PE / Size       4864 / 19.00 GiB
  Free  PE / Size       254 / 1016.00 MiB
  VG UUID               hbfpXb-q7mU-KMi0-nFxP-9lw2-bi8q-S0vJpY

  --- Logical volume ---
  LV Path                /dev/VG_VNX/vnx_lv
  LV Name                vnx_lv
  VG Name                VG_VNX
  LV UUID                hqEH3q-zchl-ZeRd-KcuA-3PLi-seMi-8PxWsR
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2013-07-28 14:19:05 -0400
  LV Status              NOT available
  LV Size                19.00 GiB
  Current LE             4864
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto

  --- Physical volumes ---
  PV Name               /dev/emcpowerb1
  PV UUID               YAGYyX-tTpm-NFDC-2PRw-nntO-jRUs-Z54k00
  PV Status             allocatable
  Total PE / Free PE    2559 / 0

  PV Name               /dev/emcpowera1
  PV UUID               d2bjdo-qrw3-J00L-KkF6-I7zq-7qGo-3d1Bxm
  PV Status             allocatable
  Total PE / Free PE    2559 / 254
Notice how LV Status is "NOT available": while the volume group got imported, it still needs to be activated.
[root@target ~]# vgchange -a y VG_VNX
  1 logical volume(s) in volume group "VG_VNX" now active
Now if we repeat the vgdisplay command, LV Status will be "available". At this point, if the mount point is in your fstab you can simply run "mount -a"; if not, you can mount the logical volume manually.
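Activation and mounting fit in one small helper. This is a sketch: the VG and LV names come from this walkthrough, the mount point /mnt/vnx_snap is an assumption, and RUN=echo (the default) prints the commands instead of running them, so it is safe to source.

```shell
# mount_snapshot_lv: activate the imported VG and mount its LV.
# RUN=echo (the default) dry-runs the commands; set RUN="" to execute.
mount_snapshot_lv() {
    local run=${RUN:-echo}
    local vg=${1:-VG_VNX} lv=${2:-vnx_lv} mnt=${3:-/mnt/vnx_snap}
    $run vgchange -a y "$vg" &&
    $run mkdir -p "$mnt" &&
    $run mount "/dev/$vg/$lv" "$mnt"
}
```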
Another helpful step is to run "powermt check" to update the PowerPath configuration with the correct LUN information. If you look back at "powermt display dev=all" in step 4, you will notice that it did not display any storage-group or LUN-related information. But now, after we run "powermt check" on the target host followed by "powermt display dev=all", we will see the storage group and LUN (SMP in this case) information populated.
[root@target ~]# powermt display dev=all
Pseudo name=emcpowera
VNX ID=APM00112345678 [stg-group2]
Logical device ID=60060160131C42004456510ACEF7E211 [SMP_LUN_382]
state=alive; policy=CLAROpt; queued-IOs=0
Owner: default=SP A, current=SP A
Array failover mode: 4
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --   -- Stats ---
###  HW Path              I/O Paths    Interf.   Mode     State   Q-IOs Errors
==============================================================================
   2 fnic                    sdq       SP B1     active   alive      0      0
   2 fnic                    sdp       SP B0     active   alive      0      0
   2 fnic                    sdo       SP A3     active   alive      0      0
   2 fnic                    sdn       SP A2     active   alive      0      0
   1 fnic                    sdm       SP B2     active   alive      0      0
   1 fnic                    sdl       SP B3     active   alive      0      0
   1 fnic                    sdk       SP A0     active   alive      0      0
   1 fnic                    sdj       SP A1     active   alive      0      0

Pseudo name=emcpowerb
VNX ID=APM00112345678 [stg-group2]
Logical device ID=60060160131C420006DFEB996EF6E211 [SMP_LUN_353]
state=alive; policy=CLAROpt; queued-IOs=0
Owner: default=SP A, current=SP A
Array failover mode: 4
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --   -- Stats ---
###  HW Path              I/O Paths    Interf.   Mode     State   Q-IOs Errors
==============================================================================
   2 fnic                    sdi       SP B1     active   alive      0      0
   2 fnic                    sdh       SP B0     active   alive      0      0
   2 fnic                    sdg       SP A3     active   alive      0      0
   2 fnic                    sdf       SP A2     active   alive      0      0
   1 fnic                    sde       SP B2     active   alive      0      0
   1 fnic                    sdd       SP B3     active   alive      0      0
   1 fnic                    sdc       SP A0     active   alive      0      0
   1 fnic                    sdb       SP A1     active   alive      0      0
9) After you are done with the snapshot, we are going to detach it from the target server. First, flush memory to disk:
[root@target ~]# /usr/snapcli/snapcli flush -o /dev/emcpowera1,/dev/emcpowerb1
Flushed /dev/emcpowera1,/dev/emcpowerb1.
Then unmount the file system, deactivate/export the volume group, and detach the snapshots.
[root@target ~]# vgchange -a n VG_VNX
[root@target ~]# vgexport VG_VNX
  Volume group "VG_VNX" successfully exported
[root@target ~]# /usr/snapcli/snapcli detach -s vnx_snapshot
Detaching snapshot vnx_snapshot on device /dev/emcpowerb.
Detaching snapshot vnx_snapshot on device /dev/emcpowera.
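The whole target-side teardown can be collected into one function. This is a sketch using the names from this walkthrough; the mount point /mnt/vnx_snap is an assumption, and RUN=echo (the default) prints the sequence instead of running it.

```shell
# detach_snapshot: flush, unmount, deactivate/export the VG, detach the snap.
# RUN=echo (the default) dry-runs the commands; set RUN="" to execute.
detach_snapshot() {
    local run=${RUN:-echo}
    local vg=${1:-VG_VNX} snap=${2:-vnx_snapshot} mnt=${3:-/mnt/vnx_snap}
    $run /usr/snapcli/snapcli flush -o /dev/emcpowera1,/dev/emcpowerb1
    $run umount "$mnt"
    $run vgchange -a n "$vg"
    $run vgexport "$vg"
    $run /usr/snapcli/snapcli detach -s "$snap"
}
```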
10) Finally, we are going to destroy the snapshot from the source server.
[root@source ~]# /usr/snapcli/snapcli destroy -s vnx_snapshot -o /dev/emcpowera1,/dev/emcpowerb1
Destroyed snapshot vnx_snapshot on object /dev/emcpowera1,/dev/emcpowerb1.
11) When you are ready to create snapshots again simply repeat steps 5-8.
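The refresh cycle of steps 5-8 can be sketched as one function. In practice the flush/create run on the source host and the attach/activate on the target, so you would split this across the two hosts or drive it over ssh; the names are the ones used in this walkthrough, and RUN=echo (the default) just prints the sequence.

```shell
# refresh_snapshot: steps 5-8 (flush, create, attach, activate) in order.
# RUN=echo (the default) dry-runs the commands; set RUN="" to execute.
refresh_snapshot() {
    local run=${RUN:-echo}
    local snap=vnx_snapshot
    local devs=/dev/emcpowera1,/dev/emcpowerb1
    # source host: flush buffers, then take a consistent snapshot
    $run /usr/snapcli/snapcli flush -o "$devs"
    $run /usr/snapcli/snapcli create -s "$snap" -o "$devs" -c vnx_consistency_group
    # target host: attach to the SMPs, then activate the imported VG
    $run /usr/snapcli/snapcli attach -s "$snap" -f
    $run vgchange -a y VG_VNX
}
```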
Nice post, thank you for sharing. How do you handle the vgimport process when the source and the mount host are the same? Or in a situation where you have multiple snapshots of the same source mounted on a single mount host?
Thank you Bhupesh, I have not had an opportunity to test importing a snapshot VG on the same host where the source of the snapshot resides.
I will try it on my test system and post the findings here. I know that on the HP-UX side you need to use vgchgid.
I'm having some problems making snapcli work on Linux. When I try to flush PowerPath devices like /dev/emcpowera1, I get this error: Error looking up object "/dev/emcpowera1".
Error: 0x3E02000C (The specified object was not found)…
I installed the most recent package available, snapcli-3.32.0.0.6-1.x86_64, on Red Hat 6.2.
What am I missing?
Thanks in advance