VNX – Creating cascaded clones, sort of..

If you are familiar with VMAX TimeFinder/Clone you will easily recognize what this is: the TimeFinder/Clone cascaded clone feature. There are a lot of use cases for it; one example would be refreshing a dev or test environment from a nightly clone of production.

[Screenshot]

Great feature, but what if you need to do the same thing on a VNX? Unfortunately, native SnapView Clone software will not allow you to create clone "C" because clone "B" is the target of another clone session. If you try to create a clone session with B as your source (regardless of whether it is synchronized or fractured), you will get an error message similar to this:

[Screenshot: SnapView error message]

So what do we do? We reach out to our good ol' friend SANCopy. Most people know and use SANCopy for array-to-array migrations, but not many know that you can also use it for intra-array copies. Typically intra-array copies are performed with SnapView Clone, but this is a special case: SANCopy will let us create a completely new session between "B" and "C". Here is how:

Environment:

VNX5600

[Screenshot]

[Screenshot]

First step: we create a standard SnapView Clone session between LUN Oracle_A and Oracle_B and then fracture it.

[Screenshot]
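
For reference, this step can also be scripted with naviseccli instead of Unisphere. This is only a rough sketch: the clone group name is made up, the LUN numbers are placeholders, and the exact snapview option names should be double-checked against the VNX CLI reference for Block.

naviseccli -h <SP-IP> snapview -createclonegroup -name Oracle_A_clones -luns <Oracle_A-lun-number>
naviseccli -h <SP-IP> snapview -addclone -name Oracle_A_clones -luns <Oracle_B-lun-number>     # Oracle_B becomes a clone of Oracle_A
naviseccli -h <SP-IP> snapview -fractureclone -name Oracle_A_clones -cloneid <clone-id>        # fracture once it is synchronized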

Next, we are going to create a SANCopy session between LUN Oracle_B and Oracle_C.

[Screenshot]

Here is one caveat: as you can see from the screenshot below, we do not see LUN ID 12000, which should be our Oracle_C. We don't see it because our source LUN (Oracle_B) is currently owned by SP B and our target (Oracle_C) is owned by SP A, so we need to trespass Oracle_C to match the SP owner of the source LUN, SP B in this case.

[Screenshot]
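
If you prefer the CLI for this, the trespass can be done with naviseccli as well; run it against the SP that should take ownership (SP B in our case). LUN 12000 is simply the ID this lab shows for Oracle_C.

naviseccli -h <SPB-IP> trespass lun 12000     # SP B takes ownership of Oracle_C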

Once you trespass the target LUN, you will see it in the destination list and you can go ahead and create the session. Once the session is created we need to start it: in Unisphere, navigate to Storage > Data Migration > Sessions tab, select the session and hit Start.

[Screenshot]
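
The session can also be started and checked on from the CLI. A hedged sketch: the session name below is whatever you called it at creation time, and the sancopy options are from memory, so verify them against the CLI reference.

naviseccli -h <SP-IP> sancopy -start -name Oracle_B_to_C     # kick off the copy
naviseccli -h <SP-IP> sancopy -info -all                     # check progress and status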

Now if you select the SAN Copy LUNs tab you will see this. I know it looks as if we have two different sessions; don't worry, it's normal.

[Screenshot]

When the session status changes to Completed you can go ahead and delete the SANCopy session, or you can leave it in place; it will not impact your existing SnapView clone session between Oracle_A and Oracle_B, which you can freely synchronize and fracture again. Finally, if you had to trespass target LUNs, don't forget to trespass them back after you delete the SANCopy session.
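
The cleanup can likewise be scripted; same caveats as above (placeholder session name and LUN number, option names from memory):

naviseccli -h <SP-IP> sancopy -remove -name Oracle_B_to_C    # delete the SANCopy session
naviseccli -h <SPA-IP> trespass lun 12000                    # SP A takes Oracle_C back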


Using VNX Snapshots on Linux

My previous post was about using VNX Snapshots on Windows; now let's see how to use this functionality on Linux, Red Hat specifically.

Configuration:

VNX 5700 – Block OE 05.32.000.5.206
Red Hat Enterprise Linux 6.4
PowerPath – 5.7 SP 1 P 01 (build 6)
SnapCLI – V3.32.0.0.6-1 (64-bit)
2 storage groups

stg-group1

stg-group2

As we can see from the screenshot above, I have two LUNs presented to the source server (LUN 353 and LUN 382). On the host the LUNs have been partitioned with fdisk, aligned, and a volume group has been created on top of them.
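
For completeness, here is a rough sketch of that host-side prep (I ran fdisk interactively; parted is shown here only so the whole thing fits in a few lines — device names and sizes are the ones from this lab, and the filesystem type is just an example):

parted -s /dev/emcpowera mklabel msdos mkpart primary 1MiB 100%   # 1MiB start keeps the partition aligned
parted -s /dev/emcpowerb mklabel msdos mkpart primary 1MiB 100%
pvcreate /dev/emcpowera1 /dev/emcpowerb1
vgcreate VG_VNX /dev/emcpowera1 /dev/emcpowerb1
lvcreate -L 19G -n vnx_lv VG_VNX
mkfs.ext4 /dev/VG_VNX/vnx_lv                                      # ext4 assumed; use whatever filesystem you need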

[root@source ~]# powermt display dev=all
Pseudo name=emcpowera
VNX ID=APM00112345678 [stg-group1]
Logical device ID=60060160131C420092A5A2B8ECF0E211 [LUN 353]
state=alive; policy=CLAROpt; queued-IOs=0
Owner: default=SP A, current=SP A       Array failover mode: 4
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --   -- Stats ---
###  HW Path               I/O Paths    Interf.  Mode     State   Q-IOs Errors
==============================================================================
   2 fnic                   sdp         SP B0    active   alive      0      0
   2 fnic                   sdn         SP B1    active   alive      0      0
   2 fnic                   sdl         SP A3    active   alive      0      0
   2 fnic                   sdj         SP A2    active   alive      0      0
   1 fnic                   sdh         SP B3    active   alive      0      0
   1 fnic                   sdf         SP B2    active   alive      0      0
   1 fnic                   sdd         SP A0    active   alive      0      0
   1 fnic                   sdb         SP A1    active   alive      0      0

Pseudo name=emcpowerb
VNX ID=APM00112345678 [stg-group1]
Logical device ID=60060160131C4200D263B55218F7E211 [LUN 382]
state=alive; policy=CLAROpt; queued-IOs=0
Owner: default=SP A, current=SP A       Array failover mode: 4
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --   -- Stats ---
###  HW Path               I/O Paths    Interf.  Mode     State   Q-IOs Errors
==============================================================================
   2 fnic                   sdq         SP B0    active   alive      0      0
   2 fnic                   sdo         SP B1    active   alive      0      0
   2 fnic                   sdm         SP A3    active   alive      0      0
   2 fnic                   sdk         SP A2    active   alive      0      0
   1 fnic                   sdi         SP B3    active   alive      0      0
   1 fnic                   sdg         SP B2    active   alive      0      0
   1 fnic                   sde         SP A0    active   alive      0      0
   1 fnic                   sdc         SP A1    active   alive      0      0

[root@source ~]# vgdisplay VG_VNX -v
    Using volume group(s) on command line
    Finding volume group "VG_VNX"
  --- Volume group ---
  VG Name               VG_VNX
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               19.99 GiB
  PE Size               4.00 MiB
  Total PE              5118
  Alloc PE / Size       4864 / 19.00 GiB
  Free  PE / Size       254 / 1016.00 MiB
  VG UUID               hbfpXb-q7mU-KMi0-nFxP-9lw2-bi8q-S0vJpY

  --- Logical volume ---
  LV Path                /dev/VG_VNX/vnx_lv
  LV Name                vnx_lv
  VG Name                VG_VNX
  LV UUID                hqEH3q-zchl-ZeRd-KcuA-3PLi-seMi-8PxWsR
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2013-07-28 14:19:05 -0400
  LV Status              available
  # open                 1
  LV Size                19.00 GiB
  Current LE             4864
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

  --- Physical volumes ---
  PV Name               /dev/emcpowera1     
  PV UUID               YAGYyX-tTpm-NFDC-2PRw-nntO-jRUs-Z54k00
  PV Status             allocatable
  Total PE / Free PE    2559 / 0

  PV Name               /dev/emcpowerb1     
  PV UUID               d2bjdo-qrw3-J00L-KkF6-I7zq-7qGo-3d1Bxm
  PV Status             allocatable
  Total PE / Free PE    2559 / 254

1) Step one is to create a consistency group. Because our volume group consists of two LUNs, both LUNs need to be snapped at the same time, and a consistency group allows us to do just that.

[root@management ~]# naviseccli -h 10.210.6.19 snap -group -create -name vnx_consistency_group -res 353,382

2) Next we need to create SMPs (Snapshot Mount Points) and present them to stg-group2. Think of an SMP as a placeholder device that the snapshot will be attached to; since we have two LUNs, we need to create two SMPs.

[root@management ~]# naviseccli -h 10.210.6.19 lun -create -type Snap -primaryLunName "LUN 353" -name SMP_LUN_353 -allowInbandSnapAttach yes -sp A

[root@management ~]# naviseccli -h 10.210.6.19 lun -create -type Snap -primaryLunName "LUN 382" -name SMP_LUN_382 -allowInbandSnapAttach yes -sp A

3) Now let's identify each SMP's Snapshot Mount Point number and then add both to storage group "stg-group2".

[root@management ~]# naviseccli -h 10.210.6.19  lun -list -l 353 -snapMountPoints
LOGICAL UNIT NUMBER 353
Name:  LUN 353
Snapshot Mount Points:  7533

[root@management ~]# naviseccli -h 10.210.6.19  lun -list -l 382 -snapMountPoints
LOGICAL UNIT NUMBER 382
Name:  LUN 382
Snapshot Mount Points:  7532

Since stg-group2 does not have any LUNs in it, we are going to start with HLU 0:

[root@management ~]# naviseccli -h 10.210.6.19  storagegroup -addhlu -gname stg-group2 -alu 7533 -hlu 0

[root@management ~]# naviseccli -h 10.210.6.19  storagegroup -addhlu -gname stg-group2 -alu 7532 -hlu 1
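
To confirm both SMPs landed in the storage group with the HLUs we expect, a quick check:

[root@management ~]# naviseccli -h 10.210.6.19 storagegroup -list -gname stg-group2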

4) Now that the SMPs are presented to the target host, let's rescan the bus and see what happens. I am testing on Red Hat 6.4, so I am using these commands to rescan the bus:

[root@target ~]# ls -l /sys/class/scsi_host/

lrwxrwxrwx. 1 root root 0 Jul 28 13:04 host1 -> ../../devices/pci0000:00/0000:00:02.0/0000:02:00.0/0000:03:00.0/0000:04:00.0/0000:05:01.0/0000:07:00.0/host1/scsi_host/host1
lrwxrwxrwx. 1 root root 0 Jul 28 13:04 host2 -> ../../devices/pci0000:00/0000:00:02.0/0000:02:00.0/0000:03:00.0/0000:04:00.0/0000:05:02.0/0000:08:00.0/host2/scsi_host/host2

[root@target ~]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@target ~]# echo "- - -" > /sys/class/scsi_host/host2/scan

[root@target ~]# powermt check
[root@target ~]# powermt set policy=co dev=all
[root@target ~]# powermt save
[root@target ~]# powermt display dev=all
Pseudo name=emcpowera
VNX ID=APM00112345678 []
Logical device ID=60060160131C42004456510ACEF7E211 []
state=alive; policy=CLAROpt; queued-IOs=0
Owner: default=SP A, current=SP A       Array failover mode: 4
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --   -- Stats ---
###  HW Path               I/O Paths    Interf.  Mode     State   Q-IOs Errors
==============================================================================
   2 fnic                   sdq         SP B1    active   alive      0      0
   2 fnic                   sdp         SP B0    active   alive      0      0
   2 fnic                   sdo         SP A3    active   alive      0      0
   2 fnic                   sdn         SP A2    active   alive      0      0
   1 fnic                   sdm         SP B2    active   alive      0      0
   1 fnic                   sdl         SP B3    active   alive      0      0
   1 fnic                   sdk         SP A0    active   alive      0      0
   1 fnic                   sdj         SP A1    active   alive      0      0

Pseudo name=emcpowerb
VNX ID=APM00112345678 []
Logical device ID=60060160131C420006DFEB996EF6E211 []
state=alive; policy=CLAROpt; queued-IOs=0
Owner: default=SP A, current=SP A       Array failover mode: 4
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --   -- Stats ---
###  HW Path               I/O Paths    Interf.  Mode     State   Q-IOs Errors
==============================================================================
   2 fnic                   sdi         SP B1    active   alive      0      0
   2 fnic                   sdh         SP B0    active   alive      0      0
   2 fnic                   sdg         SP A3    active   alive      0      0
   2 fnic                   sdf         SP A2    active   alive      0      0
   1 fnic                   sde         SP B2    active   alive      0      0
   1 fnic                   sdd         SP B3    active   alive      0      0
   1 fnic                   sdc         SP A0    active   alive      0      0
   1 fnic                   sdb         SP A1    active   alive      0      0

5) We are ready to create snapshots. On the source server, flush memory to disk:

[root@source ~]# /usr/snapcli/snapcli flush -o /dev/emcpowera1,/dev/emcpowerb1
Flushed /dev/emcpowera1,/dev/emcpowerb1.

6) Create the snapshot using the consistency group; run this command on the source server. Notice how we specify each PowerPath device that is a member of the volume group.

[root@source ~]# /usr/snapcli/snapcli create -s vnx_snapshot -o /dev/emcpowera1,/dev/emcpowerb1 -c vnx_consistency_group
Attempting to create consistent snapshot vnx_snapshot.
Successfully created consistent snapshot vnx_snapshot.
 on object /dev/emcpowera1.
 on object /dev/emcpowerb1.

7) Attach the snapshots to the SMPs created earlier; run this command on the target server.

[root@target ~]# /usr/snapcli/snapcli attach -s vnx_snapshot -f
Scanning for new devices.
Attached snapshot vnx_snapshot on device /dev/emcpowerb.
Attached snapshot vnx_snapshot on device /dev/emcpowera.

8) When the snapshot gets attached, the volume group gets automatically imported. We can verify it by running this command on the target server:

[root@target ~]# vgdisplay -v VG_VNX
    Using volume group(s) on command line
    Finding volume group "VG_VNX"
  --- Volume group ---
  VG Name               VG_VNX
  System ID             
  Format                lvm2
  Metadata Areas        2
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                2
  Act PV                2
  VG Size               19.99 GiB
  PE Size               4.00 MiB
  Total PE              5118
  Alloc PE / Size       4864 / 19.00 GiB
  Free  PE / Size       254 / 1016.00 MiB
  VG UUID               hbfpXb-q7mU-KMi0-nFxP-9lw2-bi8q-S0vJpY

  --- Logical volume ---
  LV Path                /dev/VG_VNX/vnx_lv
  LV Name                vnx_lv
  VG Name                VG_VNX
  LV UUID                hqEH3q-zchl-ZeRd-KcuA-3PLi-seMi-8PxWsR
  LV Write Access        read/write
  LV Creation host, time localhost.localdomain, 2013-07-28 14:19:05 -0400
  LV Status              NOT available
  LV Size                19.00 GiB
  Current LE             4864
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto

  --- Physical volumes ---
  PV Name               /dev/emcpowerb1     
  PV UUID               YAGYyX-tTpm-NFDC-2PRw-nntO-jRUs-Z54k00
  PV Status             allocatable
  Total PE / Free PE    2559 / 0

  PV Name               /dev/emcpowera1     
  PV UUID               d2bjdo-qrw3-J00L-KkF6-I7zq-7qGo-3d1Bxm
  PV Status             allocatable
  Total PE / Free PE    2559 / 254

Notice how LV Status is "NOT available"; that means that while the volume group was imported, it still needs to be activated:

[root@target ~]# vgchange -a y VG_VNX
  1 logical volume(s) in volume group "VG_VNX" now active

Now if we repeat the vgdisplay command, LV Status will be "available". At this point, if the mount point is in your fstab you can simply run "mount -a"; if it is not, you can manually mount the logical volume.
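
For example, mounting it manually on an ad-hoc mount point (/mnt/vnx_snap is just an example path):

[root@target ~]# mkdir -p /mnt/vnx_snap
[root@target ~]# mount /dev/VG_VNX/vnx_lv /mnt/vnx_snap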

Another helpful step is to run "powermt check" to update the PowerPath configuration with the correct LUN information. If you look back at "powermt display dev=all" in step 4, you will notice that it did not display any storage group or LUN-related information. But now, after we run "powermt check" on the target host followed by "powermt display dev=all", we see the storage group and LUN (SMP in this case) information populated.

[root@target ~]# powermt display dev=all
Pseudo name=emcpowera
VNX ID=APM00112345678 [stg-group2]
Logical device ID=60060160131C42004456510ACEF7E211 [SMP_LUN_382]
state=alive; policy=CLAROpt; queued-IOs=0
Owner: default=SP A, current=SP A       Array failover mode: 4
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --   -- Stats ---
###  HW Path               I/O Paths    Interf.  Mode     State   Q-IOs Errors
==============================================================================
   2 fnic                   sdq         SP B1    active   alive      0      0
   2 fnic                   sdp         SP B0    active   alive      0      0
   2 fnic                   sdo         SP A3    active   alive      0      0
   2 fnic                   sdn         SP A2    active   alive      0      0
   1 fnic                   sdm         SP B2    active   alive      0      0
   1 fnic                   sdl         SP B3    active   alive      0      0
   1 fnic                   sdk         SP A0    active   alive      0      0
   1 fnic                   sdj         SP A1    active   alive      0      0

Pseudo name=emcpowerb
VNX ID=APM00112345678 [stg-group2]
Logical device ID=60060160131C420006DFEB996EF6E211 [SMP_LUN_353]
state=alive; policy=CLAROpt; queued-IOs=0
Owner: default=SP A, current=SP A       Array failover mode: 4
==============================================================================
--------------- Host ---------------   - Stor -  -- I/O Path --   -- Stats ---
###  HW Path               I/O Paths    Interf.  Mode     State   Q-IOs Errors
==============================================================================
   2 fnic                   sdi         SP B1    active   alive      0      0
   2 fnic                   sdh         SP B0    active   alive      0      0
   2 fnic                   sdg         SP A3    active   alive      0      0
   2 fnic                   sdf         SP A2    active   alive      0      0
   1 fnic                   sde         SP B2    active   alive      0      0
   1 fnic                   sdd         SP B3    active   alive      0      0
   1 fnic                   sdc         SP A0    active   alive      0      0
   1 fnic                   sdb         SP A1    active   alive      0      0

9) After you are done with the snapshot, we are going to detach it from the target server. First we flush memory to disk:

[root@target ~]# /usr/snapcli/snapcli flush -o /dev/emcpowera1,/dev/emcpowerb1
Flushed /dev/emcpowera1,/dev/emcpowerb1.

Then unmount the file system, deactivate/export the volume group, and detach the snapshots.
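
The unmount itself is not shown below; assuming the LV was mounted at /mnt/vnx_snap as in the earlier example, it is simply:

[root@target ~]# umount /mnt/vnx_snap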

[root@target ~]# vgchange -a n VG_VNX

[root@target ~]# vgexport VG_VNX
  Volume group "VG_VNX" successfully exported

[root@target ~]# /usr/snapcli/snapcli detach -s vnx_snapshot 
Detaching snapshot vnx_snapshot on device /dev/emcpowerb.
Detaching snapshot vnx_snapshot on device /dev/emcpowera.

10) Finally, we are going to destroy the snapshot from the source server:

[root@source ~]# /usr/snapcli/snapcli destroy -s vnx_snapshot -o /dev/emcpowera1,/dev/emcpowerb1
Destroyed snapshot vnx_snapshot on object /dev/emcpowera1,/dev/emcpowerb1.

11) When you are ready to create snapshots again, simply repeat steps 5-8.
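
If this becomes a routine refresh, steps 5-8 can be wrapped in a small script. A minimal sketch, assuming passwordless ssh from the source server to the target, the snapcli install path used above, and the /mnt/vnx_snap mount point from the earlier example:

#!/bin/bash
# refresh_vnx_snapshot.sh - rough sketch of steps 5-8, run from the source server
set -e
SNAP=vnx_snapshot
CG=vnx_consistency_group
DEVS=/dev/emcpowera1,/dev/emcpowerb1

/usr/snapcli/snapcli flush -o $DEVS                      # step 5: flush source buffers to disk
/usr/snapcli/snapcli create -s $SNAP -o $DEVS -c $CG     # step 6: consistent snapshot of both LUNs

ssh target "/usr/snapcli/snapcli attach -s $SNAP -f"     # step 7: attach on the target host
ssh target "vgchange -a y VG_VNX && mount /dev/VG_VNX/vnx_lv /mnt/vnx_snap"   # step 8: activate and mount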

Using VNX Snapshots on Windows

In Block OE version 32, EMC introduced new snapshot functionality called VNX Snapshots, a very slick technology that simplifies the process of creating snapshots for block storage. There are a lot of benefits to using VNX Snapshots versus legacy SnapView Snapshots (no need for a reserved LUN pool, no copy-on-first-write, etc.). You can read about it in the following documents:

http://www.emc.com/collateral/software/white-papers/h10858-vnx-snapshots-wp.pdf

https://support.emc.com/docu45754_SnapCLI_for_VNX_Release_Notes_Version_3.32.0.0.5.pdf?language=en_US

https://support.emc.com/docu41553_VNX-Command-Line-Interface-Reference-for-Block.pdf?language=en_US

These papers provide a lot of good information about the technology, along with some examples. I found some of the examples kind of confusing, so I wanted to provide my own. The overall goal was to allow my customer to create/delete/mount snapshots without needing to issue naviseccli commands.

Configuration:

VNX 5700 – Block OE 05.32.000.5.206

2 Windows 2008 R2 servers

SnapCLI – V3.32.0.0.5-1 (32-bit)

Naviseccli – 7.32.25.1.63

2 storage groups

stg-group1

stg-group2

1) First we need to install SnapCLI on both Windows servers; nothing special there, just Next, Next, Done.
2) Next we need to create an SMP (Snapshot Mount Point) and present it to stg-group2. Think of an SMP as a placeholder device that the snapshot will be attached to. On a management host where I have naviseccli installed, I run the following command. You want to specify the "allowInbandSnapAttach" option, as that will allow you to attach the snapshot on the target host using snapcli; otherwise you would have to use naviseccli to attach it. Since our goal is to have the customer only use snapcli, that's exactly what we need. LUN 353 is owned by SP A, hence "-sp A".

C:\>naviseccli -address 10.210.6.19 lun -create -type Snap -primaryLunName "LUN 353" -name SMP_LUN_353 -allowInbandSnapAttach yes -sp A

Now let's see what the SMP looks like and note the snapshot mount point number:

C:\>naviseccli -address 10.210.6.19 lun -list -l 353 -snapMountPoints
LOGICAL UNIT NUMBER 353
Name: LUN 353
Snapshot Mount Points: 7533

3) Now we need to add the SMP to storage group stg-group2. Note that the ALU is the snapshot mount point number, and since there are no LUNs in the storage group we are using HLU 0.

C:\>naviseccli -address 10.210.6.19 storagegroup -addhlu -gname stg-group2 -alu 7533 -hlu 0

If we were to look in Disk Management and rescan, we would see an "Unknown" disk in an offline state.

[Screenshot: Disk Management]
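
The rescan can also be done from the command line instead of the Disk Management GUI, for example with diskpart:

C:\>diskpart
DISKPART> rescan
DISKPART> list disk
DISKPART> exit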

4) At this point we are ready to create the snapshot. On the source system, we flush any data in memory to disk and then create the snapshot:

C:\>snapcli flush -o F:

C:\>snapcli create -s "Snapshot_LUN_353" -o F:
Attempting to create snapshot Snapshot_LUN_353 on device \\.\PhysicalDrive1.
Attempting to create the snapshot on the entire LUN.
Created snapshot Snapshot_LUN_353.

5) Now on the target system we are going to attach the snapshot:

C:\>snapcli attach -s "Snapshot_LUN_353" -f -d F:
Scanning for new devices.
User specified drive letter F:
Attached snapshot Snapshot_LUN_353 on device F:.

At this point, if we look in Disk Management again, the drive should be online and available.

[Screenshot: Disk Management]
6) When you are done with the snapshot, we need to flush it and detach it. On the target server we run these commands:

C:\>snapcli flush -o F:
Flushed F:.

C:\>snapcli detach -s "Snapshot_LUN_353"
Detaching snapshot Snapshot_LUN_353 on device F:.

7) And finally, delete the snapshot; on the source server run:

C:\>snapcli destroy -s "Snapshot_LUN_353" -o F:
Destroyed snapshot Snapshot_LUN_353 on object F:.