
Tuesday, October 20, 2015

Difference between SnapVault and SnapMirror?

What does it mean that SnapVault is a backup solution?

The diagram below can help:
SnapVault
The example makes a few assumptions:
  • we’ve got filerX in one location and filerY in another location
  • the customer has a connection to both filerX and filerY, although all shares to customers are served from filerX (via CIFS, NFS, iSCSI or FC)
  • all customer data is being transferred to filerY via SnapVault
What can we do with SnapVault?
  • as a backup solution, we can keep a longer snapshot retention time on filerY, so more historical data is available there; if filerY has slower disks, this setup is smart, because slower disks are cheaper disks, and there is no need to use 15k rpm disks on a filer that is not serving data to the customer
  • if the customer has a network connection and access to the shares on filerY, he can restore data to filerX by himself, even single files
  • if there is a disaster within filerX and we lose all data, we can restore the data from filerY
What can we not do with SnapVault?
  • in case of a disaster within filerX we cannot “promote” filerY to the production side. We cannot “reverse the relationship” to make the qtrees on filerY the source and make them read-write; they are SnapVault destinations, so they are read-only
  • (with a SnapMirror license available on filerY we can convert a SnapVault qtree to a SnapMirror qtree, which solves that ‘issue’; see the command sketch below)
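
For reference, a minimal SnapVault sketch. The volume and qtree names (/vol/vol1/qtree1 on filerX, /vol/sv_vol1/qtree1 on filerY) are placeholders and the retention schedule is just an example:

On filerY (the SnapVault secondary), start the baseline transfer and keep a long snapshot retention:
filerY> snapvault start -S filerX:/vol/vol1/qtree1 /vol/sv_vol1/qtree1
filerY> snapvault snap sched -x sv_vol1 sv_weekly 8@sat@19

On filerX (the primary), a whole qtree can be restored back from the secondary:
filerX> snapvault restore -S filerY:/vol/sv_vol1/qtree1 /vol/vol1/qtree1
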
What does it mean that SnapMirror is a DR solution?
Let's add a diagram:


SnapMirror
The example makes a few assumptions:
  • we’ve got filerX in one location and filerY in another location
  • the customer has a connection to both filerX and filerY, although all shares to customers are served from filerX
  • all customer data is being transferred to filerY via SnapMirror
What can we do with SnapMirror?
  • as a backup solution, we can restore accidentally deleted or lost data on filerX, as long as the SnapMirror relationship has not been updated in the meantime
  • if there is some kind of issue with filerX (from a network problem to a total disaster), we can easily reverse the relationship: we make the volume or qtree on filerY the source, make it read-write, provide a network connection to the customer and voila – we are back online! After the issue has been resolved, we can resync the original source with the changes made at the destination and reverse the relationship again, as sketched below.
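
A minimal command sketch of that failover and failback, assuming a volume SnapMirror relationship filerX:vol1 -> filerY:vol1 (the volume name is a placeholder):

On filerY, make the destination writable and start serving data from it:
filerY> snapmirror quiesce vol1
filerY> snapmirror break vol1

Once filerX is healthy again, resync it as the new destination so it picks up the changes made on filerY:
filerX> snapmirror resync -S filerY:vol1 filerX:vol1
filerX> snapmirror update -S filerY:vol1 filerX:vol1

Finally, reverse the relationship back to the original direction:
filerX> snapmirror quiesce vol1
filerX> snapmirror break vol1
filerY> snapmirror resync -S filerX:vol1 filerY:vol1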



Sunday, March 29, 2015

Force deleting NetApp Qtree

Sometimes we might run into issues deleting a qtree in NetApp, so let's see how we can force-delete a qtree.

Here let's assume we are deleting qtree1 in volume1, which belongs to vfiler vfiler1.

Step 1: Check if there are any CIFS shares related to the qtree

>vfiler run vfiler1 cifs shares

cifs1$   /vol/volume1/qtree1
                        everyone / Full Control

Step 2: Delete the cifs1$ share before deleting the qtree

>vfiler run vfiler1 cifs shares -delete cifs1$

Step 3: Go to Advanced mode

>priv set advanced

Step 4: Delete Qtree

>*vfiler run vfiler1 qtree delete "/vol/volume1/qtree1"

If the above command gives an error, go with force-deleting the qtree:

>*vfiler run vfiler1 qtree delete -f "/vol/volume1/qtree1"
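
Once the qtree is gone, it is worth verifying the result and dropping back out of advanced mode (qtree status simply lists the qtrees remaining in volume1):

>*vfiler run vfiler1 qtree status volume1

>*priv set admin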


VNX/Celerra Multi-Path File System (MPFS) allocation.

The procedure might vary (VNX/Celerra).

If we are moving data from a NetApp filer to a VNX that is dedicated to NFS and CIFS, and some of that data is accessed by both CIFS and NFS users, we may opt to set up MPFS on the clients and on the VNX: create MPFS volumes, share them over CIFS and NFS, and check the MPFS stats to confirm the data is moving through MPFS. For that to be successful we need to install the MPFS client software, have the usermappers set up on the VNX, and start testing from both clients. Later on we can use emcopy or robocopy to copy the data from NetApp to VNX.

Step:1 Start MPFS service


server_setup server_2 -P mpfs -option start=32


Step:2 Create a new volume with a 256K stripe per EMC's recommendations


nas_volume -name mpfsvol -create -Stripe 262144 d54,d55,d56,d57,d58,d59,d60,d61


Step:3 Create a new metavolume for the new volume:


nas_volume -name mpfsmtvol -create -Meta mpfsvol


Step:4 Create a new file system on this metavolume


nas_fs -name mpfs -create mpfsmtvol


Step:5 Create a new mountpoint


server_mountpoint server_2 -create /mpfs


Step:6 Mount file system


server_mount server_2 -option nolock,accesspolicy=NATIVE mpfs /mpfs


Step:7 Create new mountpoint for testing


server_mountpoint server_2 -create /mpfs/test


Step:8 Create CIFS share for this work


server_export server_2 -P cifs -n test$ -o netbios=server1 -comment 'MPFS CIFS Share for test' /mpfs/test
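
Since the same data is also accessed by NFS users, the path can be exported over NFS as well; a sketch with a placeholder client name (client1 is an assumption):

server_export server_2 -P nfs -option rw=client1,root=client1 /mpfs/test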


Step:9 Validate


server_mpfs server_2 -mountstatus | grep -i mpfs

server_df server_2 /mpfs

server_mpfs server_2

nas_volume -info mpfsmtvol


Step:10 Make sure the system administrators have the MPFS client software installed!


Step:11 Verify the stats to check whether data is going through the Fibre Channel interface (MPFS) or via CIFS


mpfsctl stats

DFM threshold values set from the DFM server

These are some commands showing how to set DFM event severities and alert thresholds:

From DFM server:

C:\Windows>dfm eventtype modify -v Critical aggregate-full
Event "aggregate-full" was left unchanged.
C:\Windows>dfm eventtype modify -v Error aggregate-almost-full
Modified event "aggregate-almost-full".


C:\Windows>dfm option set aggrFullThreshold=95
Changed aggregate full threshold (%) to 95.

C:\Windows>dfm option set aggrNearlyFullThreshold=90
Changed aggregate nearly full threshold (%) to 90.
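
To double-check the result, something along these lines should work from the same DFM server (exact commands and output can vary by DFM/OnCommand version):

C:\Windows>dfm option list aggrFullThreshold
C:\Windows>dfm option list aggrNearlyFullThreshold
C:\Windows>dfm eventtype list aggregate-full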

Tuesday, March 24, 2015

Moving Vfiler from one filer to another

Pre-requisite:
                 Required checklist info to implement the change (to move the vfiler):
·         Filer name
·         Aggregate name
·         Make sure to set up DR replication again after moving the vfiler to the new filer

 Start replicating volumes:
 (Example: we are moving labvfiler from labfiler1 to labfiler3. Here labfiler1's DR is labdrfiler1 and labfiler3's DR is labdrfiler3.)

1.       ssh to labfiler1 to get volume info for labvfiler

labfiler1>  vfiler status -r labvfiler
labvfiler                       running
   ipspace: vfilers1
   IP address: 10.10.1.2 [vif1-110]
   Path: /vol/labvfiler [/etc]
   Path: /vol/labvfiler_vol1
   Path: /vol/labvfiler_vol2
   Path: /vol/labvfiler_vol3
   Path: /vol/labvfiler_vol4
   UUID: 2a0000d6-4594-1ew1-9e2f-007891w2fc3
labfiler1> vol size labvfiler
Warning: Volume 'labvfiler' has fs_size_fixed option set.  The file system
size may differ from the volume size.
See 'vol status -b' for more detail.
vol size: Flexible volume 'labvfiler' has size 5g.
labfiler1> vol size labvfiler_vol1
vol size: Flexible volume 'labvfiler_vol1' has size 2202g.
labfiler1> vol size labvfiler_vol2
vol size: Flexible volume 'labvfiler_vol2' has size 1975g.
labfiler1> vol size labvfiler_vol3
vol size: Flexible volume 'labvfiler_vol3' has size 1714g.
labfiler1> vol size labvfiler_vol4
vol size: Flexible volume 'labvfiler_vol4' has size 3319g. 

2.       Once you collect the volume info, ssh to labfiler3 and check which aggregate has enough space
3.       Create NEW volumes on labfiler3 and start replicating:
·         vol create volumename -s none aggrname XXXg
·         vol restrict volumename
·         snapmirror initialize -S source:volumename destination:volumename

labfiler3> vol create labvfiler -s none aggr0 5g
Creation of volume 'labvfiler' with size 5g on containing aggregate
'aggr0' has completed.
labfiler3> vol create labvfiler_vol1 -s none aggr0 2202g
Creation of volume 'labvfiler_vol1' with size 2202g on containing aggregate
'aggr0' has completed.
labfiler3> vol create labvfiler_vol2 -s none aggr0 1975g
Creation of volume 'labvfiler_vol2' with size 1975g on containing aggregate
'aggr0' has completed.
labfiler3> vol create labvfiler_vol3 -s none aggr0 1714g
Creation of volume 'labvfiler_vol3' with size 1714g on containing aggregate
'aggr0' has completed.
labfiler3> vol create labvfiler_vol4 -s none aggr0 3319g
Creation of volume 'labvfiler_vol4' with size 3319g on containing aggregate
'aggr0' has completed.

labfiler3> vol restrict labvfiler
Volume 'labvfiler' is now restricted.
labfiler3> vol restrict labvfiler_vol1
Volume 'labvfiler_vol1' is now restricted.
labfiler3> vol restrict labvfiler_vol2
Volume 'labvfiler_vol2' is now restricted.
labfiler3> vol restrict labvfiler_vol3
Volume 'labvfiler_vol3' is now restricted.
labfiler3> vol restrict labvfiler_vol4
Volume 'labvfiler_vol4' is now restricted.


labfiler3> snapmirror initialize -S labfiler1.rr.snd.com:labvfiler labfiler3:labvfiler
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
labfiler3> snapmirror initialize -S labfiler1.rr.snd.com:labvfiler_vol1 labfiler3:labvfiler_vol1
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
labfiler3> snapmirror initialize -S labfiler1.rr.snd.com:labvfiler_vol2 labfiler3:labvfiler_vol2
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
labfiler3> snapmirror initialize -S labfiler1.rr.snd.com:labvfiler_vol3 labfiler3:labvfiler_vol3
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
labfiler3> snapmirror initialize -S labfiler1.rr.snd.com:labvfiler_vol4 labfiler3:labvfiler_vol4
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log. 

4.       Update /etc/snapmirror.conf on labfiler3 with a schedule, depending on the cutover/move date (see the sketch below)
5.       As labfiler3's DR partner is labdrfiler3, we have to replicate all volumes from labfiler3 to labdrfiler3.
Create the same volumes on labdrfiler3 and start snapmirror initialize once the labfiler1 to labfiler3 baseline transfer completes. Here we can also start copying from labdrfiler1 to labdrfiler3.
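
As a sketch for step 4, an /etc/snapmirror.conf entry on labfiler3 could look like the line below (hourly updates at minute 0; the schedule itself is an assumption, adjust it to the cutover plan). For step 5, the initialize is the same kind of command, just cascaded from labfiler3, shown here for labvfiler_vol1 only:

labfiler1.rr.snd.com:labvfiler_vol1  labfiler3:labvfiler_vol1  -  0 * * *

labdrfiler3> snapmirror initialize -S labfiler3.rr.snd.com:labvfiler_vol1 labdrfiler3:labvfiler_vol1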


ON DAY OF CUTOVER

********************************
** ON labfiler1 ***************
********************************

vfiler stop labvfiler

_______________________________

********************************
** ON labfiler3 ***************
********************************

snapmirror update -S labfiler1.rr.snd.com:labvfiler        labfiler3:labvfiler 
snapmirror update -S labfiler1.rr.snd.com:labvfiler_vol1   labfiler3:labvfiler_vol1
snapmirror update -S labfiler1.rr.snd.com:labvfiler_vol2     labfiler3:labvfiler_vol2
snapmirror update -S labfiler1.rr.snd.com:labvfiler_vol3     labfiler3:labvfiler_vol3 
snapmirror update -S labfiler1.rr.snd.com:labvfiler_vol4     labfiler3:labvfiler_vol4

snapmirror status labfiler3:labvfiler 
snapmirror status labfiler3:labvfiler_vol1
snapmirror status labfiler3:labvfiler_vol2
snapmirror status labfiler3:labvfiler_vol3 
snapmirror status labfiler3:labvfiler_vol4

snapmirror quiesce labfiler3:labvfiler 
snapmirror quiesce labfiler3:labvfiler_vol1
snapmirror quiesce labfiler3:labvfiler_vol2
snapmirror quiesce labfiler3:labvfiler_vol3 
snapmirror quiesce labfiler3:labvfiler_vol4

snapmirror break labfiler3:labvfiler 
snapmirror break labfiler3:labvfiler_vol1
snapmirror break labfiler3:labvfiler_vol2
snapmirror break labfiler3:labvfiler_vol3 
snapmirror break labfiler3:labvfiler_vol4

vol options labvfiler_vol1 fs_size_fixed off
vol options labvfiler_vol2 fs_size_fixed off
vol options labvfiler_vol3 fs_size_fixed off
vol options labvfiler_vol4 fs_size_fixed off
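
Note: the vfiler create below assumes the ipspace vfilers1 and the interface vif1-110 already exist on labfiler3. If they do not, they would have to be created first, roughly like this (VLAN ID 110 on vif1 is taken from the interface name above):

vlan create vif1 110
ipspace create vfilers1
ipspace assign vfilers1 vif1-110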

vfiler create labvfiler -r /vol/labvfiler

ifconfig vif1-110 alias 10.10.1.2 netmask 255.255.255.0

Update the below entry in /etc/rc:

ifconfig vif1-110 alias 10.10.1.2 netmask 255.255.255.0
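
To confirm the /etc/rc change, the file can simply be read back on the console:

rdfile /etc/rc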


vfiler run labvfiler cifs shares
vfiler run labvfiler exportfs

_______________________________

********************************
** ON labfiler1 ***************
********************************


vol offline labvfiler 
vol offline labvfiler_vol1
vol offline labvfiler_vol2
vol offline labvfiler_vol3 
vol offline labvfiler_vol4

vol status labvfiler 
vol status labvfiler_vol1
vol status labvfiler_vol2
vol status labvfiler_vol3 
vol status labvfiler_vol4
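
Optionally, the old relationships can also be released from labfiler1 so it stops tracking the old destinations (normally done before the volumes are taken offline; shown for the root volume only, repeat for the data volumes):

snapmirror release labvfiler labfiler3:labvfiler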

********************************
** ON labdrfiler1 ***************
********************************

vol offline labvfiler 
vol offline labvfiler_vol1
vol offline labvfiler_vol2
vol offline labvfiler_vol3 
vol offline labvfiler_vol4


After 1 day, once you have no issues with the vfiler, you can destroy the volumes on the OLD filers.
********************************
** ON labfiler1 & labdrfiler1 **
********************************

After confirming with the NAS engineering team and the L3s, proceed with vol destroy:

vol destroy labvfiler
y
vol destroy labvfiler_vol1
y
vol destroy labvfiler_vol2
y
vol destroy labvfiler_vol3 
y
vol destroy labvfiler_vol4
y


Please leave your comments if you have any questions.

Hope this procedure helps many.

Thanks
SANNASDAS