Thursday, July 7, 2011

Renaming a Server (Host) Name in Clariion (CX) Arrays!!

The following is the procedure to rename an existing server or host name on CX arrays running different FLARE code versions.

I. Follow these steps for the CX series, CX3 series, and CX4 series with FLARE Release 28 only
Note: This process can be done non-disruptively!

-Right-click the array icon and select Connectivity Status.
-Highlight the entries that must be changed and select the Info button. (Record the current initiator settings.)
-Select the Group Edit button.
-Find the HBA WWNs in the left window and move them to the right window.
-Make any necessary changes so that the initiator settings match what you recorded earlier.
-Select the New Host radio button.
-Type in the correct host name and IP address.
-Click OK and then confirm the success.
-Close Connectivity Status.
-Right-click the array icon and select Update Now.
Note: If the changes do not take place right away, you may have to restart the Management Server. See solution emc99876.

The host name should now show correctly!

II. Follow these steps for the CX4 series with FLARE Release 29 only

Note: This process can be done non-disruptively!

-Right-click the array icon and select Connectivity Status.
-Select the Host Initiators tab, then select the host.
-Highlight the host entry that must be changed and select the Edit button. (Record the current initiator settings.)
-Check the HBA WWNs in the top window.
-Make any necessary changes so that the initiator settings match what you recorded earlier.
-Select the New Host radio button.
-Type in the correct host name and IP address.
-Click OK and then confirm the success.
-Close Connectivity Status.
-Right-click the array icon and select Update Now.
Note: If the changes do not take place right away, you may have to restart the Management Server. See solution emc99876.

The host name should now show correctly!

III. Follow these steps for the CX4 series with FLARE Release 30 only

Note: This process can be done non-disruptively!

-Select the array in the system pulldown menu at the top left of the Unisphere screen.
-In the left pane, under System Management, select Storage System Connectivity Status.
-Select the Host Initiators tab, then select the host.
-Highlight the host entries that must be changed and select the Edit button. (Record the current initiator settings.)
-Find the HBA WWNs in the top window.
-Make any necessary changes so that the initiator settings match what you recorded earlier.
-Select the New Host radio button.
-Type in the correct host name and IP address.
-Click OK and then confirm the success.
-Close Connectivity Status.
-Right-click the array icon and select Update Now.
Note: If the changes do not take place right away, you may have to restart the Management Server. See solution emc99876.

The host name should now show correctly!
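If you want to double-check the result from the CLI, the registered initiator records (including the host name and IP address) can be listed with Navisphere CLI. A minimal sketch, assuming SP_IP is the address of one of the array's storage processors (authentication arguments omitted):

naviseccli -h SP_IP port -list -hba

Compare the host name shown in the initiator records against what you entered in the GUI.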

Commands for installing Disk Shelf Firmware [NetApp]

1. Download the zip file and extract it on the filer whose shelf firmware you want to upgrade (example: netapp01).

2. priv set advanced

3. storage download shelf (typing this command and hitting Enter will prompt "Are you sure you want to upgrade all the disk shelves?" - if you wish to upgrade all the shelves, go ahead and type "y"; if you need a particular shelf, use the following command instead)

4. storage download shelf 0b.shelf4 (0b.shelf4 = adapter and shelf number)

While the upgrade is running, the shelf will go into an unknown state for a few seconds; don't worry, it will return to the "ok" state within a few seconds.

To verify the upgraded firmware, type the command "sysconfig -v".

All of this can be run through the CLI. You can also find this procedure on now.netapp.com.
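Putting the commands together, a typical session might look like this (netapp01 and shelf address 0b.shelf4 are just the examples from above):

netapp01> priv set advanced
netapp01*> storage download shelf 0b.shelf4
netapp01*> sysconfig -v
netapp01*> priv set

The final "priv set" simply drops the console back to the admin privilege level.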

SAN Copy incremental steps!

USING NAVISPHERE MANAGER:

1. CX 700 Array
   Source Lun # 1
   Size: 100 GB
   Raid type: R5
   Raid Group: 50
   Storage Group: Test1

2. CX4-960 Array
   Target Lun # 1
   Size: 100 GB
   Raid type: any
   Raid Group: any
   Storage Group: Test1

3. First, identify which LUNs are to be migrated using SAN Copy.

4. Create LUN # 1 on the target array with the same size as source LUN # 1 or greater.

5. Create reserved LUNs and configure the reserved LUN pool on the source, sized at 20% of each source LUN.

6. Create the same storage group (Test1) on the target array (CX4-960) as on the source array (CX700).

7. Add target LUN # 1 to the storage group Test1 on the target array (CX4-960).

8. Create a storage group named "SANCOPY" on the target array CX4-960.

9. Also add target LUN # 1 on the CX4-960 to the "SANCOPY" storage group.

10. To enable SAN Copy connections, zoning has to be done between the CX700 and the CX4-960.
   Select 2 available free ports on SP-A (e.g., port 1 and port 2) and 2 available free ports on SP-B (port 1 & port 2) on the CX700 as well as the CX4-960.

11. Switch Zoning:
   Create aliases for CX700 SPA & SPB (ports 1 & 2)
   1.CX700SPA1
   2.CX700SPA2
   3.CX700SPB1
   4.CX700SPB2
12. Create aliases for CX4-960 SPA & SPB (ports 1 & 2)
   1.CX4-960SPA1
   2.CX4-960SPA2
   3.CX4-960SPB1
   4.CX4-960SPB2

13. Create zones between the CX700 and the CX4-960
    Zone1 CX700SPA1 to CX4-960SPA1
    Zone2 CX700SPA1 to CX4-960SPB1
    Zone3 CX700SPA2 to CX4-960SPA1
    Zone4 CX700SPA2 to CX4-960SPB1
    Zone5 CX700SPB1 to CX4-960SPA1
    Zone6 CX700SPB1 to CX4-960SPB1
    Zone7 CX700SPB2 to CX4-960SPA1
    Zone8 CX700SPB2 to CX4-960SPB1

14. Add the zones to the active configuration.
    CFGADD: CFG1
    Zone1 CX700SPA1 to CX4-960SPA1
    Zone2 CX700SPA1 to CX4-960SPB1
    Zone3 CX700SPA2 to CX4-960SPA1
    Zone4 CX700SPA2 to CX4-960SPB1
    Zone5 CX700SPB1 to CX4-960SPA1
    Zone6 CX700SPB1 to CX4-960SPB1
    Zone7 CX700SPB2 to CX4-960SPA1
    Zone8 CX700SPB2 to CX4-960SPB1

15. Enable the configuration with cfgenable (a Brocade CLI sketch follows below).
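For reference, on a Brocade switch the aliasing and zoning in steps 11-15 would look roughly like this. This is only a sketch: the WWPNs are placeholders, and the alias names here use underscores since some switch firmware restricts special characters in zone object names:

alicreate "CX700_SPA1", "50:06:01:60:00:00:00:01"
alicreate "CX4960_SPA1", "50:06:01:60:00:00:00:11"
zonecreate "Zone1", "CX700_SPA1; CX4960_SPA1"
cfgadd "CFG1", "Zone1"
cfgenable "CFG1"

Repeat the alicreate/zonecreate lines for the remaining ports and zones before enabling the configuration.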

16. Log in to the source CX700 array using Navisphere Manager.
    Right-click on the CX700 array,
    go to "SAN Copy",
    go to "Update Connections".

17. Then log in to the target CX4-960 using Navisphere Manager.
    Right-click on the "SANCOPY" storage group,
    go to "SAN Copy",
    go to "Enable Connections".
    A dialog box then appears with CX700 SPA in the drop-down menu;
    check the boxes for both ports, i.e., port 1 & port 2.
    Next, change the drop-down at the top of the same dialog box to CX700 SPB;
    check the boxes for both ports, i.e., port 1 & port 2.
    Click Apply.

18. Next, go to the CX700 array.
    Right-click on the CX700 array,
    go to "Update Now".

19. Next, go to the CX4-960 array.
    Right-click on the CX4-960 array,
    go to "Connectivity Status";
    you should see all the CX700 SPA & SPB ports logged in and registered.

20. Next, go to the CX700 array.
    Right-click on the CX700 array,
    go to "Connectivity Status";
    you should see all the CX4-960 SPA & SPB ports logged in and registered.

21. Once the SAN Copy connections have been enabled, you are ready to create the sessions.

22. Before creating SAN Copy sessions, you should have the following details in hand:
    the SP owner of the source LUN
    the destination LUN's World Wide Name (WWN)

23. On the source array CX700:
    Click on the storage group Test1 to expand it.
    Expand the LUNs tab.
    Right-click on LUN # 1,
    go to "SAN Copy",
    go to "Create Session for LUN".
    A dialog box appears where you have to enter all the session details.
    Type the name of the session (CX700_1_CX4-960_1) at the top.
    Next, in the drop-down, select the option "Incremental".
    Select the check box "Initial Sync Required".
    Select source LUN # 1 in the source storage.
    Next, click on "Select the destination storage".
    A dialog box appears with an option "ENTER WWN".
    Click on "Enter WWN".
    Then enter the WWN of target LUN # 1 (on the CX4-960) and click "OK".
    Next, select both the source LUN and the target LUN in the same dialog box.
    Then click on "Verify Connections" (you should see a window confirming that all the logical units are accessible).
    Next, set the throttle value to "6".
    Accept the default value for "LINK BANDWIDTH" (i.e., 1.5).
    Finally, click "Apply".

24. Next, click on the "SAN Copy Sessions" tab on the CX700 array to expand it.

25. If you expand the incremental sessions, you will see the session name CX700_1_CX4-960_1.

26. To start the session:
    Right-click on the session "CX700_1_CX4-960_1",
    click "Start".

27. To check the status of the session:
    Right-click on the session "CX700_1_CX4-960_1",
    click "Status".

28. Once the initial copy has completed 100%, we need to run the final incremental copy.

29. To start the final incremental copy, the LUN should be taken offline, i.e., remove the hosts from storage group
    Test1 on the source array CX700 so that nobody has access to that particular LUN.

30. To remove the host's access to LUN # 1 on the CX700:
    Right-click on the "Test1" storage group,
    click "Connect Hosts",
    remove the host from the storage group,
    click "Apply".

31. Next, go to the session CX700_1_CX4-960_1:
    Right-click on the session "CX700_1_CX4-960_1",
    click on "Mark",
    click "Yes".
    Next, again right-click on the session "CX700_1_CX4-960_1",
    click on "Start",
    then check the status of the session.

32. Once the final incremental copy has started, in the meantime,
    zoning should be done between the host HBAs and the new storage array, i.e., the CX4-960.
    
33. For zoning between the host HBAs and the CX4-960 SPs,
    create aliases:
    1. host hba1
    2. host hba2
    3. CX4-960SPA3 (e.g., SPA port 3)
    4. CX4-960SPB3 (e.g., SPB port 3)

34. Create zones:
    1.host hba1_CX4-960SPA3
    2.host hba1_CX4-960SPB3
    3.host hba2_CX4-960SPA3
    4.host hba2_CX4-960SPB3

35. Add the zones to the active CFG.

36. Stop & start the "NAVISPHERE AGENT" on the host.

37. Go to the target array CX4-960:
    Right-click on the CX4-960 array,
    go to "Connectivity Status";
    you should see the host registered with four paths.

38. Verify whether the final incremental copy has completed 100%:
    Right-click on the session "CX700_1_CX4-960_1",
    click "Status".

39. Finally, go to the target array:
    Right-click on the "Test1" storage group,
    click "Connect Hosts",
    add the host to the storage group,
    click "Apply".

40. Bring up the server, reboot it, and then check whether you can access the new LUNs from the CX4-960.

41. Finally, after you see the new LUNs, remove all the dead paths, i.e., the old LUNs from the CX700, on the host.

How to resize Snapmirror volumes?

Before increasing the size of the source volume, increase the size of the destination volume.


vol options source_volume fs_size_fixed off  # Turn off the fixed file system size on the source volume #

vol size destination_volume +50g  # Increase the destination size as per the customer request #

vol size source_volume +50g  # Increase the size of the source volume #

vol options source_volume fs_size_fixed on  # Turn the fixed file system size back on for the source volume #

snapmirror update -S source-filer:source_volume destination-filer:destination_volume  # Update the snapmirror #
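To confirm that the sizes line up after the resize, "vol status -b" reports both the volume size and the file system size in blocks; run it on both filers and make sure the destination is at least as large as the source:

source-filer> vol status -b source_volume
destination-filer> vol status -b destination_volume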

Saturday, June 18, 2011

How to discover a new client after installing the NBU agent?

Open the NBU console.


In the left pane of the NetBackup console, you will see the Policy option.
1. Click on the Policy option, right-click, and select New Policy (you will see a new window).
   Here, uncheck "Use Backup Policy Configuration Wizard"; it is shown at the bottom of this window.
2. Give the policy name and click OK.
3. You will see a new window; in this window you will see the Clients tab. Click on this tab.
4. In the Clients tab window, click on the "New" button.
5. Enter the client's full name, e.g., ab01c.tst.md.
6. It will automatically detect the client's O/S.

Or, using the CLI with the bpplclients command:

bpplclients <policy> -add <client_name> <hardware> <os>
Example:
bpplclients my_test_policy -add my-test.mydomain.com Linux RedHat2.6
or for a Windows server
bpplclients my_windows_policy -add my-wintest.mydomain.com PC Windows2003
To confirm that the client has been added, run bppllist my_test_policy -U and review the list of clients in the policy.

Netbackup Issues: A policy has a red cross mark - what does that mean?

If you see a "red cross" mark on a policy, it means that the policy is deactivated, so no backups are taking place. Right-click on the policy and select "Activate".
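The same toggle is available from the CLI through the bpplinfo command; a quick sketch, assuming the policy is named my_test_policy (the first line activates the policy, the second deactivates it again):

bpplinfo my_test_policy -modify -active
bpplinfo my_test_policy -modify -inactive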



Wednesday, June 1, 2011

Rescan commands after allocating storage to AIX and Solaris

AIX:
AIX# cfgmgr                 (scan for and configure the newly allocated devices)
AIX# fget_config -Av        (verify the newly discovered devices and their paths)
Solaris:
Solaris# devfsadm           (create device nodes for the new devices)
Solaris# cfgadm -a          (list attachment points to confirm the new LUNs)
Solaris# format             (to view new devices)

Netapp Snapmirror Setup.


Snapmirror is a licensed utility in Netapp for transferring data between filers. Snapmirror works at the volume level or the qtree level, and is mainly used for disaster recovery and replication.

Snapmirror needs a source and a destination filer. (When the source and destination are the same filer, the snapmirror happens on the local filer itself. This is for when you have to replicate volumes inside a filer. If you need DR capabilities for a volume inside a filer, you have to try syncmirror.)

Synchronous SnapMirror is a SnapMirror feature in which the data on one system is replicated on another system at, or near, the same time it is written to the first system. Synchronous SnapMirror synchronously replicates data between single or clustered storage systems situated at remote sites using either an IP or a Fibre Channel connection. Before Data ONTAP saves data to disk, it collects written data in NVRAM. Then, at a point in time called a consistency point, it sends the data to disk.

When the Synchronous SnapMirror feature is enabled, the source system forwards data to the destination system as it is written in NVRAM. Then, at the consistency point, the source system sends its data to disk and tells the destination system to also send its data to disk.
This guide takes you quickly through the Snapmirror setup and commands.

1) Enable Snapmirror on source and destination filer 


source-filer> options snapmirror.enable 
snapmirror.enable            on 
source-filer> 
source-filer> options snapmirror.access 
snapmirror.access            legacy 
source-filer>

2) Snapmirror Access
Make sure the destination filer has snapmirror access to the source filer. The destination filer's name or IP address should be in /etc/snapmirror.allow on the source. Use wrfile to add entries to /etc/snapmirror.allow, as shown after the listing below.

source-filer> rdfile /etc/snapmirror.allow 
destination-filer 
destination-filer2 
source-filer>
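For example, "wrfile -a" appends a single line without overwriting the rest of the file, which is the safest way to add an entry; this sketch shows how the destination-filer2 entry above could have been added:

source-filer> wrfile -a /etc/snapmirror.allow destination-filer2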

3) Initializing a Snapmirror relation 

Volume snapmirror : Create a destination volume on the destination netapp filer, of the same size as the source volume or greater. For volume snapmirror, the destination volume should be in restricted mode. For example, let us consider we are snapmirroring a 100G volume - we create the destination volume and make it restricted.

destination-filer> vol create demo_destination aggr01 100G 
destination-filer> vol restrict demo_destination
Volume SnapMirror creates a Snapshot copy before performing the initial transfer. This copy is referred to as the baseline Snapshot copy. After performing an initial transfer of all data in the volume, VSM (Volume SnapMirror) sends to the destination only the blocks that have changed since the last successful replication. When SnapMirror performs an update transfer, it creates another new Snapshot copy and compares the changed blocks. These changed blocks are sent as part of the update transfer.

Snapmirror is always destination-filer driven, so the snapmirror initialize has to be run on the destination filer. The command below starts the baseline transfer.

destination-filer> snapmirror initialize -S source-filer:demo_source destination-filer:demo_destination 
Transfer started. 
Monitor progress with 'snapmirror status' or the snapmirror log. 
destination-filer>

Qtree Snapmirror : For qtree snapmirror, you should not create the destination qtree; the snapmirror command automatically creates it. So just creating a volume of the required size is good enough.
Qtree SnapMirror determines changed data by first looking through the inode file for inodes that have changed, and then examining the changed inodes of the qtree of interest for changed data blocks. The SnapMirror software then transfers only the new or changed data blocks from the Snapshot copy associated with the designated qtree. On the destination volume, a new Snapshot copy is then created that contains a complete point-in-time copy of the entire destination volume, but that is associated specifically with the particular qtree that has been replicated.

destination-filer> snapmirror initialize -S source-filer:/vol/demo1/qtree destination-filer:/vol/demo1/qtree 
Transfer started. 
Monitor progress with 'snapmirror status' or the snapmirror log.

4) Monitoring the status : Snapmirror data transfer status can be monitored from either the source or the destination filer. Use "snapmirror status" to check the status.

destination-filer> snapmirror status 
Snapmirror is on. 
Source                          Destination                          State          Lag Status 
source-filer:demo_source        destination-filer:demo_destination   Uninitialized  -   Transferring (1690 MB done) 
source-filer:/vol/demo1/qtree   destination-filer:/vol/demo1/qtree   Uninitialized  -   Transferring (32 MB done) 
destination-filer>

5) Snapmirror schedule : This is the schedule used by the destination filer for updating the mirror. It informs the SnapMirror scheduler when transfers will be initiated. The schedule field can either contain the word "sync", to specify synchronous mirroring, or a cron-style specification of when to update the mirror. The cron-style schedule contains four space-separated fields.
If you want to sync the data on a scheduled frequency, you can set that in the destination filer's /etc/snapmirror.conf. The time settings are similar to Unix cron. You can set a synchronous snapmirror schedule in /etc/snapmirror.conf by adding "sync" instead of the cron-style frequency.

destination-filer> rdfile /etc/snapmirror.conf 
source-filer:demo_source        destination-filer:demo_destination - 0 * * *  # This syncs at the top of every hour 
source-filer:/vol/demo1/qtree   destination-filer:/vol/demo1/qtree - 0 21 * * # This syncs every day at 9:00 pm 
destination-filer>
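For synchronous mirroring, the schedule field is simply replaced by the word sync; a sketch, using a hypothetical volume pair named demo_sync:

source-filer:demo_sync   destination-filer:demo_sync - sync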

6) Other Snapmirror commands (an example follows the list)
  • To break a snapmirror relation - do snapmirror quiesce and then snapmirror break.
  • To update snapmirror data - do snapmirror update.
  • To resync a broken relation - do snapmirror resync.
  • To abort a relation - do snapmirror abort.
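For example, a typical cutover of the volume mirror created earlier would look like this, run on the destination filer:

destination-filer> snapmirror quiesce demo_destination
destination-filer> snapmirror break demo_destination

If the broken relation later needs to be re-established, resync it from the destination:

destination-filer> snapmirror resync -S source-filer:demo_source destination-filer:demo_destination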
Snapmirror does provide multipath support. More than one physical path between a source and a destination system might be desired for a mirror relationship. Multipath support allows SnapMirror traffic to be load-balanced between these paths and provides failover in the event of a network outage.
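A multipath relationship is also set up in /etc/snapmirror.conf: you define a named connection over two IP address pairs and then use that connection name in place of the source filer name. A sketch with assumed IP addresses (demo_conn is a hypothetical connection name):

destination-filer> rdfile /etc/snapmirror.conf
demo_conn = multi(10.10.10.1,10.10.20.1)(10.10.10.2,10.10.20.2)
demo_conn:demo_source destination-filer:demo_destination - 0 * * *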