
Thursday, April 25, 2013

How to pull logs and a core dump through NetApp FilerView

You can view the current system logs and pick up a core dump, if one exists, through a web browser at:

http://filername/na_admin/logs and http://filername/na_admin/cores

If you are unable to pull them, enable the following option:

FAS2040> options httpd.autoindex.enable on
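As a quick sketch, the same listings can be pulled from the command line with curl. "filer01" is a placeholder hostname (override it with FILER=yourfiler), and the pages only render a directory index once httpd.autoindex.enable is on:

```shell
#!/bin/sh
# Build the log and core URLs for a filer and (optionally) download them.
# "filer01" is a placeholder; override with FILER=yourfiler.
FILER="${FILER:-filer01}"
for path in logs cores; do
    url="http://${FILER}/na_admin/${path}"
    echo "$url"
    # Uncomment to actually fetch the directory listing:
    # curl -s -o "${FILER}_${path}.html" "$url"
done
```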

Tuesday, April 23, 2013

Configuring SnapVault on Netapp

In this example, assume our primary filer is filer1 and our secondary filer is filer2.

Step 1:

- Configure the primary filer for SnapVault by adding the SnapVault license, using the following command in a terminal session:

license add sv_primary_license

Step 2:

- Enable SnapVault on the primary by using the following command:

options snapvault.enable on

Step 3:

- Specify the name of the secondary storage system on the primary by using the command

options snapvault.access host=NameOfTheSecondaryFilerHere

Note that you can also use the following command to allow all hosts as destination filers:

options snapvault.access host=all

Step 4:

- Perform steps 1 and 2 on the secondary filer as well (adding the secondary SnapVault license instead of the primary one)

Step 5:

- Enable the compliance clock on the secondary filer by using the command:

date -c initialize

Step 6:

- Optionally, you can also create a "log volume" for SnapVault logs, but we will skip that here

Step 7:

- Finally, on the secondary filer, create an initial baseline transfer for each qtree you need to back up on the primary

For example, if you want the qtree /vol/mail1/maillun1 on the primary to be backed up to a qtree on the secondary called /vol/sv_mail1/maillun1, you would run the following command:

snapvault start -S filer1:/vol/mail1/maillun1 /vol/sv_mail1/maillun1

Note that it is good practice to name your volumes with an sv_ prefix (indicating a SnapVault volume).

This initialization can take some time to complete. To check its progress, run:

snapvault status

The final step is to create a SnapVault schedule on the primary and the secondary. Here are the steps:

Step 8:

- Create a schedule on Primary by using the following command:

snapvault snap sched -x vol_name snap_name count[@day_list][@hour_list]

So, for example, it will be:

snapvault snap sched -x filer1:/vol/mail1/maillun1 sv_nightly 150@mon-sat 0

meaning SnapVault will create backups with the prefix sv_nightly for the primary qtree /vol/mail1/maillun1 at midnight, Monday to Saturday, and will retain 150 copies (one for every night)

Step 9:

Similar to step 8, configure the secondary. Here is the command for the secondary:
snapvault snap sched -x filer2:/vol/sv_mail1/maillun1 sv_nightly 150@mon-sat 0
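Putting steps 1 through 7 together, here is a dry-run sketch of the whole sequence. The run() wrapper only prints each command (in reality you would type them on each filer's console or send them over ssh), and the secondary license name is my assumption by analogy with the primary one, so check your own license keys:

```shell
#!/bin/sh
# Dry-run sketch of the SnapVault setup sequence for filer1 (primary)
# and filer2 (secondary). run() only echoes each command.
PRIMARY=filer1
SECONDARY=filer2
run() { echo "$1> $2"; }

# Steps 1-3: license and enable SnapVault on the primary
run "$PRIMARY" "license add sv_primary_license"
run "$PRIMARY" "options snapvault.enable on"
run "$PRIMARY" "options snapvault.access host=$SECONDARY"

# Step 4: same on the secondary (license name assumed, check yours)
run "$SECONDARY" "license add sv_secondary_license"
run "$SECONDARY" "options snapvault.enable on"
run "$SECONDARY" "options snapvault.access host=$PRIMARY"

# Steps 5 and 7: compliance clock and initial baseline transfer
run "$SECONDARY" "date -c initialize"
run "$SECONDARY" "snapvault start -S $PRIMARY:/vol/mail1/maillun1 /vol/sv_mail1/maillun1"
```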

Monday, April 22, 2013

How to Change the retention period of backups in Backup Exec 2012

How to bring up drives in NetBackup

To bring up a tape drive in NetBackup, you can use the vmoprcmd command.

/usr/openv/volmgr/bin/vmoprcmd -up

For example, to bring up drive 16, you can run this:

/usr/openv/volmgr/bin/vmoprcmd -up 16
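If several drives are down at once, a small loop saves typing. This sketch falls back to printing the command when vmoprcmd is not installed, so it can be reviewed on any host; the drive indices are illustrative:

```shell
#!/bin/sh
# Bring up several tape drives with vmoprcmd. The drive indices are
# examples; falls back to printing the command when vmoprcmd is absent.
VMOPRCMD=/usr/openv/volmgr/bin/vmoprcmd

up_drive() {
    if [ -x "$VMOPRCMD" ]; then
        "$VMOPRCMD" -up "$1"
    else
        echo "would run: $VMOPRCMD -up $1"
    fi
}

for d in 14 15 16; do
    up_drive "$d"
done
```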

Sunday, April 21, 2013

Adding a new media server to all client servers in a NetBackup environment

Go to  <install_dir>\VERITAS\NetBackup\bin

Run  "add_media_server_on_clients.exe"

The add_media_server_on_clients command reads the server list from the master server and pushes it out to the clients. This avoids having to update every client's server list manually.

Thursday, April 18, 2013

Netbackup Media Server currently not connected to the master Server

If you have recently installed antivirus software on the media server, you may face this kind of issue. We had it once after installing McAfee on one of our media servers: whenever a backup triggered, it reported "NetBackup media server currently not connected to the master server". We had to add exclusions for all the NetBackup processes to make it work.
Before going that route, try pinging the servers and run through the usual network troubleshooting. Hostname lookup problems can also trigger this error, so verify that forward and reverse lookups work properly for both the master and the media server.

Wednesday, April 17, 2013

Netapp – FCP Partner path misconfigured error

We got a notification from our filers that a partner path is misconfigured. We have an active/active cluster.
Run the following command on the filer that issued the error: lun stats -o -i 1 -c 1
You can change the numbers to more suitable values if you like. The results are as follows:
filer> lun stats -o -i 1 -c 1
 Read Write Other QFull   Read  Write Average   Queue     Partner  Lun
  Ops   Ops   Ops           kB     kB Latency   Length    Ops      kB
    0     2     0     0      0      8    0.50    2.00     0        0 /vol/Vol01/LunA
    2     2     0     0      8      8    6.50    2.00     8        63 /vol/Vol02/LunB
    7     2     0     0     28      8   19.00    1.03     0        0 /vol/Vol03/LunC

This command lists all LUNs on the filer and how many ops each is processing. One of the columns is "Partner Ops". If this column shows any number higher than 0, the LUN is receiving data over the wrong filer head, and all of that data is transferred over the interconnect cable (the one that syncs the NVRAM of the cache card between the cluster heads). You can resolve this by installing or updating the Host Utilities and/or the MPIO software and letting them configure the multi-paths to this storage.
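The check above can be scripted: a short awk filter flags any LUN whose Partner Ops column is non-zero. The sample data here is the output shown above; in practice you would pipe the real output of lun stats -o -i 1 -c 1 into the filter:

```shell
#!/bin/sh
# Flag LUNs receiving I/O over the partner path ("Partner Ops" column > 0).
# The sample is the lun stats output from the post.
stats='  Read Write Other QFull   Read  Write Average   Queue     Partner  Lun
   Ops   Ops   Ops           kB     kB Latency   Length    Ops      kB
     0     2     0     0      0      8    0.50    2.00     0        0 /vol/Vol01/LunA
     2     2     0     0      8      8    6.50    2.00     8        63 /vol/Vol02/LunB
     7     2     0     0     28      8   19.00    1.03     0        0 /vol/Vol03/LunC'

# Field 9 is Partner Ops, field 11 is the LUN path; skip the 2 header lines.
echo "$stats" | awk 'NR > 2 && $9 > 0 { print $11, "partner ops:", $9 }'
# prints: /vol/Vol02/LunB partner ops: 8
```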
The technical background behind this problem lies in the "FC Nodename" and the "FC Portname". In our cluster we have 4 paths from our server to the LUN. All paths will use the same WWN ID for the "FC Nodename" and a separate WWN ID for the "FC Portname".
You can check the FC Nodename and FC Portname of your filer with the following command: sysconfig -v 2. The output will look like this (note that I removed some lines that were not needed for this example).
filer> sysconfig -v 2
        slot 2: Fibre Channel Target Host Adapter 2a
                (Dual-channel, QLogic 2432(2462) rev. 2, 64-bit, <ONLINE>)
                Firmware rev:     4.5.2
                FC Nodename:      bb:0a:09:80:87:29:7f:bb (bb0a098087297fbb)
                FC Portname:      bb:0a:09:81:97:29:7f:bb (bb0a098197297fbb)
                Connection:       PTP, Fabric
        slot 2: Fibre Channel Target Host Adapter 2b
                (Dual-channel, QLogic 2432(2462) rev. 2, 64-bit, <ONLINE>)
                Firmware rev:     4.5.2
                FC Nodename:      bb:0a:09:80:87:29:7f:bb (bb0a098087297fbb)
                FC Portname:      bb:0a:09:82:97:29:7f:bb (bb0a098297297fbb)
                Connection:       PTP, Fabric

Both "FC Nodename" WWN IDs are identical, while the "FC Portname" IDs differ from each other by only one digit. In a multi-path configuration, the only way to differentiate an FC path to a LUN is by looking at the "FC Portname" number.
Since we have 4 paths to our LUN, the server will see the same drive 4 times. The MPIO software (like the built-in software in Windows 2008) needs to "merge" these disks into one and select one path as primary. Should this path fail, the MPIO software selects a different path without downtime on the server. Since the default MPIO software can't tell which path is the best one to use, it is possible that it selects a path to the filer head that is not handling the LUN. When that happens, all data written to this LUN is sent to the wrong (passive) filer head. That head forwards everything it receives to the active head over the interconnect cable that normally synchronises the NVRAM of the cache card, and the active head then writes the data to disk by the normal procedure.

(Figure: the different possible write paths. (c) Netapp)
The Host Utilities provided by NetApp (or an OEM supplier such as IBM) include a method to select the best path: the one connected to the active filer head. So be sure to install and/or update them and, in the case of a Linux/ESX host, run the following command every time you add a new LUN to the system:
/opt/ontap/santools/config_mpath --access <FilerA>:<user>:<password> --access <FilerB>:<user>:<password> --primary --loadbalance
Be sure to replace all the <> variables with their respective values.

Netbackup important UNIX commands

Master Server
1) Check the license details

2) Stop and start the NetBackup services
i) /etc/init.d/netbackup stop (start)       —>  graceful stop and start
ii) /usr/openv/netbackup/bin/bp.kill_all    —> Stop backup including GUI sessions, ungraceful
iii) /usr/openv/netbackup/bin/bp.start_all —> Start the backup                   
iv) /usr/openv/netbackup/bin/initbprd      —> starts the master server
v) /usr/openv/netbackup/bin/vmd           —> starts the media server
vi) /usr/openv/netbackup/bin/jnbSA        —> Starts the GUI sessions
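The ungraceful stop/start pair above is often wrapped in a small script. This sketch prints each command when the binary is missing, so it is safe to read through on a non-NetBackup host:

```shell
#!/bin/sh
# Restart NetBackup with bp.kill_all / bp.start_all (ungraceful).
NBDIR=/usr/openv/netbackup/bin

nb_run() {
    # Run the command if it exists, otherwise just show what would run.
    if [ -x "$1" ]; then "$1"; else echo "would run: $1"; fi
}

nb_run "$NBDIR/bp.kill_all"    # stop everything, including GUI sessions
nb_run "$NBDIR/bp.start_all"   # bring all daemons back up
```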

3) Scan the tape devices
#sgscan (in  Solaris)                                                                                                        
#/usr/openv/volmgr/bin/scan (in AIX)

4) Display all the NetBackup processes
#bpps -x

5) Check the backup status
In GUI —>  Activity monitor
In CLI —>  #bpdbjobs -report

6) List the server errors
#bperror -U -problems -hoursago 1
#bperror -U -backstat -by_statcode -hoursago 1

7) Display information about the error code
#bperror -S <statuscode>

8) Reread the bp.conf file without stopping/starting NetBackup
#bprdreq -rereadconfig

Media Server (starts with bpxxx )
1) List the storage units
#bpstulist -U

2) List media details
# /usr/openv/netbackup/bin/goodies/available_media
This command retrieves info about all media that are available/full/expired/frozen

3) List all the netbackup jobs
#bpdbjobs -report <hoursago>

4) Freeze or Unfreeze media
In CLI, #bpmedia -unfreeze [-freeze] -ev <media ID>

5) List media details
#bpmedialist -ev <media ID>

6) List the media contents
#bpmedialist -U -mcontents -m <mediaID>

7) List the information about NB images on media
#bpimmedia -mediaid <ID> -L

8) List backup image information
#bpimagelist -U (general)
#bpimagelist -media -U (for media)

9) Expire a tape
# bpexpdate -d 0 -ev <mediaID> -force

10) Expire client images
#bpimage -cleanup -allclients

11) Which tapes were used for a backup
In GUI, Backup and Restore –> Find the filesystem –> Preview Media button
In CLI, #bpimagelist -media -U

Volume Commands (starts with vmxxx)

1) Tape Drive (vmoprcmd)
1) List the drive status, detail drive info and pending requests
In GUI, Device mgmt
In CLI, #vmoprcmd
#vmoprcmd -d ds (drive status)
#vmoprcmd -d pr (pending requests)

2) Control a tape device
In GUI, Device mgmt
In CLI, #vmoprcmd [-reset] [-up] [-down] <drive number>

2) Tape Media commands (vmpool,vmquery,vmchange,vmdelete)
1) List all the pools
In CLI, #vmpool -listall -bx

2) List the scratch pool available
#vmpool -list_scratch

3) List tapes in pool
In CLI, #vmquery -pn <pool name> -bx

4) List all tapes in the robot
In CLI, #vmquery -rn 0 -bx

5) List cleaning tapes
In CLI, #vmquery -mt dlt_clean -bx

6) List tape volume details
#vmquery -m <media ID>

7) Delete a volume from the catalog
#vmdelete -m <mediaID>

8) Change a tape's expiry date
#vmchange -exp 12/31/2012 hr:mm:ss -m <media ID>

9) Change a tape's media pool
#vmchange -p <pool number> -m <media ID>

3) Tape/Robot commands (starts with tpxxx)
1) List the tape drives
#tpconfig -d

2) List the cleaning times on drives
#tpclean -L

3) Clean a drive
#tpclean -C <drive number>

Client Commands
i) List the clients

Policy Commands
i) List the policies
#bppllist -U

ii) List the detailed information about the policies
#bppllist -U -allpolicies

Thursday, April 4, 2013

Deleting Older backups from B2D backup exec 2012 to reclaim space on Hard Disk

1) Select the Storage tab in the BE2012 interface.
2) Select your disk storage device and click the arrow (====>) for the DETAILS view.
3) In the details view, select the Backup Sets option.
4) Scroll to the bottom of the list, which contains the oldest backup sets.
5) Select large groups of backup sets using Shift+Left Click.
6) Right-click the selected backup sets and select DELETE.
7) Choose Yes to All when prompted to delete.

Wednesday, April 3, 2013

I was chosen as a NetApp Hall of Fame member of the week for March 4... forgot to share

Types of Ports in Netapp

A NetApp controller has a number of different ports. Here is a description of each of them:

Onboard GbE ports: available for host connectivity [e0a, e0b, e0c, e0d...]
Onboard FC: for SAN traffic; also used for FC disk shelves [0a, 0b, 0c, 0d...]
Onboard SAS: some SAS tape drives are supported, but you will most likely be using these for SAS disk shelves
SCSI connection: for connecting to tape devices
InfiniBand: for the cluster interconnect; nowadays an MTP cable is used with a converter
Serial console: for CEs to connect directly to filers using DB9 cables
RLM NIC: for remote management

We also use 10 GbE ports on connected HA pairs for SnapVault traffic.