Monday, March 25, 2013

Types of backups in Netbackup


Automatic (Scheduled)

Full :- All files in the specified path are backed up on each client.

Differential Incremental :- All files changed since the last backup of any kind
(full, or differential/cumulative incremental) are backed up.

Cumulative Incremental :- All files changed since the last full backup
are backed up.

User (Application)

User Backup :- All files that the user specifies are backed up.

User Archive :- All files that the user specifies are backed up. If the backup
is successful, the backed-up files are then deleted from the client.
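
To illustrate the difference between the automatic schedule types, here is a small hedged example (the dates and file names are made up): a full backup runs on Sunday, fileA changes on Monday and fileB changes on Tuesday.

Sunday    Full                      -> backs up every file in the backup selection
Monday    Differential incremental  -> backs up fileA only (changed since Sunday's full)
Tuesday   Differential incremental  -> backs up fileB only (changed since Monday's incremental)
Tuesday   Cumulative incremental    -> would back up fileA and fileB (everything changed since Sunday's full)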

Steps to decommission a Netapp Filer

Follow these steps to decommission a filer:

Disable AutoSupport from the admin host: ssh <filer> options autosupport.enable off

Unmount the root volume for both heads from both admin servers: umount /<filername>

Log in to the DFM host and remove the filer from monitoring (this stops cluster alerts):

dfm service stop, then dfm service start sql, then dfm host delete <filername(fqdn)>, then dfm service start
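
As an illustrative sketch (the filer FQDN below is hypothetical), the sequence on the DFM host would look like:

dfm service stop
dfm service start sql
dfm host delete filer01.example.com
dfm service start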


Disable cluster failover:
# rsh <filername> cf disable

rsh <filername> vol status
Any volumes other than vol0 that are still online need to be taken offline. If the only volumes showing are arch volumes, take them offline for 2 days before proceeding. Issue the command: rsh <filername> vol offline <volname>

rsh <filername> aggr offline <aggrname>
rsh <filername> aggr destroy <aggrname>
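
For example, with a hypothetical filer named filer01 and a data aggregate named aggr1:

rsh filer01 aggr offline aggr1
rsh filer01 aggr destroy aggr1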

rsh <filername> aggr status -r (capture the output for the record)

rsh <filername> disk zero spares (Note: This may take a couple of hours)

rsh <filername> aggr status -d

rsh <filername> aggr status aggr0 -d (this shows vol0's disks; note them down and do not run disk sanitization on them)

rsh <filername> license | grep sanitization (this checks whether the disk sanitization license is installed); if it is not found, add it as below:

rsh <filername> license add <licensenumber>

rsh <filername> disk sanitize start -c 6 <disk_names, separated by spaces> (this will take several hours to complete)

Remember to run sanitization on the disks owned by each filer of the cluster separately (sanitization of all disks cannot be run from a single head of a cluster).

rsh <filername> disk sanitize status (to check sanitization status)
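
A hedged example of the sanitization step, with a hypothetical filer name and made-up disk names (take the real disk names from the aggr status -r output captured earlier, excluding vol0's disks):

rsh filer01 disk sanitize start -c 6 0a.16 0a.17 0a.18
rsh filer01 disk sanitize status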

rsh <filername> aggr status -r (capture the output for the record)

rsh <filername> disk sanitize status

rsh to the filer and copy the output of the sanitized-disks log to a text file: rsh <filername> rdfile /etc/log/sanitized_disks

Repeat the same to copy the sanitization log: rsh <filername> rdfile /etc/log/sanitization.log
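
One hedged way to capture both logs from the admin host is to redirect the rsh output into local text files (the filer name and file paths here are just examples):

rsh filer01 rdfile /etc/log/sanitized_disks > /tmp/filer01_sanitized_disks.txt
rsh filer01 rdfile /etc/log/sanitization.log > /tmp/filer01_sanitization.log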

rsh <filername> sysconfig | head -4 (to collect chassis, model and hardware details; useful when raising SRs)

Unmount /<filername> from whichever admin host you were using to execute the decommission:

ssh to the admin host, then umount /<filername>

Once the above steps are complete, decommission the filer's network connections.

Netapp Snapshot Management

 

This article explains the different commands related to Netapp Snapshot Management. 

Creating a test volume of 10 GB to perform snapshot-related operations:

geekyfacts-filer > vol create testvol aggr 10g
Creation of volume 'testvol' with size 10g on containing aggregate 'aggr' has completed.

geekyfacts-filer > df -h testvol
Filesystem total used avail capacity Mounted on
/vol/testvol/ 8192MB 1420KB 8190MB 0% /vol/testvol/
/vol/testvol/.snapshot 2048MB 0MB 2048MB 0% /vol/testvol/.snapshot
geekyfacts-filer >
Snapshot Create

geekyfacts-filer > snap list testvol
Volume testvol
working...

No snapshots exist.
geekyfacts-filer > snap create testvol testsnap
creating snapshot...

geekyfacts-filer > snap list testvol
Volume testvol
working...

%/used %/total date name
---------- ---------- ------------ --------
0% ( 0%) 0% ( 0%) Dec 19 01:02 testsnap
geekyfacts-filer >


Snapshot Rename



geekyfacts-filer > snap rename testvol testsnap snaptest
geekyfacts-filer > snap list testvol
Volume testvol
working...

%/used %/total date name
---------- ---------- ------------ --------
3% ( 3%) 0% ( 0%) Dec 19 01:02 snaptest
geekyfacts-filer >

Snapshot Delete


geekyfacts-filer > snap delete testvol snaptest
deleting snapshot...

geekyfacts-filer > snap list testvol
Volume testvol
working...

No snapshots exist.
geekyfacts-filer >

Snap Scheduling 


Scheduling automatic snapshots so that 2 weekly, 2 nightly, and 8 hourly snapshots (the hourly ones taken at hours 8, 12, 16, and 20) are kept online.

geekyfacts-filer > snap sched testvol 2 2 8@8,12,16,20
geekyfacts-filer > snap sched testvol
Volume testvol: 2 2 8@8,12,16,20
geekyfacts-filer >
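
To turn the automatic schedule off again (for example, on volumes where snapshots are managed by backup software), all three counts can be set to zero; this is a sketch on the same test volume:

geekyfacts-filer > snap sched testvol 0 0 0
geekyfacts-filer > snap sched testvol
Volume testvol: 0 0 0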


Snap Space Reservation


The default snapshot space reserve is 20% of the volume size.

geekyfacts-filer > snap reserve testvol
Volume testvol: current snapshot reserve is 20% or 2097152 k-bytes.


geekyfacts-filer > df -h testvol
Filesystem total used avail capacity Mounted on
/vol/testvol/ 8192MB 1796KB 8190MB 0% /vol/testvol/
/vol/testvol/.snapshot 2048MB 0MB 2048MB 0% /vol/testvol/.snapshot


Changing the snapshot reserve to 5% of the volume size 

geekyfacts-filer > snap reserve testvol 5

geekyfacts-filer > snap reserve testvol
Volume testvol: current snapshot reserve is 5% or 524288 k-bytes.


geekyfacts-filer > df -h testvol
Filesystem total used avail capacity Mounted on
/vol/testvol/ 9728MB 1796KB 9726MB 0% /vol/testvol/
/vol/testvol/.snapshot 512MB 0MB 512MB 0% /vol/testvol/.snapshot
geekyfacts-filer >

Snap Restore 

Export and mount the testvol to put some contents in the volume. 

geekyfacts-filer > exportfs -p rw=geekyfacts.com,root=geekyfacts.com /vol/testvol

Log in to the server to which the volume was exported (in our case, geekyfacts.com).

[root@geekyfacts]# mkdir /test
[root@geekyfacts]# mount geekyfacts-filer:/vol/testvol /test
[root@geekyfacts]# df -h /test
Filesystem Size Used Avail Use% Mounted on
geekyfacts-filer:/vol/testvol
9.5G 1.8M 9.5G 1% /test

[root@geekyfacts]# cd /test
[root@geekyfacts]# touch file.before_snapshot

geekyfacts-filer > snap create testvol testsnap
creating snapshot...


geekyfacts-filer > snap list testvol
Volume testvol
working...

%/used %/total date name
---------- ---------- ------------ --------
0% ( 0%) 0% ( 0%) Dec 19 02:15 testsnap


[root@geekyfacts]# touch file.aftersnap

[root@geekyfacts]# ls -l
total 0
-rw-r--r-- 1 root root 0 Dec 19 02:16 file.aftersnap
-rw-r--r-- 1 root root 0 Dec 19 02:15 file.before_snapshot


Snapshot testsnap holds only file.before_snapshot:

[root@geekyfacts]# ls .snapshot/testsnap/
file.before_snapshot


Volume restore using Snapshot

geekyfacts-filer > snap restore -t vol -s testsnap testvol
WARNING! This will revert the volume to a previous snapshot.
All modifications to the volume after the snapshot will be
irrevocably lost.

Volume testvol will be made restricted briefly before coming back online.
Are you sure you want to do this? yes
You have selected volume testvol, snapshot testsnap
Proceed with revert? yes
Volume testvol: revert successful.


[root@geekyfacts]# ls
file.before_snapshot

File Restore using Snapshot

[root@geekyfacts]# rm file.before_snapshot
rm: remove regular empty file `file.before_snapshot'? y


geekyfacts-filer> snap restore -t file -s testsnap -r /vol/testvol/file.before_snapshot /vol/testvol/file.before_snapshot
WARNING! This will restore a file from a snapshot into the active
filesystem. If the file already exists in the active filesystem,
it will be overwritten with the contents from the snapshot.

Are you sure you want to do this? yes
You have selected file /vol/testvol/file.before_snapshot, snapshot testsnap
It will be restored as /vol/testvol/file.before_snapshot

Proceed with restore? yes

[root@geekyfacts]# ls
file.before_snapshot
geekyfacts-filer >


Snap Delta 


[root@geekyfacts]# touch file{1,2,3,4}

geekyfacts-filer > snap create testvol testsnap1
creating snapshot...


geekyfacts-filer > snap list testvol
Volume testvol
working...

%/used %/total date name
---------- ---------- ------------ --------
0% ( 0%) 0% ( 0%) Dec 19 02:23 testsnap1
19% (19%) 0% ( 0%) Dec 19 02:15 testsnap


geekyfacts-filer > snap delta testvol
Volume testvol
working...

From Snapshot To KB changed Time Rate (KB/hour)
--------------- -------------------- ----------- ------------ ---------------
testsnap1 Active File System 56 20s 10080.000
testsnap testsnap1 424 0d 00:08 3173.388

Summary...
From Snapshot To KB changed Time Rate (KB/hour)
--------------- -------------------- ----------- ------------ ---------------
testsnap Active File System 480 0d 00:08 3449.101

geekyfacts-filer >

Snap reclaimable 


geekyfacts-filer > snap reclaimable testvol testsnap
Processing (Press Ctrl-C to exit) .
snap reclaimable: Approximately 424 Kbytes would be freed.

Friday, March 22, 2013

How to check Netapp Serial Number

The NetApp filer’s Serial Number (S/N) is stored in a file within the etc directory. You can use the rdfile /etc/serialnum command to display the Serial Number (S/N).

In NetApp Data ONTAP GX, the file /mroot/etc/serialnum can be read to locate the Serial Number (S/N).

The serial number can also be found in the output of the commands "sysconfig -a" (or "sysconfig -A"), but since that output is large and the serial number is hard to spot in it, reading the serialnum file as above is easier.
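
A hedged example of both approaches, using a hypothetical filer name and a made-up serial number (the exact sysconfig output format varies by model and Data ONTAP version):

filer01> rdfile /etc/serialnum
123456789012

From an admin host, the large sysconfig output can be filtered instead:

admin-host$ rsh filer01 sysconfig -a | grep -i serial
        System Serial Number: 123456789012 (FILER01)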

Thursday, March 7, 2013

Restoring a Netapp SnapVault LUN


Restoring a SnapVault LUN

1. In Windows, open ‘Computer -> Manage -> Disk Management’ and select the volume (LUN) you want to restore.
2. Choose ‘Delete Partition’.
3. On the primary filer (the one that holds the LUN that needs a restore):
   1. Unmap the LUN
   2. Run: snapvault restore -S secondary:lunpath primary:lunpath
   3. Run: lun map lunpath igroup
   For example:
   1. lun unmap /vol/datavol/lunq/lun win
   2. snapvault restore -S gr7m2:/vol/datavol/lunq gr7m1:/vol/datavol/lunq
   3. lun map /vol/datavol/lunq/lun win
4. Rescan the disks in Windows (see the diskpart sketch below).
Now the LUN will be back, the same as the original :)
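
For step 4, the disks can also be rescanned from an elevated command prompt with diskpart (a sketch; the ‘Rescan Disks’ action in Disk Management does the same thing):

C:\> diskpart
DISKPART> rescan
DISKPART> list disk
DISKPART> exit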

Monday, March 4, 2013

How to Find a Failed Disk on a Netapp filer

A FAILED disk on a Netapp filer can be found through the command line interface or through FilerView.

The commands below can be used to find a failed disk:

sysconfig -d

vol status -f

aggr status -f
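
A hedged example of what a failed disk looks like in vol status -f output (the filer prompt, disk ID, and shelf/bay numbers are made up, and the exact columns vary by Data ONTAP version):

filer> vol status -f

Broken disks

RAID Disk Device  HA  SHELF BAY CHAN Pool Type  RPM   Used (MB/blks)    Phys (MB/blks)
--------- ------  ------------- ---- ---- ---- ----- ---------------   ---------------
failed    0a.23   0a    1   7   FC:A  -   FCAL  15000 136000/278528000  137104/280790184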