Saturday, August 23, 2014

Installation & Configuration of SAM-QFS Shared Filesystem.




Shared QFS provides the advantage of a clustered file system: multiple hosts can mount and access the same file system concurrently.
1.       Installation of SAMFS:
Download the SAM-QFS software release and patches from Oracle Support.
Hint: My Oracle Support Doc ID 1318956.1

2.       If it is available in ISO format, mount the ISO image:
lofiadm -a /root/SUN_SAM-FS_5.2.iso
mount -F hsfs /dev/lofi/1 /mnt

3.       Install SAM-QFS Packages
# cd /mnt/sparc/2.11
# pkgadd -d . SUNWsamfsr SUNWsamfsu

Accept the prompts with y.
4.       [Optional] Unmount the ISO after successful installation, for example:
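If the ISO was mounted in step 2, a minimal clean-up sketch (the /dev/lofi/1 device name is assumed from step 2):
# cd /
# umount /mnt
# lofiadm -d /dev/lofi/1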

Configure QFS.

Note: Formatting the file system (sammkfs) should be carried out on one node only.
Note: svc:/network/qfs/shared-mount:default will be in the maintenance state; reboot after the whole operation is complete.


1.       vi /etc/opt/SUNWsamfs/mcf
sharefs1        30      ma      sharefs1        on      shared
/dev/dsk/c0t600144F000212834D270536C93270001d0s0        31      mm      sharefs1        on   
/dev/dsk/c0t600144F000212834D270536C93270001d0s3        32      mr      sharefs1        on
/dev/dsk/c0t600144F000212834D270536C93270001d0s4        33      mr      sharefs1 on



2.       vi /etc/opt/SUNWsamfs/hosts.sharefs1
npadbtest01     172.16.0.170,npadbtest01.NPA.local      1       -       server
npadbtest02     172.16.0.162,npadbtest02.NPA.local      2       -

3.       Copy the two files above (mcf and hosts.sharefs1) to the second node, for example:
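A sketch of one way to do it, assuming root ssh/scp access between the nodes (hostname taken from the hosts file above); any copy method works:
# scp /etc/opt/SUNWsamfs/mcf /etc/opt/SUNWsamfs/hosts.sharefs1 npadbtest02:/etc/opt/SUNWsamfs/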

4.       Force the SAM-QFS daemon to re-read the configuration files, for example:
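The usual command for this is samd config; treat it as a sketch and confirm against samd(1M) for your release:
# samd config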

5.       Create the shared QFS file system (on one node only, per the note above), for example:
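A minimal sketch, using the family set name sharefs1 from the mcf above; the -S flag creates the file system as shared:
# sammkfs -S sharefs1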

6.       Mount the file system:
# mkdir /sharefs1
# mount -F samfs -o shared sharefs1 /sharefs1
# df -h



7.       On the second node, verify that the shared devices are visible:
# samfsconfig /dev/dsk/*



8.       Mount the file system on the second node:
# mkdir /sharefs1
# mount -F samfs -o shared sharefs1 /sharefs1





How to Configure Shared Sun QFS With NFS



Edit the exported NFS server manifest (see the svccfg commands below) to include QFS as a dependency:
<!--Must have QFS filesystems mounted before sharing them-->
<dependency name='qfs' grouping='require_all' restart_on='error' type='service'>
<service_fmri value='svc:/network/qfs/shared-mount:default'/>
</dependency>






root@npadbtest01:~# svccfg export /network/nfs/server > /var/tmp/server.xml
root@npadbtest01:~# cat /var/tmp/server.xml
root@npadbtest01:~# svcs -l svc:/network/qfs/shared-mount:default
root@npadbtest01:~# vi /var/tmp/server.xml
root@npadbtest01:~# svccfg validate /var/tmp/server.xml
root@npadbtest01:~# svcadm disable nfs/server
root@npadbtest01:~# svccfg delete nfs/server
root@npadbtest01:~# svccfg import /var/tmp/server.xml
root@npadbtest01:~# svcadm enable nfs/server
root@npadbtest01:~# svcs -d svc:/network/nfs/server:default







Share the file system to clients, for example:
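A minimal sketch using the legacy share(1M) command; the rw option is illustrative, adjust it to your security requirements:
# share -F nfs -o rw /sharefs1
# share      # verify the share is listed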







Checking configuration:

# samfsinfo sharefs1

Switch the metadata server (master) to the other node; a verification sketch follows the command:
root@npadbtest01:/etc/opt/SUNWsamfs# samsharefs -s npadbtest02 sharefs1
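To confirm which host is now the metadata server, the shared hosts table can be displayed (a sketch; see samsharefs(1M)):
root@npadbtest01:/etc/opt/SUNWsamfs# samsharefs sharefs1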




HTH


*Major Source: Oracle Documentation.

Thursday, May 15, 2014

Creating an iSCSI LUN Visible to Selected Nodes Only.


(VERSION INFORMATION: Solaris 11.1)


This restricts which initiators can see a LUN (via a host group) and through which targets the LUN is exposed (via a target group).
This is one of the best practices when provisioning LUNs.

  • 1. Create ZFS pool.
zpool create db-pool c0t600A0B800056405E0000621E530C68E7d0
zpool add -f db-pool c0t600A0B800056405E000049EF5253ECF4d0




  • 2. Create a ZFS volume (zvol)
zfs create -V 4g db-pool/dbvot1
  • 3. Create a logical unit (LUN) from the ZFS volume
stmfadm create-lu /dev/zvol/rdsk/db-pool/dbvot1

  • 4. Enable the iSCSI initiator service (on all the iSCSI client nodes)
svcadm enable network/iscsi/initiator
svcs -l iscsi/initiator   ## check the service state





  • 5. iscsiadm list initiator-node   [on all client nodes]


The initiator node name (IQN) returned by the command above is used in the next command. The same principle applies to FC WWNs.

  • 6. stmfadm create-hg host-db   # create the host group [on the iSCSI server node]
  • 7. stmfadm add-hg-member -g host-db iqn.1986-03.com.sun:01:00144ffba06e.535be0bb iqn.1986-03.com.sun:01:00144ff99ca4.535be0e7

  • 8. stmfadm create-tg target-racdb00
Create the target group.




  • 9. Create the iSCSI target
#itadm create-target

  • 10. itadm list-target -v


  • 11. PrimaryNode #stmfadm offline-target iqn.1986-03.com.sun:02:315667dd-969a-e405-f715-c6d61ace3acb
The target must be taken offline (if it is online) before it can be added to a target group.

  • 12. PrimaryNode #stmfadm add-tg-member -g target-racdb00 iqn.1986-03.com.sun:02:315667dd-969a-e405-f715-c6d61ace3acb
Add the iSCSI target to the target group.






  • 13. PrimaryNode #stmfadm list-lu -v
Check the available iSCSI LUNs.


The logical unit listed here is the ZFS-backed iSCSI LUN created in steps 1 to 3.

  • 14. PrimaryNode # stmfadm add-view -h host-db -t target-racdb00 -n 10 600144F000212834D270536B68270002
Add the LUN to the host group and target group.

  • 15. PrimaryNode #stmfadm list-view -l 600144F000212834D270536B68270002




  • 16. Bring the target online.
PrimaryNode #stmfadm online-target iqn.1986-03.com.sun:02:315667dd-969a-e405-f715-c6d61ace3acb





  • 17. Enable discovery on the client's initiator
root@dbnode1:~# iscsiadm list discovery
root@dbnode1:~# iscsiadm modify discovery -t enable







  • 18. Add the iSCSI discovery address and port (default is 3260)
root@dbnode1:~# iscsiadm add discovery-address 192.168.200.21:3260







  • 19. root@dbnode1:~# iscsiadm list target


  • 20. Dynamically update /dev and /devices.
root@dbnode1:~# devfsadm -v





  • 21. root@dbnode1:~# format








Other Cheat Commands:
  • 1. Delete a logical unit view
stmfadm remove-view -l 600144F000212834D270536B68270002 0
  • 2. stmfadm → SCSI Target Mode Framework
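  • 3. A few more handy stmfadm queries when checking a configuration (illustrative; confirm the options against stmfadm(1M) on your release):
stmfadm list-hg -v        # host groups and their members
stmfadm list-tg -v        # target groups and their members
stmfadm list-target -v    # targets and their online/offline state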






Saturday, March 29, 2014

Cloning ASM Diskgroup to Create Database Clone



(VERSION INFORMATION: Oracle GI & DB 11.2.0.3; Red Hat 5u7)


The purpose is to simulate a VLDB environment where you have to clone your database using tools like snapshots, BCV, or SnapMirror. This scenario clones a RAC database on ASM to a single-instance database.

Description of the test environment: a two-node RAC. The cloned database was created on the first node.

Cloning ASM Diskgroups


1)      Set up your storage device with enough space for the disk group clone.
ASMCMD> lsdsk --candidate -p
/dev/sdf1

Note: all the necessary configuration of the raw devices (e.g. ASMLib or UDEV rules) must already have been performed. This lab uses UDEV.

2)      ASMCMD> lsdsk -k -G data
 /dev/sde1




3)      As the oracle OS user, shut down the database. Alternatively, with the ALTER SYSTEM SUSPEND/RESUME feature you can suspend I/O to the database, split the mirror, make a backup of the split mirror, and then resume (a sketch follows the command below).
srvctl stop database -d orcl
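If you take the suspend/resume route instead of a full shutdown, the sequence looks roughly like this (a sketch only; the actual split/snapshot step depends entirely on your mirroring tool):
SQL> ALTER SYSTEM SUSPEND;
-- split the mirror / take the storage snapshot here
SQL> ALTER SYSTEM RESUME;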

4)      Create a clone of the disk group's disks. Here we use the dd utility; in a production environment you may come across more sophisticated tools like BCV. In fact, some mirroring tools require neither suspending I/O nor shutting the Oracle database down before you split the mirror (note: the ASM instance does caching and striping, so be sure of the capability of the tool). The suspend-I/O feature reduces downtime compared to a database shutdown.

dd if=/dev/sde1 of=/dev/sdf1 bs=1024k




5)      Use the kfed ASM utility to read and compare the disk headers.
As root, change directory to the Grid Infrastructure ORACLE_HOME/bin:

[root@red1 bin]# cd /u01/app/11.2.0/grid/bin
[root@red1 bin]# ./kfed read /dev/sde1 > kfed_sde1.log
[root@red1 bin]# ./kfed read /dev/sdf1 > kfed_sdf1.log
[root@red1 bin]# diff kfed_sde1.log kfed_sdf1.log
[root@red1 bin]# head kfed_sde1.log



Note that /dev/sdf1 will no longer be listed among the candidate disks as it was in step 1, since it now carries a copy of the ASM disk header.
ASMCMD> lsdsk --candidate -p



6)      Rename the disk group with the renamedg command. I used the two-phase method here; you can simply use the one-step method ($ renamedg dgname=data newdgname=clone_data verbose=true).

[grid@red1 ~]$ renamedg phase=one config=cloneasmdisk.conf verbose=true dgname=data newdgname=clone_data asm_diskstring='/dev/sdf1'



[grid@red1 ~]$ renamedg phase=two dgname=data newdgname=clone_data config=cloneasmdisk.conf verbose=true





7)      Using diff (the Linux utility) and kfed, verify the success of the renamedg operation:

[root@red1 bin]# ./kfed read /dev/sdf1 > kfed_sdf1.log
[root@red1 bin]# diff kfed_sde1.log kfed_sdf1.log



OR
Query v$asm_diskgroup
SQL> select name from v$asm_diskgroup;



8)      Mount the clone disk group.
SQL> alter diskgroup clone_data mount;
OR

ASMCMD> mount clone_data
ASMCMD> lsdg




Cloning Database


1.       Create and edit a pfile from the source database:

SQL> create pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/clonedb.ora'  from spfile;
File created.

Using an editor, replace +DATA (the source disk group) with +CLONE_DATA (the clone disk group).
Edit the audit_file_dest and dispatchers parameters.
Also, substitute orcl (my source instance name) with clodb (the name given to the new instance) as necessary.

After editing, the pfile looks like this:

pga_aggregate_target=134217728
sga_target=398458880
*.audit_file_dest='/u01/app/oracle/admin/clodb/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='+CLONE_DATA'
#FOR control_files DO NOT USE THE FULL PATH. Also remove old control file
*.db_block_size=8192
*.db_create_file_dest='+clone_data'
*.db_domain='tobi.com'
*.db_name='clodb'
#changing the database name
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=clodbXDB)'
*.open_cursors=300
*.pga_aggregate_target=132120576
*.processes=200
*.remote_login_passwordfile='exclusive'
*.sessions=225
*.sga_target=398458880
*.undo_tablespace='UNDOTBS1'
*.db_unique_name='clodb'
*.db_create_online_log_dest_1='+CLONE_DATA'
*.cluster_database='false'

Create the audit_file_dest directory:
mkdir -p /u01/app/oracle/admin/clodb/adump

2.       Create control file trace from source DB:

SQL> alter database backup controlfile to trace as '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/controltrace.ora';

Edit accordingly. After editing, the script looks like the listing below:

CREATE CONTROLFILE SET DATABASE "CLODB" RESETLOGS  NOARCHIVELOG
    MAXLOGFILES 192
    MAXLOGMEMBERS 3
    MAXDATAFILES 1024
    MAXINSTANCES 32
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 (
    '+CLONE_DATA/orcl/onlinelog/group_1.261.839965849'
  ) SIZE 50M BLOCKSIZE 512,
  GROUP 2 (
    '+CLONE_DATA/orcl/onlinelog/group_2.262.839965855'
  ) SIZE 50M BLOCKSIZE 512,
  GROUP 3 (
    '+CLONE_DATA/orcl/onlinelog/group_3.265.839966205'
  ) SIZE 50M BLOCKSIZE 512,
  GROUP 4 (
    '+CLONE_DATA/orcl/onlinelog/group_4.266.839966209'
  ) SIZE 50M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
  '+CLONE_DATA/orcl/datafile/system.256.839965691',
  '+CLONE_DATA/orcl/datafile/sysaux.257.839965693',
  '+CLONE_DATA/orcl/datafile/undotbs1.258.839965695',
  '+CLONE_DATA/orcl/datafile/users.259.839965695',
  '+CLONE_DATA/orcl/datafile/undotbs2.264.839966043'
CHARACTER SET AL32UTF8;

3.       Startup new instance for clone database.

[oracle@red1 ~]$ ORACLE_SID=clodb
[oracle@red1 ~]$ export ORACLE_SID
[oracle@red1 ~]$ sqlplus / as sysdba

SQL>  startup nomount pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/clodb3.ora';
ORACLE instance started.

4.       Create the control file and rename the database

SQL>  @/u01/app/oracle/product/11.2.0/dbhome_1/dbs/cloctl.sql
SQL> select name from v$datafile;
SQL> ALTER TABLESPACE TEMP ADD TEMPFILE '+CLONE_DATA/orcl/tempfile/temp.263.839965869'   SIZE 35651584 REUSE AUTOEXTEND ON NEXT 655360 MAXSIZE 32767M;

Note that the “ALTER TABLESPACE TEMP ADD TEMPFILE …” statement was copied from the control file trace created earlier.
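cloctl.sql above is assumed to be the edited CREATE CONTROLFILE script from step 2 saved as a file. Since it uses RESETLOGS, the clone has to be opened with RESETLOGS before tempfiles can be added, for example:
SQL> ALTER DATABASE OPEN RESETLOGS;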





5.       Do a little test: create tablespace testclone.

SQL> create tablespace testclone datafile size 10m;

  


Create an spfile to ease your DBA tasks :)
SQL> create spfile from pfile='/u01/app/oracle/product/11.2.0/dbhome_1/dbs/clodb3.ora';

Conclusion:
For a Very Large Database (VLDB), backup time can be reduced significantly by cloning ASM disk groups.


Ref & More reading:
1. Oracle® Automatic Storage Management Administrator's Guide 11g Release 2 (11.2)
2. Oracle® Database Backup and Recovery User's Guide 11g Release 2 (11.2)