Every DBA knows the situation where your boss or customer asks you to copy fresh data into the test database… Since this blog is about Oracle databases, we usually solve this either with Data Pump or RMAN duplicate – and there is nothing wrong with that. But now imagine we are talking about more than 400 test environments. To satisfy all testers, the production database is exported every day (although not all test databases are refreshed daily), which takes several hours each day. And then there are the considerable costs for the expensive SAN storage… So I was asked whether there is a better, faster and cheaper solution. As you may guess, after some research I found the 12c new feature of snapshot clones… and it’s really nice, so I want to show you how it works.
The key points in short:
- really fast provisioning (a few seconds)
- very efficient storage usage
- it’s based on pluggable databases, so you need the Multitenant option license
- so far only 3 storage types/vendors are certified: ZFS, ACFS and NetApp
Since I do not have a NetApp or ZFS appliance, I’m using ACFS for my tests, but there are some things to consider here as well:
- it’s slower than ASM
- you have to install a RAC – Oracle Restart is not sufficient… nevertheless it’s fine to run a 1-node “cluster” if you don’t want to buy the RAC option.
OK, let’s start… first – as mentioned above – we need a RAC. If you don’t know how to install one, I recommend visiting this site: rack attack.
Next we are going to create an ACFS volume… this can be done easily with asmcmd:
bash> asmcmd volcreate -G <diskgroup> -s 10G <volume_name>
Of course you have to replace the placeholders in the angle brackets… in my test environment I created a dedicated diskgroup called ACFS and a volume named oradata.
Then we have to check which device name was assigned to this new volume:
bash> asmcmd volinfo -G ACFS oradata
Diskgroup Name: ACFS
         Volume Name: ORADATA
         Volume Device: /dev/asm/oradata-30
         State: ENABLED
         ...
Great! Now we have to create a filesystem on that virtual block device.
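The mkfs call itself is not shown here, so just as a sketch – on Linux it should look something like this (assuming the device name from the volinfo output above):

```
bash> sudo /sbin/mkfs -t acfs /dev/asm/oradata-30
```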
and last step: mount the volume
bash> sudo /sbin/acfsutil registry -a /dev/asm/oradata-30 /u00/app/oracle/oradata
That’s it – now the clusterware should detect the new entry in the OCR (the check runs every 30 seconds) and automount the volume… if not, you can do it manually:
bash> sudo mkdir /u00/app/oracle/oradata
bash> sudo /bin/mount -t acfs /dev/asm/oradata-30 /u00/app/oracle/oradata
Next chapter: create the database. Since I want to simulate the complete workflow, I created two databases: one is the source (= production) called ORCL, the other the target (= test) database named CDB. Both databases are located on ASM diskgroups (not the ACFS volume!). The source is installed the classical way (no multi- or singletenant) while the target is installed as a CDB but without any PDB.
To use the PDB-clone-feature, we have to create a database link from the target to the source first:
SYS@CDB1>CREATE DATABASE LINK orcl CONNECT TO system IDENTIFIED BY <password> USING 'rac01-scan:1521/orcl';
I just used the user “system”, but any other user with the “create session” and “create pluggable database” privileges will do.
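Before going on, it’s worth verifying that the link actually works – any simple remote query will do, for example:

```
SQL> SELECT * FROM dual@orcl;
```

If this returns a row, the connection and credentials are fine.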
Next we have to prepare the directory and create the master PDB:

bash> mkdir -p /u00/app/oracle/oradata/CDB/prod_seed
SQL> CREATE PLUGGABLE DATABASE prod_seed FROM NON$CDB@orcl CREATE_FILE_DEST='/u00/app/oracle/oradata/CDB/prod_seed';
Depending on the size of the database and the speed of your storage, this will take some minutes…
When finished, you have a seed PDB with the internal structure of a non-CDB… so you have to convert it first:
SQL> ALTER SESSION SET CONTAINER=prod_seed;
SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql
SQL> ALTER PLUGGABLE DATABASE prod_seed OPEN;
SQL> ALTER PLUGGABLE DATABASE prod_seed OPEN READ ONLY FORCE;
SQL> ALTER PLUGGABLE DATABASE prod_seed SAVE STATE;
Please note that you have to open the PDB read/write first so that some kind of instance recovery can be performed; afterwards you can bring the PDB into the correct state (read only) to serve as a seed database.
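If you want to double-check the state of the seed at this point, you can query v$pdbs – prod_seed should now show up as READ ONLY:

```
SQL> SELECT name, open_mode FROM v$pdbs;
```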
And now: create the clone
bash> mkdir -p /u00/app/oracle/oradata/CDB/clone1
SQL> CREATE PLUGGABLE DATABASE clone1 FROM prod_seed CREATE_FILE_DEST='/u00/app/oracle/oradata/CDB/clone1' SNAPSHOT COPY;
SQL> ALTER PLUGGABLE DATABASE clone1 OPEN;
P.S. in some guidelines you will be advised to set the parameter clonedb to true – don’t do that, at least if you are using ACFS! It’s only necessary if your storage can’t do snapshots itself; in that case Oracle tries to do some copy-on-write magic on its own, and I don’t know how well that works. For further details, just read the manual (search for “SNAPSHOT COPY”).
OK, and now? Finished! … and where is the magic? The last ‘create pluggable database…’ took just a few seconds, regardless of the size of the seed database, and it consumes no space at all (at least for the moment)…
bash> du -sh *
228K    clone1
3.6G    prod_seed
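By the way, you can also ask ACFS directly about the snapshot it created for the clone (the output is environment-specific, so no sample shown here):

```
bash> acfsutil snap info /u00/app/oracle/oradata
```

As you will see below, the snapshot is named after the GUID of the clone PDB.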
and what’s inside?
bash> ls -lhr clone1/CDB/72C5EDFE41727F36E0531F01100AB18B/datafile/
total 20K
lrwxrwxrwx. 1 oracle dba 154 Aug 6 16:17 o1_mf_users_fpjp2wdt_.dbf -> /u00/app/oracle/oradata/.ACFS/snaps/72C5EDFE41727F36E0531F01100AB18B/CDB/prod_seed/CDB/72C340E0CEE355E1E0531F01100AFEB9/datafile/o1_mf_users_fpjbvo7t_.dbf
lrwxrwxrwx. 1 oracle dba 157 Aug 6 16:17 o1_mf_undotbs2_fpjp2wdg_.dbf -> /u00/app/oracle/oradata/.ACFS/snaps/72C5EDFE41727F36E0531F01100AB18B/CDB/prod_seed/CDB/72C340E0CEE355E1E0531F01100AFEB9/datafile/o1_mf_undotbs2_fpjbvo7g_.dbf
lrwxrwxrwx. 1 oracle dba 157 Aug 6 16:17 o1_mf_undotbs1_fpjp2wco_.dbf -> /u00/app/oracle/oradata/.ACFS/snaps/72C5EDFE41727F36E0531F01100AB18B/CDB/prod_seed/CDB/72C340E0CEE355E1E0531F01100AFEB9/datafile/o1_mf_undotbs1_fpjbvo4c_.dbf
lrwxrwxrwx. 1 oracle dba 155 Aug 6 16:17 o1_mf_system_fpjp2w7z_.dbf -> /u00/app/oracle/oradata/.ACFS/snaps/72C5EDFE41727F36E0531F01100AB18B/CDB/prod_seed/CDB/72C340E0CEE355E1E0531F01100AFEB9/datafile/o1_mf_system_fpjbvnyh_.dbf
lrwxrwxrwx. 1 oracle dba 155 Aug 6 16:17 o1_mf_sysaux_fpjp2wc4_.dbf -> /u00/app/oracle/oradata/.ACFS/snaps/72C5EDFE41727F36E0531F01100AB18B/CDB/prod_seed/CDB/72C340E0CEE355E1E0531F01100AFEB9/datafile/o1_mf_sysaux_fpjbvo3z_.dbf
So Oracle “tells” the storage (in this case the ACFS driver) to take a snapshot of these data files (and only these files – if there are other databases on this ACFS volume, they won’t be part of the snapshot). Afterwards, symbolic links into the snapshot are created at the location where the data files of the new clone database should be – that’s it… very easy, and the DBA does not have to worry about storage snapshots, it’s all done automatically. If you drop the clone, the snapshot is also deleted automatically.
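And to complete the lifecycle: getting rid of a clone is just as simple – close it and drop it including the datafiles, and as mentioned above the corresponding ACFS snapshot disappears with it:

```
SQL> ALTER PLUGGABLE DATABASE clone1 CLOSE;
SQL> DROP PLUGGABLE DATABASE clone1 INCLUDING DATAFILES;
```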
just tell me if you have any questions…