Wednesday, March 16, 2011

PL/SQL Unit testing in the builds

At the OTN Dev Day in Dallas last week, I asked how many people unit test PL/SQL.  The response was a kind of group chuckle acknowledging it should be done but isn't being done.  This was not really surprising, since it's the same response I get everywhere I ask.  The difference is that at this event the attendees did a hands-on lab and made tests — the exact same lab that is in the Oracle Learning Library.  After doing the lab, people realized how easy it actually is to test their PL/SQL.

Now, what's not in the lab is how to schedule the tests for an automated nightly run or as part of a system build.  For that, there is a command line to kick off a test.  In sqldeveloper/bin, there's a ututil.sh and a ututil.bat.  Simply run one of these with no arguments and you get the syntax.

sqldeveloper/bin$ ./ututil.sh 

Oracle SQL Developer
 Copyright (c) 1997, 2011, Oracle and/or its affiliates. All rights reserved. 

ututil -run ?
ututil -exp ?
ututil -imp ?

There are three choices: export a test to an XML file, import a test from an XML file, and run a test. I'll focus on the run option.

./ututil.sh -run -?

Oracle SQL Developer
 Copyright (c) 1997, 2011, Oracle and/or its affiliates. All rights reserved. 

ututil -run -test {-id <id>|-name <name>} -repo <connection name>
  -db <connection name>
ututil -run -suite {-id <id>|-name <name>} -repo <connection name>
  -db <connection name> 
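Putting that syntax to use, a single test run might look like the sketch below.  The test, repository, and database connection names here are made up for illustration; substitute ones that exist in your own SQL Developer setup.

```shell
#!/bin/sh
# Hypothetical example: "utrepo" and "hr_test" are connection names
# assumed to already be defined in SQL Developer, and AWARD_BONUS_TEST
# is an assumed test name.  Run this from sqldeveloper/bin.
REPO=utrepo
DB=hr_test

# Run a single saved test by name against the target database
CMD="./ututil.sh -run -test -name AWARD_BONUS_TEST -repo $REPO -db $DB"
echo "$CMD"
if [ -x ./ututil.sh ]; then
  $CMD
fi
```

The same shape works for a whole suite: swap -test for -suite and give the suite's name or id.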

Since there's a command line, it can be integrated into existing build procedures very quickly.  There are a few switches needed to run a test or a suite of tests.

First, the obvious one is of course the test or suite of tests to run, given by name or id.

Second, which repository to connect to for the tests.  All tests are stored in a central repository.  This allows developers, QA, or any other group to create and manage the tests in a central place.  The repository is also where all results are stored.

Third, which database to target when running the tests.  With all the tests in a central repository, they can be run against any database.  This allows the tests to run with ZERO setup on any targeted database: all you need is a login and the testing can begin.  Because the results are also stored in the central repository, test runs against various systems can be compared, which makes it very easy to see which tests pass on some systems and fail on others.  This is useful for the cases where code works properly in development yet fails when deployed to an integration environment.  The central repository also makes it easy to compare timings of the test runs.  For example, something I've heard many times is that on a development system with limited data the code runs very fast, yet when deployed against a larger dataset the speed is less than desirable.
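Tying the three switches together, a nightly job could loop one suite over several target databases and let the central repository collect the results for comparison.  This is only a sketch: the suite name, connection names, and install path are all assumptions you'd replace with your own.

```shell
#!/bin/sh
# Hypothetical nightly run: NIGHTLY_REGRESSION, utrepo, and the three
# target connection names are assumed to exist in SQL Developer, and
# SQLDEV_BIN is an assumed install location.
SQLDEV_BIN="${SQLDEV_BIN:-/opt/sqldeveloper/sqldeveloper/bin}"
SUITE=NIGHTLY_REGRESSION
REPO=utrepo

# Same suite, different targets -- the repository stores each run's
# results, so the per-database outcomes and timings can be compared.
for DB in dev_db int_db qa_db; do
  echo "Running $SUITE against $DB"
  if [ -x "$SQLDEV_BIN/ututil.sh" ]; then
    "$SQLDEV_BIN/ututil.sh" -run -suite -name "$SUITE" \
      -repo "$REPO" -db "$DB" || echo "Suite failed on $DB"
  fi
done
```

Dropped into cron or a build server, a script like this gives you the automated nightly run with no per-database setup beyond the login.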