Tuesday, December 17, 2013

Seattle: Come Meet Some Oracle ACE Directors on Jan 9th

As part of the Kickoff to Kscope campaign, ODTUG is hosting a panel discussion in Seattle with Oracle ACE Directors, including myself, Kellyn Pot'Vin, Tim Gorman, Tim Tow, and Cameron Lackpour. The panel will be moderated by none other than ODTUG president Monty Latiolais.

The event will be held at the Sheraton Seattle Hotel - Jefferson Room from 4 to 6 pm and will be followed by a networking Happy Hour where you can meet other Oracle professionals in Seattle along with the panelists. The best part is that it's all free!

For more information check out the official page: http://kscope14.com/events/seattle-street-team. Don't forget your questions for the panel.

Wednesday, December 4, 2013


If you follow me on Twitter (@martindsouza) or the #orclapex hashtag, you'll have noticed that I've been posting some new links to quickly get to APEX resources.

Last week I got a notice (I don't know how I ended up on that list) that the domain oracleapex.com was available to purchase. Being the enthusiast that I am, I couldn't resist buying it. I also bought it so that it wouldn't be hijacked or domain-parked for malicious purposes.

To make things easier for the whole community, I'm using the domain as a redirect for APEX resources, which will hopefully help new and existing users quickly access the reference material they need for APEX development.

Update (16-Mar-2014): The official/updated list can now be found on oracleapex.com. See http://www.talkapex.com/2014/03/oracleapexcom-part-2.html for information about this change.

Here are all the URLs and where they point to:

Thanks to everyone for all the suggestions so far and if you have any other links you'd like to see leave a comment or send me a tweet!

Note: I'll keep this page up to date with the latest URLs. Eventually I'll host the official list on oracleapex.com.

Friday, November 1, 2013

Logger - Survey

I'm planning to re-brand Logger to make it easier for people to find and more marketable in general. I'd appreciate a few minutes of your time to complete a survey that I just built, which will help guide the future growth of Logger.

Please note that this re-branding initiative will not affect its current open source license or its internal code naming structure (i.e. "logger").

To do the survey please click here.

Thank you,

- Martin

Wednesday, October 9, 2013

ODTUG Elections - Please Vote

It's that time of year again when half of the positions on the ODTUG Board of Directors are voted on by its paid members. I am up for re-election this year and would appreciate your continued support in helping the ODTUG community.

To view everyone who is running for the board, and for the link to vote, please go to: http://www.odtug.com/2014_election. Please remember that only paid ODTUG members are allowed to vote and that you are allowed to vote for up to five people.

I have included my campaign statement and profile below. Thank you for your support.

- Martin

Campaign Statement

Over the past two years, I have had the privilege to represent the Oracle APEX community as a director for ODTUG. I have made the decision to run again this year for the ODTUG board and am looking for your support in doing so.

During my two-year tenure, I have been involved in several committees within ODTUG that have helped bring conferences around the world, improved marketing initiatives, and organized developer contests.

I have been able to build strong relationships with other ODTUG board members, Oracle employees, and many developers in the APEX community. These relationships have allowed me to be a liaison between all three groups and help to relay ideas and opportunities to the appropriate parties. I hope to continue to be able to facilitate open communication over the next two years.

I am looking for your vote to continue as an ODTUG board member to represent developers within one of the world’s largest Oracle user groups. Thank you in advance for your support.

Bio Statement

Martin D'Souza is the CTO and cofounder at ClariFit, a consulting firm that specializes in APEX and PL/SQL development and training. He is also a volunteer director for ODTUG. Martin’s career has seen him hold a range of positions within award-winning companies. His experience in the technology industry has been focused on developing database-centric web applications using the Oracle technology stack. Martin is the author of the popular blog www.TalkApex.com, a designated Oracle ACE Director, coauthored and authored various APEX books, and is an active participant in various open source projects, such as Logger. He has also presented at numerous international conferences, such as APEXposed!, COUG, Oracle OpenWorld, and ODTUG Kscope, for which he won the Presenter of the Year award in 2011. Martin holds a computer engineering degree from Queen’s University in Ontario, Canada. You can contact Martin via email at martin@clarifit.com or by Twitter @martindsouza.

Saturday, October 5, 2013

Leave it to the Experts

At ODTUG Kscope 13 this year in New Orleans, I gave a talk on responsive web design in APEX. During the presentation someone asked me about specific CSS width configurations for different screen sizes. My answer at the time (and I still stand by it) was to use what the APEX team provided, as they have experts whose sole job is to focus on these details.

Case in point: today one of the developers on the Oracle APEX team (Shakeeb Rahman) tweeted the following question: "Anyone know of a study that talks about web contrast and if dark gray is better than black for web text?". I don't know the answer, nor have I ever even thought of it before.

Similar to most developers, my job requires me to keep up to date on technology and design patterns, but it's humanly impossible to be an expert in every single area. Having amazing and committed developers on the Oracle APEX team allows me to not worry about all the finer details and focus on providing my clients with solid solutions for their business problems using APEX.

To summarize, not only is APEX a great development framework for the Oracle technology stack, the APEX team also has experts covering all the small details that most developers have never thought of or don't have the time to research.

Here's the full Twitter conversation:

Tuesday, September 17, 2013

Presenting at Oracle Open World (OOW)

I'll be participating in two sessions at OOW. On Sunday I'll be sitting on the Printing Options for APEX Q&A Panel which is from 3:30-4:30 in Moscone West - 2005 (code UGF10241).

On Thursday I'll be giving a presentation called How to Instrument PL/SQL and Oracle Application Express Code (code: CON5469), which focuses on leveraging Logger in your PL/SQL and APEX applications. The presentation is from 12:30-1:30 in Marriott Marquis - Salon 7. I gave this talk in Montreal and several people told me how it can help them in their current development projects. I hope that it will do the same for you.

I'll also be going to the APEX meetup on Tuesday night which is open to everyone. It will be a nice time to have a casual drink and meet people from the APEX development team along with some of the gurus in the industry.

See you next week!

Friday, September 13, 2013

Logger with Batch Processes

At ODTUG APEXposed this week I gave a presentation on Logger which highlighted some of its features. The item that got the most interest from attendees was the ability to set the logging level for individual sessions without affecting any other sessions. This wasn't a big surprise since this was the key feature that most people had asked to see in the 2.0.0 release.

One attendee asked if you could modify the logging level in a particular session from another session while it was still running. The answer is yes, you can. The best analogy I could give at the time was baking a loaf of bread. Most ovens have a window and a light. Halfway through the baking process you can turn on the light and see how your bread is doing. Logger allows you to do that as well. The following example highlights this:
SQL> -- Simulate production setup
SQL> exec logger.set_level('ERROR');

PL/SQL procedure successfully completed.

SQL> -- Procedure to simulate batch script

create or replace procedure run_long_batch(
  p_client_id in varchar2,
  p_iterations in pls_integer)
as
  l_scope logger_logs.scope%type := 'run_long_batch';
begin
  -- Tag this session so its logging level can be toggled by client_id
  dbms_session.set_identifier(p_client_id);
  logger.log('START', l_scope);
  for i in 1..p_iterations loop
    dbms_lock.sleep(1); -- pause each iteration so the job runs long enough to observe
    logger.log('i: ' || i, l_scope);
  end loop;
  logger.log('STOP', l_scope);
end run_long_batch;
/

Procedure created.

SQL> -- *** In another session run the following code *** 
SQL> exec run_long_batch(p_client_id => 'batchdemo', p_iterations => 60);

SQL> -- In the current window, toggle the logging level for the batch job
SQL> -- Note: This is done while it's still running in the other session
begin
  -- Enable debug mode for batch job
  logger.set_level('DEBUG', 'batchdemo');
  dbms_lock.sleep(10); -- sleep durations are illustrative
  -- Disable
  logger.set_level('ERROR', 'batchdemo');
  dbms_lock.sleep(10);
  -- Enable again
  logger.set_level('DEBUG', 'batchdemo');
  dbms_lock.sleep(10);
  -- Disable
  logger.set_level('ERROR', 'batchdemo');
end;
/

PL/SQL procedure successfully completed.

SQL> -- View items that were logged
select logger_level, text, time_stamp, scope
from logger_logs
order by id;

LOGGER_LEVEL TEXT       TIME_STAMP                  SCOPE
------------ ---------- --------------------------- --------------
          16 i: 8       11-SEP-13 PM
          16 i: 9       11-SEP-13 PM
          16 i: 20      11-SEP-13 PM
          16 i: 21      11-SEP-13 PM

From the query's output you'll notice that it started logging, then stopped for a short period, then started again, then stopped. This can really help if a batch process is taking a long time and you want to see exactly where in the code it is.

Friday, August 30, 2013

APEX OOW 2013 Meetup

Each year the Oracle APEX community gets together at Oracle Open World (OOW) to have a few beers and catch up. This is a great informal event and it's your chance to network with a lot of people in the APEX community and meet some of the gurus and members of the core APEX development team.

Normally Dimitri Gielis organizes the meetup; however, he is expecting his third child, so understandably he will not be coming to OOW. As such, I've been handed the torch to help organize the event this year. To be honest, by "I" I really mean that Lauren Prezby at ODTUG has done the planning and found our venue for this year :-)

We'll be having the annual APEX OOW Meetup at Johnny Foley's on Tuesday from 7:30 pm onwards.

243 O'Farrell Street
San Francisco, CA 94102

ODTUG will also be giving away some cool APEX swag!

Looking forward to seeing everyone there!


Friday, August 9, 2013

Logger 2.1.0 - Released!

I'm pleased to announce that Logger 2.1.0 has officially been released!

The major changes have been captured in the blog articles that I have posted this week:

You can download a copy of Logger 2.1.0 from the releases page. As there have been some significant changes I highly recommend that you review the change log (especially if you have code that manually inserts into LOGGER_LOGS).

As always, your input and feedback are welcome. Feel free to submit any enhancement requests or bugs on the project's issues page.

Logger 2.1.0 - 10g Sequence Fixed

In preparation for the release of Logger 2.1.0 I've decided to write a few posts highlighting some of the new features.

Apparently not too many people who use Logger are still running Oracle 11gR1 or lower, since I was only notified of this issue now. Patrick Jolliffe was kind enough to point out that there was an issue when installing Logger in a 10g database. It turns out that ":new.id := logger_logs_seq.nextval;" doesn't work in 10g; it only works in 11gR2 and above. Clearly it's been a while since I've used 10g, or better yet 11gR1.

The ability to reference sequences nextval in PL/SQL is mentioned in the 11gR2 New Features guide.

Logger 2.1.0 addresses this issue by only supporting the old "select into..." technique, for various reasons. At first I was skeptical about the performance impact of the context switch; however, it appears that behind the scenes, when a sequence is referenced from PL/SQL, Oracle does an internal select-into to obtain the value anyway. Connor McDonald has some good slides about this here (see page 59 onwards).
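To illustrate the difference, here's a minimal sketch of the two approaches in a trigger (the trigger name below is illustrative, not Logger's actual code):

```sql
create or replace trigger biu_logger_logs
  before insert on logger_logs
  for each row
begin
  -- ":new.id := logger_logs_seq.nextval;" only compiles on 11gR2 and above;
  -- the select-into form below works on 10g as well
  select logger_logs_seq.nextval
    into :new.id
    from dual;
end;
/
```

The select-into version incurs a SQL context switch, but as noted above, the direct assignment syntax does much the same work internally.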

Thanks to Dan McGhan and Scott Wesley for helping sort this issue out.

Thursday, August 8, 2013

Logger 2.1.0 - ins_logger_logs

In preparation for the release of Logger 2.1.0 I've decided to write a few posts highlighting some of the new features.

Prior to 2.1.0, Logger used a trigger when inserting into the LOGGER_LOGS table. Though there was never a compelling reason to remove the trigger, a bug report and an unrelated enhancement request prompted me to reexamine the need for it.

Starting with 2.1.0 there is no trigger on the primary Logger table, LOGGER_LOGS. Instead, all insert statements are handled by a single procedure: ins_logger_logs. I'm sure this will make Tom Kyte happy, as there's one less trigger in the world now.

From a performance perspective, initial testing has shown a ~15% improvement when using the standard logging procedures. Though this may not seem like a lot, in large systems where Logger is called often it can start to add up.

As you can imagine, if you insert directly into LOGGER_LOGS and leveraged some of the features that the trigger provided (i.e. obtaining the sequence for the ID column), your application will now raise a runtime error. I was very hesitant about removing the trigger for this sole reason, but am able to justify it by the fact that directly inserting into the table has never been supported by Logger. Regardless, if you do have insert statements against the table, you can simply replace them with calls to the ins_logger_logs procedure.

ins_logger_logs should not be used often and only makes sense in very specific cases. A few things to note about this procedure:
- It is an autonomous transaction procedure. This means that an explicit commit occurs at the end of the procedure. Since it is an autonomous transaction it will not affect (i.e. not commit) your current session.
- It will always insert the value into the LOGGER_LOGS table. No check is performed to factor in the current logging level. For this reason you're better off using the standard Logger procedures.

On a side note, you may be wondering why you would ever manually insert directly into the LOGGER_LOGS table. On rare occasions you may need the ID that was generated from a log statement for other purposes. None of the primary log procedures return the ID. One situation that I've seen in the past is logging all the APEX info on the APEX error handling page. On that page the user sees an error code that developers can reference. That unique code is actually the ID from Logger.

Here is an example of the new procedure:
set serveroutput on

declare
  l_id logger_logs.id%type;
begin
  -- Note: Commented out parameters not used for this demo (but still accessible via API)
  logger.ins_logger_logs(
    p_logger_level => logger.g_debug,
    p_text => 'Custom Insert',
    p_scope => 'demo.logger.custom_insert',
--    p_call_stack => '',
    p_unit_name => 'Dynamic PL/SQL',
--    p_line_no => ,
--    p_extra => ,
    po_id => l_id);

  dbms_output.put_line('ID: ' || l_id);
end;
/

ID: 2930650

Wednesday, August 7, 2013

Logger 2.1.0 - ok_to_log

In preparation for the release of Logger 2.1.0 I've decided to write a few posts highlighting some of the new features.

A while ago, someone asked me to add a feature into Logger to determine if a log statement would actually be logged. I fought this off like the plague, as I wanted to avoid the following situation:
if can_log then
  logger.log('some message');
end if;

if can_log then
  logger.log('another message');
end if;
Unfortunately I've seen this type of coding style many times, and it makes code difficult to read and maintain. It also ruins one of the key goals behind Logger, which is to make it easy and quick for developers to use.

Reluctantly, I finally caved in. As a result there's a new function called ok_to_log. ok_to_log tells you if what you're about to log will actually get logged. It is not meant to be used like the example above. I can't emphasize this enough.

The main use case for ok_to_log is for situations where you only want to do something for logging purposes that would slow down the system. A good example is when you want to log the values of an array. Since it takes time to loop through the array, there's no point in doing it if Logger won't actually log anything. The following example highlights this:
declare
  type typ_array is table of number index by pls_integer;
  l_array typ_array;
begin
  -- Load test data
  for x in 1..100 loop
    l_array(x) := x;
  end loop;

  -- Only log if logging is enabled
  if logger.ok_to_log('DEBUG') then
    for x in 1..l_array.count loop
      logger.log('l_array(' || x || '): ' || l_array(x));
    end loop;
  end if;
end;
/
If the logging level is set to DEBUG or higher (as it would be in development environments) then logger will loop through the array and log the values. If the logging level is set to ERROR (as it normally would be in production instances) it won't loop over the array since it now knows ahead of time that Logger won't log the values.

Note that the level passed into ok_to_log should correspond to the logging statements used inside the block of code that you want to log. If they don't, it really won't make sense and could result in unnecessary performance hits.

To summarize the ok_to_log function:
- I was reluctant to add it in for obvious reasons, so please don't make me regret it.
- Only use it in situations where you need to run an expensive block of code solely for logging purposes.
- Make sure the level passed into ok_to_log matches the log statements used inside the associated block of code.

Tuesday, August 6, 2013

Logger 2.1.0 - Demo Scripts

In preparation for the release of Logger 2.1.0 I've decided to write a few posts highlighting some of the new features.

The new Logger documentation contains a lot of sample code. Though this is good for documentation purposes, I found that I still needed a set of demos for various presentations and demonstrations about Logger. Instead of recreating demos each time, I've started to keep them in a demo folder (included in each release). This can also be useful for individuals looking to demo Logger at an organization for the first time.

Though not complete, expect the list of demos to expand over time. If you have a demo script you want added, please feel free to send it to me or make a pull request.

Logger 2.1.0 - Documentation Overhaul

In preparation for the release of Logger 2.1.0 I've decided to write a few posts highlighting some of the new features.

As Logger expands, so does its documentation. We have reached a point where it was no longer feasible to continue with a one-page documentation structure. Besides it being hard to find a specific bit of information, I also found that it was hard to get a quick overview of all the functions that Logger has.

The documentation for Logger has been completely revamped. It's broken up into two main sections: the initial readme file (found on the main project page) and a wiki. The readme file contains a brief overview of Logger and links to the wiki. The wiki has been broken up into four subsections: Installation, Logger API, Change Log, and Best Practices. This is a start towards making Logger's documentation similar to Oracle's documentation format, which should make things consistent for developers.

If you've already used Logger for a while, take a look at the Logger API page. I think you'll find some functions that you didn't know existed.

The wiki documents are stored in a separate repository on GitHub. This means that they aren't synced when a release is tagged. To get around this issue, each build now contains a folder called "wiki". In it, you will find all the wiki files (in markdown format) generated at the time of the build.

On a side note, the documentation restructuring took a lot of time. I didn't have enough time to document all the functions; they will all be updated in the near future once I have more time. Of course, you're encouraged to fork the project and update the documentation if you want to help out.

Monday, August 5, 2013

Logger 2.1.0 - Text Length > 4000 Characters

In preparation for the release of Logger 2.1.0 I've decided to write a few posts highlighting some of the new features.

I must thank Juergen Schuster for suggesting the following change to Logger. He sent me an email with this request, and though it took a while to get in, I'm glad to finally see it in this release of Logger.

In the main Logger procedures the first parameter, p_text, is a varchar2 data type. This means that in PL/SQL you can pass in strings of up to 32,767 characters; however, the maximum column size (pre-12c) for a varchar2 is 4,000 characters. As you can imagine, this caused some unexpected runtime errors when people passed in large blocks of text. The only ways to get around this were to either check the size of the text before logging it (if you were unsure of its final size) or store it in the EXTRA (clob) column.

Starting in 2.1.0, Logger will gracefully handle text entries larger than 4,000 characters. If the text is larger than 4,000 characters it will automatically be appended to the EXTRA column, and the message "*** Content moved to EXTRA column ***" will be displayed in the TEXT column. I hope this resolves some unforeseen runtime errors that may have occurred in the past.
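As a quick sketch of the new behavior (the comments describing the resulting column contents are based on the description above):

```sql
declare
  l_big_text varchar2(32767) := rpad('x', 5000, 'x'); -- longer than 4,000 chars
begin
  -- Pre-2.1.0 this would raise a runtime error; in 2.1.0 the call succeeds:
  --   TEXT stores '*** Content moved to EXTRA column ***'
  --   EXTRA (clob) stores the full 5,000-character string
  logger.log_error(l_big_text);
end;
/
```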

With 12c finally released (and its ability to have varchar2 columns of up to 32,767 characters), this functionality is bound to change or become obsolete. A new issue has been created to address this in a future release of Logger.

Wednesday, July 31, 2013

5 Years Later...

I was working on another post today and noticed that I had just passed the 400,000 pageviews milestone! The next thing that caught my eye is that it has been 5 years to the day since my first post. So I thought I'd write an article reflecting on the past five years.

I started this blog (initially called apex-smb.blogspot.com) to simply post some of the code I was working on and elicit feedback from the community. It seemed simple to me: post some code and get free feedback. Over the life of this blog I certainly did get a lot of feedback. I also got a lot more than I had imagined. Since my initial post I have achieved many milestones that I didn't think were possible or hadn't even thought of when I first started. These include:

- Co-founding ClariFit
- Writing several books
- Becoming an Oracle ACE, then later ACE Director
- Working on various open source projects

By far the best outcome is the relationships I've built with people in the Oracle community around the world. This is especially true with the people that I looked up to (and still do) when I first started working with APEX (called HTMLDB back then).

I'm not going to say that this blog is the sole reason for everything that took place over the past 5 years. There were a lot of people that helped support and encourage me along the way. Without their continued support I don't think I'd be in the positive situation that I am in today.

This blog started out as a fun side project with no real motive other than to share code and help improve my writing skills. It turns out it changed a lot for me and went well past its initial intentions. The only "negative" thing (and I use the term very loosely) is that over the past few years I haven't had the time I would like to allocate to writing articles at the pace I did when I first started. This is primarily due to running a company, writing various books, working on open source projects, and my speaking engagements (and yes, I realize that these are all "first world problems").

Going forward, I still plan to continue writing. It's something that I really enjoy doing, and I hope you, as a reader of this blog, enjoy it as well.

Looking forward to the next 5 years!

- Martin

Thursday, July 18, 2013

APEXposed en Montreal - Sept 10/11

This fall ODTUG will be in Montreal for the 2nd annual APEXposed conference. Like last year, there's a star-studded lineup of speakers including Oracle's Director of Software Development, APEX's product manager, and Oracle ACEs and ACE Directors. One difference compared with last year is that we have some local speakers from Montreal and Toronto.

Some of the presentations that I'm looking forward to are:

- Joel Kallman's (Oracle Director of Software Development) talk on APEX 5.0, giving us an inside look of what's coming up in the near future.
- David Peake's (APEX Product Manager) talk on how to leverage some of the new features in 12c with APEX.
- Roel Hartman's (Director at APEX Evangelists) presentation on building mobile applications in APEX.
- Steve Feuerstein's (Dell) keynote. I just talked to him about it, should be great!

And, as a shameless plug, I'll be giving two talks: "Instrumenting your PL/SQL and APEX Applications" and "APEX Development Best Practices". I'm really excited about the talk on code instrumentation, as I'll be highlighting some of the new features that I've integrated into the 2.0.0 version of Logger and also sharing some of the new upcoming features.

If you're interested and haven't already registered for the conference, you're in luck: it's still not too late to sign up for this two-day (September 10th and 11th) conference. To register, simply go to http://www.odtug.com/apexposed and sign up.

See you in Montreal!

Friday, July 5, 2013

It's up to You - Oracle and APEX Open Source Projects

At ODTUG Kscope 13 last week I got to moderate the APEX Lunch & Learn experts panel. One of the questions that came up was when a certain feature was going to be added to Peter Raganitsch's APEX Developer Addon.

This sparked a bit of conversation around open source projects in the Oracle and APEX communities. There are a few people who create and maintain these open source projects. Unfortunately, there are not a lot of other people who contribute to them. Because of this, projects don't get upgraded and enhanced as quickly as they could be.

It's up to all of us in the community to enhance open source projects. Next time you need a new feature added to a project, instead of asking for it, I encourage you to add it yourself, then contact the developer with the changes so they can be merged into the project.

On a personal note, I'm heavily involved with several open source projects. The two most popular ones are Logger and the ClariFit APEX Plugins. I just moved the ClariFit APEX Plugins to GitHub yesterday to make it easier for others to contribute to.

Tuesday, June 4, 2013

Logger 2.0.0 - Released!

After several months in beta, I'm pleased to announce that Logger 2.0.0 is now officially released! You can download the zip file from the 2.0.0 release page.

Over the past few months I've blogged about some of the significant changes we've made to Logger. Here are all the articles describing some of the new functionality:
The upgraded help document on the main GitHub page contains all the detailed information about Logger, as well as examples. In the future I'll be looking at moving it to the Wiki page for better organization, but for now it should do.

If you've never used it before, check it out; best of all, it's free and open source!

Wednesday, May 8, 2013

Logger 2.0.0 - Session Specific Logging - Advanced Features

In preparation for the release of Logger 2.0.0 I've decided to write a few posts highlighting some of the new features. 

In my previous post I covered how to set and unset session specific logging based on the Client Identifier. In this post I'll cover some advanced options for this functionality. Before continuing, please be sure to read the previous post or else this won't make sense.

In the previous post the demo code to set a session specific logging level only used two parameters: the logging level and the Client Identifier. In logger.set_level there are actually four parameters:

p_level

This is the level to set. If p_client_id is null then it will set the system level configuration in logger_prefs > LEVEL.

The additional parameters are only valid if p_client_id is defined.

p_client_id

If not null, will apply p_level for the specified client_id only. If null, then p_level will be applied to the system level configuration.

p_include_call_stack

Determines if the call stack should be stored. If null, the default setting in logger_prefs > INCLUDE_CALL_STACK will be used. Valid values are TRUE and FALSE (strings).

p_client_id_expire_hours

Will end session specific logging after the set number of hours. If not defined, the default setting in logger_prefs > PREF_BY_CLIENT_ID_EXPIRE_HOURS will be used. Note that this will not be exact, as a job runs hourly to clean up session specific logging that has expired for a given client_id. Instead of waiting for the job to clean up expired sessions, you can explicitly unset them.
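Putting the four parameters together, a call might look like this (the client_id and values below are illustrative):

```sql
begin
  -- Enable debug logging for one client_id, store the call stack,
  -- and automatically expire the setting after 2 hours
  logger.set_level(
    p_level                  => 'DEBUG',
    p_client_id              => 'logger_demo',
    p_include_call_stack     => 'TRUE',
    p_client_id_expire_hours => 2);
end;
/
```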

How to View All Client_id Settings

To view the system level settings you can use the logger.status procedure. There was debate as to whether or not we should include all the client_id settings in this procedure. In the end we did not list them there, as the list may get very long. To view all the session specific logging configurations, use the following query:
select *
from logger_prefs_by_client_id;


When calling logger.set_level with a client_id that already exists in logger_prefs_by_client_id, it will update the values and reset the expiry date (i.e. you extend the setting).

Tuesday, May 7, 2013

Logger 2.0.0 - Enable Session Specific Logging

In preparation for the release of Logger 2.0.0 I've decided to write a few posts highlighting some of the new features.

One of the most common feature requests for Logger has been the ability to enable logging (or set the logger level) for a given session. This post covers how to set and unset session specific logging.

Logger Levels

There is no "on/off" switch for Logger. Instead, like most other logging and code instrumentation tools, it supports multiple levels. This allows developers to turn up and down the amount of logging. In most development systems the logging level is set to debug mode and in most production instances it's set to error mode. This means that in production only items that are explicitly logged using logger.log_error will be stored. For more information read the configuration section of the documentation.

Setting the Logger Level

Prior to Logger 2.0.0 you could only configure the logging level for the entire schema. The following code snippet highlights this:
exec logger.set_level('DEBUG');

begin
  -- All of these will be logged
  logger.log('Logging log level');
  logger.log_information('Logging information level');
  logger.log_warning('Logging warning level');
  logger.log_error('Logging error level');
end;
/

exec logger.set_level('WARNING');

begin
  logger.log('Logging log level'); -- Not stored
  logger.log_information('Logging information level'); -- Not stored
  logger.log_warning('Logging warning level'); -- Stored
  logger.log_error('Logging error level'); -- Stored
end;
/
What happens in production systems when a user is experiencing some issues and you want to see all their logging information? The only way to do this before was to set the logging level to debug mode, and the entire schema would be affected. This could slow down all the applications in the schema, which is an undesired effect.

Setting by Session
The term "session" is a bit misleading. Since more most Oracle applications are stateless it's not realistic to enable logging for a specific Oracle session (SID). Instead Logger uses the client_identifier (also referred to as client_id). Client identifiers can easily be configured and are commonly used in stateless applications. For example, APEX sets the client_id with: :APP_USER || ':' || :APP_SESSION

The following example demonstrates how to enable "session" specific logging in Logger:
-- Connection 1
exec logger.set_level ('ERROR');

  logger.log('will not get stored');
  logger.log_error('will get stored');

select id, logger_level, text, client_identifier
from logger_logs_5_min;

---- ------------ -------------------- --------------------
  13            2 will get stored      

-- Connection 2 (i.e. a different connection)
exec dbms_session.set_identifier('logger_demo');

exec logger.set_level('DEBUG', sys_context('userenv','client_identifier'));
-- Or could have used: logger.set_level('DEBUG', 'logger_demo');

  logger.log('will get stored');
  logger.log_error('will get stored');

select id, logger_level, text, client_identifier
from logger_logs_5_min;

---- ------------ -------------------- --------------------
  14           16 will get stored      logger_demo
  15            2 will get stored      logger_demo
Unsetting by Session

When setting a session specific logging level, Logger applies an expiration time to it. By default this is 12 hours, but it can be configured via the logger_prefs value PREF_BY_CLIENT_ID_EXPIRE_HOURS. You can also pass in the time using the parameter p_client_id_expire_hours. If you don't explicitly unset a specific session, Logger will automatically clean up these sessions after the configured time.
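For example, a sketch of setting the expiration per call (the client_id and hour values below are illustrative only):

```sql
-- Enable debug logging for 'logger_demo' for 2 hours instead of the default 12
exec logger.set_level('DEBUG', p_client_id => 'logger_demo', p_client_id_expire_hours => 2)
```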

There are three ways to explicitly unset session specific logging:
-- Unset by specific client_id
exec logger.unset_client_level(p_client_id => 'logger_demo');

-- Unset all sessions that have expired
exec logger.unset_client_level;

-- Unset all sessions (regardless of expiration time)
exec logger.unset_client_level_all

By setting session specific logging you can allow for an individual connection to have logging enabled without affecting the rest of the applications in the schema.

There are some other interesting features of session specific logging that I wasn't able to fit into this article. I'll cover them in the next article.

Monday, May 6, 2013

Logger 2.0.0 - Logging Parameters

In preparation for the release of Logger 2.0.0 I've decided to write a few posts highlighting some of the new features. 

After using Logger for several years, one thing I found myself doing over and over again was logging the parameters passed to procedures and functions. The beginning of a procedure usually looked like this:
procedure my_proc(
  p_empno in number,
  p_hiredate in date,
  p_boolean in boolean) -- not really relevant for a proc, but good for demo
is
  l_scope logger_logs.scope%type := 'my_proc';
begin
  logger.log('START', l_scope);
  logger.log('p_empno: ' || to_char(p_empno), l_scope);
  logger.log('p_hiredate: ' || to_char(p_hiredate, 'DD-MON-YYYY HH24:MI:SS'), l_scope);
  logger.log('p_boolean: ' || case when p_boolean then 'TRUE' else 'FALSE' end, l_scope);
end my_proc;
When you have data types that aren't strings, the conversion process can become tedious, especially if you have to log a lot of parameters. Snippets can reduce some of the typing, but it was still annoying.

Another issue that I found was that developers would use their own formatting for logging parameters. Some would include a dash instead of a colon, some would include spaces and others wouldn't, etc. Eventually it became difficult to read the logs with all the different formatting.

To get around these issues, Logger now takes in a parameter, p_params, which is an array of name/value pairs. Besides visual consistency, it also handles the formatting of the different data types so you don't need to spend time on it. The following code snippet is the same as above, but uses the new parameters feature:
procedure my_proc(
  p_empno in number,
  p_hiredate in date,
  p_boolean in boolean) -- not really relevant for a proc, but good for demo
is
  l_scope logger_logs.scope%type := 'my_proc';
  l_params logger.tab_param;
begin
  logger.append_param(l_params, 'p_empno', p_empno);
  logger.append_param(l_params, 'p_hiredate', p_hiredate);
  logger.append_param(l_params, 'p_boolean', p_boolean);
  logger.log('START', l_scope, null, l_params);
end my_proc;

If you now query logger_logs you'll see the parameters appended to the extra column (note: if you defined something in the p_extra parameter it would still appear):
select text, scope, extra
from logger_logs_5_min;

----- --------- ----------------------------------
START my_proc   *** Parameters ***
                p_empno: 123
                p_hiredate: 26-APR-2013 03:14:58
                p_boolean: TRUE
You'll notice that all the formatting and implicit conversions have been taken care of.


Unless you are running Logger in no-op mode, there will be a slight performance hit each time you append a parameter (the cost of appending to an array plus the implicit conversion). You'll need to factor this in when leveraging this feature. Odds are you'll be able to use the p_params parameter in most of your procedures and functions, but not in very frequently called methods.

Other Considerations

Not all people will use the parameters option, and that's fine. We made it an optional parameter specifically for that reason. We added this feature to support one of the primary goals of Logger which is "to be as simple as possible to use".

Friday, May 3, 2013

Logger 2.0.0 Beta

Logger is a PL/SQL logging and debugging framework. It's used in many organizations to instrument code in their Oracle applications. Tyler Muth created Logger a few years ago and has since released several upgrades, the last being 1.4.0. 

After some great feedback, I'm pleased to announce that we've just launched Logger 2.0.0 Beta. Please download it, install it, and provide us with your feedback or any issues you encounter. If you installed the alpha version you'll need to do a complete uninstall and reinstall before using the beta version. The beta version will upgrade over 1.4.0.

The beta release includes some minor bug fixes and documentation updates. The major change is that the log_params procedure has been dropped (I had hinted that this may happen in my previous post). Instead, to remain consistent, a fourth parameter has been added to the major log procedures to support a parameters array. For more information, see the documentation regarding p_params.
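As a sketch of what the change looks like in practice (the scope and parameter names here are illustrative, following the p_params examples elsewhere on this blog):

```sql
declare
  l_params logger.tab_param;
begin
  -- Build the name/value array...
  logger.append_param(l_params, 'p_empno', 123);
  -- ...and pass it as the 4th parameter instead of calling log_params
  logger.log('START', 'my_proc', null, l_params);
end;
/
```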

If you have any feedback, suggestions, or issues please use the project's issues page. If you don't already have a GitHub account, you'll need to create one (it's free). If you're curious as to the direction of this project, please look at the issues page and you'll see what we have planned and what others have requested.

This release will be the last major release before officially launching Logger 2.0.0. Once it's released I'd like to start working on some of the feature requests that have been suggested.

Special thanks to Mark Wagoner for helping test and provide some additional feedback on the latest release.

Monday, April 15, 2013

Logger 2.0.0 Alpha

Logger is a PL/SQL logging and debugging framework. It's used in many organizations to instrument code in their Oracle applications. Tyler Muth created Logger a few years ago and has since released several upgrades, the last being 1.4.0.

I contacted Tyler a few months ago about the possibility of working together to upgrade Logger and expand its capabilities. Since then we've been busy working on Logger in our spare time (which isn't much these days). That being said, I'm pleased to announce that Logger 2.0.0.a01 (Alpha) is now available to download!

Besides some underlying changes, such as moving to GitHub and a new build script, the following things were added to Logger:
  • Upgrade process: Can upgrade Logger rather than having to uninstall the old version and install the new version.
  • Log parameters: Standardize the logging of parameters when entering a procedure or function. You won't need to convert the data format as the procedure is overloaded to handle many different object types.
  • Logger level by Client Identifier: This is by far one of the features I think most people wanted in Logger. You can now set the logging level based on a Client Identifier.
  • Other: There are a few other features we may get into this 2.0.0 release. If not, they will be added in future releases.
You can read more about the new features, along with demo snippets on Logger's new home page: https://github.com/tmuth/Logger---A-PL-SQL-Logging-Utility The new home page contains all the help text and sample code.

What next? Well, we've been busy developing Logger; now we're looking to the community to help test it. Please download it, install or upgrade, and give it a try. If you find any issues please post them on this simple issue logging form.

Don't forget that this is an alpha release and things may change before the final 2.0.0 release. One thing I'm still up in the air on is how we implement log_params, so I highly suspect some of that syntax will change in the upcoming weeks.

When will this be officially released? Calendar year 2013 (sound familiar ;-). In all honesty I hope that we can release this by the end of May. Of course the release date all depends on the feedback we get back from you, so please help test!

If you want to follow updates on this project, I'll be labeling all Logger related posts with "logger".

Saturday, April 6, 2013

Console Wrapper 1.0.4

Console Wrapper 1.0.4 is officially out. You can download it from the project page: https://code.google.com/p/js-console-wrapper/

The only change for this release was to resolve an IE 9/10 issue which I previously blogged about.

If you're using any of the ClariFit plugins you may want to update the $console_wrapper.js file that's included with each plugin. I'll be sure to include the updated file in each plugin's next update.

Friday, March 29, 2013

Oracle Instant Client on Mac OS X

A while back I caved to the peer pressure in the APEX community (you know who you are ;-) and bought a MacBook Air. I'm still learning the ropes, and one thing that took some time to get working was the Oracle Instant Client. Installing it isn't as straightforward as installing the Instant Client on Windows.

I finally got it working and thought I'd post what I did for others that are new to OS X.

Download Instant Client

First you'll need to download the Instant Client for OS X. I downloaded the following files for Version (64-bit):

- instantclient-basic-macos.x64-
- instantclient-sqlplus-macos.x64-

Unzip both files and put their contents in /oracle/instantclient_11_2/

Setting Paths

You'll need to set the appropriate paths to load by default in Terminal. Open Terminal and run:
vi ~/.bash_profile
Add the following to the file:
#in ~/.bash_profile
export ORACLE_HOME=/oracle/instantclient_11_2
export DYLD_LIBRARY_PATH=$ORACLE_HOME
export PATH=$ORACLE_HOME:$PATH

#Create this directory and put your tnsnames.ora file in it
export TNS_ADMIN=~/oracle/network/admin
Running SQL*Plus

Now when you open a new Terminal window you should be able to run SQL*Plus:
#ex: sqlplus username/password@//server:port/sid
sqlplus system/oracle@//localhost:1521/XE
#or use a connection string leveraging a tnsnames entry

Additional Addons

When using SQL*Plus on Windows you can use the Up arrow to recall your previous command. If you do that on Linux or OS X you'll get some weird characters instead. The good news is that there's a program called rlwrap that resolves this. CJ Travis has a good post on how to install rlwrap on Mac OS X.

Wednesday, March 27, 2013

Console Wrapper 1.0.4 - Beta

Recently Dimitri Gielis sent me an email letting me know that there was a bug with Console Wrapper 1.0.3. He also included the following screenshot with the error:

In case you can't see it, the error is "Object doesn't support property or method 'apply'". It turns out that the issue was caused by a change in IE9 and IE10 in how they handle the console object. The following article describes the issue in detail: http://whattheheadsaid.com/2011/04/internet-explorer-9s-problematic-console-object

I've fixed Console Wrapper to handle the issues presented in IE9 and IE10. Before I officially release 1.0.4 on https://code.google.com/p/js-console-wrapper/ I'd like some people to beta test it. If you're interested in testing 1.0.4 you can download it here. If you find any issues please send me an email (my email address is in the comments section at the top).
Update: 6-Apr-2013: 1.0.4 has been officially released and is now available to download on the official project page.  

Thanks in advance.

Sunday, March 24, 2013

And the Winner Is...

Two weeks ago I posted a question/contest: Who's Birthday is it in "n" number of days? I received 63 answers (within the time limit), all with some interesting and unique approaches. Here's how I went about picking the winners:

Leap Year

One thing that most people got wrong was what happens if the hire date falls on Feb 29th (in a leap year) while the current year is not a leap year. I could have searched online for some recent leap years; instead I decided to write a query to find them.
-- Find past leap years
-- https://forums.oracle.com/forums/thread.jspa?threadID=1128019
select yr,
  case
    -- full Gregorian rule: divisible by 4, except centuries not divisible by 400
    when mod(yr,4) = 0 and (mod(yr,100) != 0 or mod(yr,400) = 0) then 'Leap Year'
    else null
  end leap
from (
  select extract(year from add_months(sysdate, -8*12)) + level - 1 yr
  from dual
  connect by level <= 8
);
Test Data

Instead of testing against the main sample EMP table I recreated the EMP table with all the dates in a given leap year (2008). I kept the same structure as the EMP table so that it would be easy to test your solutions against.
drop table emp;

-- create mock emp table with all the dates (including the leap year day)
create table emp as
select level empno, level ename, level job, level mgr, to_date('01-01-2008', 'DD-MM-YYYY') + level - 1 hiredate, level sal, level comm, level deptno
from dual
connect by level <= 366;
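A quick sanity check (my own addition, not part of the original test script) that the mock data actually includes the leap day:

```sql
-- Should return exactly one row: the employee "hired" on 29-Feb-2008
select empno, hiredate
from emp
where to_char(hiredate, 'DD-MM') = '29-02';
```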
Picking the winners and testing solutions:

Since we had 63 entries I wasn't going to test them all. Instead I decided to randomly pick solutions (based on the entry number) and validate their solution. If they were correct I'd mark them as a winner. If not, I'd move on to the next entry until I found 2 winners.

Here's how I picked the random entries:
-- Choose random winners (upper bound of 64 so that entry 63 can also be picked)
select trunc(dbms_random.value(1, 64))
from dual;
To test the solutions I set my current SYSDATE to 1-Feb-2013 so that the queries would have to handle people hired on 29-Feb (even though 29-Feb doesn't exist in 2013).
ALTER SYSTEM SET fixed_date='2013-02-01-14:10:00';
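When you're done testing, remember that FIXED_DATE is an instance-wide setting; put SYSDATE back on the real clock with:

```sql
-- Return SYSDATE to the system clock
ALTER SYSTEM SET fixed_date = NONE;
```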

The winners are Erik van Roon (aka "Belly") and Iudith Mentzel. Please email me (my email address is in the top right corner of the blog) and I will forward your information to Red Gate so that you can claim your prize.

Thanks for all the entries and thanks again to Red Gate for offering up the prizes (licenses to their new product: Source Control for Oracle). If you haven't already seen what Source Control for Oracle is or want to download a free trial I encourage you to check it out.

Thursday, March 14, 2013

Kscope 13

Taking a page from Simon Sinek's famous TED talk, I won't focus on the How or What of Kscope 13, but rather the Why.

So why does Kscope exist? Its goal is to bring together Oracle developers from around the world and offer them a unique learning and networking experience unlike any other conference.

I've been to many other conferences, Oracle related or otherwise, and none offer the experience that you will get at Kscope. That being said, there are still 12 days to take advantage of the Early Bird rate, so if you haven't already signed up I encourage you to do so!

For those that prefer more of the How and What about Kscope 13, check out the website: kscope13.com

Monday, March 11, 2013

Who's Birthday is it in "n" number of days?

A friend of mine recently asked me the following question: "I have a table with names and birthdays. How do I find whose birthday is coming up in the next 15 days?" Initially it appears to be a simple question, but it's a bit more complex than I originally thought.

I was going to post my solution here, then James Murtagh, from Red Gate, offered me two five-user licenses for their new product called Source Control for Oracle to give away to the readers of this blog. Instead, I'm going to run an informal contest to give away these sets of licenses (each valued at $1475). As with all contests, please read the terms and conditions from Redgate.

So here's the question (similar to my friend's question about birthdays, but on the common "EMP" table): Suppose that I'm the HR manager and am planning to recognize the anniversary of the date each employee was hired. I'd like to know all the employees whose anniversary hire date is in the next 30 days.

Please post your solution in the comments section below. Every answer with a correct solution will have their name entered into the draw. Answers must be submitted by end of day on Friday March 15th. I'll announce the winners next week.

- Use SYSDATE for today's date. I'll just alter the FIXED_DATE setting in Oracle to set the SYSDATE value for my testing.
- Write your query for the default EMP table. If you don't have the EMP table in your schema this article contains the scripts to generate it.

I'm looking forward to everyone's solutions!

Update: Please read the follow up post to see how I tested this solution.

Wednesday, January 30, 2013

Embedding Files within APEX: Where things go wrong

This is the final part of a three part series on embedding files in your APEX application. You should read all the articles (Part 1 and Part 2) before deciding whether or not to leverage this feature in APEX as it has its pros and cons.

The previous posts in this series covered how to upload, reference, and create installation scripts for embedded files within APEX. This post will cover some caveats that people may not be aware of when using embedded files within APEX.

I already hinted in the first post of this series that there are several reasons why you may not want to embed files within APEX. Besides performance issues, there are situations in which you'll get some unexpected behavior. I've included examples of some of these issues below. Each example assumes that you have a static file (test.js) included in the base application and that it's also in the installation scripts.

Copying an Application 

A lot of times people copy an application within a workspace to back it up before a big change, or to run some one-off tests. If you copy an application, despite selecting Yes for "Copy Supporting Objects Definitions", the application-specific files are not copied.

In this case you'll need to manually run Supporting Objects > Install Supporting Objects to install the file (assuming the installation script has the latest version of the file). But wait: when you do that it actually removes the file from the original application and installs it in the new one (i.e. it no longer exists in the application that you copied from).

Deleting a Copied Application

If you copied an application and selected "Yes" for "Copy Supporting Object Definitions" we know from the previous example that it doesn't install any embedded files. If you now delete the newly copied application and check off "Deinstall Supporting Objects" it will also remove any files from the base application that you copied from.


If embedding files in your APEX application sounds confusing don't worry, it is. I had to do many test cases while writing this blog to confirm some of the scenarios.

If you've read through the entire three part series on embedding files in your APEX application you'll notice that it can be very confusing and cumbersome. This can cause a lot of unexpected issues in the long run.

As much as possible I avoid embedding files in APEX and instead store them on a web server. To me, it's just not worth the hassle. Of course there are exceptions: if I can't get access to the web server, or if the application is to be bundled as a standalone application, then it makes sense.

Embedding Files within APEX: Supporting Objects

This is part two of a three part series on embedding files in your APEX application. You should read all the articles (Part 1) before deciding whether or not to leverage this feature in APEX as it has its pros and cons.

A major misconception when uploading a file into APEX is that it will be included when you export the application. Though I agree that would make the most sense, unfortunately it doesn't work this way. You need to explicitly include an installation script that copies the file into the workspace the application is imported into. Sound confusing? Unfortunately it is.

In the previous post I uploaded a file into an application. If I were to export then import this application into another workspace that file wouldn't be included. To explicitly include the file you need to create a Supporting Object installation script:

- Go to Application > Supporting Objects > Installation Scripts and click the Create button.
- At the bottom, under the Tasks heading, select "Create Scripts to Install Files".
- Select the file that is required for the application and click the Create Script button.

At this point in time, APEX will base64 encode the file (in its current state) and create a PL/SQL installation script that can be run when you import the application. If you change the file (in Shared Components > Static Files) it has no impact on the installation script (i.e. the script is a snapshot of the file at that point in time).

Not updating the installation script with the latest version of the file is where a lot of issues occur when migrating the application between environments. After someone initially creates the installation script, there's a high probability that the base file will change and no one will remember to update the script. This is one of the main reasons why I don't usually recommend embedding files in my applications.

Part 3 of the series will cover some issues with embedding files in an application.


This is the first of a three part series on embedding files in your APEX application. You should read all the articles before deciding whether or not to leverage this feature in APEX as it has its pros and cons.

APEX allows you to embed static web files (CSS, JS, images, etc) into your application. This functionality removes the need to store web files on a web server which is required for some applications.

To upload your file into the application go to Shared Components > Static Files. Click the Create button. On the Create page you can upload a file and either associate the file to a specific application or no application.

Files associated to a specific application must have a unique filename within its parent application. It can then be referenced (most likely in a page template) using the #APP_IMAGES# substitution string. Ex: #APP_IMAGES#test.js

Files that are not associated with a specific application are available to all the applications within the workspace and can be referenced (most likely in a page template) using the #WORKSPACE_IMAGES# substitution string. Ex: #WORKSPACE_IMAGES#test.js

Files that are added to the application aren't stored on the web server. They are stored in the database. For high traffic applications this may not be a great idea and you may want to look at storing them on a web server.

Part 2 of the series will cover how to include the files in an installation script.

Tuesday, January 29, 2013

Oracle Magazine - On Code Reviews and Standards

Last year at Kscope 12 in San Antonio I gave a different kind of talk than I normally do. It was called "Building a Better Team" and focused on the importance and value of doing peer reviews and having coding standards.

It just so happened that Jeff Erickson, from Oracle Magazine, was in attendance. Following my talk Jeff asked me to do an interview for the Community: Up Close section of the magazine. The article, On Code Reviews and Standards was published this month! I've also included the video interview below.

If you don't already have a peer review process implemented in your organization I hope this article will help motivate you to start one.

Monday, January 28, 2013

ODTUG Marketing Committee

In case you missed it on the ODTUG blog, Bambi Price and I are starting a new Marketing Committee and are looking for some volunteers to join. All the information can be found in this article.

If you're looking to get involved with the ODTUG community and/or are passionate about marketing, this is an excellent opportunity for you. I look forward to working on this committee and helping with ODTUG's marketing efforts.