Thursday, May 21, 2015

APEX Meetups - Calgary

One of the popular things these days in the APEX community is to have localized APEX Meetups. These are usually monthly meetings where one or a few people give a presentation about APEX. They're a lot more informal than traditional conferences and help the community grow in a given area.

I recently launched the Calgary APEX Meetup group: ORCLAPEX-YYC. If you live in or around Calgary and are interested in getting together once a month to hear more about APEX, join this group! The plan is to host our first Meetup in early June, and we're currently looking for a venue to host us. If you can provide some space, please leave a comment on the discussion board.

We want to keep these events free, so we're always looking for sponsors to help cover some of the small costs. If you or your organization is interested, please leave a comment on the discussion board.

Monday, May 18, 2015

SQLcl and login.sql

SQL*Plus can be configured to automatically run login scripts, which are typically used to set up your session. These scripts are stored in the files glogin.sql and login.sql. A typical statement you'd find in them is: set serveroutput on. You can read more about login scripts here.

SQLcl also handles login scripts, however it may require some additional configuration. The easiest way to leverage this is to create a file called login.sql in your current directory, then call SQLcl from that directory.

If you're like me and launch SQLcl from many directories, that approach won't work. Instead, you can create one login.sql file and have it referenced automatically. To do this, create login.sql in a folder (in my case it was ~/Documents/Oracle) and then add the following to ~/.bash_profile:   export SQLPATH=~/Documents/Oracle/

Now when you run SQLcl anywhere it will reference your login.sql file.
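For reference, here's a minimal login.sql sketch. The specific settings below are common examples I'm using for illustration, not something SQLcl prescribes, so adjust to taste:

```sql
-- login.sql: run automatically by SQL*Plus / SQLcl at startup
-- (example settings only; use whatever suits your sessions)
set serveroutput on size unlimited
set timing on
set linesize 200
set pagesize 100
```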

Thanks to Kris Rice and Jeff Smith for helping with this.

Friday, May 15, 2015

Help Promote Logger

Here's a snippet from a Twitter conversation today between Steven Feuerstein and me about a user who created their own PL/SQL logging platform:

Steven is absolutely correct when he says "You haven't done a good enough job promoting it" ("it" being Logger). Despite my best efforts, promoting Logger is something I've struggled with and need some help.

Based on past emails and conversations, I know there are a lot of developers and organizations using Logger. If you're using Logger, either personally or within your organization, I need your help. Please blog about it (and post the link in the comments section below), email me a quote that I can use, tweet about it (use #orcllogger), or mention it at the next conference you attend.

To those that help promote it, thank you in advance.

- Martin

Thursday, May 14, 2015

APEX Docs for iBooks

I'm not sure if this is news or not, but you can download the APEX documentation in ePub format, which can be loaded into iBooks.

To download the eBook, go to the APEX documentation page, click the download button for the appropriate doc, and select ePub as the format.

Here's a screenshot of the APEX API docs in iBooks. I still prefer the online / HTML format, but having the docs in iBooks helps when you don't have an internet connection.

Wednesday, May 13, 2015


BRANCH_TO_PAGE_ACCEPT

In APEX, there's a special request type called BRANCH_TO_PAGE_ACCEPT. This can be used in the REQUEST portion of the APEX URL. The syntax for the REQUEST attribute is "BRANCH_TO_PAGE_ACCEPT|<request>", where <request> is the request value the target page is submitted with.

For example, suppose you are on Page 1 (P1) and want a button which will:
  - Go to Page 2 (P2)
  - Set P2_X to some_value
  - Automatically submit the page, simulating the SAVE button on P2

Create a button on P1 that redirects to the following URL: f?p=&APP_ID.:2:&APP_SESSION.:BRANCH_TO_PAGE_ACCEPT|SAVE:&DEBUG.:2:P2_X:some_value
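Breaking that URL down against the standard f?p positional syntax (App:Page:Session:Request:Debug:ClearCache:itemNames:itemValues):

```
f?p=&APP_ID.                      App        -- current application
    :2                            Page       -- target Page 2
    :&APP_SESSION.                Session    -- current session
    :BRANCH_TO_PAGE_ACCEPT|SAVE   Request    -- submit P2 with request SAVE
    :&DEBUG.                      Debug      -- current debug setting
    :2                            ClearCache -- clear cache on Page 2
    :P2_X                         itemNames  -- item(s) to set
    :some_value                   itemValues -- value(s) to assign
```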

The current documentation for BRANCH_TO_PAGE_ACCEPT states that "Using BRANCH_TO_PAGE_ACCEPT is the same as navigating to page 1, entering a value into the item P1_DATA, and clicking a button that submits the page with a SAVE request." You may interpret this as:

  - Run Page 2
    - Therefore load/run the page rendering regions and processes
  - Simulate clicking the Save button on P2
  - Run the Page Processing of P2

This is not entirely true as the following occurs when a user goes to the above URL:

  - Set P2_X to some_value (this is regular APEX URL behaviour)
  - Execute the Page Processing portion of P2.
    - This includes Validations, Processes, and Page branching on P2.

What does not occur is the Page Rendering portion of P2. This is important to note because any Pre-Rendering page processes (i.e. processes that run on page load) will not be run. This is slightly contrary to the documentation's statement that it is "... the same as navigating to Page 2 ...".

If everything runs smoothly in the P2 page processing (i.e. no errors and no failed validations), you will branch to the page defined by the page branch on P2. If something does fail, you will be shown P2 along with the corresponding error message.

It is important to note that currently the page item built-in validations are not executed when using BRANCH_TO_PAGE_ACCEPT. For example, suppose that you set Value Required attribute to Yes for P2_X.

If you run P2 normally, set P2_X to an empty value, and then submit the page, an error will appear stating that "X must have some value". This happens automatically because of the Value Required attribute. However, if you use the URL f?p=&APP_ID.:2:&APP_SESSION.:BRANCH_TO_PAGE_ACCEPT|SAVE:&DEBUG.:2:P2_X: it will set P2_X to null and no error will occur, since the page item's built-in validations aren't executed.

Tuesday, May 12, 2015

APEX_JSON.PARSE Issue with CLOBs and 11g

Update: this issue is documented on the APEX 5 Known Issues page (bug 20974582) and has a patch for users with access to MOS (bug 20931298).

APEX 5.0 comes with some new APIs including APEX_JSON which manages JSON files.

When using APEX_JSON to consume a JSON file, the process is fairly straightforward:
- Parse: This parses the JSON and stores it in a record type.
- Read values in the JSON object by providing the path to each name/value pair.

The APEX_JSON documentation has some great examples which cover the above process in more detail.
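As a minimal sketch of that two-step flow (the JSON document and path below are made-up examples; apex_json.parse and apex_json.get_varchar2 are the documented calls):

```sql
declare
  l_values apex_json.t_values;
begin
  -- Step 1: parse the JSON document into the t_values record
  apex_json.parse(
    p_values => l_values,
    p_source => '{"employee":{"name":"Martin"}}');

  -- Step 2: read a value by providing its path
  dbms_output.put_line(
    apex_json.get_varchar2(
      p_values => l_values,
      p_path   => 'employee.name'));
end;
/
```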

There is one "bug" that I found with APEX_JSON.PARSE when passing in a CLOB with more than 8191 characters. The reason is that APEX_JSON.PARSE uses DBMS_LOB.SUBSTR behind the scenes, which has a known bug in 11gR1, 11gR2, and 11g XE. From 12c onwards, this is not an issue. An example of the error that is raised is:

ORA-20987: Error at line 626, col19: Expected ":", seeing "<varchar2>"

ORA-06512: at "APEX_050000.WWV_FLOW_JSON", line 292
ORA-06512: at "APEX_050000.WWV_FLOW_JSON", line 560
ORA-06512: at "APEX_050000.WWV_FLOW_JSON", line 756
ORA-06512: at "APEX_050000.WWV_FLOW_JSON", line 774
ORA-06512: at "APEX_050000.WWV_FLOW_JSON", line 624
ORA-06512: at "APEX_050000.WWV_FLOW_JSON", line 719
ORA-06512: at "APEX_050000.WWV_FLOW_JSON", line 615
ORA-06512: at "APEX_050000.WWV_FLOW_JSON", line 758
ORA-06512: at "APEX_050000.WWV_FLOW_JSON", line 774
ORA-06512: at "APEX_050000.WWV_FLOW_JSON", line 811
ORA-06512: at "APEX_050000.WWV_FLOW_JSON", line 887
ORA-06512: at line 24

The APEX_JSON.PARSE procedure currently has two supported overloads: one that takes a varchar2 and one that takes a CLOB. Thankfully, there's a third (undocumented) overload which takes in a table of varchar2. Note: in future versions of APEX, this may become a supported procedure.

Here's a sample snippet showing how to convert a CLOB to a varchar2 table and then call APEX_JSON.PARSE:

declare
  l_clob clob;
  l_json apex_json.t_values;
  l_clob_tab apex_application_global.vc_arr2;

  -- Keep chunks below 8191 due to the dbms_lob.substr bug
  c_max_vc2_size constant pls_integer := 8100;
  l_offset pls_integer := 1;
  l_clob_length pls_integer;
begin
  l_clob := 'load json data here';

  -- Convert the CLOB to a table of varchar2
  l_clob_length := dbms_lob.getlength(l_clob);

  while l_offset <= l_clob_length loop
    l_clob_tab(l_clob_tab.count + 1) :=
      dbms_lob.substr(
        lob_loc => l_clob,
        amount  => least(c_max_vc2_size, l_clob_length - l_offset + 1),
        offset  => l_offset);

    l_offset := l_offset + c_max_vc2_size;
  end loop;

  -- Parse the varchar2 table as JSON
  apex_json.parse(
    p_values => l_json,
    p_source => l_clob_tab,
    p_strict => true);

  -- You can now use the JSON values in l_json
end;
/
Note: the code to convert the CLOB to the table will be available soon as part of the new OraOpenSource OOS_UTILS project.

Thanks to Christian Neumüller for helping resolve this issue.