Obtaining Database PSU history from SQL

With Oracle 12c it is now possible to query patch information directly from SQL.

This is done using the DBMS_QOPATCH package to fetch the output from OPatch as an XML document. We then process this document with XMLTABLE() to map the attributes to columns, and join to DBA_REGISTRY_SQLPATCH. Here’s an example of the output:

  PATCH_ID BUNDLE_SERIES   DESCRIPTION                 PATCH_DATE
---------- --------------- --------------------------- -----------
  24732082 PSU             DATABASE PATCH SET UPDATE   17-JAN-2017

The query to produce this is shown below. There are additional columns available in DBA_REGISTRY_SQLPATCH that you might also want to display.

with a as (select dbms_qopatch.get_opatch_lsinventory patch_output from dual)
select x.patch_id
     , s.bundle_series
     , s.description
     , to_char(to_date(regexp_substr(s.description,'.[0-9]{6}'),'.YYMMDD'),'DD-MON-YYYY') patch_date
  from a
     , xmltable('InventoryInstance/patches/patch'
           passing a.patch_output
           columns
              patch_id     number       path 'patchID',
              patch_uid    number       path 'uniquePatchID',
              description  varchar2(80) path 'patchDescription',
              rollbackable varchar2(8)  path 'rollbackable'
       ) x
     , dba_registry_sqlpatch s
 where x.patch_id = s.patch_id
   and x.patch_uid = s.patch_uid
 order by patch_date;


This article was first published on http://www.dba-resources.com.

Oracle October 2015 Critical Patch Update issues

After applying the October 2015 Critical Patch Update to our databases (on IBM AIX), we discovered a number of invalid objects within the XDB and GSMADMIN_INTERNAL schemas:

select owner, object_name, status, object_type from dba_objects where status = 'INVALID' order by 1;

OWNER                OBJECT_NAME                    STATUS  OBJECT_TYPE
-------------------- ------------------------------ ------- -----------------------
XDB                  DBMS_XMLPARSER                 INVALID PACKAGE BODY
XDB                  DBMS_XDB_ADMIN                 INVALID PACKAGE BODY
XDB                  DBMS_CLOBUTIL                  INVALID PACKAGE
XDB                  DBMS_CLOBUTIL                  INVALID PACKAGE BODY

The issue can be resolved by granting the missing permissions needed for successful compilation:

grant execute on DBMS_SQL to xdb;
grant execute on DBMS_LOB to xdb;
grant execute on UTL_FILE to xdb;
grant execute on utl_inaddr to GSMADMIN_INTERNAL;

Finally, recompile the invalid objects using the $ORACLE_HOME/rdbms/admin/utlrp.sql script included in your installation.


Obtaining Database PSU history from SQL

It is possible to determine all Patch Set Updates that have been applied to a database using a simple SQL statement – but note that this will only retrieve the details of patches that contained a database element (i.e. you had to run a SQL script as part of applying the patch).

Note: It is better to use the opatch utility to retrieve the details of all patches and patchsets applied to an environment. However, this query may still prove useful.

col Time for a18
col target for a30
col action for a30

select to_char(action_time,'DD-MON-YYYY HH24:MI:SS') time
     , namespace||' '||version target
     , action||' '||comments action
  from dba_registry_history;


How to determine flashback database progress

After you have issued the FLASHBACK DATABASE command how can you monitor the progress of the command, and more importantly, how can you predict when it will complete?

With Oracle 11g this information is now recorded in the v$session_longops view, so we can monitor the progress of the flashback operation from another session very simply.

For example:

col opname for a30
col complete for a8
alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS' ;

select opname
     , round(sofar/totalwork*100)||'%' complete
     , start_time
     , (sysdate+time_remaining/86400) predicted_finish
  from v$session_longops
 where time_remaining > 0;

OPNAME                         COMPLETE START_TIME           PREDICTED_FINISH
------------------------------ -------- -------------------- --------------------
Flashback Database             97%      23-MAR-2015 10:23:49 23-MAR-2015 10:26:34
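The PREDICTED_FINISH column is simply the current time plus time_remaining seconds; dividing by 86400 converts seconds into the fractional days that Oracle date arithmetic works in. A shell sketch of the same arithmetic, using a hypothetical 165-second remainder against the sample row’s start time:

```shell
# Hypothetical figures: treat "now" as the sample row's start time and
# assume v$session_longops reported 165 seconds of work remaining
start_time="2015-03-23 10:23:49"
time_remaining=165

# sysdate + time_remaining/86400, done in epoch seconds
now=$(date -d "$start_time" +%s)
LC_ALL=C date -d "@$((now + time_remaining))" '+%d-%b-%Y %H:%M:%S'
```

This prints 23-Mar-2015 10:26:34, matching the PREDICTED_FINISH in the sample output.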

Unfortunately, this information is not present in v$session_longops in Oracle 10g.


CRS: pullup() dependency test


We were recently creating some custom CRS resources that needed to start up and shut down automatically, following the primary database in a DataGuard configuration. To achieve this, we were going to add start, stop and pullup dependencies between the database service resource and our custom resource.

However, the behaviour of the pullup dependency as described in the Oracle documentation is counter-intuitive and so I decided to do some testing around it to see the results and reassure myself that the documentation was correct.

In short, the documentation states that if resource A has a START_DEPENDENCIES=pullup(B) dependency then resource B will pull up resource A (i.e. starting B will start A).

This test confirms whether this is indeed the case.

1. Create the test scripts

Firstly, we need to create the action scripts that will be used by our test resources. Let’s create a generic action script which can start, stop and check a given script/command:

File: /oracle/dbadmin/generic_action.ksh

[code language="bash"]
#!/bin/ksh
# ----------------------------------------------------------------
# A generic action script for use with custom CRS resources
# which will perform a start/stop/check on the given script/cmd.
# syntax: generic_action.ksh <action> <cmd>
# NOTE: remember to quote <cmd> when calling the script if the
# command contains spaces.
# ----------------------------------------------------------------
ACTION=$1
CMD=$2
LOG_DIR=${LOG_DIR:-/tmp}   # adjust as required

PID1=`ps -ef | grep "${CMD}" | grep -v generic_action | grep -v grep | awk '{ print $2 }'`
BASENAME=`basename ${CMD} 2>/dev/null`

case ${ACTION} in
  start)
    if [ "${PID1}" != "" ] ; then
      echo "${BASENAME}: already running"
    else
      umask 002
      nohup ${CMD} >${LOG_DIR}/${BASENAME}.log 2>&1 &
      echo "${BASENAME}: started"
    fi
    exit 0
    ;;
  stop)
    if [ "${PID1}" != "" ] ; then
      kill -9 ${PID1} && echo "${BASENAME} daemon killed"
    else
      echo "${BASENAME}: no running process!"
    fi
    exit 0
    ;;
  check)
    if [ "${PID1}" != "" ] ; then
      echo "running"
      exit 0
    else
      echo "not running"
      exit 1
    fi
    ;;
  *)
    echo "Usage: "`basename $0`" start|stop|check <cmd>"
    exit 1
    ;;
esac
[/code]


Now we need a process that can be safely started and stopped, but which doesn’t do anything exciting. The sleep command is a good candidate, so let’s put that in a script.

File: /oracle/dbadmin/sleep_test.ksh

[code language="bash"]
#!/bin/ksh
sleep 86400
[/code]

We are going to have two CRS application resources, TEST_ONE and TEST_TWO. Each resource will need its own action script (which is very simple, as we’re utilizing the generic_action.ksh script above).

When calling sleep_test.ksh we append a second parameter which is not used by the script, but helps us identify which sleep command belongs to which resource.

File: /oracle/dbadmin/test_one.ksh

[code language="bash"]
#!/bin/ksh
/oracle/dbadmin/generic_action.ksh $1 "/oracle/dbadmin/sleep_test.ksh TEST_ONE"
exit $?
[/code]

File: /oracle/dbadmin/test_two.ksh

[code language="bash"]
#!/bin/ksh
/oracle/dbadmin/generic_action.ksh $1 "/oracle/dbadmin/sleep_test.ksh TEST_TWO"
exit $?
[/code]

2. Create the CRS resources

Now that we have all the necessary scripts to start and stop our test processes, we need to register them as resources in CRS.

[code language="bash"]
crsctl add resource ROB_TEST1 -type application -attr "ACTION_SCRIPT=/oracle/dbadmin/test_one.ksh,AUTO_START=never,PLACEMENT=restricted,HOSTING_MEMBERS=`hostname`"

crsctl add resource ROB_TEST2 -type application -attr "ACTION_SCRIPT=/oracle/dbadmin/test_two.ksh,AUTO_START=never,PLACEMENT=restricted,HOSTING_MEMBERS=`hostname`"
[/code]

3. Test the pullup() dependency behaviour

Let’s add a startup dependency to ROB_TEST1 which references ROB_TEST2:

[code language="bash"]
crsctl modify resource ROB_TEST1 -attr "START_DEPENDENCIES='pullup:always(ROB_TEST2)'"
[/code]

And test to see whether starting ROB_TEST1 starts ROB_TEST2, or vice-versa.

[code language="bash"]
crs_stat -t | grep ROB
[/code]
[code language="bash"]
crsctl start resource ROB_TEST1

crs_stat -t | grep ROB

ROB_TEST1 application ONLINE ONLINE wycloram001
[/code]

So at this point, we have confirmed that starting ROB_TEST1 does not “pullup” ROB_TEST2. Let’s shut down the resource and try it in reverse.

[code language="bash"]
crsctl stop resource ROB_TEST1

crs_stat -t | grep ROB
[/code]


And this time try starting ROB_TEST2:

[code language="bash"]
crsctl start resource ROB_TEST2

crs_stat -t | grep ROB

ROB_TEST1 application ONLINE ONLINE wycloram001
ROB_TEST2 application ONLINE ONLINE wycloram001
[/code]

So starting ROB_TEST2 does “pullup” ROB_TEST1.


So creating resource A with -attr "START_DEPENDENCIES='pullup(B)'" will cause resource A to be started by resource B, not the other way around.

Personally, I find this counter-intuitive, and would have preferred pullup() to have been called pullup_by() or start_with().

Clean up

Finally, we need to clean up after ourselves and remove all resources and files created.

[code language="bash"]
crsctl stop resource ROB_TEST1
crsctl stop resource ROB_TEST2

crsctl delete resource ROB_TEST1
crsctl delete resource ROB_TEST2

rm /oracle/dbadmin/generic_action.ksh
rm /oracle/dbadmin/test_one.ksh
rm /oracle/dbadmin/test_two.ksh
rm /oracle/dbadmin/sleep_test.ksh

crs_stat -t | grep ROB
[/code]


Finding the origin of failed login attempts

This guide is intended to aid in establishing the origin of failed database login attempts. Most of the time these failed attempts are due to typos or outdated application configurations, and are therefore not malicious. However, because default password management policies set a limit on failed login attempts, they often cause accounts to become locked out for legitimate users.

Note: An ACCOUNT_STATUS of “LOCKED” (in DBA_USERS) means the account was locked manually by the DBA, whereas “LOCKED(TIMED)” indicates the account was locked due to the number of failed login attempts being exceeded (as defined by FAILED_LOGIN_ATTEMPTS in the profile in effect for the user).

The following options are available for collecting information on the origin of failed connection attempts (in order of simplicity):

1. Using database auditing (if already enabled)

Caveat: This is the simplest method of determining the source of failed login attempts, provided auditing is already enabled on your database, as the information has (probably) already been captured. However, if auditing is not enabled, then enabling it will require a database restart, in which case this option is no longer the simplest!

Firstly, check to see whether auditing is enabled and set to “DB” (meaning the audit trail is written to a database table).

show parameter audit_trail

If it is not set, you will need to set AUDIT_TRAIL, restart the database, and then enable auditing of unsuccessful logins as follows:

audit session whenever not successful;

The audit records for unsuccessful logon attempts can then be found as follows:

col ntimestamp# for a30 heading "Timestamp"
col userid for a20 heading "Username"
col userhost for a15 heading "Machine"
col spare1 for a15 heading "OS User"
col comment$text for a80 heading "Details" wrap

select ntimestamp#, userid, userhost, spare1, comment$text from sys.aud$ where returncode=1017 order by 1;

Sample output:

Timestamp                      Username             Machine         OS User
------------------------------ -------------------- --------------- ---------------
08-DEC-14 PM         APPUSER              unix_app_001    orafrms
Authenticated by: DATABASE; Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.218.

08-DEC-14 PM         APPUSER              unix_app_001    orafrms
Authenticated by: DATABASE; Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.218.

Note: the USERHOST column is only populated with the client host machine name from 10g onwards; in earlier versions it held the numeric instance ID of the Oracle instance from which the user was accessing the database in a RAC environment.

2. Use a trigger to capture additional information

The following trigger code can be used to gather additional information about unsuccessful login attempts and write it to the database alert log. It is recommended to integrate this code into an existing trigger if you already have one for this triggering event.

CREATE OR REPLACE TRIGGER failed_logon_alert
AFTER SERVERERROR ON DATABASE
DECLARE
  l_message varchar2(2000);
BEGIN
  -- ORA-1017: invalid username/password; logon denied
  IF (IS_SERVERERROR(1017)) THEN
    select 'Failed login attempt to the "'|| sys_context('USERENV','AUTHENTICATED_IDENTITY') ||'" schema'
        || ' using ' || sys_context('USERENV','AUTHENTICATION_TYPE') ||' authentication'
        || ' at ' || to_char(logon_time,'dd-MON-yy hh24:mi:ss')
        || ' from ' || osuser ||'@'|| machine ||' ['|| nvl(sys_context('USERENV','IP_ADDRESS'),'Unknown IP') ||']'
        || ' via the "' || program || '" program.'
      into l_message
      from sys.v_$session
     where sid = to_number(substr(dbms_session.unique_session_id,1,4),'xxxx')
       and serial# = to_number(substr(dbms_session.unique_session_id,5,4),'xxxx');
    -- write to the alert log
    sys.dbms_system.ksdwrt(2, l_message);
  END IF;
END;
/

Some sample output from the alert.log looks like:

Tue Jan 06 09:45:36 2015
Failed login attempt to the "appuser" schema using DATABASE authentication at 06-JAN-15 09:45:35 from orafrms@unix_app_001 [] via the "frmweb@unix_app_001 (TNS V1-V3)" program.
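The trigger’s WHERE clause finds the failing session by decoding dbms_session.unique_session_id: the first four hex digits are the SID and the next four the SERIAL#, which is what the two to_number(substr(...),'xxxx') calls extract. The decoding can be sketched in shell (the unique_session_id value below is made up for illustration):

```shell
# Hypothetical dbms_session.unique_session_id value
usid="00290031C3A2"

# to_number(substr(usid,1,4),'xxxx') -> SID
sid=$((16#${usid:0:4}))

# to_number(substr(usid,5,4),'xxxx') -> SERIAL#
serial=$((16#${usid:4:4}))

echo "sid=$sid serial#=$serial"
```

For this example the decoded values are sid=41 and serial#=49.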

3. Setting an event to generate trace files on unsuccessful login

You can instruct the database to write a trace file whenever an unsuccessful login attempt is made by setting the following event (the example below only sets the event until the next database restart; update your pfile or spfile accordingly if you want it to be permanent).

alter system set events '1017 trace name errorstack level 10';

Trace files will be generated in user_dump_dest whenever someone attempts to log in with an invalid username/password. As the trace is requested at level 10, it will include a section labelled PROCESS STATE containing trace information such as:

O/S info: user:orafrms, term: pts/15, ospid: 29959, machine:unix_app_001
program: frmweb@unix_app_001 (TNS V1-V3)
application name: frmweb@unix_app_001 (TNS V1-V3), hash value=0
last wait for 'SQL*Net message from client' blocking sess=0x0 seq=2 wait_time=5570 seconds since wait started=0

In this case it was an ‘frmweb’ client running as OS user ‘orafrms’ that started the client session. The section “Call Stack Trace” may aid support in further diagnosing the issue.

Note: If the OS user or program is ‘oracle’ the connection may originate from a Database Link.

4. Using SQL*Net tracing to gather information

A SQL*Net trace can provide even more detail about the connection attempt, but use this only if none of the above succeed in determining the origin of the failed login: it is hard to find what you are looking for in SQL*Net trace output, and tracing can consume large amounts of disk space.

To enable SQL*Net tracing create or edit the server side sqlnet.ora file and add the following parameters:

# server side sqlnet trace parameters
trace_level_server = 16
trace_directory_server = <any directory on a volume with enough freespace>


Wrapping PL/SQL code from within the database

If you have ever wanted to wrap PL/SQL code that already exists within the database, then you can do so using the simple procedure below, which uses a combination of DBMS_METADATA and DBMS_DDL.

Our reason for doing this was that our code is deployed directly from source control into the database and needs to be wrapped after deployment to prevent casual viewing of the source. We didn’t want to store wrapped code in source control, and didn’t have access to the wrap utility from the deployment server.

Here’s the procedure:

CREATE OR REPLACE PROCEDURE wrap_existing_pkg( p_pkg_name varchar2 )
AS
  l_body      clob;
  l_src       DBMS_SQL.VARCHAR2A;
  l_idx       number := 1;
  l_amount    number := 32767;
  l_offset    number := 1;
  l_length    number;
  -- NOTE: Use of this procedure requires that the user has been granted
  --       execute permission on DBMS_DDL by SYS.
BEGIN
  l_body   := dbms_metadata.get_ddl( 'PACKAGE_BODY', p_pkg_name );
  l_length := dbms_lob.getlength( l_body );
  dbms_lob.open( l_body, dbms_lob.lob_readonly );
  while ( l_offset < l_length ) loop
    dbms_lob.read( l_body, l_amount, l_offset, l_src(l_idx) );
    l_offset := l_offset + l_amount;
    l_idx := l_idx + 1;
  end loop;
  dbms_lob.close( l_body );
  dbms_ddl.create_wrapped( l_src, 1, l_idx - 1 );
END;
/
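The WHILE loop above reads the package body CLOB into the DBMS_SQL.VARCHAR2A array in 32,767-character pieces, advancing the offset on each pass. The same chunking pattern, sketched in shell over an ordinary string with a 10-character chunk size (the source string is just an illustration):

```shell
# Stand-in for the package body CLOB
src="CREATE OR REPLACE PACKAGE BODY demo AS END;"
amount=10      # stand-in for the 32767-character l_amount
offset=0       # shell strings are 0-based; Oracle LOB offsets are 1-based
idx=1

while [ "$offset" -lt "${#src}" ]; do
  chunk="${src:$offset:$amount}"
  printf 'l_src(%d)="%s"\n' "$idx" "$chunk"
  offset=$((offset + amount))
  idx=$((idx + 1))
done
```

The 43-character string yields five chunks, so the equivalent create_wrapped call would receive lines 1 through idx-1 = 5.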

To wrap a package body called ‘ENCRYPTION_TOOLS’ you would execute the following:

execute wrap_existing_pkg('ENCRYPTION_TOOLS');

That’s it!


Scripting: Is the DB DataGuarded?

Whilst developing some administrative functions within PL/SQL I needed to know whether the database that the code was being run in was part of an Oracle DataGuard setup, which you would think would be a trivial exercise.

My first thought was that we could determine this via v$database: if the DATABASE_ROLE column contains STANDBY then we know it’s a DataGuard configuration… but what if the database role is PRIMARY? How then can we determine whether that’s a stand-alone database or a DataGuard primary database?

Next, I thought looking in v$parameter might reveal some useful parameters, such as dg_broker_config_file. However, this only tells us whether a DataGuard configuration has been set up using DataGuard Broker – not whether the configuration is active, or whether it was configured manually (without the broker).

My final solution was to query v$archive_dest_status to determine if there were any valid remote destinations. This isn’t foolproof by any means, but covers most of the likely scenarios. If you have any better solutions then please post them in the comments.

Here’s the query I used within a PL/SQL block:

DECLARE
  l_isDataguarded number;
BEGIN
  select count(*) into l_isDataguarded
    from v$archive_dest_status
   where status = 'VALID' and type != 'LOCAL';
  if ( l_isDataguarded > 0 ) then
    null; -- The database is part of a valid DataGuard configuration
  else
    null; -- The database is standalone
  end if;
END;
/



Intermittent ORA-12514 error connecting to SCAN listeners

Users were complaining of “ORA-12514: TNS:listener does not currently know of service requested in connect descriptor” errors whilst connecting to a database on a RAC cluster.

Testing manually:

  • The TNS entry using the SCAN listener connected successfully nearly all the time when using SQL*Plus.
  • The TNS entry using the SCAN listener failed every time when used by the batch script (on viewing the code, the batch script created three connections in quick succession).
  • Changing the TNS entry to use the local listener caused the batch script to run successfully.

I checked that the database was configured to register the service with the SCAN and VIP listeners. The parameters were correct:

NAME               VALUE
service_names      SALES.MYDOMAIN.COM
remote_listener    devdbs-scan:1521
local_listener     (DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=ora012-vip)(PORT=1528))))          

Then I checked that the service was registered with all the SCAN listeners:

ora014> lsnrctl status LISTENER_SCAN1 | grep -i SALES
Service "SALES.MYDOMAIN.COM" has 1 instance(s).
  Instance "SALES_1", status READY, has 1 handler(s) for this service...

ora012> lsnrctl status LISTENER_SCAN2 | grep -i SALES
Service "SALES.MYDOMAIN.COM" has 1 instance(s).
  Instance "SALES_1", status READY, has 1 handler(s) for this service...

ora015> lsnrctl status LISTENER_SCAN3 | grep -i SALES

So, one of the SCAN listeners did not know of the service…?

Next, I checked how long the SCAN listeners had been running:

ora014> lsnrctl status LISTENER_SCAN1 | grep -i uptime
Uptime                    106 days 23 hr. 3 min. 32 sec
ora012> lsnrctl status LISTENER_SCAN2 | grep -i uptime
Uptime                    46 days 14 hr. 57 min. 14 sec
ora015> lsnrctl status LISTENER_SCAN3 | grep -i uptime
Uptime                    7 days 12 hr. 37 min. 7 sec

The SCAN listener that did not have the service registered had a much shorter uptime than the others, meaning that this SCAN listener had failed over fairly recently. This led me to Bug 13066936 (“Instance does not register services when scan fails over”), which is mentioned in DocID 1373350.1 in Metalink/MOS. The problem is fixed in the versions listed in that note, and in Bundle 24 for Exadata.


The permanent fix is to upgrade the Grid Infrastructure and RDBMS Oracle installations to a fixed version.


The workaround is to make a note of the current value of remote_listener, change it to a null value and then change it back as follows:

col remote_val new_value remote_val
select value remote_val from v$parameter where name='remote_listener';
alter system set remote_listener='';
alter system register;
alter system set remote_listener='&remote_val';
alter system register;

I have written a short Korn shell script (below) which will re-register all running instances on a node with the SCAN listeners, using the workaround detailed above. If you encounter this bug and need a quick workaround, run this script once on each node in the cluster.

#!/bin/ksh
# ------------------------------------------------------------------------
# A simple Korn shell script to force all running instances to re-register
# with the remote_listener (workaround for bug 13066936)
# ------------------------------------------------------------------------
for INST in `ps -aef | grep 'ora_pmon' | egrep -v '(grep|sed)' | sed 's/^.*ora_pmon_//'`
do
  echo ". Re-registering $INST with remote_listener"
  export ORAENV_ASK=NO
  export ORACLE_SID=$INST
  . oraenv >/dev/null
  sqlplus -SL "/ as sysdba" <<EOF >/dev/null
col remote_val new_value remote_val
select value remote_val from v\$parameter where name='remote_listener';
alter system set remote_listener='';
alter system register;
alter system set remote_listener='&remote_val';
alter system register;
EOF
done


JDBC Connection strings for Oracle (thin driver)

This is a subject that I am often asked about by developers and support teams changing application configuration details: what is the correct format of JDBC connection strings for the thin driver?

Oracle’s JDBC Thin driver uses Java sockets to connect directly to Oracle. It provides its own TCP/IP version of Oracle’s SQL*Net protocol. Because it is 100% Java, this driver is platform independent and can also run from a Web Browser (applets).

Old syntax, supporting instance (SID) names only:

jdbc:oracle:thin:@<host>:<port>:<SID>
New syntax, supporting both SID and SERVICE names:

jdbc:oracle:thin:@//<host>:<port>/<service_name>
There are also some drivers that support a URL syntax which allows the Oracle user ID and password to be embedded in the URL:

jdbc:oracle:thin:<user>/<password>@<host>:<port>:<SID>
Finally, you can also supply a full tnsnames.ora-style entry in the string:

jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=<host>)(PORT=<port>))(CONNECT_DATA=(SERVICE_NAME=<service_name>)))