My experience, findings and thoughts in my daily work with Oracle products

A Big Set of Oracle Presentations from NYOUG

On the web site of the New York Oracle Users Group (NYOUG) you can find some very interesting presentations from recent NYOUG events:

June 2005
Things You “Know” - from Tom Kyte, Oracle Corporation

Resource Mapping: A Wait Time-Based Methodology for Database Performance Analysis – from Matt Larson, Confio Software
Presents the Resource Mapping Methodology and how it can be used for Wait-Event Analysis.

March 2005
Interpreting Execution Plans – from Tanel Põder, Oracle9i OCM DBA

December 2004
Backup and Recovery - A Completeness Check - from Michael S. Abbey, The Pythian Group
What your strategy for verifying your backups should be.

Oracle Real Application Clusters: Expert Advice on Using RAC – from Darron Clark, Oracle Corporation
Why should I choose Real Application Clusters?

If you want to understand why RAC may not be the best solution for you, take a look here: You Probably Don't Need RAC

New and Improved Methods for Administering Your Database - from Howard Horowitz, HGH
Good overview of some of the new features coming with Oracle 10g.

November 2004
Change Data Capture in Oracle 10g - from Jack Raitto, CDC Development Manager, Oracle Corporation
Sometimes you need to know what information has changed in your database, whether for reports or as part of an elaborate ETL process. If the database wasn't designed with this in mind, you may have to develop a custom solution. This can be a costly and error-prone process involving triggers and big, inefficient queries. The alternative is to use Change Data Capture (CDC), introduced in Oracle9i. It is a nice solution and completely independent of the rest of the database design.

June 2004
Oracle 10g Feature: RMAN Incrementally Updated Backups - from Dave Anderson, SkillBuilders

Using 10046 trace data to resolve Oracle performance issues - A review of three case studies (Paper, Presentation) - from Jeff Holt and Gary Goodman, Hotsos Enterprises, Ltd.

Oracle 10g (OCP) Certification Preparation - from Howard Horowitz
The best information I have ever found for the 1Z0-040 exam: 91 pages full of examples, screenshots and helpful information for everyone who is planning to take this exam.

The same document but with added notes inside the presentation: Oracle 10g (OCP) Certification Preparation

Another document from the same author: Oracle Certification Preparation (OCP) / Hidden Treasures

Some of the old ones:
V$Views: Don't Leave $Home Without Them - Michael Bell, BMC

ORA-3113's, 600's and 7445's Oh My! - Anita Bardeen, Oracle Corporation
Very helpful info (coming from an Oracle employee) about three scary Oracle errors: ORA-3113 "end-of-file on communication channel", ORA-600, and ORA-7445 "exception encountered: core dump".

Special: Oracle Support Answers Your Questions (Presentation, Paper) – Mike Fleck, Oracle Support
If you want to understand what the common reasons for TAR requests are, and what the roles and responsibilities inside the Oracle Support team are.

For a full set of the NYOUG presentations: NYOUG Presentations
Note: Some of the links for the presentations are broken.


Once More Into The Fray

Mike Ault has published an answer to all the comments on the recently published article by Don Burleson: Once More Into The Fray
Added from Rado: All of the comments (from Tom Kyte and other visitors) on the above blog post were deleted by Mike, as he states in his post and here: Comment on "Getting Credible Information" blog post

Added from Rado: Sorry, but I have just discovered that the following thread has already been removed from Don Burleson's Oracle forum (the thread's name was: Still errors in document "super-sizing your PGA"):
Don Burleson's opinion: Link for the forum thread

A post on Niall Litchfield's blog about the same article: A few thoughts on PGA memory management

With all of the above I am closing this thread as well.

The _PGA_MAX_SIZE post is updated

I have updated my post about the _PGA_MAX_SIZE hidden parameter with additional information that has popped up in the past few days.
I will appreciate any comments and corrections if you think some of my points are wrong, or if you simply have more valuable information that I can add to the post.


Oracle OPatch Security Holes

A new white paper is coming from one of the leading security companies -
NGS (Next Generation Security) Software Ltd.
It covers problems discovered after using the OPatch utility to apply Oracle patches, and its title is "Patch Verification of Oracle Database Servers".
Some quotes from this Eweek news story: Security Firm: Oracle Opatch Leaves Firms Uncovered:
Of a total of more than 100 recently surveyed database servers, a staggering 76 percent have anomalies between expected and actual patch levels.

"The fact is that Opatch is failing, or [an Oracle] patch failed to fix certain issues appropriately," he said. "Customers aren't doing anything wrong. It's the tools themselves that are faulty."

In the April CPU, on all platforms, new Java classes supplied to fix SQL injection vulnerabilities in DBMS_SUBSCRIBE and DBMS_ISUBSCRIBE were not actually loaded

Opatch often fails for various reasons. "Permissions are wrong; files that are to be patched are still in use; environment variables are wrong; whatever the reason might be, and a quick search on Google reveals many more, Opatch can often fail to update the inventory"
Quotes from another interview with David Litchfield, managing director for NGS: OPatch, wherefore art thou?
Incomplete patches

Patches fail to fix the flaw

Failure to perform post installation tasks

OPatch fails to update the inventory
I will publish a new post with a download link when the paper becomes available on the NGS site.

Interview with Oracle Chief Security Officer Mary Ann Davidson:
When security researchers become the problem - one of the rare official Oracle statements about their security problems.


Pete Finnigan's Weblog

Pete Finnigan has mentioned my blog in his Oracle Security weblog, especially the security-related post: How to see the MOD_PLSQL passwords in clear text

Thanks to him for his comments!

I want to note that this information can also be found in his book, Oracle Security Step-by-Step (Version 2.0), as step 8.1.2 of Phase 8 - Application Servers and the Middle Tier.

Additional information online:
Fact sheet about Oracle Mod_PLSQL passwords
How to Unencrypt the Portal Password in


Don Burleson's article

Added from Rado (23 August): Don updated his article again yesterday (see the link below). Now it is more detailed and there are a lot of quoted documents.
But there is again some questionable information:

No RAM sort may use more than 5% of pga_aggegate_target or _pga_max_size, whichever is smaller. This means that no task may use more than 200 megabytes for sorting or hash joins.
Again: for serial operations, one workarea is used per "RAM sort". We know that _PGA_MAX_SIZE is per process, but one process can have many workareas. The maximum size of a single workarea is calculated from the formula min(5% of PGA_AGGREGATE_TARGET, _PGA_MAX_SIZE/2), not from _PGA_MAX_SIZE alone.
I have checked this with tests. I changed _PGA_MAX_SIZE on my database and, after every change, _SMM_MAX_SIZE was recalculated to be equal to _PGA_MAX_SIZE/2. From the article:
* pga_aggregate_target = 4g
* _pga_max_size = 300m
* _smm_px_max_size = 333m
With these hidden parameters set we see a 4x large size increase for parallel queries and sorts:
* A RAM sort or hash join may now have up to the full 200 megabytes (5% of pga_aggegate_target) a 400% increase.
Again, the same mistake: here a single workarea can be no larger than min(5% of PGA_AGGREGATE_TARGET, _PGA_MAX_SIZE/2), which in this case is min(200 MB, 150 MB) = 150 MB; the calculated value of _SMM_MAX_SIZE will be 153600 (in KB).
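The formula can be sanity-checked with plain arithmetic (the numbers below are the ones from the example above):

```sql
-- _SMM_MAX_SIZE = min(5% of PGA_AGGREGATE_TARGET, _PGA_MAX_SIZE/2)
-- With pga_aggregate_target = 4g and _pga_max_size = 300m:
SELECT LEAST(0.05 * 4 * 1024, 300 / 2)        AS smm_max_size_mb,
       LEAST(0.05 * 4 * 1024, 300 / 2) * 1024 AS smm_max_size_kb
FROM dual;
-- min(204.8 MB, 150 MB) = 150 MB, i.e. 153600 KB
```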

Don Burleson has published a revised version of his article on his site: Undocumented secrets for super-sizing your PGA

In the revised article you can see that:
* pga_aggregate_target = 6g
* _pga_max_size = 300m
* _smm_px_max_size = 333m

With these hidden parameters set we see a 5x large size increase for parallel queries and sorts:

* A RAM sort or hash join may now have up to 300 megabytes (5% of pga_aggegate_target) a 50% increase.
I did not perform tests, but according to all the information I have read on the subject, a single RAM sort will use no more than 150 MB with these particular settings, because the limit is taken as min(5% of PGA_AGGREGATE_TARGET, _PGA_MAX_SIZE/2), which in this case is 150 MB.
In general, the revised article is less clear than the original, because it does not explain in detail the mentioned hidden parameters, their meaning, and their correct use.

I have seen on Doug's blog that Jonathan Lewis has published a detailed article on his site about the recently discussed article on the PGA hidden parameters:
Undocumented secrets – or untested fairytales?

Jonathan Lewis disproves many of the assertions coming from Don Burleson's article with a lot of tests, among them:
- sort_area_size is ignored when pga_aggregate_target is set and when workarea_size_policy =auto
- The maximum default allowed value is 200 megabytes (for PGA_AGGREGATE_TARGET)

Jonathan Lewis also explains the _SMM_MAX_SIZE and _SMM_PX_MAX_SIZE hidden parameters in more detail.

There is a post on Tom Kyte's blog about the same article too: Getting Credible Information

Added from Rado (24 August): There is a new blog post from Niall Litchfield on his blog: A few thoughts on PGA memory management

Both conclude that everyone must take responsibility when playing around with hidden parameters (especially when they are applied to a production system) and must know exactly what he or she is doing.

According to the results from this presentation (and the discussions mentioned in my previous The _PGA_MAX_SIZE hidden parameter post), good results for big sort tasks can be achieved by setting _PGA_MAX_SIZE and the related hidden parameters, but only with a lot of testing and with the guidance of the Oracle Support team.

In conclusion: do NOT believe everything you read on the Internet; be suspicious, especially when it presents some new and revolutionary information. If you are suspicious - TEST IT; if you are not suspicious - TEST IT; if you are just curious - TEST IT.
If you decide to apply it on your production system - TEST IT, TEST IT, TEST IT.
Do NOT set hidden parameters without testing and without the permission of Oracle Support.


The _PGA_MAX_SIZE hidden parameter

Check the following links for more details regarding my previous post (Management of the Oracle9i PGA).

From all the information that I have read, I can note several important points:
  • _PGA_MAX_SIZE does not control the total PGA memory used by the whole system.
  • _PGA_MAX_SIZE controls the maximum PGA memory used per process, which by default is 200 MB.
  • A work area is a large extent of the memory available in the Program Global Area (PGA).
  • Each time a cursor is executed, a new runtime area is created for that cursor in the PGA memory region of the server process executing that cursor.
  • One process can have many work areas.
  • For serial operations, a single work area is limited by default to a maximum of 100 MB (_PGA_MAX_SIZE/2). This is the case when your PGA_AGGREGATE_TARGET >= 2 GB. For PGA_AGGREGATE_TARGET < 2 GB, a work area will use no more than 5% of PGA_AGGREGATE_TARGET. Both cases follow from taking min(5% of PGA_AGGREGATE_TARGET, 100 MB).
  • For parallel operations, the size of a single work area is limited to 30% of PGA_AGGREGATE_TARGET/DOP, where DOP is the Degree of Parallelism.
  • If you have set PGA_AGGREGATE_TARGET to 1 GB, for example, then Oracle cannot give more than 50 MB to a single work area (for serial operations).
  • If you want to use more than 100 MB per single work area, then _PGA_MAX_SIZE may help you achieve better results for your sort tasks and let you perform even the biggest ones entirely in memory.
  • _PGA_MAX_SIZE is measured in bytes.
  • Other hidden parameters that you must be aware of are _SMM_MAX_SIZE and _SMM_PX_MAX_SIZE.
  • _SMM_MAX_SIZE limits the maximum work area size for serial operations. It is calculated as min(5% of PGA_AGGREGATE_TARGET, _PGA_MAX_SIZE/2).
  • When you execute a parallel query, it still uses only one work area; all slaves work under that work area.
  • _SMM_PX_MAX_SIZE limits the maximum memory used by all slave processes together. The size used by each slave process inside that work area is limited by the value of _SMM_MAX_SIZE. So the value of _SMM_MAX_SIZE applies not only to serial but also to parallel operations.
  • _SMM_PX_MAX_SIZE comes into play only when you run a query with a degree of parallelism greater than 6. For example, if you have DOP = 6, then every slave process can take no more than _SMM_MAX_SIZE. When DOP > 6, _SMM_PX_MAX_SIZE takes over.
  • The values of _SMM_MAX_SIZE and _SMM_PX_MAX_SIZE are measured in KBytes.
  • After changing the _PGA_MAX_SIZE parameter and restarting the database, the values of both _SMM_MAX_SIZE and _SMM_PX_MAX_SIZE are recalculated automatically. This means that you don't need to change them manually, but you must know what they do.
  • For 32-bit systems the preferable value for _SMM_MAX_SIZE is 500 MB because of addressing limitations; on a 64-bit system you will be able to allocate around 4 GB of memory for a single sort task. Be aware of these values when you change _PGA_MAX_SIZE, because it will affect _SMM_MAX_SIZE as well.
  • Because _PGA_MAX_SIZE works per process, you must set PGA_AGGREGATE_TARGET to an appropriate value as well. That is the parameter which really controls the total amount of PGA memory.
  • If you want to avoid ORA-04030 errors, configure these parameters appropriately and carefully, depending on your current system environment.
  • Do NOT set these hidden parameters on production systems without permission from Oracle Support. You must be absolutely sure that this is the only way to solve your problem before going for it. Usually you must open a TAR and request permission.
  • If you want to use them, you must ask Oracle Support about any patches that need to be applied first.
  • When querying the V$PGASTAT view, be aware of how the row "Cache Hit Percentage" is calculated: PGA Cache Hit Ratio = total bytes processed / (total bytes processed + total extra bytes read/written). With this formula it is almost impossible to get a result much smaller than 50%.
  • Usually this parameter can be helpful only on large databases. There are Data Warehouse databases that have reduced their average query execution time by 50% after changing _PGA_MAX_SIZE.
  • Remember: use this solution only if you have no other workaround for your problem.
  • If you decide to go for it, perform a lot of tests in your test environment before putting it on your production database.
  • Remember that all of the above is the result of my research through all the available information about this hidden parameter; I did not test many of these points myself. In this post I just summarize everything I know about the subject.
  • Do NOT take the information posted here as your main source; your main source must be the results of your own tests, plus Oracle Support's approval and confirmation that you have done everything needed to use this parameter in your production environment.
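If you want to check the current values of these hidden parameters on your own instance, the classic query against the X$ tables can be used (it must be run as SYS):

```sql
-- Show the current values of the PGA-related hidden parameters (run as SYS)
SELECT x.ksppinm AS parameter, y.ksppstvl AS value
FROM   x$ksppi x, x$ksppcv y
WHERE  x.indx = y.indx
AND    x.ksppinm IN ('_pga_max_size', '_smm_max_size', '_smm_px_max_size');
```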
I will appreciate any comments about the above points, especially if I am wrong in some of my conclusions.

More useful information on this subject:

Oracle9i New Feature: Automated SQL Execution Memory Management
Oracle9i Monitoring Automated SQL Execution Memory Management
Automatic PGA Memory Managament in 9i

I think a few claims coming from the article that prompted my posts must be reconsidered, or at least discussed:
  • "the maximum default allowed PGA value for the whole system is 200 MB" - yes, if PGA_AGGREGATE_TARGET = 200 MB.
  • "no task can use more than 10 MB for sorting or hash joins" - yes, if PGA_AGGREGATE_TARGET = 200 MB.
  • "the total workarea size cannot exceed 200 MB because of the default value of _PGA_MAX_SIZE" - yes, per process.

Added on 08 September: Jonathan Lewis has written a new article with more information, details and test results regarding Don Burleson's article: The Snark research mechanism?
Use this article if you want to learn more about the _PGA_MAX_SIZE parameter.

I am correcting myself: my earlier claim (quoted in the previous post) that the advice from Metalink Note 223730.1 is not correct was itself wrong.


Management of the Oracle9i PGA

If you think that your PGA can be configured properly just by increasing the PGA_AGGREGATE_TARGET parameter, think twice - it cannot!

See this article from Don Burleson: Undocumented secrets for super-sizing your PGA
It says that there are two hidden parameters that can help you use the PGA properly; otherwise, Oracle will never use more than 200 MB for the whole PGA, nor more than 10 MB for a single PGA task (sort, hash join, group-by, bitmap merge), regardless of your settings.
This could explain why, in many cases, V$PGA_TARGET_ADVICE shows that the current PGA size is sufficient even though a lot of disk sorts are performed.
If this information is correct, it means that the following advice from Metalink Note 223730.1: Automatic PGA Memory Management in 9i:
To determine the appropriate setting for the PGA_AGGREGATE_TARGET parameter I recommend to follow the following steps:
Make a first estimate for PGA_AGGREGATE_TARGET based on the following rule:
- For OLTP systems
PGA_AGGREGATE_TARGET = (total physical memory * 80%) * 20%
- For DSS systems
PGA_AGGREGATE_TARGET = (total physical memory * 80%) * 50%
cannot be considered as fully correct.
A lot of questions arise from the above article.


Using BULK COLLECT and FORALL for better performance

Let's see some quotes from the Oracle documentation:

PL/SQL engine executes procedural statements but sends SQL statements to the SQL engine, which executes the SQL statements and, in some cases, returns data to the PL/SQL engine.
Too many context switches between the PL/SQL and SQL engines can harm performance. That can happen when a loop executes a separate SQL statement for each element of a collection, specifying the collection element as a bind variable. For example, the following DELETE statement is sent to the SQL engine with each iteration of the FOR loop:

depts NumList := NumList(10, 30, 70); -- department numbers

FOR i IN depts.FIRST..depts.LAST LOOP
DELETE FROM emp WHERE deptno = depts(i);
END LOOP;

Using the FORALL Statement
The keyword FORALL instructs the PL/SQL engine to bulk-bind input collections before sending them to the SQL engine. Although the FORALL statement contains an iteration scheme, it is not a FOR loop. Its syntax follows:
FORALL index IN lower_bound..upper_bound
sql_statement;
FORALL does not require a LOOP statement.

As the documentation suggests, using the BULK COLLECT and FORALL statements can drastically improve performance. For example, I have rewritten a PL/SQL procedure used in a batch job and, as a result, its execution time was reduced from 12 minutes to 16 seconds - only by using BULK COLLECT and FORALL.
In a few words, you must always try to avoid this:

Copyrighted from Steven Feuerstein's book - Oracle PL/SQL Programming, Third Edition

If you are already using this feature, or you have PL/SQL procedures suffering from slow performance because of "row-by-row" processing, then you may be interested in the following post.

Collection types can be defined based on a cursor:

CURSOR c_dept_data IS
SELECT dept_id, dept_name
FROM departments
WHERE dept_id in (8,10);

TYPE rt_dept IS TABLE OF c_dept_data%ROWTYPE INDEX BY BINARY_INTEGER;
vrt_dept rt_dept;
or based on a table:

TYPE rt_dept IS TABLE OF departments%ROWTYPE INDEX BY BINARY_INTEGER;
vrt_dept rt_dept;
Table DEPARTMENTS has four columns: dept_id, dept_name, dept_mgr_id, dept_description.
Let's try to use BULK COLLECT:
SELECT dept_id,dept_name,dept_mgr_id,dept_description BULK COLLECT INTO vrt_dept FROM departments;
And to insert the collected data into some temporary table with FORALL:
FORALL i IN vrt_dept.FIRST .. vrt_dept.LAST
INSERT INTO departments_tmp VALUES vrt_dept (i);
It works and everything seems okay.
Let's try something a little more complicated - inserting only some of the collected columns by changing only the FORALL statement:
FORALL i IN vrt_dept.FIRST .. vrt_dept.LAST
INSERT INTO departments_tmp (dept_id,dept_description) VALUES (vrt_dept(i).dept_id,vrt_dept(i).dept_description);
As a result, the following compilation error is returned:
PLS-00436: implementation restriction: cannot reference fields of BULK In-BIND table of records
That is because a bulk bind cannot use a table of composite types. It works fine with a table of native types, but not with a table of objects or records; the compiler does not support references to fields of bulk-bind tables. For more information see this Metalink note:

Binding a Collection With FORALL Fails With ORA-01767 or PLS-00103
But this is probably the most common case in which someone will want to use the BULK COLLECT and FORALL statements (for example, a cursor with parameters and joins of three or more tables). The workaround for this problem is to use tables of native types:

TYPE rt_dept_id IS TABLE OF departments.dept_id%TYPE INDEX BY BINARY_INTEGER;
TYPE rt_dept_description IS TABLE OF departments.dept_description%TYPE INDEX BY BINARY_INTEGER;

vrt_dept_id rt_dept_id;
vrt_dept_description rt_dept_description;

SELECT dept_id,dept_description BULK COLLECT INTO vrt_dept_id, vrt_dept_description FROM departments;
And then:

FORALL i IN vrt_dept_id.FIRST .. vrt_dept_id.LAST
INSERT INTO departments_tmp (dept_id,dept_description) VALUES (vrt_dept_id(i),vrt_dept_description(i));
Now the procedure compiles and, after testing, works correctly.
If you are sure about the types used, then instead of declaring the tables this way:

TYPE rt_dept_id IS TABLE OF departments.dept_id%TYPE INDEX BY BINARY_INTEGER;
you can define the collection variables in the following way (without defining types, just collection variables):

vrt_dept_id dbms_sql.NUMBER_TABLE;
vrt_dept_description dbms_sql.VARCHAR2_TABLE;
For full reference of Bulk SQL types, use this link: Constants, Types, and Exceptions for DBMS_SQL
If there is a possibility that the returned record set is empty, then there are a few important rules that should be followed.
Let's try an example with a cursor:

CURSOR c_dept_data IS
SELECT dept_id, dept_name
FROM departments
WHERE dept_id in (8,10);

TYPE rt_dept_id IS TABLE OF departments.dept_id%TYPE INDEX BY BINARY_INTEGER;
TYPE rt_dept_name IS TABLE OF departments.dept_name%TYPE INDEX BY BINARY_INTEGER;

vrt_dept_id rt_dept_id;
vrt_dept_name rt_dept_name;


OPEN c_dept_data;
FETCH c_dept_data BULK COLLECT INTO vrt_dept_id,vrt_dept_name;
CLOSE c_dept_data;

FORALL i IN vrt_dept_id.FIRST .. vrt_dept_id.LAST
UPDATE departments_tmp
SET dept_name = vrt_dept_name(i)
WHERE dept_id = vrt_dept_id(i);
In this case the procedure compiles, but it returns a runtime error when the c_dept_data cursor returns no rows:
ORA-06502: PL/SQL: numeric or value error
This problem can be solved by using one of the available collection methods: COUNT. The COUNT method returns the number of elements in the array:


OPEN c_dept_data;
FETCH c_dept_data BULK COLLECT INTO vrt_dept_id,vrt_dept_name;

FORALL i IN 1 .. vrt_dept_id.COUNT
UPDATE departments_tmp
SET dept_name = vrt_dept_name(i)
WHERE dept_id = vrt_dept_id(i);

CLOSE c_dept_data;
In the above case the FIRST and LAST collection methods are replaced by the COUNT collection method; with a range of 1 .. COUNT, an empty collection simply results in no executions.
The other workaround is the following:


OPEN c_dept_data;
FETCH c_dept_data BULK COLLECT INTO vrt_dept_id,vrt_dept_name;

CLOSE c_dept_data;
IF vrt_dept_id.COUNT > 0 THEN
-- do FORALL
END IF;
-- do something else
Rollback behavior
If one of the bulk DML statements fails, that individual statement is rolled back, the previous DML statements within the FORALL series are not rolled back, and the FORALL statement stops executing.
For example, if you are using FORALL to insert 100 rows and an error occurs on the 78th row, execution stops and rows 78 to 100 are not inserted. If you want to overcome this behavior and insert all the remaining rows, you can use the SAVE EXCEPTIONS clause.
With SAVE EXCEPTIONS, the cursor attribute SQL%BULK_EXCEPTIONS is used. It is an array that records two elements, ERROR_INDEX and ERROR_CODE:
v_bulk_errors EXCEPTION;
PRAGMA EXCEPTION_INIT (v_bulk_errors, -24381);

BEGIN -- outer block

BEGIN -- inner block
OPEN c_dept_data;
FETCH c_dept_data BULK COLLECT INTO vrt_dept_id,vrt_dept_name;

FORALL i IN 1 .. vrt_dept_id.COUNT SAVE EXCEPTIONS
UPDATE departments_tmp
SET dept_name = vrt_dept_name(i)
WHERE dept_id = vrt_dept_id(i);

CLOSE c_dept_data;
EXCEPTION
WHEN v_bulk_errors THEN
FOR i IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
DBMS_OUTPUT.PUT_LINE(
'An error ' || i || ' has occurred' ||
' during update of dept_id: ' ||
vrt_dept_id(SQL%BULK_EXCEPTIONS(i).ERROR_INDEX) ||
'. Oracle error: ' ||
SQLERRM(-SQL%BULK_EXCEPTIONS(i).ERROR_CODE));
END LOOP;
END; -- end inner block

END; -- end outer block

Be aware that you can use the RETURNING clause with FORALL. It can populate an array with the affected rows, which can then be used to perform other actions on the affected data.
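A minimal sketch of how this could look, reusing the collections from the examples above (the vrt_updated_ids collection is an extra one, added here only for illustration):

```sql
-- vrt_updated_ids is assumed to be declared with the same element type as vrt_dept_id
FORALL i IN 1 .. vrt_dept_id.COUNT
  UPDATE departments_tmp
  SET    dept_name = vrt_dept_name(i)
  WHERE  dept_id   = vrt_dept_id(i)
  RETURNING dept_id BULK COLLECT INTO vrt_updated_ids;
-- vrt_updated_ids now holds one dept_id per row affected by the whole FORALL
```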
Limiting the number of processed rows
If you want to process a limited number of rows at a time, you can use the LIMIT clause. For example, if you need to process a huge table, it is possible to overload your process memory (which will result in an ORA-04030 error):
OPEN c_dept_data;
LOOP
FETCH c_dept_data BULK COLLECT INTO vrt_dept_id,vrt_dept_name LIMIT 1000;
EXIT WHEN vrt_dept_id.COUNT = 0;

FORALL i IN 1 .. vrt_dept_id.COUNT
UPDATE departments_tmp
SET dept_name = vrt_dept_name(i)
WHERE dept_id = vrt_dept_id(i);
END LOOP;

CLOSE c_dept_data;
In this case the collection variables are populated 1000 rows at a time; the FORALL is executed for each batch, and then the next 1000 rows are fetched (the LOOP and EXIT condition above make this possible). Be aware that the previous contents of the collections are overwritten on every fetch.
The other workaround to cope with memory problems coming from large PL/SQL tables is to execute the following supplied stored procedure inside your code:
DBMS_SESSION.FREE_UNUSED_USER_MEMORY;
It will free unused memory for your session after large portions of data have been processed. For full details about solving memory problems with this procedure, see: Procedure DBMS_SESSION.FREE_UNUSED_USER_MEMORY Specification
You can use the SQL%BULK_ROWCOUNT attribute, which is available with the FORALL statement. For each statement processed by FORALL there is a corresponding element of this attribute. For example, if you want to know whether the n-th statement of the FORALL affected any rows, check SQL%BULK_ROWCOUNT(n); if no rows were affected, it will be zero.
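Continuing with the update from the examples above, such a check could be sketched like this:

```sql
FORALL i IN 1 .. vrt_dept_id.COUNT
  UPDATE departments_tmp
  SET    dept_name = vrt_dept_name(i)
  WHERE  dept_id   = vrt_dept_id(i);

-- SQL%BULK_ROWCOUNT(i) holds the number of rows affected by the i-th statement
FOR i IN 1 .. vrt_dept_id.COUNT LOOP
  IF SQL%BULK_ROWCOUNT(i) = 0 THEN
    DBMS_OUTPUT.PUT_LINE('dept_id ' || vrt_dept_id(i) || ' matched no rows');
  END IF;
END LOOP;
```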
For more examples and explanations about using BULK COLLECT and FORALL, you can refer to the documentation:
Reducing Loop Overhead for Collections with Bulk Binds
Additional specific information:
Array insert may report false constraint violations with row triggers
Bulk Operations Across a Database Link is not a Supported Feature in 8i


Strange behaviour of the CBO, part 2

After playing around with setting columns to allow NULL values or not (setting COL1 and COL3 to allow NULL values, testing, and putting them back to their default condition) and recomputing statistics, the issue from the previous post has become even less clear.
Now the structure of the table is the same as it was before and the statistics are fresh, but the cost of the execution plan is always 175. It doesn't matter what is inside the LIKE clauses.





I have removed all LIKE clauses from the query - no difference.
After that I recreated the index on COL1 and recomputed the statistics again; the cost became 68 for the following query:

If you remember my previous post, the cost and execution plan were exactly the opposite of the current results. Now the following query has a cost of 175:

But what I expected was 69! For all other variants of the LIKE clauses it is the same: 175.

Next step: I recreated the index on the second column I had played around with (COL3) and then recomputed the statistics again. As a result the cost is 18, and now the execution plan is stable, independent of all the different kinds of LIKE clauses.
A friend of mine had a suggestion about the number of '%' symbols, so I ran this query:

But the cost remains 18. It seems that the number of '%' symbols does not matter when there is more than one.

In the end, this query has a stable execution plan, but nothing is clear to me. It seems the indexes were fragmented, and the CBO chose the execution plan according to the computed height and clustering factor of the indexes. After the reorganization, the CBO chooses a better execution plan according to the new statistics coming from the recreated indexes.
Anyway, it is still not clear what exactly is happening.

In conclusion, I can say a few things at least:
1. Keep your table statistics fresh.
2. Reorganize indexes that suffer from heavy update and delete activity; this will affect the CBO execution plans as well.
3. Don't forget to keep a backup of your old statistics. In an emergency case of a slow query (or slow application), importing the old, stable statistics will let you return the application quickly to its previous, stable condition, and then you will have enough time to investigate how the new statistics affected the application.


Strange behaviour of the CBO, part 1

The following interesting issue has no clear explanation so far.

I have a query that uses the following predicates:

Maybe I should explain where this strange query comes from.
If you are developing an application and you need to generate a dynamic search query, you can put bind variables between '%' symbols and build the query dynamically. For Oracle, it usually looks like this:
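The original query is not shown here, but such a dynamic search query typically has the following shape (the table and column names below are made up for illustration):

```sql
-- Bind variables placed between '%' symbols for a dynamic search
SELECT *
FROM   some_table
WHERE  col1 LIKE '%' || :b1 || '%'
AND    col2 LIKE '%' || :b2 || '%'
AND    col3 LIKE '%' || :b3 || '%';
```

When a bind variable is left empty, the resulting pattern degenerates to '%%', which restricts nothing.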

During the investigation of the whole query, I removed the bind variables just to test it with real values, and put some real values into the LIKE clauses. I tested it without values for the bind variables as well. During testing I accidentally changed '%%' to '%' within the LIKE clauses:

The whole execution plan has changed - without any clear reason!
Because of these different execution plans, the cost for the first and second queries is 69, but for the third one it is 175; the statistics for buffer reads and recursive calls changed too. This unexpected behaviour provoked me to try to find out why the CBO changes the whole plan.
First of all, for the LIKE clause there is no difference whether you put '%%' or '%' after it: the returned results are the same in both cases. Of course, when your variable has some value, the generated LIKE will look like LIKE '%A%' (for static SQL). In that case the number of percent symbols matters: LIKE '%A%' will return different results than LIKE '%'. But if you don't use a variable, it doesn't matter - the query returns the same result set in both cases.
This case is very specific: it matters only when you write static SQL with LIKE '%%' clauses that do not restrict anything, and normally you would not put unrestrictive clauses in your SQL queries. Anyway, I decided to investigate it.
I discovered an additional fact: the execution plan changed when a change was applied to just one of the columns, the one that allows NULL values - COL1:

You can see above that only one simple change to COL1 changes everything - again the cost is 175. By contrast, the following doesn't impact the execution plan in any way:

The only difference between the four columns is that COL2, COL3 and COL4 do not allow NULL values, while COL1 does. Another finding from the execution plan: the index on COL1 is used only when LIKE '%%' is used; when you write LIKE '%', the index is not used.
I supposed that the CBO's decision depends on the fact that an index on COL1 exists and that the column allows NULL values.
After that I changed another of the NOT NULL columns that has an index on it - COL3 - to allow NULL values, and rewrote the clauses like this:

But the execution plan was not affected - Cost: 69. Then I recomputed the statistics for the table and, as a result, got a different execution plan for the above query - Cost: 175.
I think this shows that the CBO's decision is influenced by the probability of finding the expected rows inside the index: when the column is NULLable, that probability is lower because the index does not contain the NULL values. But why is it affected by the LIKE clause in this way, using the index for '%%' but not for '%'?
I tried to trace the CBO, but I did not manage to find anything further that could help me here.

I would appreciate any comments on this issue!


How to see the MOD_PLSQL passwords in clear text

If you have a web-based PL/SQL application, the following information may be of interest.
Many DBAs involved in database security have probably asked themselves: "How can I be sure that my DAD files hide the application schema passwords well?"
Well, Oracle doesn't have a very good solution for this problem.
Let's take a look at a DAD file used by an Oracle Application Server 9i or the local Apache server.
For every DAD alias you can see two lines (for the local HTTP server):
connect_string = my_appl_schema_name
password = !ZG9udF90aGlua190aGF0X3lvdV9hcmVfc2VjdXJlZA==

For Oracle9i AS you will see two similar lines:
PlsqlDatabaseUsername my_appl_schema_name
PlsqlDatabasePassword !ZG9udF90aGlua190aGF0X3lvdV9hcmVfc2VjdXJlZA==

If an intruder is able to see these two lines, he or she can easily recover the real password.
When someone sees an encrypted string, the first thing that jumps to mind is to check for a leading "!" symbol. This symbol usually means that the password is encoded with simple Base 64 encoding.
Before Oracle Application Server 10g, all DAD passwords are only Base 64 encoded.
How to see the passwords in clear text:
1. Open your DAD configuration file
- for the local HTTP Server it is located in ORACLE_HOME/Apache/modplsql/
- for Oracle9i AS it can be found at ORACLE_HOME/Apache/modplsql/dads.conf
2. Go to the desired DAD alias name and copy the encoded password string without the leading "!" symbol
3. Decode it with any simple Base 64 decoder
3.1. Use Java (in this case, do not remove the leading "!" symbol - the code strips it):

import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;
import sun.misc.BASE64Decoder;

public class get_password extends HttpServlet {

    private static final String CONTENT_TYPE = "text/html; charset=UTF-8";

    private BASE64Decoder decoder = new BASE64Decoder();

    public void init(ServletConfig config) throws ServletException {
        super.init(config);
    }

    public void service(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType(CONTENT_TYPE);
        PrintWriter out = response.getWriter();
        try {
            // drop the leading "!" marker, then decode the rest as Base 64
            String encoded = request.getParameter("password").substring(1);
            String psw = new String(decoder.decodeBuffer(encoded));
            out.println("The password is: " + psw);
        } catch (Exception exc1) {
            out.println("Unable to decode the supplied value");
        }
    }
}

3.2. Use an online Base 64 decoder: Online Base 64 Decoder
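If you have Java at hand but no servlet container, a standalone decoder does the same job. This sketch uses the java.util.Base64 class from current JDKs (instead of the old sun.misc.BASE64Decoder used above) and the sample encoded string shown earlier in this post:

```java
// Standalone alternative to the servlet above, using java.util.Base64
// from current JDKs. The input is the sample encoded password shown
// earlier; the leading "!" marker is stripped before decoding.
import java.util.Base64;

public class DecodeDadPassword {
    static String decode(String fromDad) {
        // drop the leading "!" and apply a plain Base 64 decode
        return new String(Base64.getDecoder().decode(fromDad.substring(1)));
    }

    public static void main(String[] args) {
        System.out.println("The password is: "
            + decode("!ZG9udF90aGlua190aGF0X3lvdV9hcmVfc2VjdXJlZA=="));
        // prints: The password is: dont_think_that_you_are_secured
    }
}
```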

You will easily see the "hidden" password in clear text.

All tests were performed on an Oracle database with the local HTTP server and on Oracle Application Server 9i (9.0.3).
Starting from Oracle Application Server 10g (9.0.4), Oracle obfuscates the passwords with a different algorithm and the password string starts with "@". I don't know of a way to decrypt these passwords.

What are the conclusions from these findings?
1. Don't forget to secure both your Application Server and Oracle Database machines at the OS level. Be aware that anyone who has access to the DAD configuration files can probably access your application data as well; an intruder does not need serious technical skills to steal your DAD passwords.
2. Always design your application around database schemas with different security levels. For example (let's call them "outer" and "inner" schemas), put only the password of an "outer" schema in the DAD files and restrict its grants, and let your application interface connect directly only to it. The "outer" schema should have only the small set of object grants needed to execute procedures from the "inner" schemas, and no rights to create tables, synonyms or any other database objects. This will limit the damage from a potential intrusion. Keep all sensitive procedures and critical application data in a safe place: the "inner" schemas.
3. Always wrap your PL/SQL procedures and packages. Although the wrap encoding could in principle be reverse-engineered, so far no one has reported succeeding. Wrapped PL/SQL code will limit the potential damage as well.
4. This information is another good reason to move to an Oracle 10g environment sooner.


Forcing Oracle to use LOGGING mode

I've just read an interesting newsletter about how we can force the database (or a tablespace) to use LOGGING mode for all operations. For example, imagine that we don't want someone to improperly start an operation in NOLOGGING mode, which would make a full database recovery after a media failure impossible. This can also be an important issue when a standby database is used.
The FORCE LOGGING clause was introduced in Oracle9i R2. It tells Oracle to write redo even when NOLOGGING was specified in a DDL statement, a table was created in NOLOGGING mode, or SQL*Loader was used to fill a table via direct path loading.
This clause can be applied at the database or tablespace level (the tablespace name below is just an example):

ALTER DATABASE FORCE LOGGING;
ALTER TABLESPACE users FORCE LOGGING;

The following query shows, for each datafile, the SCN and time of the last NOLOGGING (unrecoverable) operation against it:

SELECT name,unrecoverable_change#,unrecoverable_time FROM v$datafile;

Keep in mind that Oracle never generates redo records for temporary tablespaces and temporary segments, so forced logging has no effect on them.
For more details about this feature:
Oracle9i Documentation - Specifying FORCE LOGGING Mode
Metalink note:174951.1 - Force Logging

Another very interesting Metalink note covers the problems caused by NOLOGGING operations, especially when a standby database exists:
Metalink note:290161.1 - The Gains and Pains of Nologging Operations

How to Cope with an ORA-01000 Error

I prepared a document about this issue a week ago, and I am posting some quotes from it here.

Many DBAs have probably faced this kind of error, but the information that can be found on the Internet is very scarce. It is somewhat like the ORA-01555 (snapshot too old) error: there is no single usual workaround, but there are a lot of suggestions. For the full document, go to the end of this post.


What should an Oracle DBA do when facing this kind of error?

Let's first take a look at the documentation:

ORA-01000 maximum open cursors exceeded

Cause: A host language program attempted to open too many cursors. The initialization parameter OPEN_CURSORS determines the maximum number of cursors per user.

Action: Modify the program to use fewer cursors. If this error occurs often, shut down Oracle, increase the value of OPEN_CURSORS, and then restart Oracle.
This error often appears in association with a particular application, but it also happens at the database level with regular inserts, updates, deletes, etc. in PL/SQL or SQL*Plus.
You receive this error because Oracle has reached the configured limit of open cursors
for that executable or user session. There are two kinds of open cursors:
implicit and explicit. Here is some background on how cursors work.

To process a SQL statement, Oracle opens a work area called a private SQL area, which stores the information needed to execute the statement. Cursors are stored in this area to keep track of that information. An IMPLICIT cursor is declared for every data definition and data manipulation statement; these are internal to Oracle. For queries that return more than one row, you must declare an EXPLICIT cursor to retrieve all the information. Explicit cursors are easier to tune because you decide when to open and close them.

Implicit cursors are harder to tune because they are internal to Oracle. Careful application tuning can cut down the number of implicit cursors opened.

Keep in mind that the maximum number of allowed cursors is per session, not per instance.


There are two ways to work around the ORA-01000 error: you can tune cursor usage at the database level and at the application level.

Tuning at the DATABASE LEVEL

There is a parameter you can set in the init.ora that determines the number of cursors a user can open in a session: OPEN_CURSORS.

OPEN_CURSORS defaults to 50, which is usually not high enough. The highest value you can set is operating system dependent. To solve the ORA-01000 error, set OPEN_CURSORS to a higher number; you may need to set it to the operating system limit.

In many systems it is set to 1000 (or even more - 3000, 50000) without any problem. But setting this parameter above 1000 should be questioned: if such a need exists, it usually means something in the application is causing the error, and the application code should be revised instead of raising the parameter further. Even values above 300 could be considered large.

Consequences of changing this parameter:
It does not affect performance in any way, but Oracle will need a little more memory to store the cursors. It affects only the memory used, not the resources Oracle needs to support the increased value.


Tuning at the APPLICATION LEVEL

The development team is responsible for tuning at this level.
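A typical application-level cause of ORA-01000 is statements that are opened but never closed: in JDBC, for example, each open Statement holds an open cursor in the session. The sketch below uses a small stand-in class instead of a real Oracle connection (so it runs anywhere); the open/close pattern is the same one you would apply to real Connection, Statement and ResultSet objects:

```java
// Illustration of the cursor-leak pattern behind many ORA-01000 errors.
// FakeStatement stands in for java.sql.Statement: constructing it takes
// one "cursor" slot, close() releases it.
public class CursorLeakDemo {
    static int openCursors = 0;

    static class FakeStatement {
        FakeStatement() { openCursors++; }
        void close() { openCursors--; }
    }

    public static void main(String[] args) {
        // Leaky style: statements are created in a loop and never closed,
        // so the session's open-cursor count only grows.
        for (int i = 0; i < 3; i++) {
            FakeStatement stmt = new FakeStatement();
            // ... execute the query, read the results ...
        }
        System.out.println("after leaky loop: " + openCursors);   // 3

        openCursors = 0;
        // Correct style: close every statement in a finally block, so the
        // cursor is released even if the query throws an exception.
        for (int i = 0; i < 3; i++) {
            FakeStatement stmt = new FakeStatement();
            try {
                // ... execute the query, read the results ...
            } finally {
                stmt.close();
            }
        }
        System.out.println("after closing loop: " + openCursors); // 0
    }
}
```

With this discipline in place, a session's open-cursor count stays bounded by the number of statements genuinely in use, instead of growing with every executed query.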

Monitoring and detecting the underlying problem

The Oracle DBA must try to identify and localize the problem, and if he or she cannot solve it alone, report it to the development team with all the needed information: frequency of the error, any regularity in its appearance, observed behavior, results from executed scripts, and trace files. This will help the problem to be identified and solved quickly if it comes from the application software.

Using scripts
To resolve the problem, a DBA can use several scripts that will help in this situation.


Start tracing

For a deeper investigation, you can trace the problem session or the whole instance in order to identify the cause. For more details about these actions, see the Tracing section under the Advanced Topics subject below.



4. Advanced Topics
4.1. Tracing.
4.2. PL/SQL Cursor Sharing
4.2.1. Simple schema of PL/SQL and session cursor sharing
4.2.2. Oracle and PL/SQL Cursor Sharing
4.2.3. PL/SQL Cached Cursors vs. Session Cached Cursors
4.3. Cursor reuse in PL/SQL static SQL
4.4. Connection pooling in MOD_PLSQL
4.4.1. Introduction
4.4.2 Connection Pooling
4.4.3. Closing Pooled Database Sessions
4.4.4. Connection Pooling and Oracle HTTP Server Configuration
4.4.5. Tuning the Number of Database Sessions


You can download the full document from this URL:

I just want to note that the document was prepared for a PL/SQL web-based application, which is why it explains "MOD_PLSQL connection pooling" and "DAD performance configuration" at the end. You can skip that section if you are not interested in the subject.

I hope this document can help DBAs when they face this specific error. It doesn't give a "silver bullet" solution, but it can make it easier to find the root of the problem and resolve it.

About me

  • » I'm Radoslav Rusinov
  • » From Sofia, Bulgaria
  • » Employer TechnoLogica Ltd.
  • » I work as a Database Consultant in Sofia, Bulgaria. My main professional interests are in the database area, especially the Oracle RDBMS, including database design, development, security and administration.
  • » The views expressed on this blog are my own and do not necessarily reflect the views of my employing company and its affiliates
  • » My profile
