Saturday, October 10, 2009

Generating data horizontally with a query

SQL> CREATE TABLE loop_table
     (
       empid      NUMBER,
       emp_time   DATE,
       empstatus  NUMBER
     );


SQL> select * from loop_table;

     EMPID EMP_TIME   EMPSTATUS
---------- --------- ----------
         1 01-JAN-08        800
         1 02-JAN-08        588
         1 03-JAN-08        478
         1 04-JAN-08        123
         1 05-JAN-08        789
         2 01-JAN-08        111
         2 02-JAN-08        222
         2 03-JAN-08        555
         2 04-JAN-08        333
         2 05-JAN-08        444

10 rows selected.

SQL> select empid
2 , max(decode(emp_time, '01-JAN-08', empstatus)) "JAN 1ST"
3 , max(decode(emp_time, '02-JAN-08', empstatus)) "JAN 2ND"
4 , max(decode(emp_time, '03-JAN-08', empstatus)) "JAN 3RD"
5 , max(decode(emp_time, '04-JAN-08', empstatus)) "JAN 4TH"
6 , max(decode(emp_time, '05-JAN-08', empstatus)) "JAN 5TH"
7 from (select * from loop_table)
8 group by empid;

     EMPID    JAN 1ST    JAN 2ND    JAN 3RD    JAN 4TH    JAN 5TH
---------- ---------- ---------- ---------- ---------- ----------
         1        800        588        478        123        789
         2        111        222        555        333        444
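
Note that DECODE above compares the DATE column emp_time with the string '01-JAN-08', which relies on an implicit conversion and therefore on the session's NLS_DATE_FORMAT and NLS_DATE_LANGUAGE settings. A minimal sketch of a safer variant, assuming the same LOOP_TABLE, makes the conversion explicit:

select empid
     , max(decode(to_char(emp_time, 'DD-MON-RR'), '01-JAN-08', empstatus)) "JAN 1ST"
     , max(decode(to_char(emp_time, 'DD-MON-RR'), '02-JAN-08', empstatus)) "JAN 2ND"
     , max(decode(to_char(emp_time, 'DD-MON-RR'), '03-JAN-08', empstatus)) "JAN 3RD"
     , max(decode(to_char(emp_time, 'DD-MON-RR'), '04-JAN-08', empstatus)) "JAN 4TH"
     , max(decode(to_char(emp_time, 'DD-MON-RR'), '05-JAN-08', empstatus)) "JAN 5TH"
  from loop_table
 group by empid;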

Spell out a number in words

A function to spell out a number in words (using the 'Jsp' date format model)
==============================================================================

CREATE OR REPLACE FUNCTION spell_out (p_arg IN NUMBER)
   RETURN VARCHAR2 IS
   v_result         VARCHAR2 (2000);
   v_length         NUMBER;
   v_less_lakh      VARCHAR2 (100);
   v_crore          VARCHAR2 (200);
   v_length_crore   NUMBER;
BEGIN
   v_length := LENGTH (p_arg);

   IF p_arg <= 99999 THEN
      v_result := TO_CHAR (TO_DATE (p_arg, 'j'), 'Jsp') || ' Taka Only';
   ELSIF v_length = 6 THEN
      v_less_lakh :=
         TO_CHAR (TO_DATE (SUBSTR (p_arg, 2), 'j'), 'Jsp') || ' Taka Only';
      v_result :=
         TO_CHAR (TO_DATE (SUBSTR (p_arg, 1, 1), 'j'), 'Jsp') || ' Lakh';
      v_result := v_result || ' ' || v_less_lakh;
   ELSIF v_length = 7 THEN
      v_less_lakh :=
         TO_CHAR (TO_DATE (SUBSTR (p_arg, 3), 'j'), 'Jsp') || ' Taka Only';
      v_result :=
         TO_CHAR (TO_DATE (SUBSTR (p_arg, 1, 2), 'j'), 'Jsp') || ' Lakh';
      v_result := v_result || ' ' || v_less_lakh;
   ELSIF v_length > 7 THEN
      v_length_crore := v_length - 7;
      v_crore :=
         TO_CHAR (TO_DATE (SUBSTR (p_arg, 1, v_length_crore), 'j'), 'Jsp')
         || ' Crore';
      v_result :=
         TO_CHAR (TO_DATE (SUBSTR (p_arg, v_length_crore + 1, 2), 'j'), 'Jsp')
         || ' Lakh';
      v_less_lakh :=
         TO_CHAR (TO_DATE (SUBSTR (p_arg, v_length_crore + 3), 'j'), 'Jsp')
         || ' Taka Only';
      v_result := v_crore || ' ' || v_result || ' ' || v_less_lakh;
   END IF;

   RETURN v_result;
END;
/
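
A quick sanity check of the function; the spelling and capitalization come from the 'Jsp' format model, so the output should look like this:

SQL> SELECT spell_out (1234567) FROM dual;

SPELL_OUT(1234567)
--------------------------------------------------------------------
Twelve Lakh Thirty-Four Thousand Five Hundred Sixty-Seven Taka Only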


The building blocks can be tested individually from SQL*Plus:

select to_char(to_date(substr(:p_arg, 1, :v_length_crore), 'j'), 'Jsp') || ' Crore' from dual;

select to_char(to_date(:p_arg, 'j'), 'Jsp') || ' Taka Only' from dual;

select to_char(to_date(:p_arg, 'j'), 'Jsp') from dual;

Redo Log Files Sizing 10g

Redo Log Files Sizing
=====================
The size of the redo log files can influence performance, because the behavior of the
database writer and archiver processes depends on the redo log sizes. Generally, larger
redo log files provide better performance. Undersized log files increase checkpoint
activity and reduce performance.

Although the size of the redo log files does not affect LGWR performance, it can affect
DBWR and checkpoint behavior. Checkpoint frequency is affected by several factors,
including log file size and the setting of the FAST_START_MTTR_TARGET initialization
parameter. If the FAST_START_MTTR_TARGET parameter is set to limit the instance
recovery time, Oracle automatically tries to checkpoint as frequently as necessary.
Under this condition, the size of the log files should be large enough to avoid
additional checkpointing due to undersized log files. The optimal size can be obtained
by querying the OPTIMAL_LOGFILE_SIZE column from the V$INSTANCE_RECOVERY view. You can
also obtain sizing advice on the Redo Log Groups page of Oracle Enterprise Manager
Database Control.

It may not always be possible to provide a specific size recommendation for redo log
files, but redo log files in the range of a hundred megabytes to a few gigabytes are
considered reasonable. Size your online redo log files according to the amount of redo your system generates. A rough guide is to switch logs at most once every twenty
minutes.
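
To check the advised size against the current log files (OPTIMAL_LOGFILE_SIZE is reported in megabytes and is only meaningful when FAST_START_MTTR_TARGET is set):

SELECT optimal_logfile_size
  FROM v$instance_recovery;

SELECT group#, bytes / 1024 / 1024 AS size_mb
  FROM v$log;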

Configuring Undo Space

Configuring Undo Space
======================
Oracle needs undo space to keep information for read consistency, for recovery, and
for actual rollback statements. This information is kept in one or more undo
tablespaces.

Oracle provides automatic undo management, which completely automates the
management of undo data. A database running in automatic undo management mode
transparently creates and manages undo segments. Oracle Corporation strongly
recommends using automatic undo management, because it significantly simplifies
database management and removes the need for any manual tuning of undo (rollback)
segments. Manual undo management using rollback segments is supported for
backward compatibility reasons.

Adding the UNDO TABLESPACE clause in the CREATE DATABASE statement sets up the
undo tablespace. Set the UNDO_MANAGEMENT initialization parameter to AUTO to
operate your database in automatic undo management mode.

The V$UNDOSTAT view contains statistics for monitoring and tuning undo space.
Using this view, you can better estimate the amount of undo space required for the
current workload. Oracle also uses this information to help tune undo usage in the
system. The V$ROLLSTAT view contains information about the behavior of the undo
segments in the undo tablespace.
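
For example, recent undo activity can be sampled from V$UNDOSTAT, where each row summarizes a ten-minute interval:

SELECT begin_time, end_time, undoblks, maxquerylen
  FROM v$undostat
 WHERE ROWNUM <= 10;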

Important Initialization Parameters With Performance Impact in 10g

COMPATIBLE :=
Specifies the release with which the Oracle server must maintain compatibility. It lets you take advantage of the maintenance improvements of a new release immediately in your production systems without testing the new functionality
in your environment. If your application was designed for a specific release of Oracle, and you are actually installing a later release, then you might want to set this parameter to the version of the previous release.

DB_BLOCK_SIZE :=
Sets the size of the Oracle database blocks stored in the
database files and cached in the SGA. The range of values depends on the operating system, but it is typically 8192 for transaction processing systems and higher values for database warehouse systems.

SGA_TARGET :=
Specifies the total size of all SGA components. If SGA_TARGET
is specified, then the buffer cache (DB_CACHE_SIZE), Java pool (JAVA_POOL_SIZE), large pool (LARGE_POOL_SIZE), and shared pool (SHARED_POOL_SIZE) memory pools are
automatically sized.

PGA_AGGREGATE_TARGET :=
Specifies the target aggregate PGA memory available to all
server processes attached to the instance.

PROCESSES :=
Sets the maximum number of processes that can be started by
that instance. This is the most important primary parameter to set, because many other parameter values are deduced from this.

SESSIONS :=
This is set by default from the value of processes. However, if
you are using the shared server, then the deduced value is likely to be insufficient.

UNDO_MANAGEMENT:=
Specifies which undo space management mode the system
should use. AUTO mode is recommended.

UNDO_TABLESPACE :=
Specifies the undo tablespace to be used when an instance
starts up.
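
As an illustration, these parameters can be inspected and, in many cases, changed online; the values below are placeholders, not recommendations:

SHOW PARAMETER sga_target

-- SGA_TARGET can be raised online only up to SGA_MAX_SIZE
ALTER SYSTEM SET sga_target = 1G SCOPE = BOTH;
ALTER SYSTEM SET pga_aggregate_target = 200M SCOPE = BOTH;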

Important Initialization Parameters Without Performance Impact in 10g

Necessary Initialization Parameters Without Performance Impact
==============================================================

DB_NAME := Name of the database. This should match the ORACLE_SID environment
variable.
DB_DOMAIN := Location of the database in Internet dot notation.

OPEN_CURSORS := Limit on the maximum number of cursors (active SQL
statements) for each session. The setting is
application-dependent; 500 is recommended.

CONTROL_FILES := Set to contain at least two files on different disk drives to
prevent failures from control file loss.
DB_FILES := Set to the maximum number of files that can be assigned to the
database.

Oracle Performance Improvement Method

Oracle Performance Improvement Method
========================================

1. Perform the following initial standard checks:
--------------------------------------------------
A. Get candid feedback from users. Determine the performance project's scope
   and subsequent performance goals, as well as performance goals for the
   future. This process is key in future capacity planning.

B. Get a full set of operating system, database, and application statistics from
   the system when the performance is both good and bad. If these are not
   available, then get whatever is available. Missing statistics are analogous to
   missing evidence at a crime scene: they make detectives work harder and the
   investigation more time-consuming.

C. Sanity-check the operating systems of all systems involved with user
   performance. By sanity-checking the operating system, you look for hardware
   or operating system resources that are fully utilized. List any over-used
   resources as symptoms for analysis later. In addition, check that all hardware
   shows no errors or diagnostics.

2. Check for the top ten most common mistakes with Oracle, and determine if any of
these are likely to be the problem. List these as symptoms for later analysis. These
are included because they represent the most likely problems. ADDM
automatically detects and reports nine of these top ten issues.

3. Build a conceptual model of what is happening on the system using the symptoms
as clues to understand what caused the performance problems.

4. Propose a series of remedy actions and the anticipated behavior to the system,
then apply them in the order that can benefit the application the most. ADDM
produces recommendations each with an expected benefit. A golden rule in
performance work is that you only change one thing at a time and then measure
the differences. Unfortunately, system downtime requirements might prohibit
such a rigorous investigation method. If multiple changes are applied at the same
time, then try to ensure that they are isolated so that the effects of each change can be independently validated.

5. Validate that the changes made have had the desired effect, and see if the user's
perception of performance has improved. Otherwise, look for more bottlenecks,
and continue refining the conceptual model until your understanding of the
application becomes more accurate.

6. Repeat the last three steps until performance goals are met or become impossible
due to other constraints.

This method identifies the biggest bottleneck and uses an objective approach to
performance improvement. The focus is on making large performance improvements
by increasing application efficiency and eliminating resource shortages and
bottlenecks. In this process, it is anticipated that minimal (less than 10%) performance gains are made from instance tuning, and large gains (100% +) are made from isolating application inefficiencies.


The following analysis assumes that statistics for both the operating system and the database have been gathered.

1. Is the response time/batch run time acceptable for a single user on an empty or
lightly loaded system?
If it is not acceptable, then the application is probably not coded or designed
optimally, and it will never be acceptable in a multiple user situation when system
resources are shared. In this case, get application internal statistics, and get SQL
Trace and SQL plan information. Work with developers to investigate problems in
data, index, transaction SQL design, and potential deferral of work to
batch/background processing.

2. Is all the CPU being utilized?
If the kernel utilization is over 40%, then investigate the operating system for
network transfers, paging, swapping, or process thrashing. Otherwise, move on to
CPU utilization in user space. Check to see if there are any non-database jobs
consuming CPU on the system limiting the amount of shared CPU resources, such
as backups, file transforms, print queues, and so on. After determining that the
database is using most of the CPU, investigate the top SQL by CPU utilization.
These statements form the basis of all future analysis. Check the SQL and the
transactions submitting the SQL for optimal execution. Oracle provides CPU
statistics in V$SQL and V$SQLSTATS.
If the application is optimal and there are no inefficiencies in the SQL execution,
consider rescheduling some work to off-peak hours or using a bigger system.
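
A sketch of pulling the top SQL by CPU from V$SQLSTATS:

SELECT sql_id, cpu_time, buffer_gets, executions
  FROM (SELECT sql_id, cpu_time, buffer_gets, executions
          FROM v$sqlstats
         ORDER BY cpu_time DESC)
 WHERE ROWNUM <= 10;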

3. At this point, the system performance is unsatisfactory, yet the CPU resources are
not fully utilized.
In this case, you have serialization and unscalable behavior within the server. Get the WAIT_EVENTS statistics from the server, and determine the biggest
serialization point. If there are no serialization points, then the problem is most
likely outside the database, and this should be the focus of investigation.
Elimination of WAIT_EVENTS involves modifying application SQL and tuning
database parameters. This process is very iterative and requires the ability to drill
down on the WAIT_EVENTS systematically to eliminate serialization points.

Modify All the Data Needed in One Statement

Modify All the Data Needed in One Statement
===========================================

When possible, use array processing. This means that an array of bind variable values
is passed to Oracle for repeated execution. This is appropriate for iterative processes in which multiple rows of a set are subject to the same operation.

For example:

BEGIN
   FOR pos_rec IN (SELECT *
                     FROM order_positions
                    WHERE order_id = :id) LOOP
      DELETE FROM order_positions
       WHERE order_id = pos_rec.order_id
         AND order_position = pos_rec.order_position;
   END LOOP;

   DELETE FROM orders
    WHERE order_id = :id;
END;


Alternatively, you could define a cascading constraint on orders. In the previous
example, one SELECT and n DELETEs are executed. When a user issues the DELETE on
orders DELETE FROM orders WHERE order_id = :id, the database automatically
deletes the positions with a single DELETE statement.

Alternatively, you can use bulk-binding features such as BULK COLLECT and FORALL
(a sketch follows the link below); see:

http://halimdba.blogspot.com/2009/09/how-to-increase-performance-of-delete.html
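
A minimal BULK COLLECT/FORALL sketch of the same delete, with a local l_order_id standing in for the bind variable :id (when no per-row logic is needed, a single set-based DELETE FROM order_positions WHERE order_id = :id is simpler still):

DECLARE
   l_order_id    orders.order_id%TYPE := 101;   -- hypothetical order id

   TYPE t_pos_tab IS TABLE OF order_positions.order_position%TYPE;
   l_positions   t_pos_tab;
BEGIN
   -- fetch all position keys in one round trip
   SELECT order_position
     BULK COLLECT INTO l_positions
     FROM order_positions
    WHERE order_id = l_order_id;

   -- delete them with one bulk-bound statement instead of a row-by-row loop
   FORALL i IN 1 .. l_positions.COUNT
      DELETE FROM order_positions
       WHERE order_id = l_order_id
         AND order_position = l_positions (i);

   DELETE FROM orders
    WHERE order_id = l_order_id;
END;
/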

Visiting Data as Few Times as Possible

Visiting Data as Few Times as Possible
======================================

Applications should try to access each row only once. This reduces network traffic and
reduces database load. Consider doing the following:

■ Combine Multiple Scans with CASE Statements
■ Use DML with RETURNING Clause
■ Modify All the Data Needed in One Statement

Combine Multiple Scans with CASE Statements
=============================================
Often, it is necessary to calculate different aggregates on various sets of tables. Usually, this is done with multiple scans on the table, but it is easy to calculate all the aggregates with one single scan. Eliminating n-1 scans can greatly improve
performance.

Combining multiple scans into one scan can be done by moving the WHERE condition
of each scan into a CASE statement, which filters the data for the aggregation. For
each aggregation, there could be another column that retrieves the data.

The following example asks for the count of all employees who earn less than 2000,
between 2000 and 4000, and more than 4000 each month. This can be done with three
separate queries:

SELECT COUNT (*)
  FROM employees
 WHERE salary < 2000;

SELECT COUNT (*)
  FROM employees
 WHERE salary BETWEEN 2000 AND 4000;

SELECT COUNT (*)
  FROM employees
 WHERE salary > 4000;

However, it is more efficient to run the entire query in a single statement. Each number is calculated as one column. The count uses a filter with the CASE statement to count only the rows where the condition is valid.

For example:
SELECT COUNT (CASE WHEN salary < 2000
                   THEN 1 ELSE NULL END) count1,
       COUNT (CASE WHEN salary BETWEEN 2001 AND 4000
                   THEN 1 ELSE NULL END) count2,
       COUNT (CASE WHEN salary > 4000
                   THEN 1 ELSE NULL END) count3
  FROM employees;

This is a very simple example. The ranges could be overlapping, the functions for the
aggregates could be different, and so on.

Use DML with RETURNING Clause
==============================

When appropriate, use INSERT, UPDATE, or DELETE... RETURNING to select and
modify data with a single call. This technique improves performance by reducing the
number of calls to the database.
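
A short sketch, assuming an HR-style EMPLOYEES table:

DECLARE
   l_new_salary   employees.salary%TYPE;
BEGIN
   -- change the row and read the new value back in the same call
   UPDATE employees
      SET salary = salary * 1.10
    WHERE employee_id = 100
   RETURNING salary INTO l_new_salary;

   DBMS_OUTPUT.put_line ('New salary: ' || l_new_salary);
END;
/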

Structuring the Indexes

Structuring the Indexes
=======================

Often, there is a beneficial impact on performance by restructuring indexes. This can
involve the following:
■ Remove nonselective indexes to speed the DML.
■ Index performance-critical access paths.
■ Consider reordering columns in existing concatenated indexes.
■ Add columns to the index to improve selectivity.

Do not use indexes as a panacea. Application developers sometimes think that
performance will improve if they create more indexes. If a single programmer creates
an appropriate index, then this might indeed improve the application's performance.
However, if 50 programmers each create an index, then application performance will
probably be hampered.


Modifying or Disabling Triggers and Constraints
===============================================

Using triggers consumes system resources. If you use too many triggers, then you can
find that performance is adversely affected and you might need to modify or disable
them.


Restructuring the Data
=======================
After restructuring the indexes and the statement, you can consider restructuring the
data.
■ Introduce derived values. Avoid GROUP BY in response-critical code.
■ Review your data design. Change the design of your system if it can improve performance.
■ Consider partitioning, if appropriate.


Maintaining Execution Plans Over Time
====================================
You can maintain the existing execution plan of SQL statements over time either using
stored statistics or stored SQL execution plans. Storing optimizer statistics for tables will apply to all SQL statements that refer to those tables. Storing an execution plan (that is, plan stability) maintains the plan for a single SQL statement. If both statistics and a stored plan are available for a SQL statement, then the optimizer uses the stored plan.

Structuring the SQL Statements

Structuring the SQL Statements
==============================

Often, rewriting an inefficient SQL statement is easier than modifying it. If you
understand the purpose of a given statement, then you might be able to quickly and
easily write a new statement that meets the requirement.

1. Compose Predicates Using AND and =
-------------------------------------------------
To improve SQL efficiency, use equijoins whenever possible. Statements that perform
equijoins on untransformed column values are the easiest to tune.

Avoid Transformed Columns in the WHERE Clause
--------------------------------------------------

Use untransformed column values.
---------------------------------------

For example, use:

WHERE a.order_no = b.order_no

rather than:

WHERE TO_NUMBER (SUBSTR (a.order_no, INSTR (b.order_no, '.') - 1))
    = TO_NUMBER (SUBSTR (b.order_no, INSTR (b.order_no, '.') - 1))

Do not use SQL functions in predicate clauses or WHERE clauses.
-----------------------------------------------------------------
Any expression using a column, such as a function having the column as its argument, causes the optimizer to ignore the possibility of using an index on that column, even a unique index, unless there is a function-based index defined that can be used.

Avoid mixed-mode expressions, and beware of implicit type conversions.
---------------------------------------------------------
Suppose you want to use an index on the VARCHAR2 column charcol, but the WHERE
clause looks like this:

AND charcol = numexpr
where numexpr is an expression of number type (for example, 1,
USERENV('SESSIONID'), numcol, numcol+0,...),

Oracle translates that expression into:

AND TO_NUMBER(charcol) = numexpr
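
which disables any ordinary index on charcol. The usual fix is to move the conversion to the other side of the predicate:

AND charcol = TO_CHAR(numexpr)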

Avoid the following kinds of complex expressions:
----------------------------------------------

■ col1 = NVL (:b1,col1)
■ NVL (col1,-999) = ….
■ TO_DATE(), TO_NUMBER(), and so on

These expressions prevent the optimizer from assigning valid cardinality or
selectivity estimates and can in turn affect the overall plan and the join method.
Add the predicate explicitly instead of using the NVL() technique.

For example:

SELECT employee_num, full_name Name, employee_id
FROM mtl_employees_current_view
WHERE (employee_num = NVL (:b1,employee_num)) AND (organization_id=:1)
ORDER BY employee_num;

Also:

SELECT employee_num, full_name Name, employee_id
FROM mtl_employees_current_view
WHERE (employee_num = :b1) AND (organization_id=:1)
ORDER BY employee_num;

When you need to use SQL functions on filters or join predicates, do not use them on the columns on which you want to have an index; rather, use them on the opposite side of the predicate, as in the following statement:

TO_CHAR(numcol) = varcol

rather than

varcol = TO_CHAR(numcol)

===================================================
2. Write Separate SQL Statements for Specific Tasks
===================================================

SQL is not a procedural language. Using one piece of SQL to do many different things
usually results in a less-than-optimal result for each task. If you want SQL to
accomplish different things, then write various statements, rather than writing one
statement to do different things depending on the parameters you give it.

[ Note: Oracle Forms and Reports are powerful development tools
that allow application logic to be coded using PL/SQL (triggers or
program units). This helps reduce the complexity of SQL by
allowing complex logic to be handled in the Forms or Reports. You
can also invoke a server side PL/SQL package that performs the
few SQL statements in place of a single large complex SQL
statement. Because the package is a server-side unit, there are no
issues surrounding client to database round-trips and network
traffic.]

It is always better to write separate SQL statements for different tasks, but if you must use one SQL statement, then you can make a very complex statement slightly less
complex by using the UNION ALL operator.

Optimization (determining the execution plan) takes place before the database knows
what values will be substituted into the query. An execution plan cannot, therefore,
depend on what those values are. For example:

SELECT info
  FROM tables
 WHERE ...
   AND somecolumn BETWEEN DECODE (:loval, 'ALL', somecolumn, :loval)
                      AND DECODE (:hival, 'ALL', somecolumn, :hival);

Written as shown, the database cannot use an index on the somecolumn column,
because the expression involving that column uses the same column on both sides of
the BETWEEN.

This is not a problem if there is some other highly selective, indexable condition you can use to access the driving table. Often, however, this is not the case. Frequently, you might want to use an index on a condition like that shown but need to know the values of :loval, and so on, in advance. With this information, you can rule out the ALL case, which should not use the index.

If you want to use the index whenever real values are given for :loval and :hival (if
you expect narrow ranges, even ranges where :loval often equals :hival), then you
can rewrite the example in the following logically equivalent form:

SELECT /* change this half of UNION ALL if other half changes */ info
  FROM tables
 WHERE ...
   AND somecolumn BETWEEN :loval AND :hival
   AND (:hival != 'ALL' AND :loval != 'ALL')
UNION ALL
SELECT /* Change this half of UNION ALL if other half changes. */ info
  FROM tables
 WHERE ...
   AND (:hival = 'ALL' OR :loval = 'ALL');

If you run EXPLAIN PLAN on the new query, then you seem to get both a desirable and
an undesirable execution plan. However, the first condition the database evaluates for either half of the UNION ALL is the combined condition on whether :hival and
:loval are ALL. The database evaluates this condition before actually getting any
rows from the execution plan for that part of the query.

When the condition comes back false for one part of the UNION ALL query, that part is
not evaluated further. Only the part of the execution plan that is optimum for the
values provided is actually carried out. Because the final conditions on :hival and
:loval are guaranteed to be mutually exclusive, only one half of the UNION ALL
actually returns rows. (The ALL in UNION ALL is logically valid because of this
exclusivity. It allows the plan to be carried out without an expensive sort to rule out duplicate rows for the two halves of the query.)

=========================================
3. Use of EXISTS versus IN for Subqueries
=========================================

In certain circumstances, it is better to use IN rather than EXISTS. In general, if the selective predicate is in the subquery, then use IN. If the selective predicate is in the parent query, then use EXISTS.

[Note: This discussion is most applicable in an OLTP environment,
where the access paths either to the parent SQL or subquery are
through indexed columns with high selectivity. In a DSS
environment, there can be low selectivity in the parent SQL or
subquery, and there might not be any indexes on the join columns.
In a DSS environment, consider using semijoins for the EXISTS
case.]

Sometimes, Oracle can rewrite a subquery when used with an IN clause to take
advantage of selectivity specified in the subquery. This is most beneficial when the
most selective filter appears in the subquery and there are indexes on the join columns. Conversely, using EXISTS is beneficial when the most selective filter is in the parent query. This allows the selective predicates in the parent query to be applied before filtering the rows against the EXISTS criteria.

[ Note: You should verify the optimizer cost of the statement with
the actual number of resources used (BUFFER_GETS, DISK_READS,
CPU_TIME from V$SQLSTATS or V$SQLAREA). Situations such as
data skew (without the use of histograms) can adversely affect the
optimizer's estimated cost for an operation. ]

"Example 1: Using IN - Selective Filters in the Subquery" and
"Example 2: Using EXISTS - Selective Predicate in the Parent"
are two examples that demonstrate the benefits of IN and EXISTS. Both examples use the same schema with the following characteristics:

■ There is a unique index on the employees.employee_id field.
■ There is an index on the orders.customer_id field.
■ There is an index on the employees.department_id field.
■ The employees table has 27,000 rows.
■ The orders table has 10,000 rows.
■ The OE and HR schemas, which own these segments, were both analyzed with
COMPUTE.

Example 1: Using IN - Selective Filters in the Subquery
--------------------------------------------------------

This example demonstrates how rewriting a query to use IN can improve
performance. This query identifies all employees who have placed orders on behalf
of customer 144.

The following SQL statement uses EXISTS:
---------------------------------------

SELECT /* EXISTS example */
       e.employee_id, e.first_name, e.last_name, e.salary
  FROM employees e
 WHERE EXISTS (SELECT 1 FROM orders o                  /* Note 1 */
                WHERE e.employee_id = o.sales_rep_id   /* Note 2 */
                  AND o.customer_id = 144);            /* Note 3 */

Notes:
■ Note 1: This shows the line containing EXISTS.
■ Note 2: This shows the line that makes the subquery a
correlated subquery.
■ Note 3: This shows the line where the correlated subqueries
include the highly selective predicate customer_id =
number.

Rewriting the statement using IN results in significantly fewer resources used.
--------------------------------------------------------
The SQL statement using IN:

SELECT /* IN example */
       e.employee_id, e.first_name, e.last_name, e.salary
  FROM employees e
 WHERE e.employee_id IN (SELECT o.sales_rep_id         /* Note 4 */
                           FROM orders o
                          WHERE o.customer_id = 144);  /* Note 3 */

Note:
■ Note 3: This shows the line where the correlated subqueries
include the highly selective predicate customer_id = number
■ Note 4: This indicates that an IN is being used. The subquery is
no longer correlated, because the IN clause replaces the join in
the subquery.


Example 2: Using EXISTS - Selective Predicate in the Parent
--------------------------------------------------------
This example demonstrates how rewriting a query to use EXISTS can improve performance. This query identifies all
employees from department 80 who are sales reps who have placed orders.

The following SQL statement uses IN:

SELECT /* IN example */
       e.employee_id, e.first_name, e.last_name, e.department_id, e.salary
  FROM employees e
 WHERE e.department_id = 80                            /* Note 5 */
   AND e.job_id = 'SA_REP'                             /* Note 6 */
   AND e.employee_id IN (SELECT o.sales_rep_id FROM orders o);  /* Note 4 */

Note:
■ Note 4: This indicates that an IN is being used. The subquery is
no longer correlated, because the IN clause replaces the join in the subquery.
■ Notes 5 and 6: These are the selective predicates in the parent SQL.

The following SQL statement uses EXISTS:

SELECT /* EXISTS example */
       e.employee_id, e.first_name, e.last_name, e.salary
  FROM employees e
 WHERE e.department_id = 80                            /* Note 5 */
   AND e.job_id = 'SA_REP'                             /* Note 6 */
   AND EXISTS (SELECT 1                                /* Note 1 */
                 FROM orders o
                WHERE e.employee_id = o.sales_rep_id); /* Note 2 */

Note:
■ Note 1: This shows the line containing EXISTS.
■ Note 2: This shows the line that makes the subquery a correlated subquery.
■ Note 5 & 6:These are the selective predicates in the parent SQL.

[Note: An even more efficient approach is to have a concatenated
index on department_id and job_id. This eliminates the need
to access two indexes and reduces the resources used.]
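
A sketch of that concatenated index (the index name is illustrative):

CREATE INDEX emp_dept_job_ix ON employees (department_id, job_id);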



========================================================
4. Controlling the Access Path and Join Order with Hints
========================================================

You can influence the optimizer's choices by setting the optimizer approach and goal,
and by gathering representative statistics for the query optimizer. Sometimes, the
application designer, who has more information about a particular application's data
than is available to the optimizer, can choose a more effective way to execute a SQL
statement. You can use hints in SQL statements to instruct the optimizer about how the statement should be executed.

Hints, such as /*+ FULL */, control access paths. For example:

SELECT /*+ FULL(e) */ e.last_name
FROM employees e
WHERE e.job_id = 'CLERK';

Join order can have a significant effect on performance. The main objective of SQL
tuning is to avoid performing unnecessary work to access rows that do not affect the
result. This leads to three general rules:

■ Avoid a full-table scan if it is more efficient to get the required rows through an
index.
■ Avoid using an index that fetches 10,000 rows from the driving table if you could
instead use another index that fetches 100 rows.
■ Choose the join order so as to join fewer rows to tables later in the join order.
The following example shows how to tune join order effectively:

SELECT info
  FROM taba a, tabb b, tabc c
 WHERE a.acol BETWEEN 100 AND 200
   AND b.bcol BETWEEN 10000 AND 20000
   AND c.ccol BETWEEN 10000 AND 20000
   AND a.key1 = b.key1
   AND a.key2 = c.key2;

1. Choose the driving table and the driving index (if any).
The first three conditions in the previous example are filter conditions applying to only a single table each. The last two conditions are join conditions.
Filter conditions dominate the choice of driving table and index. In general, the
driving table is the one containing the filter condition that eliminates the highest
percentage of the table. Thus, because the range of 100 to 200 is narrow compared
with the range of acol, but the ranges of 10000 and 20000 are relatively large,
taba is the driving table, all else being equal.
With nested loop joins, the joins all happen through the join indexes, the indexes
on the primary or foreign keys used to connect that table to an earlier table in the
join tree. Rarely do you use the indexes on the non-join conditions, except for the
driving table. Thus, after taba is chosen as the driving table, use the indexes on
b.key1 and c.key2 to drive into tabb and tabc, respectively.

2. Choose the best join order, driving to the best unused filters earliest.
The work of the following join can be reduced by first joining to the table with the
best still-unused filter. Thus, if "bcol BETWEEN ..." is more restrictive (rejects a
higher percentage of the rows seen) than "ccol BETWEEN ...", the last join can be
made easier (with fewer rows) if tabb is joined before tabc.

3. You can use the ORDERED or STAR hint to force the join order.
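
For the query above, forcing the FROM-clause order taba, tabb, tabc might look like this (a sketch; verify the resulting plan before relying on it):

SELECT /*+ ORDERED */ info
  FROM taba a, tabb b, tabc c
 WHERE a.acol BETWEEN 100 AND 200
   AND b.bcol BETWEEN 10000 AND 20000
   AND c.ccol BETWEEN 10000 AND 20000
   AND a.key1 = b.key1
   AND a.key2 = c.key2;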



==================================
5. Use Caution When Managing Views
==================================
Be careful when joining views, when performing outer joins to views, and when
reusing an existing view for a new purpose.

Use Caution When Joining Complex Views
=====================================
Joins to complex views are not recommended, particularly joins from one complex view to another. Often this results in the entire view being instantiated, and then the query is run against the view data. For example, the following statement creates a view that lists employees and departments:

CREATE OR REPLACE VIEW emp_dept AS
SELECT d.department_id, d.department_name, d.location_id,
       e.employee_id, e.last_name, e.first_name, e.salary, e.job_id
  FROM departments d, employees e
 WHERE e.department_id (+) = d.department_id;

The following query finds employees in a specified state:

SELECT v.last_name, v.first_name, l.state_province
  FROM locations l, emp_dept v
 WHERE l.state_province = 'California'
   AND v.location_id = l.location_id (+);

Do Not Recycle Views
=====================================
Beware of writing a view for one purpose and then using it for other purposes to which it might be ill-suited. Querying from a view requires all tables from the view to be accessed for the data to be returned. Before reusing a view, determine whether all tables in the view need to be accessed to return the data. If not, then do not use the view. Instead, use the base table(s), or if necessary, define a new view. The goal is to refer to the minimum number of tables and views necessary to return the required data.

Consider the following example:

SELECT department_name
FROM emp_dept
WHERE department_id = 10;

The entire view is first instantiated by performing a join of the employees and
departments tables and then aggregating the data. However, you can obtain
department_name and department_id directly from the departments table. It is
inefficient to obtain this information by querying the emp_dept view.
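
In this case, querying the base table directly avoids instantiating the view:

SELECT department_name
  FROM departments
 WHERE department_id = 10;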

Use Caution When Unnesting Subqueries
=======================================
Subquery unnesting merges the body of the subquery into the body of the statement that contains it, allowing the optimizer to consider them together when evaluating access paths and joins.

Use Caution When Performing Outer Joins to Views
================================================
In the case of an outer join to a multi-table view, the query optimizer (in Release 8.1.6 and later) can drive from an outer join column, if an equality predicate is defined on it.

An outer join within a view is problematic because the performance implications of the outer join are not visible.


Store Intermediate Results
==========================

Intermediate, or staging, tables are quite common in relational database systems,
because they temporarily store some intermediate results. In many applications they
are useful, but Oracle requires additional resources to create them. Always consider
whether the benefit they could bring is more than the cost to create them. Avoid
staging tables when the information is not reused multiple times.
Some additional considerations:

■ Storing intermediate results in staging tables could improve application
performance. In general, whenever an intermediate result is usable by multiple
following queries, it is worthwhile to store it in a staging table. Already at the
second usage of the intermediate result, the benefit of not running the complex
statement again outweighs the cost of materializing the result.

■ Long and complex queries are hard to understand and optimize. Staging tables
can break a complicated SQL statement into several smaller statements, and then
store the result of each step.

■ Consider using materialized views. These are precomputed tables comprising
aggregated or joined data from fact and possibly dimension tables.
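
A minimal materialized view sketch over the EMPLOYEES table (the name is illustrative; ENABLE QUERY REWRITE requires the QUERY REWRITE privilege and query rewrite enabled for the session):

CREATE MATERIALIZED VIEW dept_salary_mv
ENABLE QUERY REWRITE AS
   SELECT department_id, COUNT (*) emp_count, SUM (salary) total_salary
     FROM employees
    GROUP BY department_id;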

Reviewing the Execution Plan

Reviewing the Execution Plan
============================

When tuning (or writing) a SQL statement in an OLTP environment, the goal is to
drive from the table that has the most selective filter. This means that there are fewer rows passed to the next step. If the next step is a join, then this means that fewer rows are joined. Check to see whether the access paths are optimal.
When examining the optimizer execution plan, look for the following:

■ The plan is such that the driving table has the best filter.
■ The join order in each step means that the fewest number of rows are being
returned to the next step (that is, the join order should reflect, where possible,
going to the best not-yet-used filters).
■ The join method is appropriate for the number of rows being returned. For
example, nested loop joins through indexes may not be optimal when many rows
are being returned.
■ Views are used efficiently. Look at the SELECT list to see whether access to the
view is necessary.
■ There are any unintentional Cartesian products (even with small tables).
■ Each table is being accessed efficiently:
Consider the predicates in the SQL statement and the number of rows in the table.
Look for suspicious activity, such as full table scans on tables with a large
number of rows, which have predicates in the WHERE clause. Determine why an index
is not used for such a selective predicate.

A full table scan does not mean inefficiency. It might be more efficient to perform a full table scan on a small table, or to perform a full table scan to leverage a better join method (for example, a hash join) for the number of rows returned.

If any of these conditions are not optimal, then consider restructuring the SQL
statement or the indexes available on the tables.
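
A quick way to produce a plan to review against these points (the PLAN_TABLE exists by default in 10g):

EXPLAIN PLAN FOR
SELECT e.last_name
  FROM employees e
 WHERE e.department_id = 80;

SELECT * FROM TABLE (DBMS_XPLAN.DISPLAY);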

How to know which system privileges are assigned to a user

SELECT LPAD (' ', 2 * LEVEL) || granted_role "USER PRIVS"
  FROM (SELECT NULL grantee, username granted_role
          FROM dba_users
         WHERE username LIKE UPPER ('%&uname%')
        UNION
        SELECT grantee, granted_role
          FROM dba_role_privs
        UNION
        SELECT grantee, privilege
          FROM dba_sys_privs)
START WITH grantee IS NULL
CONNECT BY grantee = PRIOR granted_role;


Another one:
============

SELECT path
  FROM (SELECT grantee,
               SYS_CONNECT_BY_PATH (privilege, ':') || ':' || grantee path
          FROM (SELECT grantee, privilege, 0 role
                  FROM dba_sys_privs
                UNION ALL
                SELECT grantee, granted_role, 1 role
                  FROM dba_role_privs)
        CONNECT BY privilege = PRIOR grantee
        START WITH role = 0)
 WHERE grantee IN (SELECT username
                     FROM dba_users
                    WHERE lock_date IS NULL
                      AND password != 'EXTERNAL'
                      AND username != 'SYS')
    OR grantee = 'PUBLIC'
/

Disable SQL and SQL*Plus commands in the SQL*Plus Environment

Disable certain SQL and SQL*Plus commands in the SQL*Plus Environment
=====================================================================

SQL*Plus uses the PRODUCT_USER_PROFILE (PUP) table, a table in the SYSTEM account, to provide product-level security that supplements the user-level security provided by the SQL GRANT and REVOKE commands and user roles.

DBAs can use the PUP table to disable certain SQL and SQL*Plus commands in the SQL*Plus environment on a per-user basis. SQL*Plus—not Oracle Database—enforces this security. DBAs can even restrict access to the GRANT, REVOKE, and SET ROLE commands to control users' ability to change their database privileges.

SQL*Plus reads restrictions from the PUP table when a user logs in to SQL*Plus and maintains those restrictions for the duration of the session. Changes to the PUP table will only take effect the next time the affected users log in to SQL*Plus.
When SYSTEM, SYS, or a user authenticating with SYSDBA or SYSOPER privileges connects or logs in, SQL*Plus does not read the PUP table. Therefore, no restrictions apply to these users.

The PUP table applies only to the local database. If accessing objects on a remote database through a database link, the PUP table for the remote database does not apply. The remote database cannot extract the username and password from the database link in order to determine that user's profile and privileges.


PUP (SQLPLUS_PRODUCT_PROFILE) Table Script Location:
============================================================

The PUP table script can be found at the following path (here from a 10.2
installation on Windows):
G:\oracle\product\10.2.0\db_1\sqlplus\admin\pupbld.sql

Connect as the SYSTEM user to run the above script.

PUP table name: SQLPLUS_PRODUCT_PROFILE

PUP table columns:

PRODUCT VARCHAR2 (30) NOT NULL,
USERID VARCHAR2 (30),
ATTRIBUTE VARCHAR2 (240),
SCOPE VARCHAR2 (240),
NUMERIC_VALUE DECIMAL (15,2),
CHAR_VALUE VARCHAR2 (240),
DATE_VALUE DATE,
LONG_VALUE LONG


PUP Table Administration
========================================

The DBA username SYSTEM owns and has all privileges on the PUP table. Other Oracle Database usernames should have only SELECT access to this table, which enables a view of restrictions for that username and those restrictions assigned to PUBLIC. The script PUPBLD.SQL, when run, grants SELECT access on the PUP table to PUBLIC.

Disabling SQL*Plus, SQL, and PL/SQL Commands
============================================================

To disable a SQL or SQL*Plus command for a given user, insert a row containing the user's username in the Userid column, the command name in the Attribute column, and DISABLED in the Char_Value column. The Scope, Numeric_Value, and Date_Value columns should contain NULL. For example:

PRODUCT   USERID  ATTRIBUTE  SCOPE  NUMERIC_VALUE  CHAR_VALUE  DATE_VALUE  LONG_VALUE
--------  ------  ---------  -----  -------------  ----------  ----------  ----------
SQL*Plus  HR      HOST                             DISABLED
SQL*Plus  %       INSERT                           DISABLED
SQL*Plus  %       UPDATE                           DISABLED
SQL*Plus  %       DELETE                           DISABLED

Note: [To re-enable commands, delete the row containing the restriction.]

===================================
An example of the PASSWORD command:
===================================

This is an example of how to insert a row into the PUP table to restrict the user SCOTT from using the PASSWORD command:

1. Log in as SYSTEM with the command:

SQLPLUS SYSTEM/your_password

2. Insert a row into the PUP table with the command:

INSERT INTO PRODUCT_USER_PROFILE
VALUES ('SQL*Plus', 'SCOTT', 'PASSWORD', NULL, NULL, 'DISABLED', NULL, NULL);

3. Connect as SCOTT and try to run the PASSWORD command:

CONNECT SCOTT/your_password

SQL> PASSWORD

This command causes the following error message:

SP2-0544: Command PASSWORD disabled in Product User Profile

4. To delete this row and remove the restriction from the user SCOTT, connect again as SYSTEM and enter:

DELETE FROM PRODUCT_USER_PROFILE WHERE USERID = 'SCOTT';


Additional Information
======================

SQL*Plus Commands That Can Be Disabled
---------------------------------------
ACCEPT       DEFINE      PASSWORD   SHUTDOWN
APPEND       DEL         PAUSE      SPOOL
ARCHIVE LOG  DESCRIBE    PRINT      START (@, @@)
ATTRIBUTE    DISCONNECT  PROMPT     STARTUP
BREAK        EDIT        RECOVER    STORE
BTITLE       EXECUTE     REMARK     TIMING
CHANGE       EXIT/QUIT   REPFOOTER  TTITLE
CLEAR        GET         REPHEADER  UNDEFINE
COLUMN       HELP (?)    RUN        VARIABLE
COMPUTE      HOST        SAVE       WHENEVER OSERROR
CONNECT      INPUT       SET        WHENEVER SQLERROR
COPY         LIST (;)    SHOW


SQL Commands That Can Be Disabled
----------------------------------

ALTER      DELETE        MERGE      SET CONSTRAINTS
ANALYZE    DISASSOCIATE  NOAUDIT    SET ROLE
ASSOCIATE  DROP          PURGE      SET TRANSACTION
AUDIT      EXPLAIN       RENAME     TRUNCATE
CALL       FLASHBACK     REVOKE     UPDATE
COMMENT    GRANT         ROLLBACK   VALIDATE
COMMIT     INSERT        SAVEPOINT
CREATE     LOCK          SELECT

You can disable the following PL/SQL commands:
--------------------------------------------
BEGIN      DECLARE


Notes:
===============
• Disabling HOST disables the operating system alias for HOST, such as $ on Windows, and ! on UNIX.
• Disabling LIST disables ; and numbers (numbers entered to go to that line in a script).
• You must disable HELP and ? separately to disable access to command-line help.
• Disabling the SQL*Plus SET command also disables SQL SET CONSTRAINTS, SET ROLE, and SET TRANSACTION.
• Disabling SQL*Plus START also disables @ and @@.
• Disabling BEGIN and DECLARE does not prevent the use of SQL*Plus EXECUTE to run PL/SQL. EXECUTE must be disabled separately.
• Disabling EXIT/QUIT is not recommended. If disabled, terminate a command-line session by sending an EOF character such as Ctrl+D in UNIX or Ctrl+Z in Windows. Terminate a Windows GUI session with File > Exit. Otherwise, terminate a session by terminating the SQL*Plus process. If disabled, using EXIT/QUIT to terminate the currently running script in iSQL*Plus is also disabled. If disabled, the EXIT operation in WHENEVER OSERROR and WHENEVER SQLERROR is also disabled.

Friday, October 9, 2009

Verifying Optimizer Statistics

Verifying Optimizer Statistics
==============================

The query optimizer uses statistics gathered on tables and indexes when determining
the optimal execution plan. If these statistics have not been gathered, or if the statistics are no longer representative of the data stored within the database, then the optimizer does not have sufficient information to generate the best plan.

Things to check:
----------------
■ If you gather statistics for some tables in your database, then it is probably best to gather statistics for all tables. This is especially true if your application includes SQL statements that perform joins.

■ If the optimizer statistics in the data dictionary are no longer representative of
the data in the tables and indexes, then gather new statistics. One way to check
whether the dictionary statistics are stale is to compare the real cardinality (row
count) of a table to the value of DBA_TABLES.NUM_ROWS. Additionally, if there is
significant data skew on predicate columns, then consider using histograms.
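
For example, a staleness check for a single table might look like this (HR.EMPLOYEES is only an illustrative target):

SELECT COUNT (*) FROM hr.employees;

SELECT num_rows, last_analyzed
  FROM dba_tables
 WHERE owner = 'HR' AND table_name = 'EMPLOYEES';

-- regather if the two counts have drifted apart
EXEC DBMS_STATS.GATHER_TABLE_STATS ('HR', 'EMPLOYEES')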

Automatic SQL Tuning Features

Automatic SQL Tuning Features
=============================

Because the manual SQL tuning process poses many challenges to the application developer, the SQL tuning process has been automated by the automatic SQL tuning manageability features. These features have been designed to work equally well for OLTP and Data Warehouse type applications.

ADDM

Automatic Database Diagnostic Monitor (ADDM) analyzes the information collected by the AWR for possible performance problems with the Oracle database, including high-load SQL statements.

SQL Tuning Advisor

SQL Tuning Advisor allows a quick and efficient technique for optimizing SQL statements without modifying any statements.

SQL Tuning Sets

When multiple SQL statements are used as input to ADDM or SQL Tuning Advisor, a SQL Tuning Set (STS) is constructed and stored. The STS includes the set of SQL statements along with their associated execution context and basic execution statistics.

SQLAccess Advisor

In addition to the SQL Tuning Advisor, Oracle provides the SQLAccess Advisor that provides advice on materialized views, indexes, and materialized view logs. The SQLAccess Advisor helps you achieve your performance goals by recommending the proper set of materialized views, materialized view logs, and indexes for a given workload. In general, as the number of materialized views and indexes and the space allocated to them is increased, query performance improves. The SQLAccess Advisor considers the trade-offs between space usage and query performance and recommends the most cost-effective configuration of new and existing materialized views and indexes.

Gathering Data

Gathering Data on the SQL Identified
====================================

If you are most concerned with CPU, then examine the top SQL statements that
performed the most BUFFER_GETS during that interval. Otherwise, start with the SQL
statement that performed the most DISK_READS.

Information to Gather During Tuning
-----------------------------------
The tuning process begins by determining the structure of the underlying tables and
indexes. The information gathered includes the following:

1. Complete SQL text from V$SQLTEXT

2. Structure of the tables referenced in the SQL statement, usually by describing the
table in SQL*Plus

3. Definitions of any indexes (columns, column orderings), and whether the indexes
are unique or non-unique

4. Optimizer statistics for the segments (including the number of rows in each
table and the selectivity of the index columns), including the date when the
segments were last analyzed

5. Definitions of any views referred to in the SQL statement

6. Repeat steps two, three, and four for any tables referenced in the view definitions found in step five

7. Optimizer plan for the SQL statement (either from EXPLAIN PLAN, V$SQL_PLAN,
or the TKPROF output)

8. Any previous optimizer plans for that SQL statement.

Tuning an Application

Tuning an Application / Reducing Load
-------------------------------------

If your whole application is performing suboptimally, or if you are attempting to reduce the overall CPU or I/O load on the database server, then identifying resource-intensive SQL involves the following steps:

1. Determine which period in the day you would like to examine; typically this is the
application's peak processing time.

2. Gather operating system and Oracle statistics at the beginning and end of that period. The minimum of Oracle statistics gathered should be file I/O (V$FILESTAT), system statistics (V$SYSSTAT), and SQL statistics (V$SQLAREA, V$SQL or V$SQLSTATS, V$SQLTEXT, V$SQL_PLAN, and V$SQL_PLAN_STATISTICS).

3. Using the data collected in step two, identify the SQL statements using the most resources. A good way to identify candidate SQL statements is to query V$SQLSTATS. V$SQLSTATS contains resource usage information for all SQL statements in the shared pool. The data in V$SQLSTATS should be ordered by resource usage.

The most common resources are:

■ Buffer gets (V$SQLSTATS.BUFFER_GETS, for high CPU using statements)
■ Disk reads (V$SQLSTATS.DISK_READS, for high I/O statements)
■ Sorts (V$SQLSTATS.SORTS, for many sorts)

One method to identify which SQL statements are creating the highest load is to compare the resources used by a SQL statement to the total amount of that resource used in the period. For BUFFER_GETS, divide each SQL statement's BUFFER_GETS by the total number of buffer gets during the period. The total number of buffer gets in the system is available in the V$SYSSTAT table, for the statistic session logical reads.

Similarly, it is possible to apportion the percentage of disk reads a statement performs out of the total disk reads performed by the system by dividing V$SQLSTATS.DISK_READS by the value for the V$SYSSTAT statistic physical reads. The SQL sections of the Automatic Workload Repository report include this data, so you do not need to perform the percentage calculations manually.
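
A sketch of that calculation for the top statements by disk reads:

SELECT sql_id, disk_reads, pct_of_total
  FROM (SELECT s.sql_id, s.disk_reads,
               ROUND (100 * s.disk_reads / t.total_reads, 2) pct_of_total
          FROM v$sqlstats s,
               (SELECT value total_reads
                  FROM v$sysstat
                 WHERE name = 'physical reads') t
         ORDER BY s.disk_reads DESC)
 WHERE ROWNUM <= 10;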

After you have identified the candidate SQL statements, the next stage is to gather
information that is necessary to examine the statements and tune them.

Monday, October 5, 2009

TNS easy connection

TNS easy connection
=====================

1. sqlplus islbas/islbas@10.11.201.142/stlbas (if port is default 1521)

2. sqlplus islbas/islbas@10.11.201.142:1522/stlbas


3. connect to a URL using the Easy Connect Syntax

sqlplus islbas/islbas@//10.11.201.142:1521/stlbas
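
For reference, the same connection expressed as a classic tnsnames.ora entry (the alias name is arbitrary):

STLBAS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.11.201.142)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = stlbas))
  )

sqlplus islbas/islbas@STLBAS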

Sunday, October 4, 2009

How to insert a BLOB or image file into an Oracle table

To apply this example on Windows, first create an OS directory such as
"D:\halim_blob_dir" and place an image file in it, such as 'c.jpg';

then begin:

SQL*Plus: Release 10.2.0.1.0 - Production on Thu Sep 3 18:58:02 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.

SQL>
SQL> CONN / AS SYSDBA
Connected.
SQL>
SQL> grant dba to halim2 identified by halim2;

Grant succeeded.

SQL> conn halim2/halim2
Connected.
SQL>
SQL> drop directory halim_blob;
drop directory halim_blob
*
ERROR at line 1:
ORA-04043: object HALIM_BLOB does not exist


SQL> create directory halim_blob as 'D:\halim_blob_dir';

Directory created.

SQL> CREATE TABLE image_table (
2 dname VARCHAR2(30), -- directory name
3 sname VARCHAR2(30), -- subdirectory name
4 fname VARCHAR2(30), -- file name
5 iblob BLOB)
6 /

Table created.

SQL>
SQL>
SQL> CREATE OR REPLACE PROCEDURE halim_load_image_file (
2 dir_name VARCHAR2,
3 sub_dir_name VARCHAR2,
4 file_name VARCHAR2
5 ) IS
6 source_file BFILE;
7 destenation_file BLOB;
8 length_file BINARY_INTEGER;
9 BEGIN
10 source_file := BFILENAME ('HALIM_BLOB', file_name);
11
12 -- insert a NULL record to lock
13 INSERT INTO image_table
14 (dname, sname, fname, iblob
15 )
16 VALUES (dir_name, sub_dir_name, file_name, EMPTY_BLOB ()
17 )
18 RETURNING iblob
19 INTO destenation_file;
20
21 -- lock record
22 SELECT iblob
23 INTO destenation_file
24 FROM image_table
25 WHERE dname = dir_name AND sname = sub_dir_name AND fname = file_name
26 FOR UPDATE;
27
28 -- open the file
29 DBMS_LOB.fileopen (source_file, DBMS_LOB.file_readonly);
30 -- determine length
31 length_file := DBMS_LOB.getlength (source_file);
32 -- read the file
33 DBMS_LOB.loadfromfile (destenation_file, source_file, length_file);
34
35 -- update the blob field
36 UPDATE image_table
37 SET iblob = destenation_file
38 WHERE dname = dir_name AND sname = sub_dir_name AND fname = file_name;
39
40 -- close file
41 DBMS_LOB.fileclose (source_file);
42 END halim_load_image_file;
43 /

Procedure created.

SQL> EXEC halim_load_image_file('HALIM_BLOB','Image_test','c.jpg');

PL/SQL procedure successfully completed.

SQL> SELECT *
2 FROM image_table;
SP2-0678: Column or attribute type can not be displayed by SQL*Plus
SQL>
SQL>
SQL> DECLARE
2 image_file BLOB;
3 BEGIN
4 SELECT iblob
5 INTO image_file
6 FROM image_table;
7
8 DBMS_OUTPUT.put_line (DBMS_LOB.getlength (image_file));
9 END;
10 /

PL/SQL procedure successfully completed.

SQL> set serveroutput on
SQL>
SQL> ed
Wrote file afiedt.buf

1 DECLARE
2 image_file BLOB;
3 BEGIN
4 SELECT iblob
5 INTO image_file
6 FROM image_table;
7 DBMS_OUTPUT.put_line (DBMS_LOB.getlength (image_file));
8* END;
9 /

105542

PL/SQL procedure successfully completed.

SQL>
SQL>
SQL>

Bind variable in a parameter's list of values

Using a bind variable in a report parameter's list of values.

1. First, create a global temporary table.
The script for the global temporary table follows. The table definition is shared
by all users, while each session sees only its own rows.

CREATE GLOBAL TEMPORARY TABLE STGLOBTM
(
COLMN1 NUMBER(18,2),
COLMN2 NUMBER(18,2),
COLMN3 NUMBER(18,2),
COLMN4 NUMBER(18,4),
COLMN5 NUMBER(18,2),
COLMN6 NUMBER(18,2),
COLMN7 NUMBER(18,2),
COLMN8 NUMBER(18,2),
COLMN9 NUMBER(18,2),
COLMN10 NUMBER(18,2),
COLMN11 NUMBER(18,4),
COLMN12 NUMBER(18,2),
COLMN13 NUMBER(18,2),
COLMN14 NUMBER(18,2),
COLMN15 NUMBER(18,2),
COLMC1 VARCHAR2(25 BYTE),
COLMC2 VARCHAR2(25 BYTE),
COLMC3 VARCHAR2(25 BYTE),
COLMC4 VARCHAR2(50 BYTE),
COLMC5 VARCHAR2(50 BYTE),
COLMC6 VARCHAR2(50 BYTE),
COLMC7 VARCHAR2(200 BYTE),
COLMC8 VARCHAR2(200 BYTE),
COLMC9 VARCHAR2(200 BYTE),
COLMC10 VARCHAR2(50 BYTE),
COLMC11 VARCHAR2(50 BYTE),
COLMC12 VARCHAR2(50 BYTE),
COLMC13 VARCHAR2(50 BYTE),
COLMC14 VARCHAR2(50 BYTE),
COLMC15 VARCHAR2(50 BYTE),
COLMD1 DATE,
COLMD2 DATE,
COLMD3 DATE,
COLMD4 DATE,
COLMD5 DATE,
COLMD6 DATE,
COLMD7 DATE,
COLMD8 DATE,
COLMD9 DATE
)
ON COMMIT PRESERVE ROWS
NOCACHE;


2. Write the "Before Parameter Form" trigger.

Write the following (example) query, which builds the list of values using the
bind variable :p0, in the "Before Parameter Form" trigger in Oracle Reports
Builder.
--------------------------------

function BeforePForm return boolean is
begin
   srw.do_sql ('delete from stglobtm');

   for i in (select a.lcnumb, b.acttit
               from stilcmas a, stfacmas b
              where a.brancd = :p0
                and a.lcnumb is not null
                and a.actype = 'T01'
                and a.brancd = b.brancd
                and a.actype = b.actype
                and a.actnum = b.actnum)
   loop
      insert into stglobtm (colmc1, colmc7) values (i.lcnumb, i.acttit);
   end loop;

   return (TRUE);
end;
----------------------

3. Then write the query in the report parameter's list of values, as in the
following example:

select colmc1, colmc7 from stglobtm

STGLOBTM is the global temporary table created in step 1.
