Wednesday, December 23, 2009

How to fine User name of current session

User name of current session
===============================

SQL> SELECT USERNAME
from v$session
where audsid = sys_context('userenv','sessionid');

How to get Client ip address in oracle

How to get client ip address in oracle
=====================================

1. For Client IP Address

SELECT SYS_CONTEXT ('userenv', 'ip_address')
FROM DUAL

How to get server ip address in oracle

how to get server ip address in oracle
================================
1. For IP Address

SELECT UTL_INADDR.get_host_address
FROM DUAL

2. For Host Name

SELECT UTL_INADDR.get_host_name
FROM DUAL

3. For Domain and host name

SELECT sys_context('USERENV', 'HOST')
FROM dual;

4. For host name

SELECT sys_context('USERENV', 'SERVER_HOST')
FROM dual;

Tuesday, December 22, 2009

materialized view Refresh

About materialized view Refresh
=============================

As I find, Materialized View create and ("atomic_refresh => FALSE") refresh is faster and more logical Then create table ,truncate and direct insert.

about materialized view works
=================
1. create a MV with "complete refresh on demand" (means it refresh when we want, not automatic)
2. when we refresh that MV (ON DEMAND REFRESH MODE) then
• ON DEMAND – refreshes are initiated manually using one of the procedures in the DBMS_MVIEW package
• Can be used with all types of materialized views
• Manual Refresh Procedures
• DBMS_MVIEW.REFRESH(, )

• When refresh_option use ("atomic_refresh => TRUE") its means DELETE operation is used to empty the table.
( in this time if job is broken then rollback the transaction and MV remain useable)
• When refresh_option use ("atomic_refresh => FALSE") its means TRUNCATE operation is used to empty the table.
( in this time if job is broken then MV don't remain useable state before another refresh)

• AND we can reduce the LOG generation by creating MV with nologging mode.

Ultimately
================
Oracle performs the following operations when refreshing a materialized view. In the case of a complete refresh (using dbms_mview.refresh)
1. sys.snap$ and sys.mlog$ are updated to reflect the time of the refresh.
2. The materialized base view is truncated/delete.
3. All rows selected from the master table are inserted into the snapshot base table.
4. sys.slog$ is updated to reflect the time of the refresh.


• N.B :- I have attached a “trace file .txt” file. Just search in this file with MV_STTRAILB_038 and find the prob.

ORA-08004: sequence exceeds MAXVALUE and cannot be instantiated

ORA-08004: sequence exceeds MAXVALUE and cannot be instantiated

Soution:=
==============

alter sequence maxvalue ;

Sequence syntax:
---------------
CREATE SEQUENCE
INCREMENT BY
START WITH
MAXVALUE / NOMAXVALUE
MINVALUE / NOMINVALUE
CYCLE / NOCYCLE
CACHE <#> / NOCACHE
ORDER / NOORDER;


CREATE TABLE TEST (ID NUMBER, data VARCHAR2(255));


CREATE SEQUENCE seq_for_test
START WITH 1
INCREMENT BY 1
MAXVALUE 10;


CREATE OR REPLACE TRIGGER trigger_for_seq_test
BEFORE INSERT
ON TEST
FOR EACH ROW
WHEN (NEW.ID IS NULL)
BEGIN
SELECT seq_for_test.NEXTVAL
INTO :NEW.ID
FROM DUAL;
END;


CREATE OR REPLACE TRIGGER trigger_for_seq_test
BEFORE INSERT
ON TEST
FOR EACH ROW
BEGIN
SELECT nvl(:new.id,seq_for_test.NEXTVAL)
INTO :NEW.ID
FROM DUAL;
END;



ALTER SEQUENCE seq_for_test MAXVALUE 15



Sequence in PL/sql
==================


INSERT INTO TEST (id, data)
VALUES (seq_for_test.NEXTVAL, 'halim');

INSERT INTO TEST (id, data)
VALUES (seq_for_test.CurrVAL, 'halim'); --(what happen)


SELECT 'CREATE SEQUENCE test_sequence MINVALUE 0 START WITH '||MAX(id_no)+1||' INCREMENT BY 1 CACHE 20'
INTO v_test
FROM table_name:

EXECUTE IMMEDIATE v_sql;


Sequence Related Queries
==========================
SELECT sequence_name, last_number
FROM user_sequences;

Next Number From Sequence
--------------------------

SELECT sequence_name, (last_number + increment_by) NEXT_VALUE
FROM user_sequences;

Tuesday, December 15, 2009

Merge statment in oracle

Merge statment in oracle
========================


drop table emp1;

create table emp1 as select * from emp ;


drop table dept1;

create table dept1 as select * from dept;


MERGE INTO emp1 a
USING dept1 b
ON (a.deptno = b.deptno)
WHEN MATCHED THEN
UPDATE SET a.sal = b.deptno
WHEN NOT MATCHED THEN
INSERT (empno, sal)
VALUES (b.deptno,b.deptno);

Conditionally insert into ALL tables

Conditionally insert into ALL tables
======================================

INSERT ALL
WHEN SAL>10000 THEN
INTO sal_history VALUES(EMPID,HIREDATE,SAL)
WHEN MGR>200 THEN
INTO mgr_history VALUES(EMPID,MGR,SYSDATE)
SELECT employee_id EMPID, hire_date HIREDATE, salary SAL, manager_id MGR
FROM employees WHERE employee_id > 200;

Example of Oracle External Table

Example of Oracle External Table
================================

CREATE OR REPLACE DIRECTORY dir_test AS 'C:\dir_test';

GRANT READ,WRITE ON DIRECTORY dir_test TO scott;

data on text.txt
/*
00100,halim,md,1234566,halim@yahoo.com,58
00200,jamal,khan,456789,jamal@yahoo.com,50
00300,nazu,haque,456789,nazu@yahoo.com,52
00400,nazim,md,456799221,nazim@yahoo.com,90
00500,niger,sultana,123456787,niger@yahoo.com,50
00600,zakir,Dr,465465464,zakir@yahoo.com,17
*/

DROP TABLE external_table_emp;


CREATE TABLE external_table_emp
( empid VARCHAR2(6),
last_name VARCHAR2(100),
first_name VARCHAR2(100),
phone_no VARCHAR2(20),
email VARCHAR2(200),
age NUMBER(3)
)
ORGANIZATION EXTERNAL
( DEFAULT DIRECTORY dir_test
ACCESS PARAMETERS
( RECORDS DELIMITED BY NEWLINE
FIELDS TERMINATED BY ','
)
LOCATION ('test.txt')
);


SELECT * FROM external_table_emp;

Saturday, December 12, 2009

what should be considerd to tuning the Report 6i

what should be considerd to tuning the Report 6i
=====================================================

1. Trace a Report
====================

A) report profile

(Tools -> Preferences >Runtime Settings>Profile)

+-------------------------------------+
| Reports Profiler statistics |
+-------------------------------------+
Total Elapsed Time: 29.00 seconds
Reports Time: 24.00 seconds (82.75% of TOTAL)
Oracle Time: 5.00 seconds (17.24% of TOTAL)
UPI: 1.00 seconds
SQL: 4.00 seconds
TOTAL CPU Time used by process: N/A

From the profile, it is possible to see the execution time (total elapsed time) for the report, the amount of time that was spent formatting the retrieved data (Reports Time) and the amount of time spent waiting for the data to be retrieved (Oracle Time). UPI time is the time spentestablishing the database connection and parsing and executing the SQL. The SQL time is the time spent while the database server fetches the data (percent of time spent executing SRW.DO_SQL() statements, EXEC_SQL statements, PL/SQL Cursors, etc.).

B) Trace a Report

(Tools -> Trace)

or
c:\rwrun60 report=emp.rdf userid=scott/tiger@orcl desformat=pdf
tracemode=replace tracefile=emp.lis traceopts=trace_prf

command line arguments for this option include TRACEMODE and TRACEOPTS.
TRACEMODE specifies whether to replace or append to the existing trace log file.
TRACEOPTS is used to specify the events to record within the log file.

TRACEFILE, TRACEMODE AND TRACEOPTS
TRACEFILE=
where tracefile is any valid file name in which the trace
information will be logged.
TRACEMODE=
where TRACE_APPEND overwrites the contents of the tracefile.
TRACEOPTS=TRACE_SQL | TRACE_TMS | TRACE_DST | TRACE_ALL>
where TRACE_ERR - list error messages and warnings.
TRACE_PRF - performance statistics
TRACE_APP - information on all the report objects.
TRACE_PLS - information on all PL/SQL objects.
TRACE_SQL - Information on all SQL.
TRACE_TMS - timestamp for each entry in file.
TRACE_DST - distribution lists. This may be useful to determine
which section was sent to which destination.
TRACE_ALL - all possible information. (DEFAULT)

Note
Trace cannot be used on a .rep file.
Options can be combined on the command line. For example the TRACEOPTS=(TRACE_APP,
TRACE_PRF) means that the log file will contain information on all report objects and
performance statistics.


2. SQL Tuning
===================

In Oracle Reports 6 users can turn on SQL trace without modifying the report
simply by using the command line option SQLTRACE=YES|NO.

Once a trace file has been generated, theTKPROF database utility can be used to generate an EXPLAIN PLAN, which is a map of the execution plan used by the Oracle Optimizer.

3. PL/SQL Tuning in report builder.
==============================

PL/SQL that perform a significant amount of database operations will perform better if it is implemented as stored database procedures. These stored procedures will run on the Oracle database and so they can perform the operations more quickly than PL/SQL that is local to the report (which would use the PL/SQL parser in reports, then the SQL parser in the database and include a network trip as well). The opposite is true if the PL/SQL does not involve any database operations. In this case the PL/SQL should be coded as locally as possible using the program units node in the report object navigator.

Accessing to data tuning
==============================================
If the performance tools show that a report is spending a large amount of time accessing data from the database, then it is often beneficial to review the structure of the data and how it is being used. A bad schema design can have a dramatic effect on the performance of a reporting system. For example, an overly normalized data-model can result in many avoidable joins or queries.

4.INDEXES
=============
Columns used in the WHERE clause should be indexed. The impact of indexes used on the
columns in the master queries of a report will be minor as these queries will only access the database once. However, for a significant performance improvement indexes should be used on any linked columns in the detail query. A lack of appropriate indexes can result in many full-table scans which can have a major impact
on performance.

5. CALCULATIONS
=================
When performing calculations within a report (either through summary or formula columns), the general rule of thumb is the more calculations that can be performed within the SQL of the report queries, the better. If the calculations are included in the SQL, then they are performed before the data is retrieved by the database, rather than the performed on the retrieved data by the Report.
Database-stored user-defined PL/SQL functions can also be included in the query select list. This is more efficient then using a local PL/SQL function (e.g. in a formula column), since the calculated data is returned as part of the result set from the database.

6. REDUNDANT QUERIES
=====================
Ideally a report should have no redundant queries (queries which return data which is not required in the report), since they will clearly have an effect on performance. In general, the fewer queries you have the faster your report will run. So, single query data models tend to execute more quickly than multi-query data models. However, situations can arise where a report not only needs to
produce a different format for different users, but also needs to utilize different query statements. Clearly this could be achieved by producing two different reports, but it may be desirable to have a single report for easier maintenance. In this instance, the redundant queries should be disabled by use of the SRW.SET_MAXROW() procedure.
Example
The following code in the Before Report trigger will disable either Query_Emp or Query_Dept depending on a user parameter:
----------------------------------
IF :Parameter_1 = 'A' then
SRW.SET_MAXROW('Query_Emp',0);
ELSE
SRW.SET_MAX_ROW('Query_Dept',0);
END IF;
----------------------------------
Note
The only meaningful place to use SRW.SET_MAXROW() is in the Before Report trigger (after the query has been parsed). If SRW.SET_MAXROW() is called after this point then the SRW.MAXROW_UNSET packaged exception is raised.
The query will still be parsed and bound, but no data will be returned to the report.


7. BREAK GROUPS
===================
Limiting the number of break groups can improve the performance of a report. For each column in the data model that has the break order property set (except the lowest child group), Oracle Reports appends this as an extra column to the ORDER BY clause for the appropriate query.
Clearly, the fewer columns in the ORDER BY clause, the less work the database has to do before returning the data in the required order. The creation of a break group may make an ORDER BY clause defined as part of the query redundant. If this is the case then the redundant ORDER BY should be removed, since this will require extra processing on the database.
If the report requires the use of break groups, then care should be taken to ensure that the break order property is set for as few columns in the break group as possible. A break order column is indicated by a small triangle or arrow to the left of the column name in the group in the data model. Each break group above the lowest child group of a query requires at least one column within in to have the break order property set. Simply taking the break order off columns where
sorting is not required can also increase performance. Try to limit break groups to a single column. These columns should be as small as possible and be
database columns (as opposed to summary or formula columns) wherever feasible. Both of these conditions can help the local caching that Oracle Reports does before the data is formatted for maximum efficiency. Clearly, these conditions cannot always be met, but will increase efficiency when utilized.

The following profile output was generated by the same report that generated the profile output on page 2. With the original run, no break orders were set, to generate the following profile all columns had break order set:

+-------------------------------------+
| Reports Profiler statistics |
+-------------------------------------+
Total Elapsed Time: 34.00 seconds
Reports Time: 26.00 seconds (76.47% of TOTAL)
Oracle Time: 8.00 seconds (23.52% of TOTAL)
UPI: 1.00 seconds
SQL: 7.00 seconds
TOTAL CPU Time used by process: N/A

The SQL took about twice as long to run with break order set. As background, this report is a single query tabular report that retrieves about 20 records from about 500000 present in the table,where the table resides in a well-structured and well-indexed database.

8. GROUP FILTERS
===================
The main use for group filters in the database is to reduce the number of records retrieved from the database. When using a group filter, the query is still passed to the database and all the data is returned to reports, where the filtering will take place. Therefore, even if the filter is defined to only displays the top five records, the result set returned to reports will contain all the records
returned by the query. For this reason, it is usually more efficient to incorporate the functionality of the group filter into the query WHERE clause, or into the maximum rows property of the query wherever possible.

This will restrict the data returned by the database.

9. TO LINK OR NOT TO LINK
============================
As with most operations in Reports, there are a number of ways to create data models that include more than one table. Consider, for example, the standard case of the dept/emp join, i.e., the requirement is to create a report that lists all the employees in each department in the company. In Reports the user can either use the following in a single query:
---------------------------------
Select d.dname, e.ename
From emp e, dept d
Where e.deptno(+) = d.deptno
Or they can create two queries
Select deptno, dname from dept
Select deptno, ename from emp
----------------------------------
and create a column link between the two on deptno.
When designing the data model in the report, it is preferable to minimize the actual number of queries by using fewer, larger (multi-table) queries, rather than several simpler (single-table) queries. Each time a query is issued, Oracle Reports needs to parse, bind and execute a cursor. A single query report is therefore able to return all the required data in a single cursor rather than many. Also be aware that with master-detail queries, the detail query will be re-parsed, re-bound and re-executed for each master record retrieved. In this instance it is often more efficient to
merge the two queries and use break groups to create the master-detail effect.
It should be noted, however, that the larger and more complex a query gets, the more difficult it can be to maintain. Each site needs to decide at what point to balance the performance versus the maintenance requirements.

FORMATTING THEFormatting the Data Tuning
===============================================================
FORMATTING THE DATA Once the data has been retrieved from the database, Reports needs to format the output following the layout model that the user has created. The time taken to generate the layout is dependent on a number of factors, but in general it comes down to how much work is involved in preventing an object from being overwritten by another object and the efficiency of any calculations or functions performed in the format triggers.


1. LAYOUT
===================
When generating a default layout Oracle Reports puts a frame around virtually every object, so that it is protected from being overwritten when the report is run. At runtime, every layout object (frames, fields, boilerplate, etc.) is examined to determine the likelihood of that object being overwritten. In some situations (for example boilerplate text column headings), there is clearly no risk of the objects being overwritten and hence the immediately surrounding frame can be removed. This reduces the number of objects that Oracle Reports has to format and hence
improves performance. Similarly, when an object is defined as having an undefined size (variable, expanding or contracting in either or both the horizontal and vertical directions) then extra processing is required since Oracle Reports must determine that instance of the object's size before formatting that object and
those around it. If this sizing can be set to fixed then this additional processing is not required, since the size and positional relationships between the objects is already known.
Furthermore, instead of truncating a character string from a field in the Report Builder Layout, it is better to use the SUBSTR function in the report query to truncate the data at the database level. This reduces unnecessary processing and formatting after the data retrieval.

2. FORMAT TRIGGERS
=====================
Format triggers have two major purposes:

Dynamically disable and enable objects at runtime.
Dynamically change the appearance of an object at runtime.

Care should always be exercised when using format triggers, since the triggers do not only fire for every instance of their associated object produced, but every time the object is formatted at runtime.
These two scenarios may sound the same, but consider the following situation: A tabular report includes a single repeating frame that can expand vertically and has page protect set on. As this report is formatted, there is room for one more line at the bottom of the first page. Reports starts to format the next instance of the repeating frame and fires its associated format trigger. One of the objects inside the repeating frame is found to have expanded and this instance of the repeating
frame is therefore moved to the following page and the format trigger for the repeating frame is fired again. Hence, although the repeating frame only appears once (at the top of the second page), the format trigger has fired twice. Because you can not be sure how many times a format trigger will fire for a particular object, DML should not be performed in a format trigger. With the example above, had the format trigger contained an INSERT statement then two rows of data would have been inserted.

Format triggers should also be placed at the highest level possible in the object/frame hierarchy, so that the trigger fires at the lowest possible frequency. For example:
-------------------------------------------
Format Trigger at the
field level must be
executed 4 times,
once for each field.
-------------------------------------------------------
------------------------------------------------------
Format Trigger at the
Frame level
executed only once !
-------------------------------------------------------
When defining any triggers or PL/SQL program units within Oracle Reports, it is important to maximize the efficiency of the code. For example, if the display attributes of a field are to change dynamically (for example, to draw attention to values outside the norm), then the attributes should be changed using the individual built-ins such as SRW.SET_TEXT_COLOR. For general PL/SQL tuning issues, please refer to the PL/SQL Users Guide and Reference.Reports performance can be improved by giving layout objects (e.g., frames and repeating frames) a transparent border and fill pattern. Transparent objects do not need to be rendered in a bitmapped file. As a result, processing is faster when objects are transparent.


3. GENERAL LAYOUT GUIDELINES
==========================
The following guidelines can improve performance when creating or changing a report layout:
· Make your non-graphical layout objects (e.g. boilerplate text or fields with text) fixed in size --that is, set theVertical and Horizontal Elasticity property of the field to Fixed. In particular,making repeating frames and their contents fixed in size can improve performance. Nongraphical objects that are variable in size require more processing because Report Builder must determine their size before formatting them. Non-graphical objects that are fixed in size do not require this additional processing because their size is already known.

· Make your graphical layout objects (e.g., images and Oracle Graphics objects) variable in size -- that is, Vertical and Horizontal Elasticity of Variable. Graphical objects that are fixed in size usually need to have their contents scaled to fit inside of the object. Scaling an object's contents requires more processing. If the object is variable in size, it can grow or shrink with the contents and scaling is not necessary.

· Specify Reduce Image Resolution for image objects whose size you reduce. (This option is available as a drawing option under the Format menu). When you reduce the size of an image,it requires less information to display it than when it was larger. Reduce Image Resolution eliminates the unnecessary information and reduces the amount of space needed to store the image. This can be particularly useful for large, multi-colored images.
· Make fields that contain text one line long and ensure that their contents fit within their specified width (e.g., by using the SUBSTR function). If a field with text spans more than one line, then Report Builder must use its word-wrapping algorithm to format the field. Ensuring that a field only takes one line to format avoids the additional processing of the word-wrapping algorithm.

· Minimize the use of different formatting attributes (e.g., fonts) within the same field or boilerplate text. If text in a field or boilerplate object contains numerous different formatting attributes, it requires longer to format.

4. FETCHING AHEAD
==================
Oracle Reports provides the report developer with the ability to display data such as total number of pages, or grand totals in the report margins, or on the report header pages. This is an extremely useful function, but has the requirement that the entire report is ‘fetched ahead’, therefore the entire report is processed before the first page can be output. The usual Oracle Reports model is to format pages on an as-needed basis. Using the read-ahead functionality will not affect the
overall time that the report takes to generate, but it does affect the amount of temporary storage required and the amount of time taken before the first page can be viewed in the Live Previewer or Previewer (if running to Screen). This is an example of the perceived performance as opposed to the actual performance. If the report is going to be run to the screen in the production environment then read-ahead should be avoided unless the performance is deemed acceptable.

5. BURSTING AND DISTRIBUTION
===============================
With the introduction of report bursting in Reports 6.0, a report layout can be made up of three distinct sections: a header, body and trailer. A report can now comprise all three of these sections or it could be viewed as three separate reports within one report. This is made possible by a set of properties each individual section. The performance gain is evident when bursting is used in conjunction with another feature new to Reports 6.0 - Distribution. This allows each section of a
report to have multiple different formats and be sent to multiple destinations. Once the distribution options have been set the report needs only to be run once to be output to multiple destinations with a single execution of the query(s) where previously it would have required the report to be
run multiple times.

Calling report from FORMs
=====================================================

In an application built using Oracle Forms and Oracle Reports, it's often the case that some kind of report is required on data that has already been retrieved/updated by the Oracle Forms section of the application. The tight product integration between the Oracle Forms and Oracle Reports components allows for the passing of blocks of data between the associated products, thus removing the need for subsequent queries to the database. Between Forms and Reports, this passing of data is achieved by use of record groups and data parameters, then using the
Run_Product built-in procedure. Run_Product is being replaced by Run_Report_Object for calling Reports from Forms. This paper will not discuss the details of converting Run_Product to Run_Report_Object, except to say that it is advisable to migrate towards this new built-in when reviewing performance efficiencies.

Note
Unless these data parameters are reasonably large, or the queries are particularly complicated then the perceived performance improvements would be negligible. It should also be noted that only top-level groups in a report can accept data parameters passed from Forms. When calling a report from Forms, Reports could re-query the data, but it is more efficient to have Forms create a record group to hold the data and pass that along as a parameter to Reports. This technique is referred to as query partitioning. This means that Oracle Reports will do no work on
querying and will only be responsible for the formatting of the data. This means that any dynamic alteration of queries (via triggers and lexical parameters) will be ignored. The first time that a report is called from a form using run_product on a Windows platform then the Reports background engine will be loaded into memory. Putting the background engine executable in the start-up group so that it is initialized every time Windows is booted up can hide this start-up cost.

Monday, December 7, 2009

Restrict DDL on a system

CREATE OR REPLACE TRIGGER SYS.restrict_ddl_in_db
BEFORE ddl ON DATABASE
DECLARE
machinename VARCHAR2(64);
message VARCHAR2(150) := 'You have no rights to execute DDL statements on DEVELOPBETA! Call the DBA! ';
BEGIN
SELECT SYS_CONTEXT ('USERENV', 'host') INTO machinename FROM DUAL;
IF machinename IS NULL then
message := message || ' ' || 'NULL';
else
message := message || ' ' || machinename;
end if;
IF INSTR(upper(machinename),'*****WORKGROUP\XPSP3-WBB',1) <> 0
OR INSTR(lower(machinename),'bg\fed',1) <> 0
OR INSTR(lower(machinename),'bg\plamen',1) <> 0
OR INSTR(lower(machinename),'developbeta',1) <> 0
OR INSTR(lower(machinename),'oralin1',1) <> 0
OR machinename IS NULL
THEN
null;
ELSE
RAISE_APPLICATION_ERROR (num => -20000,msg => message);
END IF;
END;
/

Tuesday, November 24, 2009

sql for Days of Month

=============================
=============================
sql for Days of year
=============================
=============================

SELECT TRUNC(SYSDATE,'Y')+ROWNUM-1 THEDATE ,To_char(SYSDATE,'mm')
FROM ( SELECT 1 just_a_column
FROM dual
CONNECT BY LEVEL <= 366
)
where rownum <= add_months(trunc(:p11,'MM'),12) - trunc(:p11,'MM')


SELECT TRUNC(SYSDATE,'Y')+ROWNUM-1 THEDATE
FROM ( SELECT 1 just_a_column
FROM dual
CONNECT BY LEVEL <= 366
)
WHERE ROWNUM <= ADD_MONTHS(TRUNC(SYSDATE,'Y'),12)-TRUNC(SYSDATE,'Y')



select trunc(:p11,'MM')-1+rownum Date_month
from all_objects
where rownum <= add_months(trunc(:p11,'MM'),12) - trunc(:p11,'MM')


=================================
=================================
sql for Days of Month
=================================
=================================

SELECT TRUNC(SYSDATE,'Y')+ROWNUM-1 THEDATE ,To_char(SYSDATE,'mm')
FROM ( SELECT 1 just_a_column
FROM dual
CONNECT BY LEVEL <= 366
)
where rownum <= add_months(trunc(:p11,'MM'),1) - trunc(:p11,'MM')



select trunc(:p11,'MM')-1+rownum Date_month
from all_objects
where rownum <= add_months(trunc(:p11,'MM'),1) - trunc(:p11,'MM')

SQL or PL/SQL techniques to generate rows.

=======================================================
=======================================================
SQL or PL/SQL techniques to generate rows.
=======================================================
=======================================================

1.

SELECT ROWNUM
FROM ( SELECT 1 FROM DUAL GROUP BY CUBE(1,2,3,4,5,6,7,8,9,10) )
WHERE ROWNUM <= &N

2.

SELECT *
FROM ( SELECT ROWNUM FROM DUAL CONNECT BY LEVEL <= &N )

3.

SELECT ROWNUM FROM DUAL CONNECT BY LEVEL <= &N;

4.

create or replace type row_table as table of number;


Create or replace function gen_row (num_rows in number)
return row_table
parallel_enable pipelined is
begin
for x in 1..num_rows loop
pipe row (x);
end loop;
return;
end;


select * from table(gen_row(&N));

5.

select to_number(column_value) colval
from xmltable('for $i in 1 to &n return $i');

6.

select integer_value
from dual
where 1=2
model
dimension by ( 0 as key )
measures ( 0 as integer_value )
rules upsert ( integer_value[ for key from 1 to 9 increment 1 ] = cv(key) );

7.

select rownum from all_objects where rownum<=5;


8.

select val
from (select 1 rn from dual)
model
dimension by (rn)
measures (1 val)
rules
iterate (&N)
( val[iteration_number] = iteration_number+1 )
order by val


[note: this is not mine]

Sunday, November 22, 2009

Composite Partitioned Table in oracle 10g

====================================================
Composite Partitioned Table - By Range And hash
====================================================

CREATE TABLE composite_rng_hash (
cust_id NUMBER(10),
cust_name VARCHAR2(25),
amount_sold NUMBER(10,2),
time_id DATE)
PARTITION BY RANGE(time_id)
SUBPARTITION BY HASH(cust_id)
SUBPARTITION TEMPLATE(
SUBPARTITION sp1 TABLESPACE users,
SUBPARTITION sp2 TABLESPACE users,
SUBPARTITION sp3 TABLESPACE users,
SUBPARTITION sp4 TABLESPACE users) (
PARTITION sales_pre05
VALUES LESS THAN(TO_DATE('01/01/2007','DD/MM/YYYY')),
PARTITION sales_2007
VALUES LESS THAN(TO_DATE('01/01/2008','DD/MM/YYYY')),
PARTITION sales_2008
VALUES LESS THAN(TO_DATE('01/01/2009','DD/MM/YYYY')),
PARTITION sales_future
VALUES LESS THAN(MAXVALUE));

set linesize 121
col table_name format a20

SELECT table_name, partitioned, secondary
FROM user_tables;

desc user_tab_partitions


SELECT table_name, partition_name, composite, subpartition_count SPC, high_value
FROM user_tab_partitions;

desc user_tab_subpartitions


SELECT table_name, partition_name, subpartition_name, subpartition_position
FROM user_tab_subpartitions;

desc user_subpartition_templates


SELECT subpartition_name, tablespace_name, high_bound
FROM user_subpartition_templates;



GRANT select ON emp TO uwclass;
GRANT select on dept TO uwclass;

INSERT INTO composite_rng_hash
SELECT c.deptno, c.dname || ' ' || c.dname,
s.sal, s.hiredate
FROM emp s, dept c
WHERE s.deptno = c.deptno
AND rownum < 250001;

commit;

exec dbms_stats.gather_table_stats(USER, 'COMPOSITE_RNG_HASH', 'SALES_PRE05');
exec dbms_stats.gather_table_stats(USER, 'COMPOSITE_RNG_HASH', 'SALES_2007');
exec dbms_stats.gather_table_stats(USER, 'COMPOSITE_RNG_HASH', 'SALES_2008');
exec dbms_stats.gather_table_stats(USER, 'COMPOSITE_RNG_HASH', 'SALES_FUTURE');

SELECT table_name, partition_name, num_rows
FROM user_tab_partitions;

SELECT table_name, partition_name, subpartition_name, num_rows
FROM user_tab_subpartitions;

exec dbms_stats.gather_table_stats(USER, 'COMPOSITE_RNG_HASH', GRANULARITY=>'ALL');

SELECT table_name, partition_name, subpartition_name, num_rows
FROM user_tab_subpartitions;

set long 1000000
select dbms_metadata.get_ddl('TABLE', 'COMPOSITE_RNG_HASH') from dual;



====================================================
Composite Partitioned Table - By Range And List
====================================================

CREATE TABLE composite_rng_list (
cust_id NUMBER(10),
cust_name VARCHAR2(25),
cust_state VARCHAR2(2),
time_id DATE)
PARTITION BY RANGE(time_id)
SUBPARTITION BY LIST (cust_state)
SUBPARTITION TEMPLATE(
SUBPARTITION west VALUES ('OR', 'WA') TABLESPACE users,
SUBPARTITION east VALUES ('NY', 'CT') TABLESPACE users,
SUBPARTITION cent VALUES ('OK', 'TX') TABLESPACE users) (
PARTITION per1 VALUES LESS THAN (TO_DATE('01/01/2000','DD/MM/YYYY')),
PARTITION per2 VALUES LESS THAN (TO_DATE('01/01/2005','DD/MM/YYYY')),
PARTITION per3 VALUES LESS THAN (TO_DATE('01/01/2010','DD/MM/YYYY')),
PARTITION future VALUES LESS THAN(MAXVALUE));

desc composite_rng_list

SELECT table_name, partition_name, composite, high_value
FROM user_tab_partitions
where table_name='COMPOSITE_RNG_LIST';


SELECT table_name, partition_name, subpartition_name, num_rows
FROM user_tab_subpartitions
where table_name='COMPOSITE_RNG_LIST';

Saturday, November 21, 2009

How to stop a running job of DBMS_JOB ?

Stop running job with DBMS_JOB

1. Find the Job You Want to stop.
---------------------------------

select j.sid,
j.log_user,
j.job,
j.broken,
j.failures,
j.last_date||':'||j.last_sec last_date,
j.this_date||':'||j.this_sec this_date,
j.next_date||':'||j.next_sec next_date,
j.next_date - j.last_date interval,
j.what
from (select djr.SID,
dj.LOG_USER, dj.JOB, dj.BROKEN, dj.FAILURES,
dj.LAST_DATE, dj.LAST_SEC, dj.THIS_DATE, dj.THIS_SEC,
dj.NEXT_DATE, dj.NEXT_SEC, dj.INTERVAL, dj.WHAT
from dba_jobs dj, dba_jobs_running djr
where dj.job = djr.job ) j;


2. Mark the DBMS_JOB as Broken
------------------------------
Use the following command for the job that you have to deal with.

EXEC DBMS_JOB.BROKEN(job#,TRUE);

All this command does is mark the job so that if we get it to stop, it won't start again. Let's make
one thing perfectly clear: after executing this command the job is still running.

As a side note, if you are trying to shut down a database with jobs that run throughout the day,
they may hinder your attempts to bring down the database cleanly. This is a wonderful command
to make sure no jobs are executing during the shutdown process. Just be aware that you will need
to mark the jobs as unbroken when the database comes back up, more on that later.
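For the shutdown scenario above, a sketch that marks every job of the current schema as broken could look like this (an illustration only; DBMS_JOB.BROKEN acts on jobs owned by the calling schema, so run it as the job owner):

```sql
-- Mark all of the current schema's unbroken jobs as broken,
-- so none of them start during the shutdown window.
BEGIN
  FOR j IN (SELECT job FROM user_jobs WHERE broken = 'N') LOOP
    DBMS_JOB.BROKEN(j.job, TRUE);
  END LOOP;
  COMMIT;
END;
/
```

The same loop with `FALSE` can be used after startup to mark the jobs as not broken again.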

3. Kill the Oracle Session
------------------------------

Since the job is still running and it isn't going to end soon, you will need to kill the Oracle
session that is executing the job. Use the following command to kill the session.

ALTER SYSTEM KILL SESSION 'sid,serial#';

select j.sid,
s.spid,
s.serial#,
j.log_user,
j.job,
j.broken,
j.failures,
j.last_date||':'||j.last_sec last_date,
j.this_date||':'||j.this_sec this_date,
j.next_date||':'||j.next_sec next_date,
j.next_date - j.last_date interval,
j.what
from (select djr.SID,
dj.LOG_USER, dj.JOB, dj.BROKEN, dj.FAILURES,
dj.LAST_DATE, dj.LAST_SEC, dj.THIS_DATE, dj.THIS_SEC,
dj.NEXT_DATE, dj.NEXT_SEC, dj.INTERVAL, dj.WHAT
from dba_jobs dj, dba_jobs_running djr
where dj.job = djr.job ) j,
(select p.spid, s.sid, s.serial#
from v$process p, v$session s
where p.addr = s.paddr ) s
where j.sid = s.sid;


4. Kill the O/S Process
--------------------------
More often than not the previous step will still leave the job attached to the database and still running.
When this happens you will need to go out to the operating system level and get rid of the process
that has spawned from the running job. In order to do this you must login to the database box and
issue the following command, depending on the type of operating system you have.

For Windows, at the DOS prompt: orakill sid spid

For UNIX, at the command line: kill -9 spid

orakill is an Oracle utility, while kill is a Unix command.

5. Check if the Job is Still Running (by the above query)
------------------------------------

6. Determine the Current Number of Job Queue Processes
------------------------------------------------------


select name,value from v$parameter where name = 'job_queue_processes';


7. Alter the Job Queue to Zero
------------------------------

ALTER SYSTEM SET job_queue_processes = 0;

This will bring down the entire job queue processes.


8. check that No Processes are Using the Job Queue (by the above query)
-----------------------------------------------------------------------


9. Mark the DBMS_JOB as Not Broken
----------------------------------

You can now reset the broken job to not broken so they can run again. Just issue the command.


SQL> EXEC DBMS_JOB.BROKEN(job#,FALSE);


10. Alter the Job Queue to Original Value
--------------------------------

Set the job queue to its original value so that the jobs can run again.

ALTER SYSTEM SET job_queue_processes = original_value;

11. check that DBMS_JOB Is Running
---------------------------------

Wednesday, November 18, 2009

Single Sign On

Single Sign On or Same Sign On?
==================================

Just thinking aloud ... one of the things Oracle has introduced is 'userid consistency' across their applications. This is called 'SSO'.

I thought that SSO stood for 'Single Sign On', and the functionality is 'sign on once and use all applications'.

I am finding it actually means 'Same Sign On': the functionality is 'log in to each of the different applications one at a time, using the same userid & password'. However, signing out does efficiently log me off all applications.

cannot allocate new log

Thread 1 cannot allocate new log
Checkpoint not complete
===================================

If this message appears in the alert log, either increase the number of redo log groups
or
increase the size of the redo log files.

It is usually better to increase the number of redo log files.

==============================================
This occurs when Oracle attempts to reuse a log file but the checkpoint that would flush
the blocks that may have redo in this log file has not yet completed -- we must wait
until that checkpoint completes before we can reuse that file -- that's when this message
is printed. During this time (when we cannot allocate a new log) processing is suspended
in the database while the checkpoint is made to complete ASAP.

The major way to relieve this is to have sufficient log to carry you through peak times;
that way, we can complete the checkpoint while you are not busy.

Also make sure your checkpoints happen as fast as they can (e.g. enable async I/O, or
configure more than one DBWR if async I/O cannot be used; make sure disks are not
contending with other apps, and so on).

Another way is to make the log files smaller, hence increasing the frequency with which
we checkpoint (log checkpoint interval and other init.ora parameters achieve the same
effect, btw).
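To judge whether the online logs are switching too often before changing them, something along these lines can be used (a sketch; the group number, file path and size below are examples only):

```sql
-- How many log switches happen per day -- frequent switches at peak
-- times suggest the online logs are too small or too few.
SELECT TRUNC(first_time) day, COUNT(*) switches
FROM   v$log_history
GROUP  BY TRUNC(first_time)
ORDER  BY 1;

-- Add one more redo log group (path and size are examples only).
ALTER DATABASE ADD LOGFILE GROUP 4
  ('D:\ORADATA\REDO04.LOG') SIZE 100M;
```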

Kill session statement generate

Kill session statement generate
======================

select 'alter system kill session '''||sid||','||serial#||''';' kill_statement
from v$session
where username = 'SCOTT'   ---give user name
and sid = &p_sid;          ---give sid or omit it

__________________

Load image from directory to oracle table (blob)

Load image from directory to oracle table
==============================
SQL>
SQL> CONN / AS SYSDBA
Connected.
SQL>
SQL> grant dba to halim2 identified by halim2;
SQL> Conn halim2/halim2
SQL> create directory halim_blob as 'D:\halim_blob_dir';
Directory created.
SQL> CREATE TABLE image_table (
dname VARCHAR2(30), -- directory name
sname VARCHAR2(30), -- subdirectory name
fname VARCHAR2(30), -- file name
iblob BLOB)
/
Table created.

SQL> CREATE OR REPLACE PROCEDURE halim_load_image_file (
       dir_name     VARCHAR2,
       sub_dir_name VARCHAR2,
       file_name    VARCHAR2
     ) IS
       source_file      BFILE;
       destination_file BLOB;
       length_file      BINARY_INTEGER;
     BEGIN
       source_file := BFILENAME ('HALIM_BLOB', file_name);

       -- insert a row with an empty BLOB locator
       INSERT INTO image_table
                   (dname, sname, fname, iblob)
            VALUES (dir_name, sub_dir_name, file_name, EMPTY_BLOB ())
         RETURNING iblob
              INTO destination_file;

       -- lock the record
       SELECT iblob
         INTO destination_file
         FROM image_table
        WHERE dname = dir_name AND sname = sub_dir_name AND fname = file_name
          FOR UPDATE;

       -- open the file
       DBMS_LOB.fileopen (source_file, DBMS_LOB.file_readonly);
       -- determine length
       length_file := DBMS_LOB.getlength (source_file);
       -- read the file
       DBMS_LOB.loadfromfile (destination_file, source_file, length_file);

       -- update the blob column
       UPDATE image_table
          SET iblob = destination_file
        WHERE dname = dir_name AND sname = sub_dir_name AND fname = file_name;

       -- close file
       DBMS_LOB.fileclose (source_file);
     END halim_load_image_file;
     /
Procedure created.
-----------image load by procedure
SQL> EXEC halim_load_image_file('HALIM_BLOB','Image_test','c .jpg');
PL/SQL procedure successfully completed.
SQL> SELECT *
FROM image_table;
SP2-0678: Column or attribute type can not be displayed by SQL*Plus
SQL>
SQL>
----------image load by sql statement
SQL> DECLARE
image_file BLOB;
BEGIN
SELECT iblob
INTO image_file
FROM image_table;

DBMS_OUTPUT.put_line (DBMS_LOB.getlength (image_file));
END;
/
PL/SQL procedure successfully completed.
SQL> set serveroutput on
SQL>
SQL> ed
Wrote file afiedt.buf
DECLARE
image_file BLOB;
BEGIN
SELECT iblob
INTO image_file
FROM image_table;
DBMS_OUTPUT.put_line (DBMS_LOB.getlength (image_file));
END;
/

PL/SQL procedure successfully completed.
SQL>
SQL>
SQL> exit

What happens when an exception is not handled in an insert statement?

What happens when an exception is not handled in an insert statement?
=====================================================================
----create a test table
CREATE TABLE test( col number);
---add primary key to that table
ALTER TABLE test
ADD CONSTRAINT pk_test
PRIMARY KEY(col)
USING INDEX;
---create a procedure to insert record into that table
CREATE OR REPLACE PROCEDURE proce_insert IS
BEGIN
INSERT INTO test VALUES(1);
END proce_insert;
/

----create and Run a anonymous block without exception handling
BEGIN
proce_insert;
proce_insert;
END;
/
--an error is raised and
--no records are inserted, as expected
SELECT * FROM test;

----Create and Run a anonymous block with exception handling
BEGIN
proce_insert;
proce_insert;
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
/
--no error occurs and one record is inserted
SELECT * FROM test;

commit;

delete from test;

Monday, November 16, 2009

How can I increase redo log file size

How can I increase redo log file size (no archive mode)
=======================================
1.
SELECT GROUP#, ARCHIVED, STATUS FROM V$LOG;


GROUP# ARC STATUS
---------- --- ----------------
1 YES INACTIVE
2 YES INACTIVE
3 NO CURRENT

2. Whichever is not current , drop that group and recreate it with appropriate size.
-----------------------------------------------------------------------------------------------

ALTER DATABASE DROP LOGFILE GROUP 1;

3. Manually delete the dropped group's physical log file at OS level, then re-create the group.

ALTER DATABASE ADD LOGFILE GROUP 1 ('G:\ORACLE\PRODUCT\10.2.0\ORADATA\BEFTN\REDO01.LOG') SIZE 60M;

ALTER DATABASE DROP LOGFILE GROUP 2;

ALTER DATABASE ADD LOGFILE GROUP 2 ('G:\ORACLE\PRODUCT\10.2.0\ORADATA\BEFTN\REDO02.LOG') SIZE 60M;


4. Then switch the logfile
--------------------------

alter system switch logfile;

---alter system checkpoint;            ----to force the group inactive

----ALTER SYSTEM ARCHIVE LOG CURRENT;  ---in archive mode: wait for the next group to be archived

5. Then recreate the remaining group.
-----------------------------------

ALTER DATABASE DROP LOGFILE GROUP 3;
ALTER DATABASE ADD LOGFILE GROUP 3 ('F:\Oracle\Oradata\NEDARR\REDO03A') SIZE 60M;

ORA-07445/ORA-00600:

ORA-07445/ORA-00600:
==========================

Yesterday our client could not connect to the system; connection attempts simply hung.
We also tried to connect to that database user from SQL*Plus --
it also hung, but we could connect as the SYS user.
Some alert log errors are shown here:


ORA-07445 : exception encountered: core dump [%s] [%s]

ORA-1688: unable to extend table SYS.WRH$_LATCH_CHILDREN partition WRH$_LATCH__2432247821_10332
by 128 in tablespace SYSAUX
Sat Nov 14 22:18:56 2009
MMON Flush encountered SYSAUX out of space error(1688).

Errors in file /d04/admin/stlbas/udump/stlbas_ora_7294.trc:
ORA-00600: internal error code, arguments: [12333], [0], [0], [0], [], [], [], []
Sat Oct 24 16:44:32 2009
Error occured while spawning process P409; error = 3135



ORA-07445 is a critical error generated by an Oracle database when it receives a fatal
signal from the operating system. This error may be raised in either a foreground or
background process. The most common reason for this error is a pointer in memory being
overwritten in error by a software problem. In this event, when the pointer is used
to look up the data it points at, it will likely reference an area of memory outside
the process address space. Read more about the cause, interpretation and resolution of
the ORA-07445 error below.

ORA-07445 : What can cause this error
-------------------------------------

• Bug in Oracle code
• Change in reference libraries [ if you have recently applied OS patches, it may be wise to
relink the binaries ]
• Code changes in the application that cause invalid/null value conditions where a valid value is
expected
• Software issues like incorrect libraries due to invalid environment variables, primarily related
to PATH and LD_LIBRARY_PATH

ORA-07445 : Where is the diagnostic information
-----------------------------------------------
When an ORA-07445 occurs in an Oracle database, the error is written to the database alert log.
Along with the error, a trace file is written to the background or user dump destination directory. A core dump may also be created in the core dump destination directory.
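The dump destinations mentioned above can be read from the instance parameters; a quick sketch using the 10g-style parameter names:

```sql
-- Where the alert log, trace files and core dumps are written.
SELECT name, value
FROM   v$parameter
WHERE  name IN ('background_dump_dest', 'user_dump_dest', 'core_dump_dest');
```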

ORA 7445 : Anatomy of the Error
There are two typical signals seen, i.e, SIGBUS (signal 10, bus error) and SIGSEGV
(signal 11, segmentation violation).

Examples of the Error can be as follows :

ORA-7445: exception encountered: core dump [some_SQL()+268] [SIGBUS] [Invalid address
alignment] [] [] []

ORA-7445: exception encountered: core dump [10] [2100262800] [261278112] [] [] []

Example 1 shows that the error occurred in some_SQL() and a SIGBUS signal was sent.
Example 2 gives much less information, indicating a signal 10 was received but the function
is not identified.

Oracle metalink provides detailed information on troubleshooting the errors. The Note 153788.1
- Troubleshoot an ORA-7445 Error



ORA-7445 : Resolution and working with Oracle Support
=======================================================
ORA 7445 error cannot just be eliminated by the oracle database administrator due to the complexity
involved in resolving this error.

To reduce the amount of time taken by the Oracle Support team to resolve this error, the DBA should
try to reproduce the situation and identify the SQL or operation that can reproduce the error.
Open an ITAR with Oracle using Metalink, create an RDA of the Oracle database, and update the
TAR with the alert log, trace files and the core dumps that the error has generated.
Review the following notes provided by Oracle on successfully resolving an ORA-7445 error.
Documents and Links mentioned below will require that you have access to Oracle Metalink
Metalink Note 153788.1 : ORA 7445 Search Tool
Metalink Note 164968.1 : What is an ORA 7445 Error



Solution
============

We found a trace file with the ORA-7445 and then found the select statement for which the
error was generated.

In the end we found that the SQL was defined in an ON LOGON database trigger in our database, where many
update and insert statements are present. For this reason it was generating a lot of redo, and that is why the database was hanging.

Monday, November 9, 2009

Creating a Report using XML Query Data source

Creating a Report using XML Query Data source
===============================================

1.

CREATE TABLE HALIM (ID NUMBER,NAME VARCHAR2(10));

INSERT INTO HALIM ( ID, NAME ) VALUES ( 14, 'nazim');
INSERT INTO HALIM ( ID, NAME ) VALUES ( 10, 'halim');
INSERT INTO HALIM ( ID, NAME ) VALUES ( 11, 'nuri');
INSERT INTO HALIM ( ID, NAME ) VALUES ( 12, 'halu');

COMMIT;

2.

CREATE DIRECTORY my_xml AS 'D:\MY_XML\';

3.

CREATE OR REPLACE PROCEDURE BEFTN.GENERATE_XML_TO_DIR
( p_filename in varchar2 default 'HALIM',
p_dir in varchar2 default 'MY_XML' )
IS
v_OutFile utl_file.file_type;
v_value varchar2(2000);
v_QryCtx DBMS_XMLQuery.ctxType;
v_Xml CLOB;
v_More BOOLEAN := TRUE;
BEGIN
---v_OutFile := utl_file.fopen( p_dir, p_filename||'.xml, 'w' ); ----'.xml', w'
v_OutFile := utl_file.fopen( p_dir, p_filename||'.xsd', 'w' ); ----'.xml', w'
---for manually input if needed
--- utl_file.put_line(v_OutFile,'');
--- utl_file.put_line(v_OutFile,' ');
---------------------------------------------------------------------------------
v_QryCtx := DBMS_XMLQuery.newContext('SELECT * FROM HALIM');
--- DBMS_XMLQuery.setRowsetTag(v_QryCtx, 'halim');
--- DBMS_XMLQuery.setRowTag(v_QryCtx, 'halim');
v_Xml := DBMS_XMLQuery.GetXML(v_QryCtx,1); -- 1 for DTD,2 for Schema
DBMS_XMLQuery.closeContext(v_QryCtx);

while v_more loop --for eliminating version clause ?xml version=''1.0'
utl_File.Put(v_OutFile, v_xml); --Substr(v_Xml, 24, 32767));
if length(v_Xml) > 32767 then
v_Xml := substr(v_Xml, 32768);
else
v_More := FALSE;
end if;
end loop;

--- utl_file.put_line(v_OutFile, '');
utl_file.fclose( v_OutFile );

Exception
when others then
utl_file.fclose( v_OutFile );
End;
/

4.

EXEC BEFTN.GENERATE_XML_TO_DIR;

5.
create a report with THIS D:\MY_XML\HALIM.XSD FILE,
USING DATA SOURCE FROM XML DATA SOURCE.


Monday, November 2, 2009

who lock the row procedure

CREATE OR REPLACE procedure BASEL2.who_lock_the_row_1
is
v_sid varchar2(200);
v_object_name varchar2(200);
v_roid varchar(200);
v_actnum varchar2(200);
v_sql varchar2(400);
begin

select do.object_name,
dbms_rowid.rowid_create ( 1, ROW_WAIT_OBJ#, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW# ) roid
into v_object_name,v_roid
from v$session s, dba_objects do
where sid=(select s2.sid
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid=l1.sid and s2.sid=l2.sid
and l1.BLOCK=1 and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2 )
and s.ROW_WAIT_OBJ# = do.OBJECT_ID ;


v_sql := 'select actnum from '||v_object_name||' where rowid = '''||v_roid||'''';

execute immediate v_sql into v_actnum;

dbms_output.put_line(v_actnum);

end;

Database User Profile resource_type = 'PASSWORD'

Database User Profile resource_type = 'PASSWORD'
================================================
syntax:-

CREATE PROFILE profile
LIMIT
{ { FAILED_LOGIN_ATTEMPTS
| PASSWORD_LIFE_TIME
| PASSWORD_REUSE_TIME
| PASSWORD_REUSE_MAX
| PASSWORD_LOCK_TIME
| PASSWORD_GRACE_TIME } { expr | UNLIMITED | DEFAULT }
| PASSWORD_VERIFY_FUNCTION { function | NULL | DEFAULT }
};

FAILED_LOGIN_ATTEMPTS
-------------------
if a user attempts to log in more than the specified number of times, the account
will be locked. Default is 10 attempts.

PASSWORD_LIFE_TIME
------------------
number of days the same password can be used unless a grace period is specified.
Default is 180 days.

PASSWORD_REUSE_TIME
------------------
number of days that must pass before the same password can be used again.
Default is unlimited.

PASSWORD_REUSE_MAX
-----------------
number of times a password must be changed before a previous password can be used again.
Default is unlimited.

PASSWORD_LOCK_TIME
-----------------
number of days an account will remain locked after failed login attempts has been triggered.
Default is 1 day.

PASSWORD_GRACE_TIME
-----------------
number of grace days for user to change password.
Default is 7 days.

PASSWORD_VERIFY_FUNCTION
-----------------
allows you to define PL/SQL that can be used for password verification.


===========================Example==============================

SQL> SELECT * FROM dba_profiles
WHERE profile = 'DEFAULT'
AND resource_type = 'PASSWORD';


SQL> CREATE USER auser IDENTIFIED BY apassword
DEFAULT TABLESPACE dfts
TEMPORARY TABLESPACE tpts
PROFILE aprofile;


SQL> CREATE PROFILE restrictive
LIMIT FAILED_LOGIN_ATTEMPTS 1
PASSWORD_LIFE_TIME 30
PASSWORD_LOCK_TIME 7
PASSWORD_GRACE_TIME 0;


SQL> ALTER USER scott PROFILE restrictive;

SQL> ALTER USER scott PROFILE default;


SQL> ALTER PROFILE default
LIMIT PASSWORD_REUSE_TIME 3
PASSWORD_REUSE_MAX unlimited;

how to collect values from multiple rows into a single, comma delimited string. (using stragg = string aggregation)

Prerequisites Of STRAGG (Tom Kyte's)
==========================================

create or replace type stragg_type as object
(
string varchar2(4000),

static function ODCIAggregateInitialize
( sctx in out stragg_type )
return number ,

member function ODCIAggregateIterate
( self in out stragg_type ,
value in varchar2
) return number ,

member function ODCIAggregateTerminate
( self in stragg_type,
returnvalue out varchar2,
flags in number
) return number ,

member function ODCIAggregateMerge
( self in out stragg_type,
ctx2 in stragg_type
) return number
);
/

create or replace type body stragg_type
is

static function ODCIAggregateInitialize
( sctx in out stragg_type )
return number
is
begin

sctx := stragg_type( null ) ;

return ODCIConst.Success ;

end;

member function ODCIAggregateIterate
( self in out stragg_type ,
value in varchar2
) return number
is
begin

self.string := self.string || ',' || value ;

return ODCIConst.Success;

end;

member function ODCIAggregateTerminate
( self in stragg_type ,
returnvalue out varchar2 ,
flags in number
) return number
is
begin

returnValue := ltrim( self.string, ',' );

return ODCIConst.Success;

end;

member function ODCIAggregateMerge
( self in out stragg_type ,
ctx2 in stragg_type
) return number
is
begin

self.string := self.string || ctx2.string;

return ODCIConst.Success;

end;

end;
/

create or replace function stragg
( input varchar2 )
return varchar2
deterministic
parallel_enable
aggregate using stragg_type
;
/

Data:-
=========
GROUP_KEY VAL
---------- ----------
Group 1 a

Group 2 a
Group 2 b

Group 3 a
Group 3 b

use of stragg
================
select
group_key ,
STRAGG( VAL ) as string
from
t
group by
group_key
;


select
group_key ,
stragg( DISTINCT val ) as string
from
t
group by
group_key
;

Sunday, November 1, 2009

How to collect values from multiple rows into a single, comma delimited string (using vm_concat ,stragg)

SQL> select deptno, job, ename from emp order by 1,2;
DEPTNO JOB ENAME
---------- --------- ----------
10 PRESIDENT KING
20 ANALYST FORD
20 CLERK ADAMS
20 MANAGER JONES
30 CLERK JAMES
30 MANAGER BLAKE
30 SALESMAN ALLEN

7 rows selected.

SQL> col names format a80 heading "JOB/EMPLOYEE_NAME LIST"
SQL> select deptno, wm_concat(job||','||ename) names
2 from emp
3 group by deptno
4 order by deptno
5 /
DEPTNO JOB/EMPLOYEE_NAME LIST
---------- --------------------------------------------
10 PRESIDENT,KING
20 ANALYST,FORD,MANAGER,JONES,CLERK,ADAMS
30 CLERK,JAMES,MANAGER,BLAKE,SALESMAN,ALLEN

3 rows selected.

SQL> select deptno,
2 substr(replace(sys_connect_by_path(name,'/'),'/',','),2) names
3 from ( select deptno, job||','||ename name,
4 row_number() over(partition by deptno order by job, ename) rn
5 from emp
6 )
7 where connect_by_isleaf = 1
8 connect by prior deptno = deptno and prior rn = rn-1
9 start with rn = 1
10 /
DEPTNO JOB/EMPLOYEE_NAME LIST
---------- ----------------------------------------------------------------------
10 PRESIDENT,KING
20 ANALYST,FORD,CLERK,ADAMS,MANAGER,JONES
30 CLERK,JAMES,MANAGER,BLAKE,SALESMAN,ALLEN

3 rows selected.

SQL> select deptno,
2 trim (both ',' from
3 replace(
4 replace(
5 max(decode(job,'ANALYST',job))||','||max(decode(job,'ANALYST',ename))||','||
6 max(decode(job,'CLERK',job))||','||max(decode(job,'CLERK',ename))||','||
7 max(decode(job,'MANAGER',job))||','||max(decode(job,'MANAGER',ename))||','||
8 max(decode(job,'PRESIDENT',job))||','||max(decode(job,'PRESIDENT',ename))||','||
9 max(decode(job,'SALESMAN',job))||','||max(decode(job,'SALESMAN',ename)),
10 ',,,',','),
11 ',,',',')
12 ) names
13 from emp
14 group by deptno
15 order by deptno
16 /
DEPTNO JOB/EMPLOYEE_NAME LIST
---------- --------------------------------------------------------------------------------
10 PRESIDENT,KING
20 ANALYST,SCOTT,CLERK,SMITH,MANAGER,JONES
30 CLERK,JAMES,MANAGER,BLAKE,SALESMAN,WARD

3 rows selected.


---------------------------------------------------------------
----another Example - 2

create table temp
(
source_1 varchar2(2),
period varchar2(4),
id_l number
);

insert into temp values ('A','2008',1);
insert into temp values ('A','2008',2);
insert into temp values ('A','2008',3);
insert into temp values ('A','2008',7);
insert into temp values ('B','2008',4);
insert into temp values ('B','2008',5);
insert into temp values ('B','2008',6);


select source_1,period,
rtrim(replace(replace(xmlagg(xmlelement("a",id_l)).getstringval(),'<a>',NULL),'</a>',','),',')
from temp
group by source_1,period


gives the output that you asked for..

A 2008 1,2,3,7
B 2008 4,5,6

======================

select deptno,ename,wm_concat(ename||' '||job) over (partition by deptno order by deptno) "ename/job"
from emp

Rows to columns wise query


Oracle 9i xmlagg

In Oracle 9i we can use the xmlagg function to aggregate multiple rows onto one column:

select
deptno,
rtrim (xmlagg (xmlelement (e, ename || ',')).extract ('//text()'), ',') enames
from
emp
group by
deptno
;

DEPTNO ENAMES
---------- ----------------------------------------
10 CLARK,MILLER,KING
20 SMITH,FORD,ADAMS,SCOTT,JONES
30 ALLEN,JAMES,TURNER,BLAKE,MARTIN,WARD



Use 11g SQL pivot for single row output


The SQL pivot operator allows you to take multiple rows and display them on a single line.

select *
from
(select fk_department
from employee)
pivot
(count(fk_department)
for fk_department in ('INT', 'WEL', 'CEN', 'POL'));

'INT' 'WEL' 'CEN' 'POL'
---------- ---------- ---------- -------
7 6 0 8


Use SQL within group for moving rows onto one line and listagg to display multiple column values in a single column

In Oracle 11g, we have the within group SQL clause to pivot multiple rows onto a single row. We also have a direct SQL mechanism for non-first-normal-form SQL display. This allows multiple table column values to be displayed in a single column, using the listagg built-in function:

select
deptno,
listagg (ename, ',')
WITHIN GROUP
(ORDER BY ename) enames
FROM
emp
GROUP BY
deptno
/
DEPTNO ENAMES
---------- --------------------------------------------------
10 CLARK,KING,MILLER
20 ADAMS,FORD,JONES,SCOTT,SMITH
30 ALLEN,BLAKE,JAMES,MARTIN,TURNER,WARD


Use the SYS_CONNECT_BY_PATH operator




select
deptno,
substr(SYS_CONNECT_BY_PATH(lname, ','),2) name_list
from
(
select
lname,
deptno,
count(*) OVER ( partition by deptno ) cnt,
ROW_NUMBER () OVER ( partition by deptno order by lname) seq
from emp
where
deptno is not null)
where
seq=cnt
start with
seq=1
connect by prior
seq+1=seq
and prior
deptno=deptno;

DEPTNO NAME_LIST
1 Komers,Mokrel,Stenko
2 Hung,Tong
3 Hamer
4 Mansur


Use a Cross join


Matt contributed this handy SQL technique to pivot one row of several columns into a single column with several rows, using the Oracle Cross join syntax. Matt notes that the Cross join "has other uses in conjunction with a WHERE clause to create triangular result sets for rolling totals etc (though thanks to analytic functions those things are very nice and easy)".

SELECT
ite,
case
when ite = 'item1' then item1
when ite = 'item2' then item2
when ite = 'item3' then item3
end as val
FROM
(
SELECT
pivoter.ite,
item1,
item2,
item3
FROM
someTable
CROSS JOIN
(
select 'item1' as ite from dual
UNION ALL
select 'item2' as ite from dual
UNION ALL
select 'item3' as ite from dual
)pivoter
)



Use the Oracle analytic Lag-Over Function



Analytic functions have a pronounced performance improvement since they avoid an expensive self-join and only make one full-table scan to get the results. This site shows an example of using the Oracle LAG function to display multiple rows on a single column:

SELECT
ename,
hiredate,sal,LAG (sal, 1, 0)
OVER (ORDER BY hiredate) AS PrevSal
FROM
emp
WHERE
job = 'CLERK';

Use the SQL CASE operator to pivot rows onto one line

You can use the CASE statement to create a crosstab to convert the rows to columns. Below, we use the Oracle CASE function to create a "crosstab" of the results, as in this example from SearchOracle:

select Sales.ItemKey
, sum(Sales.QtySold) as Qty
, sum(
case when OH.MOHClass = 'Fixed'
then OH.Amt
else .00 end ) as MOHFixed
, sum(
case when OH.MOHClass = 'Var'
then OH.Amt
else .00 end ) as MOHVar
, sum(
case when OH.MOHClass = 'Cap'
then OH.Amt
else .00 end ) as MOHCap
from Sales
left outer
join OH
on Sales.ItemKey = OH.ItemKey
group
by Sales.ItemKey

Tuesday, October 27, 2009

how to find out, who lock the same row?

session-1
=====================

create table LOCK_TEST (COL1 varchar2(10), COL2 varchar2(10));

Table created.

insert into LOCK_TEST values (12,'DATA1');

1 row created.

insert into LOCK_TEST values (54, 'DATA2');

1 row created.

select * from LOCK_TEST ;

commit ;


select * from LOCK_TEST for update ;



Session-2
================

update tstlock
set col2 = 'DATA1'
where col2 = 'DATA1';

-- this update hangs: the rows are already locked by session 1





Session-3(DBA)
================
select * from v$lock ;


select l1.sid, ' IS BLOCKING ', l2.sid
from v$lock l1, v$lock l2
where l1.block =1 and l2.request > 0
and l1.id1=l2.id1
and l1.id2=l2.id2



select /*+ no_query_transformation first_rows */ l1.sid, ' IS BLOCKING ', l2.sid
from v$lock l1, v$lock l2
where l1.block = 1 and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2 ;


==========================================JUST ENOUGH=============================================

select s1.username || '@' || s1.machine
|| ' ( SID=' || s1.sid || ' ) is blocking '
|| s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' AS blocking_status
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid=l1.sid and s2.sid=l2.sid
and l1.BLOCK=1 and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2 ;

---SYS@WORKGROUP\USER-15FC74B60E ( SID=150 ) is blocking SYS@WORKGROUP\USER-15FC74B60E ( SID=148 )


select do.object_name,
row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#,
dbms_rowid.rowid_create ( 1, ROW_WAIT_OBJ#, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW# )
from v$session s, dba_objects do
where s.sid = :sid -- the blocked session, 148 in this example
and s.ROW_WAIT_OBJ# = do.OBJECT_ID ;



select * from tstlock
where rowid='AAANpsAABAAAPNCAAA' ;

=====================================================================


============================Information==============================

Lock type and the ID1 / ID2 columns
====================================
In this case, we already know that the blocking lock is an exclusive DML lock,
since we are the ones who issued the locking statement. But most of the time,
you won't be so lucky. Fortunately, you can read this information from the v$lock
view with little effort.

The first place to look is the TYPE column. There are dozens of lock types,
but the vast majority are system types. System locks are normally held only
for a very brief amount of time, and it's not generally helpful to try to tune
your library cache, undo logs, etc. by looking in v$lock!


There are only three types of user locks,

1. TX,
2. TM
3. and UL.

1. UL is a user-defined lock: a lock defined with the DBMS_LOCK package.

2. The TX lock is a row transaction lock; it's acquired once for every transaction
that changes data, no matter how many objects you change in that transaction.
The ID1 and ID2 columns point to the rollback segment and transaction table
entries for that transaction.

3. The TM lock is a DML lock. It's acquired once for each object that's being changed.
The ID1 column identifies the object being modified.
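As a sketch of how a TX lock's ID1 and ID2 encode the transaction entry, you can decode them directly from v$lock. The bit layout (undo segment number in the high-order 16 bits of ID1, transaction slot in the low-order 16 bits, wrap/sequence number in ID2) is the conventional decoding, not formally documented:

```sql
-- Decode each TX lock's ID1/ID2 into the transaction address:
-- undo segment number, transaction table slot, and wrap (sequence).
select sid,
       trunc(id1 / 65536) as undo_seg#,
       mod(id1, 65536)    as slot#,
       id2                as wrap#
from v$lock
where type = 'TX';
```

The undo_seg#/slot#/wrap# triple can then be matched against XIDUSN, XIDSLOT, and XIDSQN in v$transaction to find the owning transaction.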


Lock Modes
=====================
You can see more information on TM and TX locks just by looking at the lock modes.
The LMODE and REQUEST columns both use the same numbering for lock modes,
in order of increasing exclusivity: from 0 for no lock, to 6 for exclusive lock.
A session must obtain an exclusive TX lock in order to change data; LMODE will be 6.
If it can't obtain an exclusive lock because some of the rows it wants to change
are locked by another session, then it will request a TX lock in exclusive mode;
LMODE will be 0 since it does not hold the lock, and REQUEST will be 6.
You can see this interaction in the rows we selected earlier from v$lock:


Note that ID1 and ID2 in Session 2, which is requesting the TX lock (LMODE=0, REQUEST=6),
point back to the rollback and transaction entries for Session 1.
That's what lets us determine the blocking session for Session 2.


You may also see TX locks in mode 4, Share mode. If a block containing rows
to be changed doesn't have any interested transaction list (ITL) entries left,
then the session acquires a TX lock in mode 4 while waiting for an ITL entry.
If you see contention for TX-4 locks on an object, you probably need to
increase INITRANS for the object.
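A minimal sketch of raising INITRANS, using the demo table from above as the example. Note that the new setting only applies to newly formatted blocks, so existing blocks need a rebuild (the MOVE) to pick it up:

```sql
-- preallocate more ITL slots per block for future blocks
alter table tstlock initrans 5;

-- existing blocks keep the old setting; rebuild the segment to apply it
alter table tstlock move;

-- a MOVE invalidates the table's indexes, so rebuild them afterwards
```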

TM locks are generally requested and acquired in modes 3,
aka Row Exclusive (SX), and 6. DDL requires a TM Exclusive lock.
(Note that CREATE TABLE doesn't require a TM lock -- it doesn't need to
lock any objects, because the object in question doesn't exist yet!) DML
requires a Row Exclusive (mode 3) lock. So, in the rows we selected earlier
from v$lock, you can see from the TM locking levels that these are DML locks:
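The v$lock output is easier to read if you decode LMODE and REQUEST into their mode names. A sketch using the standard v$lock mode numbering:

```sql
-- Translate lock mode numbers into names for the three user lock types
select sid, type, id1, id2,
       decode(lmode,   0,'None', 1,'Null', 2,'Row-S (SS)', 3,'Row-X (SX)',
                       4,'Share', 5,'S/Row-X (SSX)', 6,'Exclusive') as held,
       decode(request, 0,'None', 1,'Null', 2,'Row-S (SS)', 3,'Row-X (SX)',
                       4,'Share', 5,'S/Row-X (SSX)', 6,'Exclusive') as requested
from v$lock
where type in ('TX', 'TM', 'UL');
```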


Identifying the locked object
===================================

Now that we know that each TM row points to a locked object,
we can use ID1 to identify the object.


SQL> select object_name from dba_objects where object_id=55514 ;


Sometimes just knowing the object is enough information; but we can dig even deeper.
We can identify not just the object, but the block and even the
row in the block that Session 2 is waiting on.


Identifying the locked row
=============================

We can get this information from v$session by looking at the v$session
entry for the blocked session:


SQL> select row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
from v$session where sid=148 ;

This gives us the object ID, the relative file number, the block in the datafile,
and the row in the block that the session is waiting on. If that list of
data sounds familiar, it's because those are the four components of an extended ROWID.
We can build the row's actual extended ROWID from these components using
the DBMS_ROWID package. The ROWID_CREATE function takes these arguments
and returns the ROWID:


SQL> select do.object_name,
row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#,
dbms_rowid.rowid_create ( 1, ROW_WAIT_OBJ#, ROW_WAIT_FILE#, ROW_WAIT_BLOCK#, ROW_WAIT_ROW# )
from v$session s, dba_objects do
where s.sid = 148
and s.ROW_WAIT_OBJ# = do.OBJECT_ID ;


And, of course, this lets us inspect the row directly.


SQL> select * from tstlock where rowid='AAAVnHAAQAAAp0tAAA' ;



select * from v$lock
where type='TM'


select object_name from dba_objects where object_id=55916 ; --(v$lock.id1 OF TM LOCK )


select row_wait_obj#, row_wait_file#, row_wait_block#, row_wait_row#
from v$session where sid=148 ; --(SID of the blocked session)


move table

SELECT 'ALTER TABLE '|| OWNER ||'.'|| TABLE_NAME || CHR(10) ||
'MOVE TABLESPACE '||TABLESPACE_NAME ||';'
FROM DBA_TABLES
WHERE OWNER IN ('TEST')
AND TEMPORARY = 'N';

Move Index and rebuild index

---first create a folder C:/Tuning

set echo off
set feedback off
set pagesize 0
set verify off
PROMPT Enter the name of the application owner:
ACCEPT app_owner
PROMPT Enter the name of the new tablespace for the application indexes:
ACCEPT new_idx_tablespace

spool C:\TUNING\STIMG_MoveIndexes.tmp

-- rebuild all indexes on the moved tables, even those not owned
-- by the specified user because moving the tables will set their
-- status to UNUSABLE (unless they are IOT tables)

SELECT 'ALTER INDEX '||I.owner||'.'||I.index_name||CHR(10)||
'REBUILD TABLESPACE '||I.tablespace_name||' ONLINE PARALLEL;'
FROM DBA_INDEXES I,DBA_TABLES T
WHERE I.table_name = T.table_name
AND I.owner = T.owner
AND T.owner = UPPER('&app_owner');

-- rebuild any other indexes owned by this user that may not be on
-- the above tables

SELECT 'ALTER INDEX '||owner||'.'||index_name||CHR(10)||
'REBUILD TABLESPACE &new_idx_tablespace ONLINE PARALLEL;'
FROM dba_indexes
WHERE owner = UPPER('&&app_owner');

spool off

set echo on
set feedback on
set pagesize 60

spool C:\TUNING\STIMG_MoveIndexes.log

@C:\TUNING\STIMG_MoveIndexes.tmp

spool off

Tuesday, October 20, 2009

External table in oracle 11g

External table in oracle 11g


A) First need to Create a Directory
=======================================
=======================================

SQL> CREATE DIRECTORY My_dir AS 'D:\dir_external\dir';
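If a user other than the directory's creator will read or write files through it, that user also needs object privileges on the directory. A sketch (the user name scott is just an example):

```sql
-- allow another user to use the directory for external table I/O
GRANT READ, WRITE ON DIRECTORY My_dir TO scott;
```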


B) Then create External Table
=======================================
=======================================

1. External table normal in oracle 10g
=========================================

SQL> CREATE TABLE Noncompressed_all_objects
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY my_dir
LOCATION ( 'Noncompressed_all_obj.dmp' )
)
AS
SELECT *
FROM all_objects;


To see The size of file
-----------------------
SQL> host dir /od D:\dir_external\dir\*.DMP


2. External Table Compressed in Oracle 11g
=============================================

SQL> CREATE TABLE compressed_all_objects
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY my_dir
ACCESS PARAMETERS (COMPRESSION ENABLED) -- difference from the 10g version
LOCATION ( 'compressed_all_obj.dmp' )
)
AS
SELECT *
FROM all_objects;

To see compressed size
------------------
SQL> host dir /od D:\dir_external\dir\*.DMP


3. External Table Encrypted in Oracle 11g
==============================================

SQL> CREATE TABLE encrypted_all_objects
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY my_dir
ACCESS PARAMETERS (ENCRYPTION ENABLED) -- difference from the 10g version
LOCATION ( 'encrypted_all_obj.dmp' )
)
AS
SELECT *
FROM all_objects;


4. External Table Combining compression and encryption in Oracle 11g
===================================================================

SQL> CREATE TABLE combining_all_objects
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY my_dir
ACCESS PARAMETERS (ENCRYPTION ENABLED COMPRESSION ENABLED)
LOCATION ( 'combining_all_obj.dmp' )
)
AS
SELECT *
FROM all_objects;

[note: To change the location of this external table to use the compressed file]

SQL> ALTER TABLE Noncompressed_all_objects LOCATION ('compressed_all_obj.dmp');

oracle dblink create

oracle dblink create
=================

Create the public database link in the local database, specifying the remote
user and password and the TNS service name (the one created for that remote database).

* The single quotes around the service name are mandatory
* The service name must be in the TNSNAMES.ORA file on the server


============================
Needed Privilege
============================

Privilege                     Database   Required For
----------------------------- ---------- ---------------------------------------
CREATE DATABASE LINK          Local      Creation of a private database link.
CREATE PUBLIC DATABASE LINK   Local      Creation of a public database link.
CREATE SESSION                Remote     Creation of any type of database link.
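As a sketch, the grants needed for a fixed-user private link (the basel2 account from the example below):

```sql
-- on the local database, as a privileged user
GRANT CREATE DATABASE LINK TO basel2;

-- on the remote database, the target account must be able to connect
GRANT CREATE SESSION TO basel2;
```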


select *
FROM ROLE_SYS_PRIVS
WHERE PRIVILEGE IN ( 'CREATE SESSION','CREATE DATABASE LINK',
'CREATE PUBLIC DATABASE LINK')


CREATE DATABASE LINK with_REPORT
CONNECT TO basel2 IDENTIFIED BY basel2
USING 'REPORT';


DROP DATABASE LINK with_REPORT


SELECT * FROM STPASTDU@WITH_REPORT

truncate table stpastdu;

insert into stpastdu
select * from stpastdu@with_report ;

commit;

truncate table BASELGLTRA;

insert into BASELGLTRA
select * from BASELGLTRA@with_report ;

commit ;

truncate table BASELTRANS


SELECT * FROM all_db_links;

desc gv$session_connect_info

--------------

1.

CREATE DATABASE LINK supply.us.acme.com;

(A private link using the global database name to the remote supply database.
The link uses the userid/password of the connected user.
So if scott (identified by tiger) uses the link in a query,
the link establishes a connection to the remote database as scott/tiger.)

2.

CREATE DATABASE LINK link_2 CONNECT TO jane IDENTIFIED BY doe USING 'us_supply';

(A private fixed user link called link_2 to the database with service name us_supply.
The link connects to the remote database with the userid/password of jane/doe
regardless of the connected user.)

3.

CREATE DATABASE LINK link_1 CONNECT TO CURRENT_USER USING 'us_supply';


(A private link called link_1 to the database with service name us_supply.
The link uses the userid/password of the current user to log onto the remote database.)


1.Connected User Link

CREATE [SHARED] [PUBLIC] DATABASE LINK <link_name>
USING '<service_name>';

2.Current User Link

CREATE [PUBLIC] DATABASE LINK <link_name>
CONNECT TO CURRENT_USER
USING '<service_name>';

3.Fixed User Link

CREATE [PUBLIC] DATABASE LINK <link_name>
CONNECT TO <user_name>
IDENTIFIED BY <password>
USING '<service_name>';


4.Shared Link

CREATE SHARED DATABASE LINK <link_name>
AUTHENTICATED BY <auth_user> IDENTIFIED BY <auth_password>
USING '<service_name>';

conn test/test

CREATE SHARED DATABASE LINK shared
CONNECT TO scott IDENTIFIED BY tiger
AUTHENTICATED BY test IDENTIFIED BY test
USING 'link';

SELECT * FROM user_db_links;

SELECT table_name, tablespace_name FROM
user_tables@shared;

==============================
Executing Remote Procedures
==============================
<procedure_name>@<db_link_name>(<parameter_list>);

exec testproc@remote_db(1);

or

CREATE OR REPLACE SYNONYM testproc FOR testproc@remote_db;

exec testproc(1);

=======================
Test Link
=======================
declare
v_t varchar2(200);
BEGIN
SELECT table_name
INTO v_t
FROM all_tables@db_link
WHERE rownum = 1;
EXCEPTION
WHEN OTHERS THEN
RAISE_APPLICATION_ERROR(-20999, 'No Connection here');
END;
/

=======================
Close Link
=======================
ALTER SESSION CLOSE DATABASE LINK <link_name>;

ALTER SESSION CLOSE DATABASE LINK curr_user;

============================
Drop Database Link
=============================

DROP DATABASE LINK <link_name>;
DROP DATABASE LINK test_link;

DROP PUBLIC DATABASE LINK <link_name>;
DROP PUBLIC DATABASE LINK test_link;

Saturday, October 17, 2009

Oracle 11g Release 2 Database New Features

Oracle 11g Release 2 Database New Features
=================================
=================================
(From Oracle Magazine)

The many new features of and enhancements to Oracle Database 11g Release 2 include upgrades to application development, availability, business intelligence and data warehousing, clustering, performance, security, server manageability, and unstructured-data management. For a complete list and description of the new features, see the Oracle Database New Features Guide
11g Release 2, available with the Oracle
Database 11g Release 2 documentation
(see “nextSTEPS” at the end of this article for the link).

Application Development
========================
Application developers can benefit from new application development features of
Oracle Database 11g Release 2. Oracle Application Express includes several Enhancements:

■ Application date format
■ Custom themes
■ Declarative binary large object (BLOB) support
■ Documented JavaScript libraries
■ Report printing
■ Forms conversion
■ Security
■ Interactive reporting region
■ Runtime-only installation

Availability
==================
To support high availability, Oracle Database 11g Release 2 includes
enhanced backup-and-recovery features:
■ Automatic Block Repair
■ Backup to Amazon Simple Storage Service (S3) using Oracle Secure Backup
cloud computing
■ DUPLICATE without connection to target database
■ Tablespace point-in-time recovery
■ New DUPLICATE options

Online application maintenance and upgrades are easier:
■ Edition-Based Redefinition
■ CREATE or REPLACE TYPE to allow FORCE
■ Fine-grained dependencies for triggers
■ IGNORE_ROW_ON_DUPKEY_INDEX hint for INSERT statement

Edition-Based Redefinition makes it possible to change an application’s database objects without interrupting the application’s availability, by making the changes in a new edition.


High availability in Oracle Database 11g Release 2 is also enhanced by new
Oracle Data Guard features:
=====================================
■ Compressed table support in logical standby databases
■ Configurable real-time query apply lag limit
■ Integrated support for application failover in an Oracle Data Guard configuration
■ Support for as many as 30 standby databases


Business Intelligence and Data Warehousing
=============================================
Oracle Database 11g Release 2 improves many business intelligence and data warehouse capabilities, including analytics:
■ Analytic Functions 2.0
■ Recursive WITH clause

Data loading has been made simpler:
■ EXECUTE privilege for DIRECTORY objects
■ Preprocessing data for ORACLE_LOADER access driver in external tables



Partitioning in Oracle Database 11g Release 2 has been improved in several
ways:
====================================================
■ Virtual columns in the primary key or foreign key for reference partitioning
■ System-managed indexes for list partitioning



Features that enhance performance and scalability have been added to Oracle Database 11g Release 2:
====================================================
■ In-memory parallel execution
■ Minimal-effort parallel execution—auto degree of parallelism and queuing
■ DBMS_PARALLEL_EXECUTE package

Many improvements have been made to Oracle Warehouse Builder:
■ Find support in mapping editor
■ Business intelligence tool integration
■ Copy and paste of operators and attributes in mapping editor
■ Current configuration list in Design Center tool bar
■ Support for flat-file imports
■ Table function support
■ Experts available in Editor menu
■ Expression editing in Operator Edit dialog box
■ Grouping and spotlighting of objects in mapping editor
■ Management of locations registered in multiple control centers
■ User interface for managing locations
■ Key lookup operator enhancements
■ Mapping debugger enhancements
■ New Oracle JDeveloper–style user interface
■ Operator references included in generated PL/SQL code
■ Quick mapper
■ Repository Browser changes
■ Simplified Oracle Warehouse Builder repository upgrades
■ Support for extracting data from tables containing LONG datatype
■ Subqueries in JOIN operator

Clustering
=========================
Oracle Real Application Clusters (Oracle RAC) now provides many features for
simplifying installation and management of clusters and Oracle RAC databases:
■ Configuration Assistants, which support new Oracle RAC features
■ Cluster Verification Utility
■ Integration of Cluster Verification Utility and Oracle Universal Installer
■ Cluster Time Service
■ Oracle Cluster Registry enhancements
■ Grid Plug and Play
■ Oracle Restart
■ Policy-Based Cluster and Capacity Management
■ Oracle Clusterware resource modeling
■ Role-separated management
■ Agent development framework
■ Zero Downtime Patching for Oracle Clusterware and Oracle RAC
■ Oracle Enterprise Manager–based Oracle Clusterware resource management
■ Oracle Enterprise Manager provisioning for Oracle Clusterware and Oracle
RAC
■ Oracle Enterprise Manager support for Grid Plug and Play and Oracle Restart
■ Oracle Universal Installer support for removing Oracle RAC installations
■ Improved deinstallation support with Oracle Universal Installer
■ Scripts that support DBControl database downgrades
■ Oracle Restart integration with Oracle Universal Installer
■ Support for out-of-place Oracle Clusterware upgrade
■ Server Control Utility enhancements
■ Oracle Universal Connection Pool integrations

These improvements include faster relocation of services on node failure, support for as many as five copies of Oracle Cluster Registry for improved availability of the cluster, and storage of Oracle Cluster Registry in Oracle Automatic Storage Management.


Performance
============================
Server performance capabilities are improved by new features in Oracle
Database 11g Release 2:

■ Stored outlines migration to SQL plan management
■ Client result cache using table annotations support
■ Support for 4K-sector disk drives


Security
===========================
New auditing and encryption features of Oracle Database 11g Release 2 make
deployments more secure:
■ Audit Trail Cleanup
■ Tablespace master key

Audit Trail Cleanup provides for automatic file completion; the new encryption
key management functionality enables complete integration with hardware
security modules and increased performance for transparent data encryption.

Server Manageability
===========================
Oracle Database 11g Release 2 is easier to use than its predecessors, because of features that enhance manageability:
■ Oracle Automatic Storage Management Cluster File System
■ Oracle Automatic Storage Management Dynamic Volume Manager
■ Oracle Automatic Storage Management FS Snapshot
■ Oracle Cluster Registry and voting disk on Oracle Automatic Storage
Management
■ Oracle Automatic Storage Management intelligent data placement
■ Oracle Automatic Storage Management Configuration Assistant
■ Oracle Automatic Storage Management file access control
■ ASMCMD command extensions
■ Oracle Enterprise Manager support for Oracle Automatic Storage Management
Cluster File System
■ Oracle Enterprise Manager integration for Oracle Automatic Storage
Management file access control

Oracle Automatic Storage Management now supports all types of data, including database files, Oracle Clusterware files, and file system data such as Oracle homes and binaries. Oracle Automatic Storage Management Cluster File System
extends Oracle Automatic Storage Management by providing a file system
for Oracle binaries, report files, trace files, alert logs, and other application
data files.
Database management features complement storage management features
for general ease of use:

■ Oracle RAC configuration for Oracle Enterprise Manager
■ Patch application with DBControl
■ Automatic patching of time stamp with time zone data
■ Local time zone datatype
■ Segment creation on demand
■ Zero-size unusable indexes and index partitions
■ Metadata SXML Comparison Tool
■ Replay compare period reports
■ SQL Tuning Set comparisons
■ Sampling for Oracle Active Data Guard
■ Oracle Exadata simulation
■ Global Oracle RAC active session history report plus Automatic Database
Diagnostic Monitor backward compatibility
■ Oracle multithreaded server and Oracle Streams support
■ Parallel query plus alt plan recording, export of SQL Tuning Set enhancements
■ Synchronization controls

Unstructured-Data Management
=================================
Developers can achieve significant productivity gains with improvements to unstructured-data management capabilities:
■ Attribute extraction of requested attributes only
■ Client-side Digital Imaging and Communications in Medicine (DICOM) attribute extraction
■ DICOM enhancements
■ Watermarking and image processing enhancements
■ Visualization framework
■ Network data model enhancements
■ GeoRaster Java API
■ Raster reprojections and ground control point-based georeferencing
■ Routing and geocoding enhancements
■ Database file system
■ LZO support for SecureFiles
■ Binary XML enhancements
■ Oracle XML DB repository performance improvements and guidelines
■ XMLIndex enhancements
■ XMLType partitioning

Other Database Capabilities
===========================
Oracle Database 11g Release 2 supports other functionality as well, including
improvements to Oracle Scheduler:

■ E-mail notification
■ File watcher
■ Multiple-destination jobs
■ Remote database jobs

Other enhancements improve database operations:
■ Flashback Data Archive support for data definition languages
■ Instance caging
■ Data Pump Legacy Mode utility
■ IPv6 support for JDBC thin clients, Java virtual machine, and Java Debugger
■ Oracle Enterprise Manager Support Workbench for Oracle Automatic Storage Management