Halim is a Sr. Database Engineer/Data Architect (in Atlanta, USA) who is an Oracle Certified Professional (OCP) DBA, (OCP) Developer, Certified Cloud Architect Professional, and OCI Autonomous DB Specialist, with extensive expertise in database design, configuration, tuning, capacity planning, RAC, DG, scripting, Python, PL/SQL, etc. He finished 16th in the worldwide first-ever PL/SQL Challenge cup playoff: http://plsql-challenge.blogspot.com/2010/07/winners-of-first-plsql-challenge.html
Wednesday, July 13, 2011
Why are lots of archive log files generated even when the database is not doing any work?
Checklist:
1) Sometimes parallel rollback of a large transaction can become very slow. After
killing a large running transaction (either by killing the shadow process or by
aborting the database), the database may seem to hang, or SMON and the parallel query
servers may consume all available CPU. Once the shadow process is killed or the
database is aborted, the v$transaction entry is lost, so you cannot estimate the
progress of the rollback by examining v$transaction.used_ublk.
Solution:
If you hit the above case, you can disable parallel rollback with:
alter system set fast_start_parallel_rollback = false;
If the instance hangs, shut down the database and set it in the init.ora file instead:
fast_start_parallel_rollback = false
2) Make sure the temporary tablespace is a true TEMPORARY tablespace and is set to
the NOLOGGING option.
3) Check whether any submitted and/or scheduled jobs are running at the OS level or inside the database. If so, disable those jobs.
You can also analyze your redo log or archive log files with the LogMiner utility.
Using LogMiner you can see the redo entries/transactions and be sure of what is generating so many archive/redo logs (a minimal sketch follows).
For details, see here:
http://halimdba.blogspot.com/2010/08/using-logminer-to-analyze-redo-log.html
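A minimal LogMiner sketch (the archived log file name below is only a placeholder; point it at one of your own archived logs):

SQL> EXECUTE DBMS_LOGMNR.ADD_LOGFILE(logfilename => '/arch/1_100_734567890.arc', options => DBMS_LOGMNR.NEW);
SQL> EXECUTE DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
SQL> SELECT seg_owner, seg_name, operation, COUNT(*) redo_entries
     FROM v$logmnr_contents
     GROUP BY seg_owner, seg_name, operation
     ORDER BY redo_entries DESC;
SQL> EXECUTE DBMS_LOGMNR.END_LOGMNR;

The GROUP BY quickly shows which objects and operations are producing most of the redo in that log.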
How can you partition a non-partitioned table in Oracle 10g?
============================================================
You can partition a non-partitioned table in one of four ways:
A) Export/import method
B) Insert with a subquery method
C) Partition exchange method
D) DBMS_REDEFINITION
Any of these four methods will create a partitioned table from an existing non-partitioned table.
A. Export/import method
--------------------
1) Export your table:
exp usr/pswd tables=numbers file=exp.dmp
2) Drop the table:
drop table numbers;
3) Recreate the table with partitions:
create table numbers (qty number(3), name varchar2(15))
partition by range (qty)
(partition p1 values less than (501),
partition p2 values less than (maxvalue));
4) Import the table with ignore=y:
imp usr/pswd file=exp.dmp ignore=y
The ignore=y option causes the import to skip the table creation step and
load all rows into the existing partitioned table.
B. Insert with a subquery method
-----------------------------
1) Create a partitioned table:
create table partbl (qty number(3), name varchar2(15))
partition by range (qty)
(partition p1 values less than (501),
partition p2 values less than (maxvalue));
2) Insert into the partitioned table with a subquery from the
non-partitioned table:
insert into partbl (qty, name)
select * from origtbl;
3) If you want the partitioned table to have the same name as the
original table, then drop the original table and rename the
new table:
drop table origtbl;
alter table partbl rename to origtbl;
C. Partition Exchange method
-------------------------
ALTER TABLE EXCHANGE PARTITION can be used to convert a partition (or
subpartition) into a non-partitioned table and a non-partitioned table into a
partition (or subpartition) of a partitioned table by exchanging their data
and index segments.
1) Create a table (e.g. dummy_t) with CREATE TABLE ... AS SELECT, containing the rows for the required partition.
2) ALTER TABLE ... EXCHANGE PARTITION partition_name
WITH TABLE non_partitioned_table;
Example
-------
SQL> CREATE TABLE p_emp
2 (sal NUMBER(7,2))
3 PARTITION BY RANGE(sal)
4 (partition emp_p1 VALUES LESS THAN (2000),
5 partition emp_p2 VALUES LESS THAN (4000));
Table created.
SQL> CREATE TABLE dummy_y as SELECT sal FROM emp WHERE sal < 2000;
Table created.
SQL> CREATE TABLE dummy_z as SELECT sal FROM emp WHERE sal BETWEEN 2000 AND 3999;
Table created.
SQL> alter table p_emp exchange partition emp_p1
with table dummy_y;
Table altered.
SQL> alter table p_emp exchange partition emp_p2
with table dummy_z;
Table altered.
D. DBMS_REDEFINITION
-----------------
1) Create a non-partitioned table with the name unpar_table:
SQL> CREATE TABLE unpar_table (
id NUMBER(10),
create_date DATE,
name VARCHAR2(100)
);
2) Apply some constraints to the table:
SQL> ALTER TABLE unpar_table ADD (
CONSTRAINT unpar_table_pk PRIMARY KEY (id)
);
SQL> CREATE INDEX create_date_ind ON unpar_table(create_date);
3) Gather statistics on the table:
SQL> EXEC DBMS_STATS.gather_table_stats(USER, 'unpar_table', cascade => TRUE);
4) Create a Partitioned Interim Table:
SQL> CREATE TABLE par_table (
id NUMBER(10),
create_date DATE,
name VARCHAR2(100)
)
PARTITION BY RANGE (create_date)
(PARTITION unpar_table_2005 VALUES LESS THAN (TO_DATE('01/01/2005', 'DD/MM/YYYY')),
PARTITION unpar_table_2006 VALUES LESS THAN (TO_DATE('01/01/2006', 'DD/MM/YYYY')),
PARTITION unpar_table_2007 VALUES LESS THAN (MAXVALUE));
5) Start the Redefinition Process:
a) Check the redefinition is possible using the following command:
SQL> EXEC Dbms_Redefinition.can_redef_table(USER, 'unpar_table');
b) If no errors are reported, start the redefinition using the following command:
SQL> BEGIN
DBMS_REDEFINITION.start_redef_table(
uname => USER,
orig_table => 'unpar_table',
int_table => 'par_table');
END;
/
Note: This operation can take quite some time to complete.
c) Optionally synchronize the interim table with the original table before index creation:
SQL> BEGIN
dbms_redefinition.sync_interim_table(
uname => USER,
orig_table => 'unpar_table',
int_table => 'par_table');
END;
/
d) Create Constraints and Indexes:
SQL> ALTER TABLE par_table ADD (
CONSTRAINT unpar_table_pk2 PRIMARY KEY (id)
);
SQL> CREATE INDEX create_date_ind2 ON par_table(create_date);
e) Gather statistics on the new table:
SQL> EXEC DBMS_STATS.gather_table_stats(USER, 'par_table', cascade => TRUE);
f) Complete the Redefinition Process:
SQL> BEGIN
dbms_redefinition.finish_redef_table(
uname => USER,
orig_table => 'unpar_table',
int_table => 'par_table');
END;
/
At this point the interim table has become the "real" table and the two names have been switched in the data dictionary.
g) Remove the original table, which now has the name of the interim table:
SQL> DROP TABLE par_table;
h) Rename all the constraints and indexes to match the original names (see also the alternative sketch at the end of this method).
ALTER TABLE unpar_table RENAME CONSTRAINT unpar_table_pk2 TO unpar_table_pk;
ALTER INDEX create_date_ind2 RENAME TO create_date_ind;
i) Check whether the partitioning was successful:
SQL> SELECT partitioned
FROM user_tables
WHERE table_name = 'UNPAR_TABLE';
PAR
---
YES
SQL> SELECT partition_name
FROM user_tab_partitions
WHERE table_name = 'UNPAR_TABLE';
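As a side note (not one of the original steps above): instead of recreating and renaming the dependent objects manually in steps d) and h), DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS can copy them for you between start_redef_table and finish_redef_table. A hedged sketch:

SQL> DECLARE
       l_errors PLS_INTEGER;
     BEGIN
       DBMS_REDEFINITION.copy_table_dependents(
         uname            => USER,
         orig_table       => 'unpar_table',
         int_table        => 'par_table',
         copy_indexes     => DBMS_REDEFINITION.cons_orig_params,
         copy_triggers    => TRUE,
         copy_constraints => TRUE,
         copy_privileges  => TRUE,
         ignore_errors    => FALSE,
         num_errors       => l_errors);
     END;
     /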
The SYSAUX tablespace is growing very large. How can I control it?
=====================================================
In this case, AWR was identified as the component consuming the most space:
SQL> select * from v$sysaux_occupants ;
SQL> SELECT occupant_name, occupant_desc, space_usage_kbytes
FROM v$sysaux_occupants
ORDER BY space_usage_kbytes DESC;

OCCUPANT_NAME  OCCUPANT_DESC                                          SPACE_USAGE_KBYTES
-------------  -----------------------------------------------------  ------------------
SM/AWR         Server Manageability - Automatic Workload Repository             19906624
SM/OPTSTAT     Server Manageability - Optimizer Statistics History                 191808
EM             Enterprise Manager Repository                                        59648
Solution
1) Check the snapshot interval and retention period:
SQL> select * from DBA_HIST_WR_CONTROL;
Check whether the retention period is set too high. In this case it was set to 4 days, so there was no need to modify the retention.
If it is set to a high value, consider reducing the retention period to limit the growth.
By default, the retention is 7 days.
The new retention time is specified in minutes:
SQL> execute DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(retention => 4320);
Here 4320 is 3*24*60, i.e. 3 days converted to minutes.
2) Next, check how many snapshots have been generated and are being kept.
If this is too high, drop the unwanted snapshots to reclaim the space:
SQL> exec dbms_workload_repository.drop_snapshot_range(low_snap_id, high_snap_id);
low_snap_id : the lowest snapshot id of the range to drop.
high_snap_id : the highest snapshot id of the range to drop.
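For example (a sketch; the snapshot ids 100 and 200 below are only placeholders), you can list the available snapshots first and then drop an old range:

SQL> SELECT snap_id, begin_interval_time
     FROM dba_hist_snapshot
     ORDER BY snap_id;

SQL> exec dbms_workload_repository.drop_snapshot_range(low_snap_id => 100, high_snap_id => 200);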
3) Then check whether the statistics_level parameter is set to TYPICAL:
SQL> show parameter statistics
NAME TYPE VALUE
------------------------------------ ----------- -----------------------------
statistics_level                     string      ALL      <= should be TYPICAL
timed_os_statistics                  integer     5
timed_statistics                     boolean     TRUE

If it is set to ALL, change it to TYPICAL, because statistics_level=ALL gathers a lot of additional information into the AWR repository, which consumes more space. In most cases, setting statistics_level to TYPICAL stops the excessive growth.
SQL> alter system set statistics_level=typical;   --- dynamic parameter
Sunday, July 10, 2011
How To Understand AWR Report / Statspack Report ?
==============================================
The scripts are located in the $ORACLE_HOME/rdbms/admin/ directory:
awrrpt.sql
awrrpti.sql
Execute one of them as shown below:
SQL*Plus: Release 11.1.0.6.0 - Production on Sun Jul 10 14:37:04 2011
Copyright (c) 1982, 2007, Oracle. All rights reserved.
SQL> CONN sys@stlbas105 as sysdba
Enter password:
Connected.
SQL>
SQL>
SQL>
SQL> @G:\app\Administrator\product\11.1.0\db_1\RDBMS\ADMIN\awrrpt.sql
Current Instance
~~~~~~~~~~~~~~~~
DB Id DB Name Inst Num Instance
----------- ------------ -------- ------------
2515622958 STLBAS 1 stlbas
Specify the Report Type
~~~~~~~~~~~~~~~~~~~~~~~
Would you like an HTML report, or a plain text report?
Enter 'html' for an HTML report, or 'text' for plain text
Defaults to 'html'
Enter value for report_type: text
Type Specified: text
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
DB Id Inst Num DB Name Instance Host
------------ -------- ------------ ------------ ------------
* 2515622958 1 STLBAS stlbas TESTSERVER
Using 2515622958 for database Id
Using 1 for instance number
Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed. Pressing <return> without
specifying a number lists all completed snapshots.
Enter value for num_days:
.......................
.........................
.........................
..........................
----------------------------------------------------------------------------
This post shows some basic checks that can be done from an AWR report to identify a problem.
Older database versions such as 8i and 9i have only the Statspack report.
From 10g onwards, AWR is available along with Statspack.
Automatic Workload Repository (AWR): the Automatic Workload Repository provides information to different manageability components. AWR consists of two components: in-memory statistics accessible through the V$ dynamic views, and AWR snapshots saved in the database that represent the persistent and historical portion.
AWR snapshots can be generated at will using the following syntax:
EXECUTE dbms_workload_repository.create_snapshot();
By default in 10g, AWR snapshots are generated automatically on an hourly basis.
If you are facing a performance problem in the database and you have a license for AWR,
then AWR reports can be generated for the problem period.
If there is no proper license for AWR available, then a Statspack report can be generated instead.
The AWR/Statspack report should be taken for an interval of not more than 60 minutes around the problem.
Please don't take an AWR/Statspack report for a duration of five or six hours, as that would not be reliable.
The AWR report can be taken in both html/text format.
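If you prefer not to run the interactive script, a text report for a known snapshot range can also be pulled straight from the package (a sketch; the DB id, instance number, and snapshot ids below are taken from the examples in this post and should be replaced with your own):

SQL> SELECT output
     FROM TABLE(DBMS_WORKLOAD_REPOSITORY.awr_report_text(
                  l_dbid     => 2515622958,
                  l_inst_num => 1,
                  l_bid      => 112,
                  l_eid      => 113));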
1) The first thing to be checked in AWR report is the following:-
Snap Id Snap Time Sessions Cursors/Session
Begin Snap: 112 11-Jun-09 00:00:57 191 6.7
End Snap: 113 11-Jun-09 01:00:11 173 7.4
Elapsed: 59.23 (mins)
DB Time: 710.73 (mins)
Check the "DB Time" metric. If it is much higher than the elapsed time, then it indicates that the sessions are waiting for something.
Here in this example, the Elapsed Time is around 60 minutes while the DB Time is around 700 minutes. This means that 700 minutes of time is spent by the sessions on waiting.
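Another way to read these two numbers: DB Time / Elapsed = 710.73 / 59.23, which is roughly 12, i.e. on average about 12 sessions were active inside the database (on CPU or waiting) throughout the hour.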
2) The next thing to look at is the Instance Efficiency section:
Instance Efficiency Percentages (Target 100%)
Buffer Nowait %: 100.00 Redo NoWait %: 100.00
Buffer Hit %: 98.67 In-memory Sort %: 100.00
Library Hit %: 98.60 Soft Parse %: 99.69
Execute to Parse %: 5.26 Latch Hit %: 99.31
Parse CPU to Parse Elapsd %: 12.78 %Non-Parse CPU: 99.10
As a rule of thumb, the Instance Efficiency Percentages should ideally be above 90%.
3) Then comes the Shared Pool Statistics.
Shared Pool Statistics
Begin End
Memory Usage %: 85.49 80.93
% SQL with executions>1: 42.46 82.96
% Memory for SQL w/exec>1: 47.77 81.03
This shows the memory usage statistics of the shared pool.
Ideally this should be on the lower side. If it is very high, say beyond 90%, it indicates contention
in the shared pool.
4) The next thing to look at is the Top 5 Timed Events table.
This shows the most significant waits contributing to the DB Time.
Top 5 Timed Events
Event Waits Time(s) Avg Wait(ms) % Total Call Time Wait Class
db file sequential read 4,076,086 28,532 7 66.9 User I/O
CPU time 11,214 26.3
Backup: sbtbackup 4 4,398 1,099,452 10.3 Administrative
log file sync 37,365 2,421 65 5.7 Commit
log file parallel write 37,928 1,371 36 3.2 System I/O
Here, the significant wait is db file sequential read, which contributes about 67% of the DB Time.
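To drill down while the problem is happening, one simple check (a sketch using v$session; on 10g and above the current wait event is exposed directly in this view) is:

SQL> SELECT sid, username, sql_id, event
     FROM v$session
     WHERE event = 'db file sequential read'
     AND state = 'WAITING';

This shows which sessions and SQL statements are sitting on that wait right now.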
5) Then the SQL Statistics can be checked.
SQL Statistics
SQL ordered by Elapsed Time
SQL ordered by CPU Time
SQL ordered by Gets
SQL ordered by Reads
The SQL Statistics section commonly contains the above four sub-sections.
Each sub-section lists SQL statements ordered by the respective metric.
For example, the "SQL ordered by Elapsed Time" sub-section lists SQL statements in order
of elapsed time. High resource-consuming SQL can be spotted there and targeted for
tuning.
Note: all four sub-sections of SQL Statistics list the SQL in descending order,
i.e. the highest elapsed time is shown first.
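A similar list can be pulled live from the library cache (a sketch; v$sql reports elapsed_time in microseconds, hence the division):

SQL> SELECT *
     FROM (SELECT sql_id, executions, ROUND(elapsed_time/1000000) elapsed_secs, buffer_gets, disk_reads
           FROM v$sql
           ORDER BY elapsed_time DESC)
     WHERE ROWNUM <= 10;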
6) Then comes the IO Stats section.
This shows the I/O statistics for each tablespace in the database.
As a rule of thumb, the Av Rd(ms) [average read time in milliseconds] should not go beyond 30;
anything higher is considered an I/O bottleneck.
Tablespace IO Stats
ordered by IOs (Reads + Writes) desc
Tablespace Reads Av Reads/s Av Rd(ms) Av Blks/Rd Writes Av Writes/s Buffer Waits Av Buf Wt(ms)
TEMP 3,316,082 933 4.91 1.00 28,840 8 0 0.00
DAT1 520,120 146 16.06 1.21 185,846 52 902 13.00
DAT3 93,411 26 42.82 2.98 13,442 4 16 23.13
DAT2 98,171 28 91.97 7.97 5,333 2 325 34.89
In the above example, the Av Rd(ms) is high for the DAT tablespaces, indicating I/O contention.
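The same per-tablespace read times can be cross-checked from the dynamic views (a sketch covering datafiles only; v$filestat reports READTIM in hundredths of a second, hence the * 10 to get milliseconds):

SQL> SELECT d.tablespace_name,
            SUM(f.phyrds) reads,
            ROUND(SUM(f.readtim) * 10 / NULLIF(SUM(f.phyrds), 0), 2) avg_rd_ms
     FROM v$filestat f, dba_data_files d
     WHERE f.file# = d.file_id
     GROUP BY d.tablespace_name
     ORDER BY avg_rd_ms DESC;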
7) Then the Advisory Statistics can be checked.
This section shows the following:-
Buffer Pool Advisory
PGA Aggr Summary
PGA Aggr Target Stats
PGA Aggr Target Histogram
PGA Memory Advisory
Shared Pool Advisory
SGA Target Advisory
Streams Pool Advisory
Java Pool Advisory
These advisories are commonly used to size the most important SGA structures (shared pool, buffer cache, etc.) and the PGA.
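The same kind of advisory data can also be queried directly from the dynamic views, for example the SGA target advice (a sketch):

SQL> SELECT sga_size, sga_size_factor, estd_db_time, estd_physical_reads
     FROM v$sga_target_advice
     ORDER BY sga_size;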
8) Finally, the init.ora Parameters section shows the list of parameters set at the instance level.
init.ora Parameters
All the above sections except DB Time can be checked from a Statspack report as well.
Statspack snapshots are not generated automatically as in AWR.
They have to be generated during the problem period as follows:
Take two snapshots 60 minutes apart during the problem and then generate the Statspack report:
exec statspack.snap
wait for 60 minutes
exec statspack.snap
Then run $ORACLE_HOME/rdbms/admin/spreport.sql
and specify the BEGIN and END IDs of the snapshots taken during the problem.
The sections above are the most common checks that can be performed at the user level.
Further intensive analysis can be done through Oracle Support.
You can also find more details and very good explanations here:
http://jonathanlewis.wordpress.com/statspack-examples/
http://jonathanlewis.wordpress.com/2006/12/27/analysing-statspack-2/
http://www.oracle.com/technetwork/database/focus-areas/performance/statspack-opm4-134117.pdf
http://oracledoug.com/serendipity/index.php?/archives/1153-A-Small-Statspack-Success.html
http://joordsblog.vandenoord.eu/2009/11/installing-and-using-standby-statspack.html
http://jonathanlewis.wordpress.com/2011/02/23/awr-reports/
Error occured while spawning process P**; error = 3135 / Process P** died
==========================================================================
For some days now, I have been seeing the following messages
in my database alert log file.
After diagnosis, I found that many sessions were creating many PX parallel processes
on our system.
So I decided I had to limit/control the parallel processes, because at times they were
pushing the instance past the "processes" parameter value in the init parameter file.
Here is what I did -- see below.
SQL> show parameter paralle
NAME TYPE VALUE
------------------------------------ ----------- ------------------
fast_start_parallel_rollback string LOW
parallel_adaptive_multi_user boolean TRUE
parallel_automatic_tuning boolean FALSE
parallel_execution_message_size integer 2152
parallel_instance_group string
parallel_max_servers integer 640
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_server boolean FALSE
parallel_server_instances integer 1
parallel_threads_per_cpu integer 2
NAME TYPE VALUE
------------------------------------ ----------- ------------------
recovery_parallelism integer 0
SQL>
Here:
parallel_max_servers = CPU_COUNT x PARALLEL_THREADS_PER_CPU x (2 if PGA_AGGREGATE_TARGET > 0; otherwise 1) x 5
CPU_COUNT = 32
So
parallel_max_servers = 32*2*2*5 = 640
I changed the value to 300:
SQL> alter system set parallel_max_servers = 300 scope=both ;
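To verify how many parallel query slaves are actually allocated and in use against the new limit, one simple check (a sketch) is:

SQL> SELECT status, COUNT(*)
     FROM v$px_process
     GROUP BY status;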
After that, these messages disappeared from my alert log file.
Here is the relevant content of the alert log:
Error occured while spawning process PB05; error = 3135
Thu Jun 9 14:33:21 2011
Timed out trying to start process PB39.
Thu Jun 9 14:33:27 2011
Timed out trying to start process PB47.
Thu Jun 9 14:34:03 2011
Error occured while spawning process PB83; error = 3135
Thu Jun 9 14:34:18 2011
Error occured while spawning process PB95; error = 3135
Thu Jun 9 14:34:51 2011
Timed out trying to start process PC19.
Thu Jun 9 14:35:15 2011
Error occured while spawning process PC42; error = 3135
Thu Jun 9 14:35:16 2011
Timed out trying to start process PC45.
Thu Jun 9 14:35:32 2011
Timed out trying to start process PC53.
Thu Jun 9 14:35:59 2011
Timed out trying to start process PC70.
Thu Jun 9 14:36:31 2011
Timed out trying to start process PC90.
Thu Jun 9 14:36:31 2011
Error occured while spawning process PC93; error = 3135
Thu Jun 9 14:36:37 2011
Error occured while spawning process PD26; error = 3135
Thu Jun 9 14:37:03 2011
Error occured while spawning process PD77; error = 3135
Thu Jun 9 14:38:39 2011
Error occured while spawning process PE61; error = 3135
Thu Jun 9 14:40:55 2011
Error occured while spawning process PF81; error = 3135
Thu Jun 9 14:40:56 2011
Error occured while spawning process PF87; error = 3135
Thu Jun 9 14:40:58 2011
Error occured while spawning process PG02; error = 3135
Thu Jun 9 14:41:02 2011
Timed out trying to start process PG28.
Thu Jun 9 14:41:50 2011
Error occured while spawning process PG80; error = 3135
Thu Jun 9 14:42:27 2011
Error occured while spawning process P526; error = 3135
Thu Jun 9 14:43:00 2011
Error occured while spawning process P526; error = 3135
Thu Jun 9 14:43:26 2011
Timed out trying to start process P961.
Thu Jun 9 14:46:31 2011
Error occured while spawning process PI59; error = 3135
Thu Jun 9 14:49:43 2011
Timed out trying to start process PJ98.
Thu Jun 9 14:49:43 2011
Error occured while spawning process PJ99; error = 3135
Thu Jun 9 14:50:53 2011
Error occured while spawning process PK54; error = 3135
Thu Jun 9 14:51:08 2011
Timed out trying to start process PK69.
Thu Jun 9 14:51:08 2011
Error occured while spawning process PK73; error = 3135
Thu Jun 9 14:51:10 2011
Error occured while spawning process PK82; error = 3135
Thu Jun 9 14:51:15 2011
Error occured while spawning process PJ98; error = 3135
Thu Jun 9 14:51:38 2011
Error occured while spawning process PL44; error = 3135
Thu Jun 9 14:51:45 2011
Timed out trying to start process PL84.
Thu Jun 9 14:51:50 2011
Timed out trying to start process PM21.
Thu Jun 9 14:51:51 2011
Error occured while spawning process PM27; error = 3135
Thu Jun 9 14:51:52 2011
Timed out trying to start process PM34.
Thu Jun 9 14:52:01 2011
Error occured while spawning process PN14; error = 3135
Thu Jun 9 14:52:05 2011
Timed out trying to start process PN43.
Thu Jun 9 14:52:06 2011
Timed out trying to start process PN54.
Thu Jun 9 14:52:10 2011
Timed out trying to start process P310.
Thu Jun 9 14:53:07 2011
Process P529 died
Thu Jun 9 14:53:09 2011
Process P533 died
Thu Jun 9 14:53:13 2011
Process P537 died
Thu Jun 9 14:53:14 2011
Process P541 died
Thu Jun 9 14:53:16 2011
Process P556 died
Thu Jun 9 14:53:17 2011
Process P561 died
Thu Jun 9 14:53:20 2011
Process P576 died
Thu Jun 9 14:53:22 2011
Process P581 died
Thu Jun 9 14:53:23 2011
Process P586 died
Thu Jun 9 14:53:24 2011
Process P591 died
Thu Jun 9 14:53:25 2011
Process P596 died
Thu Jun 9 14:53:26 2011
Process P601 died
Thu Jun 9 14:53:27 2011
Process P606 died
Thu Jun 9 14:53:29 2011
Process P616 died
Thu Jun 9 14:53:30 2011
Process P621 died
Thu Jun 9 14:53:31 2011
Process P626 died
Thu Jun 9 14:53:32 2011
Process P631 died
Thu Jun 9 14:53:33 2011
Process P636 died
Thu Jun 9 14:53:34 2011
Process P529 died
Thu Jun 9 14:53:35 2011
Process P533 died
Thu Jun 9 14:53:36 2011
Process P537 died
Thu Jun 9 14:53:37 2011
Process P651 died
Thu Jun 9 14:53:38 2011
Process P541 died
Thu Jun 9 14:53:39 2011
Process P556 died
Thu Jun 9 14:53:40 2011
Process P561 died
Thu Jun 9 14:53:42 2011
Process P661 died
Thu Jun 9 15:01:52 2011
Thread 1 advanced to log sequence 89889 (LGWR switch)
Current log# 2 seq# 89889 mem# 0: /d01/oracle/oradata/stlbas/redo02.log
Thu Jun 9 15:02:34 2011
Thread 1 advanced to log sequence 89890 (LGWR switch)
Current log# 3 seq# 89890 mem# 0: /d01/oracle/oradata/stlbas/redo03.log
Thu Jun 9 15:17:10 2011
.......................................
.........................................
........................................
Current log# 12 seq# 92947 mem# 0: /d01/oracle/oradata/stlbas/redo12.log
Sun Jul 10 09:29:47 2011
Error occured while spawning process P746; error = 3135
Sun Jul 10 09:47:17 2011
Thread 1 advanced to log sequence 92948 (LGWR switch)
Current log# 1 seq# 92948 mem# 0: /d01/oracle/oradata/stlbas/redo01.log
Sun Jul 10 09:55:48 2011
ALTER SYSTEM SET parallel_max_servers=300 SCOPE=BOTH;
Sun Jul 10 10:02:25 2011
Thread 1 advanced to log sequence 92949 (LGWR switch)
Current log# 2 seq# 92949 mem# 0: /d01/oracle/oradata/stlbas/redo02.log
Sun Jul 10 10:10:26 2011
Thread 1 advanced to log sequence 92950 (LGWR switch)
Current log# 3 seq# 92950 mem# 0: /d01/oracle/oradata/stlbas/redo03.log
Sun Jul 10 10:52:50 2011
Thread 1 advanced to log sequence 92951 (LGWR switch)
Current log# 4 seq# 92951 mem# 0: /d01/oracle/oradata/stlbas/redo04.log
Sun Jul 10 11:02:09 2011
Thread 1 advanced to log sequence 92952 (LGWR switch)
Current log# 5 seq# 92952 mem# 0: /d01/oracle/oradata/stlbas/redo05.log
Sun Jul 10 11:08:37 2011
Thread 1 advanced to log sequence 92953 (LGWR switch)
Current log# 6 seq# 92953 mem# 0: /d01/oracle/oradata/stlbas/redo06.log
---------------------cheers---------------------------