The Emergency Performance Method is as follows:
1. Survey the performance problem and collect its symptoms. This process
should include the following:
■ Gather user feedback on how the system is underperforming. Is the problem
throughput or response time?
■ Ask the question, "What has changed since we last had good performance?"
The answer can give clues to the problem. However, getting unbiased
answers in an escalated situation can be difficult. Try to locate
reference points, such as collected statistics or log files, taken
before and after the problem appeared (a snapshot-locating query sketch
follows this list).
■ Use automatic tuning features to diagnose and monitor the problem. In
addition, you can use Oracle Enterprise Manager performance features to
identify top SQL and sessions.
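If AWR snapshots are available, a query along the following lines can locate
snapshots that bracket the incident so that before-and-after statistics can be
compared. This is a minimal sketch: it assumes the Diagnostics Pack license for
the DBA_HIST views, and the 24-hour window is an arbitrary example.

    -- Locate AWR snapshots around the incident window so that before/after
    -- statistics can be compared. The 24-hour window is an arbitrary example.
    SELECT snap_id, begin_interval_time, end_interval_time
    FROM   dba_hist_snapshot
    WHERE  begin_interval_time > SYSTIMESTAMP - INTERVAL '24' HOUR
    ORDER  BY snap_id;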
2. Sanity-check the hardware utilization of all components of the application
system. Check where CPU utilization is highest, and check disk, memory, and
network usage on all system components. This quick check identifies which
tier is causing the problem. If the problem is in the application, then shift
the analysis to application debugging. Otherwise, move on to database server
analysis.
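When operating-system tools are not immediately at hand, a rough host CPU
sanity check can also be made from inside the database. The following is a
sketch using V$OSSTAT; the values it returns are cumulative since instance
startup, so two samples taken a few minutes apart are needed to see a rate.

    -- Cumulative host CPU statistics since instance startup (times are in
    -- hundredths of a second); sample twice, a few minutes apart, for a rate.
    SELECT stat_name, value
    FROM   v$osstat
    WHERE  stat_name IN ('NUM_CPUS', 'LOAD', 'BUSY_TIME', 'IDLE_TIME',
                         'USER_TIME', 'SYS_TIME');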
3. Determine if the database server is constrained on CPU or if it is spending time
waiting on wait events. If the database server is CPU-constrained, then investigate
the following:
■ Sessions that are consuming large amounts of CPU at the operating system
and database levels; check V$SESS_TIME_MODEL for database CPU usage
(query sketches follow this list)
■ Sessions or statements that perform many buffer gets at the database level;
check V$SESSTAT and V$SQLSTATS
■ Execution plan changes causing sub-optimal SQL execution; these can be
difficult to locate
■ Incorrect setting of initialization parameters
■ Algorithmic issues as a result of code changes or upgrades of all components
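As a sketch of the first two checks above, the following queries list the top
sessions by database CPU time and the top statements by buffer gets. They
assume Oracle Database 12c or later for the FETCH FIRST syntax; on earlier
releases, use an inline view with ROWNUM instead.

    -- Top sessions by accumulated database CPU time (microseconds).
    SELECT s.sid, s.username, s.program, t.value AS db_cpu_usec
    FROM   v$sess_time_model t
           JOIN v$session s ON s.sid = t.sid
    WHERE  t.stat_name = 'DB CPU'
    ORDER  BY t.value DESC
    FETCH FIRST 10 ROWS ONLY;

    -- Top statements by buffer gets.
    SELECT sql_id, executions, buffer_gets, sql_text
    FROM   v$sqlstats
    ORDER  BY buffer_gets DESC
    FETCH FIRST 10 ROWS ONLY;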
If the database sessions are waiting on events, then follow the wait events listed in
V$SESSION_WAIT to determine what is causing serialization.
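A quick way to see what is causing serialization is to summarize the current
waits, for example with a sketch such as this (idle waits are excluded):

    -- Summarize what sessions are currently waiting on, excluding idle waits.
    SELECT event, wait_class, COUNT(*) AS sessions_waiting
    FROM   v$session_wait
    WHERE  wait_class <> 'Idle'
    GROUP  BY event, wait_class
    ORDER  BY sessions_waiting DESC;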
The V$ACTIVE_SESSION_HISTORY view contains a sampled history of session
activity, which can be used to perform diagnosis even after an incident has
ended and the system has returned to normal operation. In cases of massive
contention for the library cache, it might not be possible to log on or
submit SQL to the database. In this case, use historical data to determine
why there is suddenly contention on this latch. If most waits are for I/O,
then examine V$ACTIVE_SESSION_HISTORY to determine the SQL being run by the
sessions that are performing all of the I/O.
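As an illustration of the last point, a sketch such as the following
summarizes recent ASH samples by SQL_ID for User I/O waits. It assumes the
Diagnostics Pack license, and the 30-minute window is an arbitrary example.

    -- Top SQL by sampled User I/O waits over the last 30 minutes.
    SELECT sql_id, COUNT(*) AS io_wait_samples
    FROM   v$active_session_history
    WHERE  wait_class = 'User I/O'
    AND    sample_time > SYSTIMESTAMP - INTERVAL '30' MINUTE
    GROUP  BY sql_id
    ORDER  BY io_wait_samples DESC;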
4. Apply emergency action to stabilize the system. This could involve actions
that take parts of the application offline or restrict the workload that can
be applied to the system. It could also involve a system restart or the
termination of jobs in progress. These actions naturally have service-level
implications.
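For example, terminating a runaway session might look like the following
sketch; the sid and serial# values are placeholders that must first be
identified, for example from V$SESSION.

    -- Identify the session first (the sid and serial# below are placeholders)...
    SELECT sid, serial#, username, program, sql_id
    FROM   v$session
    WHERE  status = 'ACTIVE'
    AND    username IS NOT NULL;

    -- ...then terminate it.
    ALTER SYSTEM KILL SESSION '123,45678' IMMEDIATE;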
5. Validate that the system is stable. Having made changes and restrictions to the
system, validate that the system is now stable, and collect a reference set of
statistics for the database. Now follow the rigorous performance method described
earlier in this book to bring back all functionality and users to the system. This
process may require significant application re-engineering before it is complete.
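One way to capture that reference set of statistics, assuming the Diagnostics
Pack license, is to take an AWR snapshot immediately after the system
stabilizes:

    -- Take an AWR snapshot to serve as the post-stabilization baseline.
    BEGIN
      DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;
    END;
    /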