
I personally prefer a retention of 35 days, so that it covers an entire
month. If you can afford to store this data for longer periods, I strongly
suggest you do so. In any case, to trend data for a specified period, you
need to retain the AWR data for that period.
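For reference, you can check the current retention in DBA_HIST_WR_CONTROL and
change it with the DBMS_WORKLOAD_REPOSITORY package. The following is a
minimal sketch; the 50,400 minutes (35 days) and the 60-minute interval are
simply my suggested settings, not required values.
-- Check the current AWR snapshot interval and retention
select snap_interval, retention from dba_hist_wr_control;
-- Set retention to 35 days, expressed in minutes (35 x 24 x 60 = 50400),
-- and keep the default 60-minute snapshot interval
begin
   dbms_workload_repository.modify_snapshot_settings(
      retention => 50400,
      interval  => 60);
end;
/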
Benefits of AWR
The following list includes some key benefits of the Automatic Workload Repository:
• Easy to find a recent spike in load
• Helpful in capacity planning
• Design load testing based on current capacity and load
• Easy to write queries against AWR tables
• SQL statistics history
• Easy to find whether the execution plan for a particular SQL statement
changed recently (see the sketch after this list)
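As an example of that last point, the following minimal sketch lists every
plan a statement has used over the retained history; more than one
PLAN_HASH_VALUE for the same SQL_ID suggests a recent plan change. The
&sql_id substitution variable is just for illustration.
-- Show which execution plans a statement has used, snapshot by snapshot
select sn.begin_interval_time,
       st.plan_hash_value,
       st.executions_delta
  from dba_hist_sqlstat st,
       dba_hist_snapshot sn
 where st.sql_id = '&sql_id'
   and st.snap_id = sn.snap_id
   and st.dbid = sn.dbid
   and st.instance_number = sn.instance_number
 order by sn.begin_interval_time;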
I have detailed below the AWR scripts I use in my day-to-day tasks. They are
based on the DBA_HIST* tables and trend the same data that a set of AWR
reports would otherwise provide. You can modify the scripts to trend the data
for particular hours, days, or weeks, or for all of the data available, based
on the AWR retention you have set for that database.
System Event Trending
The following script can be very helpful when you want to see how an event
trended over a period of time. It takes two arguments: a date and an event name.
event_response.sql
alter session set nls_date_format='dd-mon-yy';
set lines 150 pages 100 echo off feedback off
col date_time heading 'Date time|mm/dd/yy_hh_mi_hh_mi' for a25
col event_name for a26
col waits for 99,999,999,999 heading 'Waits'
col time for 99,999 heading 'Total Wait|Time(sec)'
col avg_wait_ms for 99,999 heading 'Avg Wait|(ms)'
prompt "Enter the date in DD-Mon-YY Format:"
WITH system_event AS (
select
sn.begin_interval_time begin_interval_time,
sn.end_interval_time end_interval_time,
se.event_name event_name,
se.total_waits e_total_waits,
lag(se.total_waits,1) over (order by se.snap_id) b_total_waits,
se.total_timeouts e_total_timeouts,
lag(se.total_timeouts,1) over (order by se.snap_id) b_total_timeouts,
se.time_waited_micro e_time_waited_micro,
lag(se.time_waited_micro,1) over (order by se.snap_id) b_time_waited_micro
from dba_hist_system_event se,
dba_hist_snapshot sn
where
trunc(sn.begin_interval_time) = '&Date'
and se.snap_id = sn.snap_id
and se.dbid = sn.dbid
and se.instance_number = sn.instance_number
and se.dbid = (select dbid from v$database)
and se.instance_number = (select instance_number from v$instance)
and se.event_name = '&event_name'
)
select
to_char(se1.BEGIN_INTERVAL_TIME,'mm/dd/yy_hh24_mi')||
to_char(se1.END_INTERVAL_TIME,'_hh24_mi') date_time,
se1.event_name,
se1.e_total_waits-nvl(se1.b_total_waits,0) waits,
(se1.e_time_waited_micro - nvl(se1.b_time_waited_micro,0)) / 1000000 time,
((se1.e_time_waited_micro - nvl(se1.b_time_waited_micro,0)) / 1000) /
(se1.e_total_waits - nvl(se1.b_total_waits,0)) avg_wait_ms
from system_event se1
where (se1.e_total_waits-nvl(se1.b_total_waits,0)) > 0
and nvl(se1.b_total_waits,0) > 0
/
Sample Output
SQL> @event_response
"Enter the date in DD-Mon-YY Format:"
Enter value for date: 29-jan-10
old 15: trunc(sn.begin_interval_time) = '&Date'
new 15: trunc(sn.begin_interval_time) = '29-jan-10'
Enter value for event_name: db file sequential read
old 21: and se.event_name = '&event_name'
new 21: and se.event_name = 'db file sequential read'
Date time                                                    Total Wait Avg Wait
mm/dd/yy_hh_mi_hh_mi    EVENT_NAME                     Waits  Time(sec)     (ms)
----------------------- ------------------------- ---------- ---------- --------
01/29/10_01_00_02_00    db file sequential read      551,356      4,500        8
01/29/10_02_00_03_00    db file sequential read    1,114,616      7,921        7
01/29/10_03_00_04_00    db file sequential read      764,481      5,926        8
01/29/10_04_00_05_00    db file sequential read      845,195      6,633        8
01/29/10_05_00_06_00    db file sequential read    1,385,501      8,501        6
01/29/10_06_00_07_00    db file sequential read    3,785,824     14,703        4
01/29/10_07_00_08_00    db file sequential read    2,393,513      6,996        3
01/29/10_08_00_09_00    db file sequential read    2,590,092      6,273        2
01/29/10_09_00_10_00    db file sequential read    2,322,715      5,390        2
01/29/10_10_00_11_00    db file sequential read    2,806,934      6,913        2
01/29/10_11_00_12_00    db file sequential read    2,691,573      3,501        1
01/29/10_12_00_13_00    db file sequential read    1,737,420      3,349        2
01/29/10_13_00_14_00    db file sequential read      489,453      2,297        5
01/29/10_14_00_15_00    db file sequential read      791,114      2,842        4
Load Profile Trending
The first page of the AWR report gives us a lot of information about database
behavior: whether the database is more read intensive or write intensive
(based on the Physical Reads/sec and Physical Writes/sec statistics), whether
it does a lot of logical I/O per second, whether it has a high OLTP component
(as suggested by a higher number of Transactions/sec), or whether it does a
lot of parsing (hard or soft). By looking at a one-hour-interval report you
can get some idea, but if you look at the trend for the whole day, you will
get a broader picture, as some databases shift from an OLTP workload during
the day to a DSS workload during off hours.
The Load Profile section also helps determine whether the load has changed
over time compared to a baseline (i.e., an AWR report taken when the system
was healthy). There is no single good or best value for these statistics, and
the numbers vary by database and application. That said, when Logons/sec is
more than 10, or the database has a high hard parse rate (more than 100 hard
parses/sec or so), it could indicate an underlying configuration or
application design issue that manifests itself as a performance problem. In
this regard, Logical Reads/sec is also a good statistic to watch.
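If you want to keep such a healthy period around as a formal baseline, so
that its snapshots survive the retention purge, DBMS_WORKLOAD_REPOSITORY
provides CREATE_BASELINE. A minimal sketch follows; the snapshot IDs and the
baseline name are hypothetical and should be replaced with the snapshot range
of your own known-good period.
-- Preserve snapshots 1200 through 1224 as a named baseline;
-- baselined snapshots are not purged when the retention window expires
begin
   dbms_workload_repository.create_baseline(
      start_snap_id => 1200,
      end_snap_id   => 1224,
      baseline_name => 'healthy_day');
end;
/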
The script below trends the physical reads/sec.
lp.sql
alter session set nls_date_format='dd-mon-yy';
set lines 130 pages 1000 echo off feedback off
col stat_name for a25
col date_time for a20
col BEGIN_INTERVAL_TIME for a20
col END_INTERVAL_TIME for a20
prompt "Enter the date in DD-Mon-YY Format and Stats you want to trend like 'redo size','physical reads','physical writes','session logical reads' etc."
WITH sysstat AS (
select
sn.begin_interval_time begin_interval_time,