Page 4 4th Qtr 2010
Oracle AWR (Automatic Workload Repository) Trending
By Kapil Goyal
Trending is very important for database performance
analysis. It exposes the performance profile of a
database in terms of IO, CPU usage, wait-event response
time, etc. Starting in Oracle Database 10g, AWR performance
data that is collected and stored out of the box is very helpful
in creating historical and comparative trend analysis to
gain insight into critical database performance issues. AWR
provides a rich history of Oracle performance statistics that
shows how an Oracle database system has been trending for
as long as the AWR data is retained.
So Where Do We Start?
Any statistic in AWR can be trended quite easily. The AWR report consists of
SQL running against the various "DBA_HIST_%" views, taking the difference
between two snapshots at a time. This implies that we can develop and execute
similar SQL against those views to report the trend for any required statistic or
database wait event.
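As a minimal illustration of that snapshot-delta technique (the statistic name is just an example; any cumulative statistic works the same way):

```sql
-- Delta a cumulative statistic ('user commits') between consecutive
-- snapshots using LAG, the same technique the AWR report itself uses.
-- Note: cumulative values reset on instance restart, which is why the
-- article's scripts guard against a missing or zero prior value.
select sn.end_interval_time,
       ss.value - lag(ss.value) over (order by ss.snap_id) commits
  from dba_hist_sysstat ss, dba_hist_snapshot sn
 where ss.snap_id = sn.snap_id
   and ss.dbid = sn.dbid
   and ss.instance_number = sn.instance_number
   and ss.stat_name = 'user commits'
 order by ss.snap_id;
```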
When analyzing AWR reports for the time a performance problem manifested,
and some wait event or response time figure seems high or a new set of SQL
statements shows up, it is always a good idea to go back in history and check
how the identified events, statistics, SQL or jobs behaved during similar times
previously: it could be yesterday, last week or even last month. Of course, this
comparison needs to be done when the same type of user jobs were running
(the same load), just to rule out whether that average response time is normal for
this database. If a job had the same response time earlier and users were not
complaining, it usually means that the problem is somewhere else and we need to
dig further, probably using a targeted SQL trace for that portion of the application.
Side Note: AWR is not a replacement for real-time monitoring; it contains historical data
and can be used to investigate what happened or what caused a performance issue.
One important difference between STATSPACK in Oracle9i and AWR in Oracle Database
10g is that 9i exposes the source code for STATSPACK while 10g does not. However, you
can use most of the 9i scripts to understand the AWR structures, making it easier to
write queries against the 10g DBA_HIST* tables. For your information, the STATSPACK
source code in 9i can be seen in the $ORACLE_HOME/rdbms/admin/sprepins.sql file. This file
provides a fair idea of the "STATS$%" tables used to store the data and generate STATSPACK
reports. In most cases, you can use the same SQL and replace the corresponding
"STATS$%" tables with "DBA_HIST_%" tables.
What is AWR?
AWR takes periodic snapshots of performance statistics (by default every hour)
and exposes this data through the DBA_HIST* views. Please note that AWR is
licensed as part of the Diagnostics Pack; hence, a license must be purchased
even to access these views directly. Oracle Database 11g further expands on this.
For example, there are 79 DBA_HIST* views available in Oracle 10g (10.2.0.4).
SQL> select count(*) from dictionary where table_name like 'DBA_HIST%';
COUNT(*)
----------
79
On the other hand, there are 100 DBA_HIST* views available in Oracle 11g
(11.1.0.7).
SQL> select count(*) from dictionary where table_name like 'DBA_HIST%';
COUNT(*)
----------------
100
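Before querying these views on a production system, it is worth confirming that the Diagnostics Pack is actually enabled. A sketch, assuming Oracle Database 11g or later (the CONTROL_MANAGEMENT_PACK_ACCESS parameter was introduced in 11.1):

```sql
-- Check whether the Diagnostics Pack is enabled; expected values are
-- NONE, DIAGNOSTIC, or DIAGNOSTIC+TUNING.
select value
  from v$parameter
 where name = 'control_management_pack_access';
```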
How Much Space Does AWR Use?
The following query can be used to see the space currently occupied by AWR data.
Select SPACE_USAGE_KBYTES from v$sysaux_occupants where occupant_name like
'%AWR%';
SPACE_USAGE_KBYTES
------------------
4,384,640
1 row selected.
Note that this size will vary depending on retention period, the snapshot
frequency, the number of datafiles, etc. You can monitor this space in a
development/test instance to project space requirements in case you want to
increase the retention period in production.
Retention
You can use the following query to see the current retention policy.
Select
extract( day from snap_interval) *24*60+
extract( hour from snap_interval) *60+
extract( minute from snap_interval ) "Snapshot Interval",
extract( day from retention) *24*60+
extract( hour from retention) *60+
extract( minute from retention ) "Retention Interval(in Minutes)" ,
extract(day from retention) "Retention (in Days)" from dba_hist_wr_control;
Snapshot Interval Retention Interval(in Minutes) Retention (in Days)
----------------- ------------------------------ -------------------
30 129,600 90
I personally prefer to have 35 days retention, so that it can cover the whole
month. If you can afford to store this data for longer periods, then I would
strongly suggest you do that. In any case, to trend data for a specified period,
you will need to retain the AWR data for that period.
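To change the settings shown above, the DBMS_WORKLOAD_REPOSITORY package can be used; a sketch (the values are illustrative, matching the 35-day preference mentioned here, and both parameters are in minutes):

```sql
-- Set retention to 35 days (50,400 minutes) and keep a 30-minute
-- snapshot interval.
begin
  dbms_workload_repository.modify_snapshot_settings(
    retention => 50400,
    interval  => 30);
end;
/
```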
Benefits of AWR
The following list includes some key benefits of the Automatic Workload Repository:
Easy to find a recent spike in load
Helpful in capacity planning
Design load testing based on current capacity and load
Easy to write queries against AWR tables
SQL statistics history
Easy to find if the SQL execution plan for a particular SQL statement changed recently
I have detailed below the AWR scripts that I use in my day-to-day tasks.
These are based upon the DBA_HIST* tables and trend the same data that a set
of AWR reports would otherwise have provided. You can modify the scripts to
trend the data for particular hours, days or weeks, or for all the data available,
based on the AWR retention you have set for that database.
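For instance, the snapshot window these scripts consume can be narrowed with predicates on DBA_HIST_SNAPSHOT; a minimal sketch (the hour range and seven-day lookback are illustrative):

```sql
-- List only business-hour snapshots (08:00-17:59) from the last
-- 7 days of retained AWR data; join these snap_ids into any of
-- the trending scripts to restrict their window.
select snap_id, begin_interval_time
  from dba_hist_snapshot
 where begin_interval_time > systimestamp - interval '7' day
   and extract(hour from begin_interval_time) between 8 and 17
 order by snap_id;
```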
System Event Trending
The following script can be very helpful when you want to see how an event trended
over a period of time. The script takes two arguments—date and event name.
event_response.sql
alter session set nls_date_format='dd-mon-yy';
set lines 150 pages 100 echo off feedback off
col date_time heading 'Date time|mm/dd/yy_hh_mi_hh_mi' for a25
col event_name for a26
col waits for 99,999,999,999 heading 'Waits'
col time for 99,999 heading 'Total Wait|Time(sec)'
col avg_wait_ms for 99,999 heading 'Avg Wait|(ms)'
prompt "Enter the date in DD-Mon-YY Format:"
WITH system_event AS (
select
sn.begin_interval_time begin_interval_time,
sn.end_interval_time end_interval_time,
se.event_name event_name,
se.total_waits e_total_waits,
lag(se.total_waits,1) over (order by se.snap_id) b_total_waits,
se.total_timeouts e_total_timeouts,
lag(se.total_timeouts,1) over (order by se.snap_id) b_total_timeouts,
se.time_waited_micro e_time_waited_micro,
lag(se.time_waited_micro,1) over (order by se.snap_id) b_time_waited_micro
from dba_hist_system_event se,
dba_hist_snapshot sn
where
trunc(sn.begin_interval_time)
='&Date'
and se.snap_id = sn.snap_id
and se.dbid = sn.dbid
and se.instance_number = sn.instance_number
and se.dbid = (select dbid from v$database)
and se.instance_number = (select instance_number from v$instance)
and se.event_name='&event_name'
)
select
to_char(se1.BEGIN_INTERVAL_TIME,'mm/dd/yy_hh24_mi')||
to_char(se1.END_INTERVAL_TIME,'_hh24_mi') date_time,
se1.event_name,
se1.e_total_waits-nvl(se1.b_total_waits,0) waits,
(se1.e_time_waited_micro - nvl(se1.b_time_waited_micro,0)) / 1000000 time,
((se1.e_time_waited_micro - nvl(se1.b_time_waited_micro,0)) / 1000) /
(se1.e_total_waits - nvl(se1.b_total_waits,0)) avg_wait_ms
from system_event se1
where (se1.e_total_waits-nvl(se1.b_total_waits,0)) > 0
and nvl(se1.b_total_waits,0) > 0
/
Sample Output
SQL> @event_response
"Enter the date in DD-Mon-YY Format:"
Enter value for date: 29-jan-10
old 15: trunc(sn.begin_interval_time) ='&Date'
new 15: trunc(sn.begin_interval_time) ='29-jan-10'
Enter value for event_name: db file sequential read
old 21: and se.event_name='&event_name'
new 21: and se.event_name='db file sequential read'
Total Avg
Date time Wait Time Wait
mm/dd/yy_hh_mi_hh_mi EVENT_NAME Waits (sec) (ms)
----------------------- ------------------------- ---------- ---------- -------
01/29/10_01_00_02_00 db file sequential read 551,356 4,500 8
01/29/10_02_00_03_00 db file sequential read 1,114,616 7,921 7
01/29/10_03_00_04_00 db file sequential read 764,481 5,926 8
01/29/10_04_00_05_00 db file sequential read 845,195 6,633 8
01/29/10_05_00_06_00 db file sequential read 1,385,501 8,501 6
01/29/10_06_00_07_00 db file sequential read 3,785,824 14,703 4
01/29/10_07_00_08_00 db file sequential read 2,393,513 6,996 3
01/29/10_08_00_09_00 db file sequential read 2,590,092 6,273 2
01/29/10_09_00_10_00 db file sequential read 2,322,715 5,390 2
01/29/10_10_00_11_00 db file sequential read 2,806,934 6,913 2
01/29/10_11_00_12_00 db file sequential read 2,691,573 3,501 1
01/29/10_12_00_13_00 db file sequential read 1,737,420 3,349 2
01/29/10_13_00_14_00 db file sequential read 489,453 2,297 5
01/29/10_14_00_15_00 db file sequential read 791,114 2,842 4
Load Profile Trending
The first page of the AWR report gives us a lot of information about database
behavior, such as whether the database is more read intensive or write intensive
(depending upon the Physical Reads/sec and Physical Writes/sec statistics), whether
it does lots of logical IO/sec, has a high OLTP component (as derived from a
higher number of Transactions/sec), or does a lot of parsing (hard or soft).
By looking at a one-hour-interval report you can get some idea, but if you
look at the trend for the whole day, you will get a broader picture, as some
databases trend toward an OLTP workload during the day and a DSS workload
during off hours.
The Load Profile section also helps to determine if load has changed over time
compared to the baseline (i.e., an AWR report from when the system was healthy).
There is no good or best value for these statistics. Although these numbers
vary by database and application, when the number of Logons/sec is more than
10 or the database has a high hard parse rate (>100/sec or so), this could be an
indication of an underlying configuration or application design issue that
manifests itself as a performance issue. In this regard, the number of
Logical Reads/sec is also a good statistic to look at.
The script below trends the physical reads/sec.
lp.sql
alter session set nls_date_format='dd-mon-yy';
set lines 130 pages 1000 echo off feedback off
col stat_name for a25
col date_time for a20
col BEGIN_INTERVAL_TIME for a20
col END_INTERVAL_TIME for a20
prompt "Enter the date in DD-Mon-YY Format and Stats you want to trend like
'redo size','physical reads','physical writes','session logical reads' etc."
WITH sysstat AS (
select
sn.begin_interval_time begin_interval_time,
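This copy of the article is cut off at this point. Following the pattern of event_response.sql above, the sysstat query most likely continues along these lines; this is a reconstruction under that assumption, not the author's original text (DBA_HIST_SYSSTAT and its VALUE column are real, the exact layout is assumed):

```sql
-- Sketch of how lp.sql plausibly continues: delta each cumulative
-- statistic value between consecutive snapshots, then normalize to
-- a per-second rate using the snapshot interval length.
sn.end_interval_time end_interval_time,
ss.stat_name stat_name,
ss.value e_value,
lag(ss.value,1) over (order by ss.snap_id) b_value
from dba_hist_sysstat ss,
dba_hist_snapshot sn
where
trunc(sn.begin_interval_time) ='&Date'
and ss.snap_id = sn.snap_id
and ss.dbid = sn.dbid
and ss.instance_number = sn.instance_number
and ss.dbid = (select dbid from v$database)
and ss.instance_number = (select instance_number from v$instance)
and ss.stat_name='&stat_name'
)
select
to_char(begin_interval_time,'mm/dd/yy_hh24_mi')||
to_char(end_interval_time,'_hh24_mi') date_time,
stat_name,
round((e_value - nvl(b_value,0)) /
  ((cast(end_interval_time as date) -
    cast(begin_interval_time as date)) * 86400)) per_second
from sysstat
where nvl(b_value,0) > 0
/
```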