Question
Hello Oracle Masters,
Recently I used AUTOTRACE to collect statistics about a SELECT.
Then I tried SQLcl, coming from SQL*Plus, and... oh, what a shock! The statistics are very different from those of SQL*Plus.
So I decided to run a test with SQL*Plus, SQLcl, SQL Developer and Toad.
Can you tell me why, for three official Oracle products, there is such a big difference in the number of statistics?
And in your opinion, which is the best one to use, SQLcl being the richest?
=
AUTOTRACE with SQL*Plus
=
First, I create the table.
SQL> create table zz01(id number primary key, name varchar2(50 CHAR), date_deb date);
Table creee.
SQL> insert into zz01 select rownum, 'PARKER' ||to_char(rownum), sysdate from dual connect by level < 10001;
10000 lignes creees.
SQL> commit;
Validation effectuee.
I enable AUTOTRACE and run my SELECT: there are 11 statistics.
SQL> set autotrace on
SQL> select * from zz01 where rownum < 6;
ID NAME DATE_DEB
---------- -------------------------------------------------- --------
1 PARKER1 04/01/19
2 PARKER2 04/01/19
3 PARKER3 04/01/19
4 PARKER4 04/01/19
5 PARKER5 04/01/19
Plan d'execution
----------------------------------------------------------
Plan hash value: 2489330668
---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 5 | 245 | 2 (0)| 00:00:01 |
|* 1 | COUNT STOPKEY | | | | | |
| 2 | TABLE ACCESS FULL| ZZ01 | 10000 | 478K| 2 (0)| 00:00:01 |
---------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter(ROWNUM<6)
Note
-----
- dynamic statistics used: dynamic sampling (level=2)
Statistiques
----------------------------------------------------------
8 recursive calls
0 db block gets
93 consistent gets
0 physical reads
0 redo size
846 bytes sent via SQL*Net to client
552 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
5 rows processed
=
AUTOTRACE with SQLcl
=
With SQLcl, we have:
- 87 different statistics :-)
- These from SQL*Plus are missing in SQLcl:
  - sorts (disk)
  - rows processed
- Some of them are incomprehensible to me (see the sketch after this list):
  - CCursor + sql area evicted
  - HSC Heap Segment Block Changes
  - shared hash latch upgrades - no wait
  - ...
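For what it's worth, names like these are ordinary Oracle session statistics; the instance tracks a great many of them, far more than the 11 that SQL*Plus prints, and a tool is free to display whichever it likes. A minimal sketch for looking a puzzling name up yourself, assuming SELECT privilege on the V$ views (the statistic names below are taken from the listing that follows):

-- How many session statistics does this release track in total?
select count(*) from v$statname;

-- Current values of some of the puzzling names for my own session
-- (values are cumulative since logon).
select sn.name, sn.class, ms.value
  from v$statname sn
  join v$mystat   ms on ms.statistic# = sn.statistic#
 where sn.name in ('CCursor + sql area evicted',
                   'HSC Heap Segment Block Changes',
                   'shared hash latch upgrades - no wait')
 order by sn.name;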
Statistics
-----------------------------------------------------------
1 ASSM cbk:blocks examined
1 ASSM gsp:L1 bitmaps examined
1 ASSM gsp:L2 bitmaps examined
1 ASSM gsp:get free block
11 CCursor + sql area evicted
3 CPU used by this session
8 CPU used when call started
24 DB time
3 HSC Heap Segment Block Changes
188 Requests to/from client
188 SQL*Net roundtrips to/from client
5 blocks cleaned out using minact
627 buffer is not pinned count
2 buffer is pinned count
3600 bytes received via SQL*Net from client
105942 bytes sent via SQL*Net to client
318 calls to get snapshot scn: kcmgss
3 calls to kcmgas
190 calls to kcmgcs
81920 cell physical IO interconnect bytes
87 cluster key scan block gets
76 cluster key scans
6 commit cleanouts
6 commit cleanouts successfully completed
1054 consistent gets
338 consistent gets examination
336 consistent gets examination (fastpath)
1054 consistent gets from cache
716 consistent gets pin
709 consistent gets pin (fastpath)
3 cursor authentications
21 db block changes
17 db block gets
17 db block gets from cache
13 db block gets from cache (fastpath)
3 deferred (CURRENT) block cleanout applications
25 enqueue releases
25 enqueue requests
303 execute count
20 file io service time
39978 file io wait time
10 free buffer requested
3 immediate (CURRENT) block cleanout applications
101 index fetch by key
116 index scans kdiixs1
8773632 logical read bytes from cache
1 messages sent
529 no work - consistent read gets
207 non-idle wait count
4 non-idle wait time
305 opened cursors cumulative
1 opened cursors current
23 parse count (hard)
35 parse count (total)
2 parse time cpu
9 parse time elapsed
10 physical read IO requests
81920 physical read bytes
10 physical read total IO requests
81920 physical read total bytes
10 physical reads
10 physical reads cache
1 process last non-idle time
699 recursive calls
3 recursive cpu usage
12 redo entries
3172 redo size
1154 redo synch time (usec)
24 redo synch time overhead (usec)
1 redo synch time overhead count ( 2ms)
1 redo synch writes
1 redo write info find
12 rows fetched via callback
293 session cursor cache hits
1071 session logical reads
1 shared hash latch upgrades - no wait
77 sorts (memory)
2306 sorts (rows)
31 sql area evicted
109 table fetch by rowid
222 table scan blocks gotten
18525 table scan disk non-IMC rows gotten
18530 table scan rows gotten
89 table scans (short tables)
708 undo change vector size
4 user I/O wait time
189 user calls
=
AUTOTRACE with SQL Developer
=
With SQL Developer, we have:
- 45 different statistics
- These from SQL*Plus are missing in SQL Developer:
  - db block gets
  - physical reads
  - redo size
  - rows processed
Statistics
-----------------------------------------------------------
1 CPU used by this session
3 CPU used when call started
9 DB time
38 Requests to/from client
38 SQL*Net roundtrips to/from client
76 buffer is not pinned count
547 bytes received via SQL*Net from client
70062 bytes sent via SQL*Net to client
38 calls to get snapshot scn: kcmgss
13 calls to kcmgcs
7 cluster key scan block gets
7 cluster key scans
111 consistent gets
47 consistent gets examination
47 consistent gets examination (fastpath)
111 consistent gets from cache
64 consistent gets pin
64 consistent gets pin (fastpath)
1 cursor authentications
4 enqueue releases
4 enqueue requests
33 execute count
13 index fetch by key
17 index scans kdiixs1
909312 logical read bytes from cache
53 no work - consistent read gets
46 non-idle wait count
33 opened cursors cumulative
1 opened cursors current
5 parse count (hard)
6 parse count (total)
1 parse time elapsed
1 process last non-idle time
52 recursive calls
3 rows fetched via callback
28 session cursor cache hits
111 session logical reads
5 sorts (memory)
1813 sorts (rows)
16 table fetch by rowid
15 table scan blocks gotten
1044 table scan disk non-IMC rows gotten
1044 table scan rows gotten
3 table scans (short tables)
39 user calls
=
AUTOTRACE with Toad 12.10
=
Now, the well-known product Toad.
Honestly, I don't really see what it brings compared to SQL*Plus; for me it is unusable, because it misses everything about the number of blocks read from memory or from disk (a workaround sketch follows the list below).
With Toad, we have:
- 11 different statistics
- These from SQL*Plus are missing in Toad:
  - db block gets
  - consistent gets
  - physical reads
  - redo size
  - bytes sent via SQL*Net to client
  - bytes received via SQL*Net from client
  - SQL*Net roundtrips to/from client
  - sorts (memory)
  - sorts (disk)
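If a tool's report leaves these out, one workaround, sketched here under the assumption that you have SELECT privilege on the V$ views, is to read the classic counters straight from V$MYSTAT; the values are cumulative for the session, so note them before and after the statement and subtract:

-- The classic SQL*Plus autotrace counters for the current session,
-- read directly from the V$ views (cumulative since logon).
select sn.name, ms.value
  from v$statname sn
  join v$mystat   ms on ms.statistic# = sn.statistic#
 where sn.name in ('db block gets', 'consistent gets', 'physical reads',
                   'redo size', 'sorts (memory)', 'sorts (disk)',
                   'bytes sent via SQL*Net to client',
                   'bytes received via SQL*Net from client',
                   'SQL*Net roundtrips to/from client')
 order by sn.name;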
Statistics
----------------------------------------------------------
0 recursive calls
0 spare statistic 11
0 spare statistic 15
0 spare statistic 24
0 commit cleanout failures: buffer being written
0 securefile direct read ops
0 securefile direct write ops
0 securefile inode read time
0 securefile uncompressed bytes
0 securefile bytes deduplicated
5 rows processed
Thank you very much for your help.
David D. from Paris
Expert Answer
I can't comment on Toad, but as a quick summary:
1) SQL Plus - the list is fixed, and "rows processed" is not really a statistic; it is a SQL*Plus "invention".
2) SQL Developer / SQLcl - I'm pretty sure they work the same way, i.e. deltas: if a statistic *changed* during the execution, then we print it out.
For example, if you do:
select * from small_table order by x, y, z
then you'll see "sorts (memory)",
but if you do:
select * from HUGE_TABLE order by x, y, z
you'll probably also see "sorts (disk)".
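A minimal sketch of doing that delta by hand, assuming you can create a table and query the V$ views (AUTOTRACE_SNAP is a made-up helper name, HUGE_TABLE is the placeholder from the example above): snapshot the session counters, run the statement, then report only the statistics whose value changed.

-- Hypothetical helper table holding the "before" snapshot.
create global temporary table autotrace_snap
  on commit preserve rows
as
  select sn.name, ms.value
    from v$statname sn
    join v$mystat   ms on ms.statistic# = sn.statistic#
   where 1 = 0;

-- 1) snapshot the session counters before the statement
insert into autotrace_snap
  select sn.name, ms.value
    from v$statname sn
    join v$mystat   ms on ms.statistic# = sn.statistic#;

-- 2) run the statement under test
select * from huge_table order by x, y, z;

-- 3) report only the statistics that changed during step 2
--    (the snapshot insert itself adds a little noise to the numbers)
select sn.name, ms.value - snap.value as delta
  from v$statname sn
  join v$mystat       ms   on ms.statistic# = sn.statistic#
  join autotrace_snap snap on snap.name = sn.name
 where ms.value - snap.value > 0
 order by sn.name;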
Hope this helps clarify things.