
What we are doing here is stringing the various knowledge points together through exploratory, learning-style tests.
0. Introduction
Consider a scenario like this:
1. The database load spikes, and it is all the same statement?
2. The statement could run fast, but the plan is wrong?
3. Hmm, there really is no suitable index. Fine, add one.
4. What? The plan is still wrong. Did we forget to refresh the statistics?
5. Refreshed the statistics and it is still wrong. Maybe clearing the plan cache will fix it?
6. Cleared the plan cache and it is still wrong? Time to look at the 10053 trace?
7. Read the 10053 trace, inspected the statistics in the system. Is the optimizer broken? Off we go to file a bug?
8. Turns out we were just holding it wrong. Problem solved.
......
To reproduce the problem, the first question is how to move the data into a test area quickly:
Use dexp/dimp? Use DTS? Use a B-tree backup (a physical backup of the table)? None of these really fits. The most suitable way is dmfldr with a text file using a specific delimiter.
Target table:
select count(*),table_used_space('owner','TABLE4KW')/1024.0/1024.0/1024*page from owner.TABLE4KW
41134897 79.8427734375
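As a quick sanity check, the two figures above (row count and occupied space) imply an average per-row footprint. This back-of-envelope calculation is our own illustration, not part of the original output:

```python
rows = 41_134_897         # count(*) from the query above
size_gb = 79.8427734375   # table_used_space(...) converted to GB

# average footprint per row, indexes included
bytes_per_row = size_gb * 1024**3 / rows
print(f"{bytes_per_row:.0f} bytes/row")  # roughly 2 KB per row
```

Around 2 KB per row, indexes included, which helps make sense of the load timings that follow.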
Target SQL:
SELECT
A.LOTID ,
A.WO ,
A.DEVICE ,
A.ERPDEVICE ,
A.OLDOPERATION,
PRODUCT ,
ROUTE ,
ROUTEVER ,
OLDQTY ,
A.OPERATIONSEQ,
A.RESOURCENAME,
A.OLDUNIT
FROM
TABLE4KW A
WHERE
A.TRANSTIME >= '2021/01/30 00:00:00'
AND A.TRANSTIME <='2021/01/30 23:59:59'
AND A.OLDOPERATION ='XX字符串'
AND A.TRANSACTION IN ('CheckOut')
AND SUBSTR(A.WO, 3, 1)<>'D'
The SQL problem: we want the plan to use the composite index on (OLDOPERATION, TRANSTIME), but no matter what we do, it only uses the single-column index on TRANSTIME.
Exporting with dmfldr
[root@bin_new]# cat out.ctl
load data
infile '/xxx/big.table'
into table owner.TABLE4KW
[root@bin_new]# ./dmfldr user/password@ip CONTROL=\'out.ctl\' mode=\'OUT\'
dmfldr: 2021-02-10 19:45:57 Load finish 41132285 rows,
time:656354.802(ms)
Importing with dmfldr
[root@matebook-dm-hql bin]# ./dmfldr SYSDBA/SYSDBA CONTROL=\'out.ctl\'
[root@matebook-dm-hql bin]# cat out.ctl
LOAD DATA
INFILE 'big.table'
replace
into table TABLE4KW
[root@matebook-dm-hql bin]#
41132285 rows committed
Target table: TABLE4KW
41132285 rows loaded successfully.
0 rows discarded due to data format errors.
0 rows not loaded due to data errors.
Total logical records skipped: 0
Total logical records read: 41132285
Total logical records rejected: 0
Elapsed time: 3305432.434(ms)
Loading into a non-partitioned table takes about as long: 3462272.336(ms)
Here is a set of figures for reference:
Export instance (instance 1): page size 16K, charset UTF8, partitioned table, 79.8 GB, export time 656 s.
Import instance (instance 2): page size 8K, charset GBK; partitioned table, 52 GB, import time 3305 s; non-partitioned table, 47.39 GB, import time 3462 s.
The table carries 16 indexes (including one unique index); of the table's total occupied space, roughly 53% is pure data and the rest is index space.
CREATE INDEX "TABLE4KW_FACTORY_616639" ON "SYSDBA"."TABLE4KW"("FACTORY" ASC) STORAGE(ON "TEST1", CLUSTERBTR) ;
CREATE INDEX "TABLE4KW_HISTSID_922248" ON "SYSDBA"."TABLE4KW"("WIP_HIST_SID" ASC) STORAGE(ON "TEST1", CLUSTERBTR) ;
CREATE INDEX "TABLE4KW_IDX05_671904" ON "SYSDBA"."TABLE4KW"("TRANSACTION" ASC,"SEQUENCE" ASC) STORAGE(ON "TEST1", CLUSTERBTR) ;
CREATE INDEX "TABLE4KW_IDX1_846796" ON "SYSDBA"."TABLE4KW"("LINKSID" ASC) STORAGE(ON "TEST1", CLUSTERBTR) ;
CREATE INDEX "TABLE4KW_IDX2_520026" ON "SYSDBA"."TABLE4KW"("TRANSACTION" ASC) STORAGE(ON "TEST1", CLUSTERBTR) ;
CREATE INDEX "TABLE4KW_IDX3_184487" ON "SYSDBA"."TABLE4KW"("WIP_COMM_SID" ASC) STORAGE(ON "TEST1", CLUSTERBTR) ;
CREATE INDEX "TABLE4KW_INDEX04_931094" ON "SYSDBA"."TABLE4KW"("NEWOPERATION" ASC) STORAGE(ON "TEST1", CLUSTERBTR) ;
CREATE INDEX "TABLE4KW_LOTID_586007" ON "SYSDBA"."TABLE4KW"("LOTID" ASC) STORAGE(ON "TEST1", CLUSTERBTR) ;
CREATE INDEX "TABLE4KW_LOTSID_696946" ON "SYSDBA"."TABLE4KW"("WIP_LOT_SID" ASC) STORAGE(ON "TEST1", CLUSTERBTR) ;
CREATE INDEX "TABLE4KW_OPERATIN_371195" ON "SYSDBA"."TABLE4KW"("OLDOPERATION" ASC) STORAGE(ON "TEST1", CLUSTERBTR) ;
CREATE INDEX "TABLE4KW_SEQUENCE_258858" ON "SYSDBA"."TABLE4KW"("SEQUENCE" ASC) STORAGE(ON "TEST1", CLUSTERBTR) ;
CREATE INDEX "TABLE4KW_SHIFTDATE_516588" ON "SYSDBA"."TABLE4KW"("SHIFTDATE" ASC) STORAGE(ON "TEST1", CLUSTERBTR) ;
CREATE UNIQUE INDEX "IDX_0801_03_431192" ON "SYSDBA"."TABLE4KW"("STARTTIME" ASC,"WIP_HIST_SID" ASC) STORAGE(ON "TEST1", CLUSTERBTR) ;
CREATE INDEX "WIP_HIST_IDX03_116417" ON "SYSDBA"."TABLE4KW"("TRANSACTION" ASC,"RULENAME" ASC) STORAGE(ON "TEST1", CLUSTERBTR) ;
CREATE INDEX "TABLE4KW_IDX11" ON "SYSDBA"."TABLE4KW"("OLDOPERATION" ASC,"TRANSTIME" ASC) STORAGE(ON "TEST1", CLUSTERBTR) ;
CREATE INDEX "TABLE4KW_TRANSACTIONTIME_40" ON "SYSDBA"."TABLE4KW"("TRANSTIME" ASC) STORAGE(ON "TEST1", CLUSTERBTR) ;
Index sizes in MB. For every row: TAB = SYSDBA.TABLE4KW, TABM (total table space) = 52478.53906249999, DATEM (data space) = 28077.1640625, HAVE_PRIMARY = 1.
IDXNAME INDEXM
TABLE4KW_LOTID_586007 1502.84375
TABLE4KW_OPERATIN_371195 1481.8984375
TABLE4KW_IDX3_184487 873.84375
TABLE4KW_IDX2_520026 1293.7421874999998
TABLE4KW_IDX11 2354.90625
TABLE4KW_TRANSACTIONTIME_40 1601.8671874999998
IDX_0801_03_431192 2427.015625
TABLE4KW_SEQUENCE_258858 1038.6328125
TABLE4KW_IDX05_671904 1597.953125
WIP_HIST_IDX03_116417 1994.4140625
TABLE4KW_FACTORY_616639 863.8984375000001
TABLE4KW_SHIFTDATE_516588 1211.875
TABLE4KW_IDX1_846796 1558.8203124999998
TABLE4KW_INDEX04_931094 1481.9609375
TABLE4KW_HISTSID_922248 1558.8203124999998
TABLE4KW_LOTSID_696946 1558.8203124999998
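To put the timings above on one scale, a hypothetical throughput calculation (ours, not part of the tool output):

```python
rows = 41_132_285  # rows moved by dmfldr

timings_ms = {
    "export (instance 1)": 656_354.802,
    "import, partitioned (instance 2)": 3_305_432.434,
    "import, non-partitioned (instance 2)": 3_462_272.336,
}

for label, ms in timings_ms.items():
    print(f"{label}: {rows / (ms / 1000):,.0f} rows/s")
# the import runs roughly 5x slower than the export, plausibly because
# the 16 indexes on the target table must be maintained during the load
```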
Identifying the problem:
The plan was indeed wrong and an index was missing; we added the index and it was still wrong; collected statistics, still wrong; flushed the execution plan cache, still wrong; inspected the statistics stored in the system, and they indeed did not match expectations either: the statistics of one partition were missing.
After loading the data into a plain table (non-partitioned; the original problem's single-table query ran against a partitioned table) and refreshing statistics, the plan matched expectations.
How do we inspect the statistics stored in the system? (Mainly the sysstats table: join it yourself based on the column definitions; the statistics break down into column, table, and index statistics.)
select sch.name,tab.name,st.* from sysstats st,sysobjects tab,sysobjects sch where sch.id=tab.schid and st.id=tab.id and
sch.name||'.'||tab.name =upper('USER_NAME.TABLE4KW');
- A side question, not directly related to this article's problem: what is the relationship between index statistics and column statistics? We run an example below
(see the section "Another comparative test of index statistics vs. column statistics").
Having found all this, the problem is still, to a large degree, a statistics problem, and we tried every angle to confirm it.
There are two analyses here:
One concerns the partitioned table's own statistics (even at a 100% sample rate, they still differ from the actual data volume).
The other concerns the statistics of the partition subtables.
For the subtables we found the target subtable was "missing"; per the manual this may be an expected "skip". After collecting with the correct syntax, the subpartitions' estimated row counts matched the exact values of a 100% collection, and no partition was skipped:
dbms_stats.gather_table_stats('USER_NAME','TABLE4KW',NULL,100,FALSE,'FOR ALL INDEXED COLUMNS',GRANULARITY=>'GLOBAL AND PARTITION');
But the estimated row count in the statistics of the partitioned table as a whole was still wrong; after collecting as follows, the target statement's plan became correct:
dbms_stats.gather_table_stats('USER_NAME','TABLE4KW',NULL,100,FALSE,'FOR ALL INDEXED COLUMNS',GRANULARITY=>'GLOBAL');
Solving the problem:
dbms_stats.gather_table_stats('USER_NAME','TABLE4KW',NULL,100,FALSE,'FOR ALL INDEXED COLUMNS',GRANULARITY=>'GLOBAL');
Approaches that were tried and did not solve the problem:
dbms_stats.gather_table_stats('USER_NAME','TABLE4KW',NULL,100,FALSE); -- ran too long; cancelled manually.
dbms_stats.gather_table_stats('USER_NAME','TABLE4KW',NULL,100);
dbms_stats.gather_table_stats('USER_NAME','TABLE4KW',NULL,100,FALSE,'FOR ALL INDEXED COLUMNS');
stat 100 on tab(col);
Updating all columns directly through the partition subtables.
dbms_stats.gather_table_stats('USER_NAME','TABLE4KW',NULL,100,FALSE,'FOR ALL INDEXED COLUMNS',GRANULARITY=>'GLOBAL AND PARTITION');
Supplementary knowledge points:
1. How do we detect high database load, and how do we find out that it is all the same statement?
Here we actually deploy DEM; through DEM's session management you can see all active sessions and their SQL statements at any historical point in time.
That said, DEM is just a container for the collected data; collecting it yourself by any reasonable means (custom monitoring) works too. Note that the communication traffic involved is fairly high: sampling once a minute, with long statement texts and an average of more than 5 active sessions, one day's traffic can exceed 200 GB.
2. How do we view a statement's execution plan?
In the graphical client tool (manager.exe), select the statement, then click the button seven to the right of the Execute button to view the execution plan (or simply select the statement and press F9).
3. How do we know the execution plan is wrong?
This actually requires some understanding at the business level.
If a high-frequency statement performs badly, then either the business logic is unreasonable or the execution plan has a problem.
4. How do we refresh statistics? Refreshing statistics properly is important.
There are a few key points:
1. Absent special circumstances, do not refresh statistics (especially once the data distribution has stabilized).
2. But there are three exceptions: 1) after a large data import, you generally need to refresh; 2) if many SQL statements in the application code are concatenated with literal parameter values, and those values are time-valued (datetime or string types), then the statistics of the corresponding columns generally must be refreshed every day; 3) in special cases outside the CBO's statistics and cost model, statistics may need special manual handling (possibly even clearing them).
3. When collecting statistics, generally collect with full coverage (i.e., set the sample rate to 100).
Here are a few commonly used commands:
- Collect statistics for a table:
sp_TAB_stat_init(user,TABLE4KW); -- every parameter is a string and needs surrounding single quotes, e.g. sp_TAB_stat_init('SYSDBA','TAB1');
- Clear statistics for a column:
sp_col_stat_deinit(user,TABLE4KW,column_name); -- every parameter is a string and needs surrounding single quotes, e.g. sp_col_stat_deinit('SYSDBA','TAB1','COL1');
- Collect statistics for a table (very important):
dbms_stats.gather_table_stats(USER,'TABLE4KW',NULL,100,FALSE,'FOR ALL INDEXED COLUMNS',GRANULARITY=>'GLOBAL');
dbms_stats.gather_table_stats(USER,'TABLE4KW',NULL,100,FALSE,'FOR ALL INDEXED COLUMNS',GRANULARITY=>'GLOBAL AND PARTITION');
GATHER_TABLE_STATS
Collects statistics for a table, its columns, and its indexes according to the given parameters. For the table itself, only basic information is collected: total row count, total pages, used pages, and so on.
The syntax is as follows:
PROCEDURE GATHER_TABLE_STATS (
OWNNAME VARCHAR(128),
TABNAME VARCHAR(128),
PARTNAME VARCHAR(128) DEFAULT NULL,
ESTIMATE_PERCENT DOUBLE DEFAULT
TO_ESTIMATE_PERCENT_TYPE(GET_PREFS('ESTIMATE_PERCENT')),
BLOCK_SAMPLE BOOLEAN DEFAULT FALSE,
METHOD_OPT VARCHAR DEFAULT GET_PREFS('METHOD_OPT'),
DEGREE INT DEFAULT TO_DEGREE_TYPE(GET_PREFS('DEGREE')),
GRANULARITY VARCHAR DEFAULT GET_PREFS('GRANULARITY'),
CASCADE BOOLEAN DEFAULT TO_CASCADE_TYPE(GET_PREFS('CASCADE')),
STATTAB VARCHAR DEFAULT NULL,
STATID VARCHAR DEFAULT NULL,
STATOWN VARCHAR DEFAULT NULL,
NO_INVALIDATE BOOLEAN DEFAULT TO_NO_INVALIDATE_TYPE(GET_PREFS('NO_INVALIDATE')),
FORCE BOOLEAN DEFAULT FALSE
);
Parameter details:
OWNNAME: schema name, case-sensitive.
TABNAME: table name, case-sensitive.
PARTNAME: partition name, default NULL, case-sensitive.
ESTIMATE_PERCENT: sampling percentage, range 0.000001 to 100; by default the system decides.
BLOCK_SAMPLE: reserved parameter; whether to sample random blocks instead of random rows, default TRUE.
METHOD_OPT: controls which column statistics are collected and how histograms are created; default FOR ALL COLUMNS SIZE AUTO.
Columns of type BLOB, IMAGE, LONGVARBINARY, CLOB, TEXT, LONGVARCHAR, and BOOLEAN cannot be collected. The format options are:
FOR ALL [INDEXED | HIDDEN] COLUMNS [<size_clause>]
<size_clause> ::= SIZE {INTEGER | REPEAT | AUTO | SKEWONLY}
or
FOR COLUMNS <column_name> [<size_clause>] {, <column_name> [<size_clause>]}
The options mean:
INDEXED | HIDDEN: collect statistics only for indexed or hidden columns; by default all columns are collected
INTEGER: number of histogram buckets, range 1 to 254
REPEAT: collect only for columns that already have a histogram
AUTO: decide which columns get histograms based on data distribution and workload
SKEWONLY: decide which columns get histograms based on data distribution
DEGREE: reserved parameter; degree of parallelism for collection, default 1.
GRANULARITY: reserved parameter; collection granularity, default AUTO.
The GRANULARITY options are: AUTO | DEFAULT | ALL | PARTITION | SUBPARTITION | GLOBAL | GLOBAL AND PARTITION. They mean:
ALL: collect all statistics (SUBPARTITION, PARTITION, and GLOBAL)
AUTO: default; decides how to collect based on the partition type
DEFAULT: same as AUTO
GLOBAL: collect GLOBAL (whole-table) statistics
GLOBAL AND PARTITION: collect GLOBAL and PARTITION statistics
PARTITION: collect PARTITION statistics
SUBPARTITION: collect SUBPARTITION statistics. When there are more than 50 subpartition tables, skip sampling is used, so the statistics will carry some error, even at a 100% sample rate.
This error can be reduced by lowering the statistics level, i.e., changing GRANULARITY=>'SUBPARTITION' to GRANULARITY=>'GLOBAL AND PARTITION' or GRANULARITY=>'GLOBAL'.
CASCADE: whether to collect index statistics, TRUE or FALSE, default TRUE. For HUGE tables, CASCADE should be set to FALSE.
STATTAB: reserved parameter; table in which statistics are stored, default NULL.
STATID: reserved parameter; statistics ID, default NULL.
STATOWN: reserved parameter; schema of the statistics table, default NULL.
NO_INVALIDATE: reserved parameter; whether to invalidate dependent cursors, default TRUE.
FORCE: reserved parameter; whether to force statistics collection, default FALSE.
5. How do we clear execution plans? Clearing execution plans correctly is important.
select 'sp_clear_plan_cache('||cache_item||');' from V$CACHEPLN where sqlstr like '%select act9x0_.BAZ614 as BAZ%';
Using a distinctive keyword from the middle of the problem SQL's text, find the cached plan, then clear it with sp_clear_plan_cache.
Note:
1. When calling sp_clear_plan_cache, always pass the parameter; otherwise it clears every cached plan in the database instance, which can have a fatal impact on production!
21) SP_CLEAR_PLAN_CACHE
Definition:
SP_CLEAR_PLAN_CACHE()
Description:
Clears all cached execution plans.
Parameters:
None
Return value:
None
Example:
Clear all cached execution plans:
CALL SP_CLEAR_PLAN_CACHE();
23) SP_CLEAR_PLAN_CACHE
Definition:
SP_CLEAR_PLAN_CACHE(
plan_id bigint
)
Description:
Clears the specified cached execution plan.
Parameters:
plan_id: the plan ID; its value can be obtained from the CACHE_ITEM column of the dynamic view V$CACHEPLN
Return value:
None
Example:
Clear the cached plan with ID 473546872:
CALL SP_CLEAR_PLAN_CACHE(473546872);
6. How do we view the 10053 trace, and what is 10053?
The 10053 trace records the optimizer's reasoning process. How to obtain it:
-- After executing the following "SQL" statement, execute or EXPLAIN the problem statement
alter session 0 set events '10053 trace name context forever,level 2';
-- After executing or explaining the problem statement, turn 10053 recording off with the following "SQL"
ALTER SESSION 0 SET EVENTS '10053 TRACE NAME CONTEXT off';
Notes:
1) The trace files are generated in the trace folder under the database instance's data directory.
2) Besides level 2 there is level 1: at level 2, merely viewing the execution plan records a trace; at level 1, the statement must actually be executed to record one.
3) If the steps above produce no trace, watch out for the plan cache: when a cached plan is hit, that plan is used directly, and in that case no trace is generated.
The two 10053 traces involved here:
[root@matebook-dm-hql trace]# cat DMSERVER_0211_1*
DM Database Server x64 V8
*** 2021-02-11 12:49:19.512000
*** Start trace 10053 event [level 2]
Current SQL Statement:
EXPLAIN SELECT
A.LOTID ,
A.WO ,
A.DEVICE ,
A.ERPDEVICE ,
A.OLDOPERATION,
PRODUCT ,
ROUTE ,
ROUTEVER ,
OLDQTY ,
A.OPERATIONSEQ,
A.RESOURCENAME,
A.OLDUNIT
FROM
TABLE4KW A
WHERE
A.TRANSTIME >= '2021/01/30 00:00:00'
AND A.TRANSTIME <='2021/01/30 23:59:59'
AND A.OLDOPERATION ='XX字符串'
AND A.TRANSACTION IN ('CheckOut')
AND SUBSTR(A.WO, 3, 1)<>'D'
*****************************
Parameters for this statement
*****************************
olap_flag = 2
mpp_flag = 0
enable_monitor = 1
max_opt_n_tables = 6
enable_hash_join = 1
enable_index_join = 1
enable_merge_join = 1
mpp_index_join_opt_flag = 0
mpp_nli_opt_flag = 0
enable_in_value_list_opt = 6
enhanced_bexp_trans_gen = 3
batch_param_opt = 0
use_pln_pool = 1
parallel_degree = 1
global_page_size = 8192
global_extent_size = 16
view_pullup_flag = 0
view_pullup_max_tab = 7
refed_exists_opt_flag = 1
hash_pll_opt_flag = 0
partial_join_evaluation_flag = 1
motion_opt_flag = 0
use_fk_remove_tables_flag = 1
use_filter_joining_remove_table = 0
use_hagr_flag = 0
outer_join_index_opt_flag = 0
hagr_parallel_opt_flag = 0
group_opt_flag = 4
from_opt_flag = 0
hagr_distinct_opt_flag = 2
mpp_hash_lr_rate = 10
lpq_hash_lr_rate = 30
slct_opt_flag = 0
slct_err_process_flag = 0
like_opt_flag = 15
filter_push_down = 0
star_transformation_enable = 0
max_opt_n_or_bexps = 7
dist_in_subquery_opt = 0
count_64bit = 1
use_htab = 1
sel_item_htab_flag = 0
or_cvt_htab_flag = 1
op_subq_cvt_in_flag = 1
enhanced_subq_merging = 0
case_when_cvt_flag = 9
or_nbexp_cvt_case_when_flag = 0
nbexp_opt_flag = 3
nonconst_or_cvt_in_lst_flag = 0
dblink_opt_flag = 509
outer_cvt_inner_pull_up_cond_flag = 1
opt_or_for_huge_table_flag = 0
enable_rq_to_spl = 1
multi_in_cvt_exists = 0
prjt_replace_npar = 1
enable_rq_to_inv = 0
subq_exp_cvt_flag = 0
use_refer_tab_only = 0
max_phc_be_num = 512
refed_subq_cross_flag = 1
order_by_nulls_flag = 0
in_list_as_join_key = 0
outer_join_flating_flag = 0
top_order_opt_flag = 0
place_group_by_flag = 0
single_htab_remove_flag = 1
phf_ntts_opt = 1
use_ftts = 0
upd_del_opt = 2
mpp_dml_force_opt = 0
use_mclct = 2
top_dis_hash_flag = 1
enable_rq_to_nonref_spl = 0
optimizer_mode = 1
enable_index_filter = 0
parallel_mode_common_degree = 1
hash_cmp_opt_flag = 0
nonrefed_subquery_as_const = 0
outer_opt_nlo_flag = 0
use_index_skip_scan = 0
distinct_use_index_skip = 2
index_skip_scan_rate = 0.003000
complex_view_merging = 0
optimizer_dynamic_sampling = 0
speed_semi_join_plan = 1
multi_hash_dis_opt = 1
del_hp_opt_flag = 0
cnntb_opt_flag = 0
cte_opt_flag = 1
adaptive_npln_flag = 3
optimizer_or_nbexp = 0
multi_upd_opt_flag = 0
multi_upd_max_col_num = 128
optimizer_version = 70097
optimizer_max_perm = 7200
enable_invisible_col = 1
push_subq = 0
invocation_opt_flag = 1
view_filter_merging = 2
enable_partition_wise_opt = 0
opt_mem_check = 0
optimizer_aggr_groupby_elim = 1
enable_join_factorization = 0
query_info_bits = 0x0
explain_show_factor = 1
error_compatible_flag = 0x0
hi_right_order_flag = 0
enable_nest_loop_join_cache = 0
enable_table_exp_ref_flag = 0
opt_cvt_var_cost_factor = 1
opt_max_adaptive_exec_times = 1000
sql_safe_update_rows = 0
sort_flag = 0
bexp_calc_st_flag = 0
enable_adjust_nli_cost = 1
cpu_cost_factor = 9500
sel_rate_equ = 0.025000
base_mi_cpu = 1000000.000000
build_hash_base_cost = 0
sel_rate_single = 0.050000
having_rate_factor = 1
inter_rate_factor = 1
ret_null_rate_factor = 100
nlij_amplify_factor = 4
cpu_speed = 3145728
n_runs = 1
mem_bandwidth = 3145728
mem_intense_degree = 0
base_scan_cpu = 140000.000000
base_seek_cpu = 140000.000000
base_lkup_cpu = 62000.000000
base_nlij_cpu = 16000.000000
base_hi_cpu = 2683000.000000
base_nl_cpu = 1000000.000000
base_flt_cpu = 140000.000000
scan_cpu = 330.000000
seek_cpu = 400.000000
lkup_cpu = 2500.000000
nlij_cpu = 200.000000
hi_cpu = 400.000000
hi_search_cpu = 200.000000
mi_cpu = 300.000000
nl_cpu = 200.000000
flt_cpu = 30.000000
join_st_opt_flag = 0
enable_adjust_nli_cost = 1
memory_pool = 200
huge_buffer = 80
buffer = 100
keep = 8
recycle = 64
sort_buf_size = 2
hagr_hash_size = 100000
hj_buf_global_size = 500
hj_buf_size = 50
hj_blk_size = 1
hagr_buf_global_size = 500
hagr_buf_size = 50
hagr_blk_size = 1
mtab_mem_size = 8
mmt_size = 0
mmt_flag = 1
dict_buf_size = 5
mal_buf_size = 100
mal_vpool_size = 128
vm_stack_size = 256
vm_pool_size = 64
sess_pool_size = 64
max_os_memory = 95
worker_threads = 16
worker_cpu_percent = 0
task_threads = 16
enhance_bind_peeking = 0
*** Plan before optimized:
project[0x7f24d408a950] (0);
select (1); ((A.OLDOPERATION = 'XX字符串' AND A.TRANSACTION = 'CheckOut' AND A.TRANSTIME >= '2021/01/30 00:00:00' AND A.TRANSTIME <= '2021/01/30 23:59:59' AND exp11 <> 'D'))
base table(A, FULL SEARCH) (0);
<<<<< selectivity estimate of table A >>>>>
*** stdesc 1: column = TRANSTIME, scan_type = GE_LE, key = ('2021/01/30 00:00:00', '2021/01/30 23:59:59')
stat_info(1284,14,'C')= {
#Valid = 'Y',
#Type = 'Hb',
#Card = 37865142,
#NDV = 3047367,
#Nulls = 1,
#LP = 173746,
#LVLS = 2,
#CLUF = 0,
#NK = 0,
#NS = 19233088}
---> st = 0.00000
*** stdesc 2: column = OLDOPERATION, scan_type = EQU, key = ('XX字符串')
stat_info(1284,15,'C')= {
#Valid = 'Y',
#Type = 'Freq',
#Card = 37865142,
#NDV = 522,
#Nulls = 1,
#LP = 160474,
#LVLS = 2,
#CLUF = 0,
#NK = 0,
#NS = 19233088}
---> st = 0.00000
*** stdesc 3: column = TRANSACTION, scan_type = EQU, key = ('CheckOut')
stat_info(1284,4,'C')= {
#Valid = 'N',
#Type = '-',
#Card = 41132285,
#NDV = 13710761,
#Nulls = 411322,
#LP = 9000,
#LVLS = 3,
#CLUF = 0,
#NK = 0,
#NS = 0}
---> st = 0.02500
>>>>> total: 41132285, estimate match rows: 0, st: 0.00000; -- st_other: 0.950, n_stdesc: 3
---------------- single table access path probe for A ----------------
*** path 1: INDEX33555452 (FULL search), cost: 10134.70152
*** path 2: TABLE4KW_FACTORY_616639 (FULL search), cost: 83940.36115
*** path 3: TABLE4KW_HISTSID_922248 (FULL search), cost: 83940.36115
*** path 4: TABLE4KW_IDX05_671904 (GE_L search), cost: 1998.07056
*** path 5: TABLE4KW_IDX1_846796 (FULL search), cost: 83940.36115
*** path 6: TABLE4KW_IDX2_520026 (EQU search), cost: 1998.07056
*** path 7: TABLE4KW_IDX3_184487 (FULL search), cost: 83940.36115
*** path 8: TABLE4KW_INDEX04_931094 (FULL search), cost: 83940.36115
*** path 9: TABLE4KW_LOTID_586007 (FULL search), cost: 83940.36115
*** path 10: TABLE4KW_LOTSID_696946 (FULL search), cost: 83940.36115
*** path 11: TABLE4KW_OPERATIN_371195 (EQU search), cost: 0.21752
*** path 12: TABLE4KW_SEQUENCE_258858 (FULL search), cost: 83940.36115
*** path 13: TABLE4KW_SHIFTDATE_516588 (FULL search), cost: 83940.36115
*** path 14: IDX_0801_03_431192 (FULL search), cost: 83940.36115
*** path 15: WIP_HIST_IDX03_116417 (GE_L search), cost: 1998.07056
*** path 16: TABLE4KW_IDX11 (GE_LE search), cost: 0.11067
*** path 17: TABLE4KW_TRANSACTIONTIME_40 (GE_LE search), cost: 0.11066
>>> best access path: TABLE4KW_TRANSACTIONTIME_40 (GE_LE search), cost: 0.11066
*** BEST PLAN FOR THIS STATEMENT ***
project[0x7f24cc0c0128] (cost: 0.11066, rows: 1);
parallel(FULL SEARCH) (cost: 0.11066, rows: 1);
select (cost: 0.11066, rows: 1); ((A.OLDOPERATION = 'XX字符串' AND A.TRANSACTION = 'CheckOut' AND exp11 <> 'D'))
base table(A, TABLE4KW_TRANSACTIONTIME_40, GE_LE SEARCH) (cost: 0.11066, rows: 1);
-------------------------- END --------------------------
DM Database Server x64 V8
*** 2021-02-11 17:08:03.166000
*** Start trace 10053 event [level 2]
Current SQL Statement:
EXPLAIN SELECT
A.LOTID ,
A.WO ,
A.DEVICE ,
A.ERPDEVICE ,
A.OLDOPERATION,
PRODUCT ,
ROUTE ,
ROUTEVER ,
OLDQTY ,
A.OPERATIONSEQ,
A.RESOURCENAME,
A.OLDUNIT
FROM
TABLE4KW A
WHERE
A.TRANSTIME >= '2021/01/30 00:00:00'
AND A.TRANSTIME <='2021/01/30 23:59:59'
AND A.OLDOPERATION ='XX字符串'
AND A.TRANSACTION IN ('CheckOut')
AND SUBSTR(A.WO, 3, 1)<>'D'
*****************************
Parameters for this statement
*****************************
olap_flag = 2
mpp_flag = 0
enable_monitor = 1
max_opt_n_tables = 6
enable_hash_join = 1
enable_index_join = 1
enable_merge_join = 1
mpp_index_join_opt_flag = 0
mpp_nli_opt_flag = 0
enable_in_value_list_opt = 6
enhanced_bexp_trans_gen = 3
batch_param_opt = 0
use_pln_pool = 1
parallel_degree = 1
global_page_size = 8192
global_extent_size = 16
view_pullup_flag = 0
view_pullup_max_tab = 7
refed_exists_opt_flag = 1
hash_pll_opt_flag = 0
partial_join_evaluation_flag = 1
motion_opt_flag = 0
use_fk_remove_tables_flag = 1
use_filter_joining_remove_table = 0
use_hagr_flag = 0
outer_join_index_opt_flag = 0
hagr_parallel_opt_flag = 0
group_opt_flag = 4
from_opt_flag = 0
hagr_distinct_opt_flag = 2
mpp_hash_lr_rate = 10
lpq_hash_lr_rate = 30
slct_opt_flag = 0
slct_err_process_flag = 0
like_opt_flag = 15
filter_push_down = 0
star_transformation_enable = 0
max_opt_n_or_bexps = 7
dist_in_subquery_opt = 0
count_64bit = 1
use_htab = 1
sel_item_htab_flag = 0
or_cvt_htab_flag = 1
op_subq_cvt_in_flag = 1
enhanced_subq_merging = 0
case_when_cvt_flag = 9
or_nbexp_cvt_case_when_flag = 0
nbexp_opt_flag = 3
nonconst_or_cvt_in_lst_flag = 0
dblink_opt_flag = 509
outer_cvt_inner_pull_up_cond_flag = 1
opt_or_for_huge_table_flag = 0
enable_rq_to_spl = 1
multi_in_cvt_exists = 0
prjt_replace_npar = 1
enable_rq_to_inv = 0
subq_exp_cvt_flag = 0
use_refer_tab_only = 0
max_phc_be_num = 512
refed_subq_cross_flag = 1
order_by_nulls_flag = 0
in_list_as_join_key = 0
outer_join_flating_flag = 0
top_order_opt_flag = 0
place_group_by_flag = 0
single_htab_remove_flag = 1
phf_ntts_opt = 1
use_ftts = 0
upd_del_opt = 2
mpp_dml_force_opt = 0
use_mclct = 2
top_dis_hash_flag = 1
enable_rq_to_nonref_spl = 0
optimizer_mode = 1
enable_index_filter = 0
parallel_mode_common_degree = 1
hash_cmp_opt_flag = 0
nonrefed_subquery_as_const = 0
outer_opt_nlo_flag = 0
use_index_skip_scan = 0
distinct_use_index_skip = 2
index_skip_scan_rate = 0.003000
complex_view_merging = 0
optimizer_dynamic_sampling = 0
speed_semi_join_plan = 1
multi_hash_dis_opt = 1
del_hp_opt_flag = 0
cnntb_opt_flag = 0
cte_opt_flag = 1
adaptive_npln_flag = 3
optimizer_or_nbexp = 0
multi_upd_opt_flag = 0
multi_upd_max_col_num = 128
optimizer_version = 70097
optimizer_max_perm = 7200
enable_invisible_col = 1
push_subq = 0
invocation_opt_flag = 1
view_filter_merging = 2
enable_partition_wise_opt = 0
opt_mem_check = 0
optimizer_aggr_groupby_elim = 1
enable_join_factorization = 0
query_info_bits = 0x0
explain_show_factor = 1
error_compatible_flag = 0x0
hi_right_order_flag = 0
enable_nest_loop_join_cache = 0
enable_table_exp_ref_flag = 0
opt_cvt_var_cost_factor = 1
opt_max_adaptive_exec_times = 1000
sql_safe_update_rows = 0
sort_flag = 0
bexp_calc_st_flag = 0
enable_adjust_nli_cost = 1
cpu_cost_factor = 9500
sel_rate_equ = 0.025000
base_mi_cpu = 1000000.000000
build_hash_base_cost = 0
sel_rate_single = 0.050000
having_rate_factor = 1
inter_rate_factor = 1
ret_null_rate_factor = 100
nlij_amplify_factor = 4
cpu_speed = 3145728
n_runs = 1
mem_bandwidth = 3145728
mem_intense_degree = 0
base_scan_cpu = 140000.000000
base_seek_cpu = 140000.000000
base_lkup_cpu = 62000.000000
base_nlij_cpu = 16000.000000
base_hi_cpu = 2683000.000000
base_nl_cpu = 1000000.000000
base_flt_cpu = 140000.000000
scan_cpu = 330.000000
seek_cpu = 400.000000
lkup_cpu = 2500.000000
nlij_cpu = 200.000000
hi_cpu = 400.000000
hi_search_cpu = 200.000000
mi_cpu = 300.000000
nl_cpu = 200.000000
flt_cpu = 30.000000
join_st_opt_flag = 0
enable_adjust_nli_cost = 1
memory_pool = 200
huge_buffer = 80
buffer = 100
keep = 8
recycle = 64
sort_buf_size = 2
hagr_hash_size = 100000
hj_buf_global_size = 500
hj_buf_size = 50
hj_blk_size = 1
hagr_buf_global_size = 500
hagr_buf_size = 50
hagr_blk_size = 1
mtab_mem_size = 8
mmt_size = 0
mmt_flag = 1
dict_buf_size = 5
mal_buf_size = 100
mal_vpool_size = 128
vm_stack_size = 256
vm_pool_size = 64
sess_pool_size = 64
max_os_memory = 95
worker_threads = 16
worker_cpu_percent = 0
task_threads = 16
enhance_bind_peeking = 0
*** Plan before optimized:
project[0x7f250c064cf0] (0);
select (1); ((A.OLDOPERATION = 'XX字符串' AND A.TRANSACTION = 'CheckOut' AND A.TRANSTIME >= '2021/01/30 00:00:00' AND A.TRANSTIME <= '2021/01/30 23:59:59' AND exp11 <> 'D'))
base table(A, FULL SEARCH) (0);
<<<<< selectivity estimate of table A >>>>>
*** stdesc 1: column = TRANSTIME, scan_type = GE_LE, key = ('2021/01/30 00:00:00', '2021/01/30 23:59:59')
stat_info(1468,14,'C')= {
#Valid = 'Y',
#Type = 'Hb',
#Card = 41132285,
#NDV = 7951854,
#Nulls = 1,
#LP = 188681,
#LVLS = 3,
#CLUF = 0,
#NK = 0,
#NS = 41132285}
---> st = 0.00000
*** stdesc 2: column = OLDOPERATION, scan_type = EQU, key = ('XX字符串')
stat_info(1468,15,'C')= {
#Valid = 'Y',
#Type = 'Freq',
#Card = 41132285,
#NDV = 632,
#Nulls = 1,
#LP = 174435,
#LVLS = 3,
#CLUF = 0,
#NK = 0,
#NS = 41132285}
---> st = 0.00000
*** stdesc 3: column = TRANSACTION, scan_type = EQU, key = ('CheckOut')
stat_info(1468,4,'C')= {
#Valid = 'N',
#Type = '-',
#Card = 41132285,
#NDV = 13710761,
#Nulls = 411322,
#LP = 9000,
#LVLS = 3,
#CLUF = 0,
#NK = 0,
#NS = 0}
---> st = 0.02500
>>>>> total: 41132285, estimate match rows: 0, st: 0.00000; -- st_other: 0.950, n_stdesc: 3
---------------- single table access path probe for A ----------------
*** path 1: INDEX33558580 (FULL search), cost: 10134.70152
*** path 2: TABLE4KW_FACTORY_616639 (FULL search), cost: 83940.36115
*** path 3: TABLE4KW_HISTSID_922248 (FULL search), cost: 83940.36115
*** path 4: TABLE4KW_IDX05_671904 (GE_L search), cost: 1998.07056
*** path 5: TABLE4KW_IDX1_846796 (FULL search), cost: 83940.36115
*** path 6: TABLE4KW_IDX2_520026 (EQU search), cost: 1998.07056
*** path 7: TABLE4KW_IDX3_184487 (FULL search), cost: 83940.36115
*** path 8: TABLE4KW_INDEX04_931094 (FULL search), cost: 83940.36115
*** path 9: TABLE4KW_LOTID_586007 (FULL search), cost: 83940.36115
*** path 10: TABLE4KW_LOTSID_696946 (FULL search), cost: 83940.36115
*** path 11: TABLE4KW_OPERATIN_371195 (EQU search), cost: 0.21752
*** path 12: TABLE4KW_SEQUENCE_258858 (FULL search), cost: 83940.36115
*** path 13: TABLE4KW_SHIFTDATE_516588 (FULL search), cost: 83940.36115
*** path 14: IDX_0801_03_431192 (FULL search), cost: 83940.36115
*** path 15: WIP_HIST_IDX03_116417 (GE_L search), cost: 1998.07056
*** path 16: TABLE4KW_IDX11 (GE_LE search), cost: 0.11067
*** path 17: TABLE4KW_TRANSACTIONTIME_40 (GE_LE search), cost: 0.11843
>>> best access path: TABLE4KW_IDX11 (GE_LE search), cost: 0.11067
*** BEST PLAN FOR THIS STATEMENT ***
project[0x7f24c809aa70] (cost: 0.11067, rows: 1);
select (cost: 0.11067, rows: 1); ((A.TRANSACTION = 'CheckOut' AND exp11 <> 'D'))
base table(A, TABLE4KW_IDX11, GE_LE SEARCH) (cost: 0.11067, rows: 1);
-------------------------- END --------------------------
Another comparative test of index statistics vs. column statistics
/*
Be sure to run this in an empty test-environment instance
*/
--clear statistics for the whole instance
sp_db_stat_deinit();
--build test data
drop table test1;
create table test1(v1 bigint,v2 int);
insert into test1 VALUES
(1,1),
(1,2),
(2,1),
(2,2);
--add an index and collect statistics
stat 100 on test1(v1);
stat 100 on test1(v2);
create index idx_01 on test1(v1,v2);
SP_INDEX_STAT_INIT(user,'IDX_01');
--view the statistics on the instance
select distinct
-- "ID",
"COLID",
-- "T_FLAG",
"T_TOTAL",
"N_SMAPLE",
"N_DISTINCT",
"N_NULL",
"V_MIN",
"V_MAX",
"BLEVEL",
"N_LEAF_PAGES",
"N_LEAF_USED_PAGES",
"CLUSTER_FACTOR",
"N_BUCKETS",
-- "DATA",
"COL_AVG_LEN",
-- "LAST_GATHERED",
"INFO1",
"INFO2" from sysstats;
select * from sysstats;
Analyzing the DATA field: histogram bucket data
0x 0200 0100 0800 0700 0200 0000 0100 0000 0200 0000 0200 0000
0x 0200 0100 0800 0700 0200 0000 0100 0000 0200 0000 0200 0000
0x 0200 0100 0800 0700 0200 0000 0100 0000 0200 0000 0200 0000
0001 0002 0007 0008 0000 0002 0000 0001 0000 0002 0000 0002
0x 0200 0100 1000 0800 0200 0000 0000 0000
0100 0000 0000 0000
0200 0000 0000 0000
0200 0000 0000 0000
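The internal layout of the SYSSTATS DATA field is not documented; the decoding below is our own hypothesis, which happens to reproduce the byte-swapped rendering above: 16-bit little-endian header words followed by 32-bit little-endian values.

```python
import struct

# DATA hex for column V1 as dumped above, spaces removed (hypothetical layout)
raw = bytes.fromhex("020001000800070002000000010000000200000002000000")

header = struct.unpack_from("<4H", raw, 0)  # 16-bit words: (2, 1, 8, 7)
values = struct.unpack_from("<4I", raw, 8)  # 32-bit values: (2, 1, 2, 2)
print(header, values)
```

The trailing values (2, 1, 2, 2) line up with the frequency histogram displayed next: endpoint values 1 and 2, each with height 2.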
Using the system package to display the statistics (including histogram bucket data) in a form convenient to read:
dbms_stats.column_statS_show(USER,'TEST1','V1');
NUM_DISTINCT LOW_VALUE HIGH_VALUE NUM_NULLS NUM_BUCKETS SAMPLE_SIZE HISTOGRAM
2 1 2 0 2 4 FREQUENCY
OWNER TABLE_NAME COLUMN_NAME HISTOGRAM ENDPOINT_VALUE ENDPOINT_HEIGHT ENDPOINT_KEYHEIGHT ENDPOINT_DISTINCT
SYSDBA TEST1 V1 FREQUENCY 1 2 NULL NULL
SYSDBA TEST1 V1 FREQUENCY 2 2 NULL NULL
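The FREQUENCY histogram above is easy to verify by hand: column V1 of the four inserted rows holds the values 1, 1, 2, 2, so each endpoint value should appear with height 2. A small sketch of that check (ours, for illustration):

```python
from collections import Counter

rows = [(1, 1), (1, 2), (2, 1), (2, 2)]  # the rows inserted into test1
v1 = [r[0] for r in rows]

hist = Counter(v1)  # frequency histogram on V1: value -> row count
print(sorted(hist.items()))  # [(1, 2), (2, 2)] = ENDPOINT_VALUE / ENDPOINT_HEIGHT
print(len(hist), len(v1))    # NUM_DISTINCT = 2, SAMPLE_SIZE = 4
```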
0x 0200 0100 0800 0700 0200 0000 0100 0000
0200 0000 0200 0000
0x 0200 0100 1000 0800 0200 0000 0000 0000
0100 0000 0000 0000
0200 0000 0000 0000
0200 0000 0000 0000
dbms_stats.column_statS_show(USER,'TEST1','V2');
NUM_DISTINCT LOW_VALUE HIGH_VALUE NUM_NULLS NUM_BUCKETS SAMPLE_SIZE HISTOGRAM
2 1 2 0 2 4 FREQUENCY
OWNER TABLE_NAME COLUMN_NAME HISTOGRAM ENDPOINT_VALUE ENDPOINT_HEIGHT ENDPOINT_KEYHEIGHT ENDPOINT_DISTINCT
SYSDBA TEST1 V2 FREQUENCY 1 2 NULL NULL
SYSDBA TEST1 V2 FREQUENCY 2 2 NULL NULL
dbms_stats.INDEX_statS_show(USER,'IDX_01');
BLEVEL LEAF_BLOCKS DISTINCT_KEYS CLUSTERING_FACTOR NUM_ROWS SAMPLE_SIZE
0 1 2 0 4 4
OWNER NAME COLUMN_NAME HISTOGRAM ENDPOINT_VALUE ENDPOINT_HEIGHT ENDPOINT_KEYHEIGHT ENDPOINT_DISTINCT
SYSDBA IDX_01 V1 FREQUENCY 1 2 NULL NULL
SYSDBA IDX_01 V1 FREQUENCY 2 2 NULL NULL
A small aside: select id_code; -- 1-1-172-21.02.01-134275-ENT
Note that the stat syntax can collect multiple columns at once:
sp_db_stat_deinit();
stat 100 size 100 on test1(v1,v2);
select * from sysstats;
After collecting, however, there are no statistics at all; so use the multi-column form of stat 100 with caution.
How to run this test quickly:
drop table test1;
create table test1(v1 bigint,v2 int,v3 int);
insert into test1 VALUES
(1,1,2),
(1,2,3),
(2,1,5),
(2,2,6);
stat 100 size 1 on test1(v1,v2);
select * from sysstats;
dbms_stats.column_stats_show(USER,'TEST1','V1');




